Building and Packaging SailfishOS with Docker

Work in progress.

  1. ubuntu HA_BUILD

The official Ubuntu image is fine; either 16.04 or 18.04 works, but avoid the latest 20.04. In general, after a container starts, only the manually mounted directories persist; everything else is lost on restart, so it is best to build your own image with the Android build environment preinstalled.

When starting the container, map a local directory in to serve as the ANDROID_ROOT directory.
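A minimal sketch of such a launch, assuming a self-built image named `ha-build:18.04` with the Android build environment preinstalled (the image name, host path, and ANDROID_ROOT variable are hypothetical):

```
docker run -it --rm \
    -v "$HOME/sailfish/android_root":/srv/android_root \
    -e ANDROID_ROOT=/srv/android_root \
    ha-build:18.04 /bin/bash
```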

  2. mer MER_BUILD

  3. OBS

  4. gitlab ci

Upgrading Harbor Across Major Versions

Note: this only works if Harbor is accessed by domain name (i.e. you have internal DNS). If you previously used a bare IP, this method does not apply!

Harbor versions before 1.2 cannot be upgraded directly to a new release. To get to the latest version without interrupting service, you can proceed as follows.

The overall flow:

Set up a new Harbor on machine B -> manually push the old Harbor's images to the new Harbor -> point the A domain name at machine B's IP ->

verify that the Harbor service on B works -> tear down the old Harbor on A -> set up a fresh Harbor on A -> replicate from the Harbor on B to the Harbor on A ->

verify that the Harbor service on A works -> point the A domain name back at machine A -> remove the replication rule on B.

Script used to manually push the old Harbor's images to the new one. First install the client with `pip install python_harborclient`. get_all.py:

```
#!/usr/bin/python
from registry import RegistryApi

# Query the Harbor registry API and print every repo:tag, one per line
api = RegistryApi('admin', 'password', 'http://pk8stemp02.rmz.flamingo-inc.com:8888')
maxsize = 65536
repos = api.getRepositoryList(maxsize)
repositories = repos.get('repositories')
for repo in repositories:
    tags = api.getTagList(repo).get('tags')
    if tags:
        for tag in tags:
            print(repo + ":" + tag)
```

```
python get_all.py > all_repos.txt
allimages=$(cat all_repos.txt)
ORIGIN_HOST="pk8snode01.rmz.flamingo-inc.com:8888"  # old Harbor
BACK_HOST="pk8stemp02.rmz.flamingo-inc.com:8888"    # new Harbor

# Log in beforehand:
# docker login $BACK_HOST
for image in ${allimages}; do
    docker pull ${ORIGIN_HOST}/${image}
    docker tag ${ORIGIN_HOST}/${image} ${BACK_HOST}/${image}
    docker push ${BACK_HOST}/${image}
    sleep 1
    echo "${image} done"
done
```

Docker on SailfishOS

How to install Docker on SailfishOS

This post shows how to install Docker on SailfishOS, along with a few hacks that need to be done.

Prerequisites

https://docs.docker.com/install/linux/docker-ce/binaries/#install-daemon-and-client-binaries-on-linux

  • A 64-bit installation
  • Version 3.10 or higher of the Linux kernel. The latest version of the kernel available for your platform is recommended.
  • iptables version 1.4 or higher
  • git version 1.7 or higher
  • A ps executable, usually provided by procps or a similar package.
  • XZ Utils 4.9 or higher
  • A properly mounted cgroupfs hierarchy; a single, all-encompassing cgroup mount point is not sufficient. See GitHub issues #2683, #3485, #4568.
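Most of these can be sanity-checked from a shell on the device, for example:

```
uname -m             # expect a 64-bit architecture such as aarch64
uname -r             # kernel version, needs >= 3.10
iptables --version   # needs >= 1.4
git --version        # needs >= 1.7
which ps             # any ps executable will do
xz --version         # XZ Utils, needs >= 4.9
```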

Check kernel support

Use this script: check-config.sh

```
[nemo@Sailfish ~]$ ./check-config.sh 
info: reading kernel config from /proc/config.gz ...

Generally Necessary:
- cgroup hierarchy: properly mounted [/sys/fs/cgroup]
- CONFIG_NAMESPACES: enabled
- CONFIG_NET_NS: enabled
- CONFIG_PID_NS: enabled
- CONFIG_IPC_NS: enabled
- CONFIG_UTS_NS: enabled
- CONFIG_CGROUPS: enabled
- CONFIG_CGROUP_CPUACCT: enabled
- CONFIG_CGROUP_DEVICE: enabled
- CONFIG_CGROUP_FREEZER: enabled
- CONFIG_CGROUP_SCHED: enabled
- CONFIG_CPUSETS: enabled
- CONFIG_MEMCG: enabled
- CONFIG_KEYS: enabled
- CONFIG_VETH: enabled
- CONFIG_BRIDGE: enabled
- CONFIG_BRIDGE_NETFILTER: enabled (as module)
- CONFIG_NF_NAT_IPV4: enabled
- CONFIG_IP_NF_FILTER: enabled
- CONFIG_IP_NF_TARGET_MASQUERADE: enabled
- CONFIG_NETFILTER_XT_MATCH_ADDRTYPE: enabled
- CONFIG_NETFILTER_XT_MATCH_CONNTRACK: enabled
- CONFIG_NETFILTER_XT_MATCH_IPVS: enabled
- CONFIG_IP_NF_NAT: enabled
- CONFIG_NF_NAT: enabled
- CONFIG_NF_NAT_NEEDED: enabled
- CONFIG_POSIX_MQUEUE: enabled
- CONFIG_DEVPTS_MULTIPLE_INSTANCES: enabled

Optional Features:
- CONFIG_USER_NS: enabled
- CONFIG_SECCOMP: enabled
- CONFIG_CGROUP_PIDS: missing
- CONFIG_MEMCG_SWAP: enabled
- CONFIG_MEMCG_SWAP_ENABLED: enabled
(cgroup swap accounting is currently enabled)
- CONFIG_MEMCG_KMEM: enabled
- CONFIG_RESOURCE_COUNTERS: enabled
- CONFIG_BLK_CGROUP: enabled
- CONFIG_BLK_DEV_THROTTLING: missing
- CONFIG_IOSCHED_CFQ: enabled
- CONFIG_CFQ_GROUP_IOSCHED: missing
- CONFIG_CGROUP_PERF: enabled
- CONFIG_CGROUP_HUGETLB: missing
- CONFIG_NET_CLS_CGROUP: enabled
- CONFIG_CGROUP_NET_PRIO: enabled
- CONFIG_CFS_BANDWIDTH: missing
- CONFIG_FAIR_GROUP_SCHED: enabled
- CONFIG_RT_GROUP_SCHED: enabled
- CONFIG_IP_VS: enabled
- CONFIG_IP_VS_NFCT: enabled
- CONFIG_IP_VS_RR: enabled
- CONFIG_EXT3_FS: enabled
- CONFIG_EXT3_FS_XATTR: enabled
- CONFIG_EXT3_FS_POSIX_ACL: enabled
- CONFIG_EXT3_FS_SECURITY: enabled
- CONFIG_EXT4_FS: enabled
- CONFIG_EXT4_FS_POSIX_ACL: missing
- CONFIG_EXT4_FS_SECURITY: enabled
enable these ext4 configs if you are using ext4 as backing filesystem
- Network Drivers:
- "overlay":
- CONFIG_VXLAN: enabled
Optional (for encrypted networks):
- CONFIG_CRYPTO: enabled
- CONFIG_CRYPTO_AEAD: enabled
- CONFIG_CRYPTO_GCM: enabled
- CONFIG_CRYPTO_SEQIV: enabled
- CONFIG_CRYPTO_GHASH: enabled
- CONFIG_XFRM: enabled
- CONFIG_XFRM_USER: enabled
- CONFIG_XFRM_ALGO: enabled
- CONFIG_INET_ESP: enabled
- CONFIG_INET_XFRM_MODE_TRANSPORT: enabled
- "ipvlan":
- CONFIG_IPVLAN: missing
- "macvlan":
- CONFIG_MACVLAN: enabled
- CONFIG_DUMMY: missing
- "ftp,tftp client in container":
- CONFIG_NF_NAT_FTP: enabled
- CONFIG_NF_CONNTRACK_FTP: enabled
- CONFIG_NF_NAT_TFTP: enabled
- CONFIG_NF_CONNTRACK_TFTP: enabled
- Storage Drivers:
- "aufs":
- CONFIG_AUFS_FS: missing
- "btrfs":
- CONFIG_BTRFS_FS: enabled
- CONFIG_BTRFS_FS_POSIX_ACL: enabled
- "devicemapper":
- CONFIG_BLK_DEV_DM: enabled
- CONFIG_DM_THIN_PROVISIONING: missing
- "overlay":
- CONFIG_OVERLAY_FS: enabled
- "zfs":
- /dev/zfs: missing
- zfs command: missing
- zpool command: missing

Limits:
- /proc/sys/kernel/keys/root_maxkeys: 1000000

[nemo@Sailfish ~]$
```

Everything under "Generally Necessary" must be enabled. If anything is missing, you must enable it in your kernel defconfig and rebuild the kernel.
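As a rough sketch of what enabling a missing option looks like (the defconfig name is device-specific and hypothetical here; CONFIG_CGROUP_PIDS is one of the options reported missing above):

```
# In the kernel source tree of your device port:
echo "CONFIG_CGROUP_PIDS=y" >> arch/arm64/configs/my_device_defconfig
make my_device_defconfig
make -j"$(nproc)"
```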

Download the static binary archive

https://download.docker.com/linux/static/stable/aarch64/

Extract the archive and copy the binaries to /usr/bin/; 18.06 is a known-working version.
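For example (a sketch; check the directory listing at the URL above for the exact tarball name):

```
curl -LO https://download.docker.com/linux/static/stable/aarch64/docker-18.06.1-ce.tgz
tar xzvf docker-18.06.1-ce.tgz
devel-su cp docker/* /usr/bin/
```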

Add nemo to the docker group

```
groupadd docker
usermod -a -G docker nemo
```

Run Docker

Start the docker daemon:

```
devel-su /usr/bin/dockerd
```

Or use systemd:

```
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
```
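Assuming the unit above is saved as /etc/systemd/system/docker.service (the standard systemd unit location), reload and enable it:

```
devel-su systemctl daemon-reload
devel-su systemctl enable docker
devel-su systemctl start docker
```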

Check version

```
[root@Sailfish nemo]# docker version

Client:
Version: 18.06.1-ce
API version: 1.38
Go version: go1.10.3
Git commit: e68fc7a
Built: Tue Aug 21 17:20:38 2018
OS/Arch: linux/arm64
Experimental: false

Server:
Engine:
Version: 18.06.1-ce
API version: 1.38 (minimum version 1.12)
Go version: go1.10.3
Git commit: e68fc7a
Built: Tue Aug 21 17:27:20 2018
OS/Arch: linux/arm64
Experimental: false
```

Test

```
devel-su docker run hello-world
```

This command downloads a test image and runs it in a container. When the container runs, it prints the informational message below and exits.

```
[root@Sailfish nemo]# docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
255483503861: Pull complete
Digest: sha256:4b8ff392a12ed9ea17784bd3c9a8b1fa3299cac44aca35a85c90c5e3c7afacdc
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(arm64v8)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/

For more examples and ideas, visit:
https://docs.docker.com/engine/userguide/
```

Test network mapping

In one terminal, run:

```
[root@Sailfish nemo]# docker run -it --rm -p 6080:80 nginx:latest        
172.17.0.1 - - [05/Sep/2018:08:54:52 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.58.0-DEV" "-"
172.17.0.1 - - [05/Sep/2018:08:55:51 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.58.0-DEV" "-"
```

Visit it from another terminal:

```
[nemo@Sailfish ~]$ curl -s 127.0.0.1:6080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[nemo@Sailfish ~]$
```

TODO

Wayland forwarding


Have fun ;)

Cleaning Up Pods Stuck in Terminating in Kubernetes

On long-running Kubernetes nodes, some pods may never exit on their own and stay stuck in the Terminating state.
The following script can be run on a schedule to clean them up.

```
#!/bin/bash
#############################
### clean terminated pods ###
### run at your own risk! ###
#############################
export PATH=/usr/local/cfssl/bin:/usr/local/docker/:/usr/local/kubernetes/:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin

getns(){
    namespaces=$(kubectl get namespaces | grep -v "NAME" | awk '{print $1}')
    for n in ${namespaces}; do
        # One pod name per line, only pods stuck in Terminating
        pods=$(kubectl get pods -n "${n}" | grep "Terminating" | awk '{print $1}')
        for pod in ${pods}; do
            delpod "${pod}" "${n}"
        done
    done
}

delpod(){
    echo "kubectl delete pods $1 -n $2 --grace-period=0 --force"
    kubectl delete pods "$1" -n "$2" --grace-period=0 --force
}

main(){
    getns
}
main
```
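To run it on a schedule, a crontab entry along these lines could be used (the script path and interval are hypothetical):

```
# Run the cleanup every 10 minutes and append output to a log
*/10 * * * * /usr/local/bin/clean-terminating-pods.sh >> /var/log/clean-terminating-pods.log 2>&1
```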

Automatically Cleaning Up Containers, Volumes, and Images in Kubernetes

Image source: https://github.com/meltwater/docker-cleanup

Note: this image removes all exited containers, unused images, and data-only containers, unless you add them to the keep variables. Take care to configure the Docker API version correctly, or you may end up deleting all of your images. Be careful mounting /var/lib/docker, because anything mounted there but not in use will also be removed as an unused volume.
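For a quick trial outside Kubernetes, the image can also be run directly against the local Docker socket (a sketch using the variables documented below and the same socket mount as the DaemonSet further down):

```
docker run -d \
    -e CLEAN_PERIOD=1800 \
    -e DELAY_TIME=1800 \
    -v /var/run/docker.sock:/var/run/docker.sock \
    meltwater/docker-cleanup:latest
```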

Supported variables:

  • CLEAN_PERIOD=1800 - Interval in seconds to sleep after completing a cleaning run. Defaults to 1800 seconds = 30 minutes.
  • DELAY_TIME=1800 - Seconds to wait before removing exited containers and unused images. Defaults to 1800 seconds = 30 minutes.
  • KEEP_IMAGES - List of images to avoid cleaning, e.g. "ubuntu:trusty, ubuntu:latest". Defaults to cleaning all unused images.
  • KEEP_CONTAINERS - List of images for exited or dead containers to avoid cleaning, e.g. "ubuntu:trusty, ubuntu:latest".
  • KEEP_CONTAINERS_NAMED - List of names for exited or dead containers to avoid cleaning, e.g. "my-container1, persistent-data".
  • LOOP - Adds the ability to do non-looped cleanups: run once and exit. Options are true and false. Defaults to true, i.e. run forever in a loop.
  • DEBUG - Set to 1 to enable more debugging output on pattern matches.
  • DOCKER_API_VERSION - The Docker API version to use. This defaults to 1.20, but you can override it here in case the Docker version on your host differs from the one installed in this container. You can find this on your host system by running `docker version --format '{{.Client.APIVersion}}'`.

For images you don't want cleaned up even when no container is running from them, use the KEEP_IMAGES variable. Here we set:

vmware/harbor-*:*,*calico:*,*registry:*,*kubernetes-dashboard-amd64:*,*nginx-ingress-controller:*,*cvallance/mongo-k8s-sidecar:*

docker-cleanup-daemonset.yaml is configured as follows:

```
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    name: clean-up
  name: clean-up
  namespace: kube-system
spec:
  updateStrategy:
    type: "RollingUpdate"
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: clean-up
    spec:
      tolerations:
      - key: "LB"
        operator: "Exists"
        effect: "NoExecute"
      volumes:
      - name: docker-sock
        hostPath:
          path: /var/run/docker.sock
      - name: docker-directory
        hostPath:
          path: /data/kubernetes/docker
      containers:
      - image: meltwater/docker-cleanup:latest
        name: clean-up
        env:
        - name: CLEAN_PERIOD
          value: "1800"
        - name: DELAY_TIME
          value: "60"
        - name: DOCKER_API_VERSION
          value: "1.29"
        - name: KEEP_IMAGES
          value: "vmware/harbor-*:*,*calico:*,*registry:*,*kubernetes-dashboard-amd64:*,*nginx-ingress-controller:*,*cvallance/mongo-k8s-sidecar:*"
        volumeMounts:
        - mountPath: /var/run/docker.sock
          name: docker-sock
          readOnly: false
        - mountPath: /var/lib/docker
          name: docker-directory
          readOnly: false
```
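Assuming the manifest above is saved as docker-cleanup-daemonset.yaml, deploy it with:

```
kubectl create -f docker-cleanup-daemonset.yaml
```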

Deploying the Nginx Ingress Controller with DaemonSet + Taint/Tolerations + NodeSelector

Define the Nginx Ingress Controller as a DaemonSet with a NodeSelector and Tolerations, and put a Label and Taint on dedicated nodes so that those nodes run only the Nginx Ingress Controller; no other workload containers are scheduled or run there, and the nodes serve purely as proxy nodes.

  • Prepare N nodes in the Kubernetes cluster; we call these proxy nodes. These N nodes run only Nginx Ingress Controller (NIC) instances and no other workload containers.

  • Taint the proxy nodes with NoExecute to keep workload containers from being scheduled or running there.

    kubectl taint nodes 10.8.8.234 LB=NIC:NoExecute

  • Label the proxy nodes so that the NIC is deployed only onto nodes with the matching label.

    kubectl label nodes 10.8.8.234 LB=NIC
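    A quick way to verify both took effect (grep on standard kubectl output):

    kubectl describe node 10.8.8.234 | grep Taints
    kubectl get nodes --show-labels | grep 10.8.8.234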

  • Modify the calico-node configuration so that Calico can still run on the NoExecute-tainted nodes:

```
spec:
  ...
  template:
    spec:
      tolerations:
      - key: "LB"
        operator: "Exists"
        effect: "NoExecute"
```

  • Define the DaemonSet YAML, making sure to add the Tolerations and Node Selector (note: create the serviceAccount, role, etc. first):

```
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  annotations:
    deployment.kubernetes.io/revision: "4"
  labels:
    k8s-app: nginx-ingress-controller
  name: nginx-ingress-controller
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: nginx-ingress-controller
  template:
    metadata:
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
      creationTimestamp: null
      labels:
        k8s-app: nginx-ingress-controller
    spec:
      # Add the matching Node Selector
      nodeSelector:
        LB: NIC
      # Add the matching Tolerations
      tolerations:
      - key: "LB"
        operator: "Equal"
        value: "NIC"
        effect: "NoExecute"
      containers:
      - args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --tcp-services-configmap=$(POD_NAMESPACE)/nginx-tcp-ingress-configmap
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: dceph02.rmz.flamingo-inc.com:8888/mynginx/nginx-ingress-controller:0.9.0-beta.11
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: nginx-ingress-controller
        ports:
        - containerPort: 80
          hostPort: 80
          protocol: TCP
        - containerPort: 443
          hostPort: 443
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
      hostNetwork: true
      serviceAccount: ingress
      serviceAccountName: ingress
```
  • Create the default backend service:

```
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    k8s-app: default-http-backend
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: gcr.io/google_containers/defaultbackend:1.0
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: kube-system
  labels:
    k8s-app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    k8s-app: default-http-backend
```

    Create the corresponding Deployment and Service from default-backend.yaml: `kubectl create -f default-backend.yaml`

  • Create the NIC DaemonSet from the DaemonSet YAML to start the NIC:

    kubectl create -f nginx-ingress-daemonset.yaml

    At this point the NIC is running on the proxy nodes. What follows is for testing.

  • (Optional) After confirming the NIC started successfully, create services for testing:

    kubectl run echoheaders --image=gcr.io/google_containers/echoserver:1.8 --replicas=1 --port=8080
    kubectl expose deployment echoheaders --port=80 --target-port=8080 --name=echoheaders-x
    kubectl expose deployment echoheaders --port=80 --target-port=8080 --name=echoheaders-y

    Create an Ingress object for testing:

```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echomap
  namespace: default
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: echoheaders-x
          servicePort: 80
        path: /foo
  - host: bar.baz.com
    http:
      paths:
      - backend:
          serviceName: echoheaders-y
          servicePort: 80
        path: /bar
      - backend:
          serviceName: echoheaders-x
          servicePort: 80
        path: /foo
```

  • (Optional) Check the ingress proxy address:

```
[root@host ~]# kubectl describe ing echomap
Name:             echomap
Namespace:        default
Address:          10.8.8.234
Default backend:  default-http-backend:80 (172.254.109.193:8080)
Rules:
  Host         Path  Backends
  foo.bar.com
               /foo  echoheaders-x:80 (<none>)
  bar.baz.com
               /bar  echoheaders-y:80 (<none>)
               /foo  echoheaders-x:80 (<none>)
Annotations:
Events:
  FirstSeen  LastSeen  Count  From                SubObjectPath  Type    Reason  Message
  35m        35m       1      ingress-controller                 Normal  CREATE  Ingress default/echomap
  35m        35m       1      ingress-controller                 Normal  UPDATE  Ingress default/echomap
```

  • Test:

```
[root@host ~]# curl 10.8.8.234/foo -H 'Host: foo.bar.com'

Hostname: echoheaders-1076692255-p1ndv

Pod Information:
    -no pod information available-

Server values:
    server_version=nginx: 1.13.3 - lua: 10008

Request Information:
    client_address=172.254.246.192
    method=GET
    real path=/foo
    query=
    request_version=1.1
    request_uri=http://foo.bar.com:8080/foo

Request Headers:
    accept=*/*
    connection=close
    host=foo.bar.com
    user-agent=curl/7.29.0
    x-forwarded-for=10.8.8.234
    x-forwarded-host=foo.bar.com
    x-forwarded-port=80
    x-forwarded-proto=http
    x-original-uri=/foo
    x-real-ip=10.8.8.234
    x-scheme=http

Request Body:
    -no body in request-

[root@dceph04 ~]# curl 10.8.8.234/foo -H 'Host: bar.baz.com'

Hostname: echoheaders-1076692255-p1ndv

Pod Information:
    -no pod information available-

Server values:
    server_version=nginx: 1.13.3 - lua: 10008

Request Information:
    client_address=172.254.246.192
    method=GET
    real path=/foo
    query=
    request_version=1.1
    request_uri=http://bar.baz.com:8080/foo

Request Headers:
    accept=*/*
    connection=close
    host=bar.baz.com
    user-agent=curl/7.29.0
    x-forwarded-for=10.8.8.234
    x-forwarded-host=bar.baz.com
    x-forwarded-port=80
    x-forwarded-proto=http
    x-original-uri=/foo
    x-real-ip=10.8.8.234
    x-scheme=http

Request Body:
    -no body in request-
```

Reference

https://my.oschina.net/jxcdwangtao/blog/1523812