Table of Contents
- Overview
- IP Address Configuration
- Interface Configuration Analysis
- Conclusion
Overview
Continuing from the previous chapter, we will keep using the same diagram as we go deeper into the world of Cilium networking.
IP Address Configuration
By inspecting the current state of the Kubernetes cluster, we can obtain the actual IP addresses and configuration details. We will fill this information into the earlier network diagram to bring it closer to the real environment.
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8smaster.pci.co.id Ready control-plane 52d v1.31.2 172.19.6.5 <none> Ubuntu 22.04.4 LTS 5.15.0-125-generic containerd://1.7.23
k8sworker1.pci.co.id Ready <none> 51d v1.31.2 172.19.6.8 <none> Ubuntu 22.04.4 LTS 5.15.0-125-generic containerd://1.7.23
$ kubectl get pod -n web -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
redis-77598f9f86-g4dqg 1/1 Running 0 55m 10.0.1.73 k8sworker1.pci.co.id <none> <none>
redis-77598f9f86-hpmsh 1/1 Running 0 62s 10.0.0.222 k8smaster.pci.co.id <none> <none>
service-python-7f7c9d4fc4-jhp4d 1/1 Running 0 7d21h 10.0.1.135 k8sworker1.pci.co.id <none> <none>
webserver-5f9579b5b5-4vj77 1/1 Running 0 20m 10.0.0.150 k8smaster.pci.co.id <none> <none>
webserver-5f9579b5b5-qw2m4 1/1 Running 0 20m 10.0.1.48 k8sworker1.pci.co.id <none> <none>
From the current configuration it is clear that the Pods' IP subnets differ from the nodes' subnet, and that Pods on different nodes use separate IP subnets of their own. If you are wondering why, that is perfectly understandable at this stage, since the mechanism behind it has not been revealed yet. Next, we will clarify this point by examining Cilium's configuration.
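As a quick sanity check, a few lines of Python with the standard ipaddress module confirm that none of the Pod IPs above fall inside the node subnet (assumed here to be 172.19.6.0/23, matching the /23 mask visible in the nodes' `ip a` output later):

```python
import ipaddress

# Pod IPs transcribed from the `kubectl get pod -o wide` output above.
pod_ips = ["10.0.1.73", "10.0.0.222", "10.0.1.135", "10.0.0.150", "10.0.1.48"]

# Node subnet: an assumption based on the /23 mask shown by `ip a` on the nodes.
node_net = ipaddress.ip_network("172.19.6.0/23")

for ip in pod_ips:
    # Every Pod address tests False here; the Pods live outside the node subnet.
    print(ip, "in node subnet:", ipaddress.ip_address(ip) in node_net)
```

The membership test prints False for every Pod address, while the node addresses 172.19.6.5 and 172.19.6.8 do fall inside 172.19.6.0/23.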
As mentioned earlier, a Cilium Agent runs on every node. The Cilium Agent is essentially a Pod whose main job is managing the network within its node. In the current cluster, it looks like this:
$ kubectl get pod -n kube-system -o wide|grep cilium
cilium-envoy-5zhvb 1/1 Running 20 (31d ago) 47d 172.19.6.5 k8smaster.pci.co.id <none> <none>
cilium-envoy-bwxsc 1/1 Running 14 (31d ago) 46d 172.19.6.8 k8sworker1.pci.co.id <none> <none>
cilium-kbwq7 1/1 Running 0 20d 172.19.6.8 k8sworker1.pci.co.id <none> <none>
cilium-operator-54c7465577-v8tk5 1/1 Running 475 (2d18h ago) 47d 172.19.6.8 k8sworker1.pci.co.id <none> <none>
cilium-operator-54c7465577-ztn6h 1/1 Running 74 (2d18h ago) 47d 172.19.6.5 k8smaster.pci.co.id <none> <none>
cilium-sjj8k 1/1 Running 0 20d 172.19.6.5 k8smaster.pci.co.id <none> <none>
Two points deserve attention here:
- How the Cilium Agent is deployed: the Cilium Agent is deployed as a DaemonSet, which guarantees that every node in the cluster runs exactly one agent. Being a Pod, the Cilium Agent is also assigned an IP address; however, its IP address is identical to the node's. This is a special way of assigning Pod IPs, typically used for system-level Pods that need direct access to the node's (host's) network. If you look through the Pods in the kube-system namespace, you will find that most of them use the node's IP address.
- What the Cilium Operator Pod does: the Cilium Operator is responsible for IP address management across the cluster and assigns each Cilium Agent its own dedicated range of usable IP addresses.
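Using IPs transcribed from the kubectl output above, the first point can be sketched in a few lines of Python: a Pod whose IP equals its node's IP is running on the host network, like the Cilium Agents (the helper function is purely illustrative, not part of any Kubernetes tooling):

```python
# Node IPs transcribed from the `kubectl get nodes -o wide` output above.
node_ips = {
    "k8smaster.pci.co.id": "172.19.6.5",
    "k8sworker1.pci.co.id": "172.19.6.8",
}

def on_host_network(pod_ip: str, node: str) -> bool:
    # A Pod sharing its node's IP uses the host network namespace.
    return node_ips.get(node) == pod_ip

print(on_host_network("172.19.6.5", "k8smaster.pci.co.id"))   # True: cilium-sjj8k
print(on_host_network("10.0.0.222", "k8smaster.pci.co.id"))   # False: a redis Pod
```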
To determine the IP address range used by each node, we can query the Cilium Agent instance running on that node; their names were listed above:
$ kubectl exec -it cilium-sjj8k -n kube-system -- cilium debuginfo|grep -i ipam
Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
IPAM: IPv4: 3/254 allocated from 10.0.0.0/24
$ kubectl exec -it cilium-kbwq7 -n kube-system -- cilium debuginfo|grep -i ipam
Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
IPAM: IPv4: 13/254 allocated from 10.0.1.0/24
Now the distinct IP subnet belonging to each node is clearly visible. Under IP network grouping rules, an address's membership is determined by its subnet mask. Here the mask is /24, which means that on the first node every address beginning with 10.0.0 belongs to one group, while on the second node addresses beginning with 10.0.1 belong to another. The two nodes therefore sit in different groups, i.e. different IP subnets.
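The /24 grouping can be demonstrated with Python's ipaddress module, using the Pod IPs listed earlier: each address collapses into exactly the per-node CIDR reported by cilium debuginfo.

```python
import ipaddress
from collections import defaultdict

# Pod IPs transcribed from the `kubectl get pod -o wide` output above.
pod_ips = ["10.0.0.222", "10.0.0.150", "10.0.1.73", "10.0.1.135", "10.0.1.48"]

# Group each address by its enclosing /24 network, mirroring the per-node
# CIDRs (10.0.0.0/24 and 10.0.1.0/24) shown by `cilium debuginfo`.
groups = defaultdict(list)
for ip in pod_ips:
    net = ipaddress.ip_network(f"{ip}/24", strict=False)
    groups[str(net)].append(ip)

for net, members in sorted(groups.items()):
    print(net, "->", members)
```

This prints 10.0.0.0/24 with the two master-node Pods and 10.0.1.0/24 with the three worker-node Pods.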
Next, we will examine the "doors" represented in the network diagram, that is, the concrete configuration of each network interface.
Interface Configuration Analysis
We will now give the "building" a thorough inspection and analyze its network configuration in detail, starting with the four Pods:
$ kubectl get pod -n web -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
redis-77598f9f86-g4dqg 1/1 Running 0 55m 10.0.1.73 k8sworker1.pci.co.id <none> <none>
redis-77598f9f86-hpmsh 1/1 Running 0 62s 10.0.0.222 k8smaster.pci.co.id <none> <none>
service-python-7f7c9d4fc4-jhp4d 1/1 Running 0 7d21h 10.0.1.135 k8sworker1.pci.co.id <none> <none>
webserver-5f9579b5b5-4vj77 1/1 Running 0 20m 10.0.0.150 k8smaster.pci.co.id <none> <none>
webserver-5f9579b5b5-qw2m4 1/1 Running 0 20m 10.0.1.48 k8sworker1.pci.co.id <none> <none>
$ kubectl exec -it -n web redis-77598f9f86-g4dqg -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
1856: eth0@if1857: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether ca:63:cd:36:9a:4d brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.0.1.73/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::c863:cdff:fe36:9a4d/64 scope link
valid_lft forever preferred_lft forever
kubeuser@k8smaster:~/yaml$ kubectl exec -it -n web redis-77598f9f86-hpmsh -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
20: eth0@if21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 12:10:31:5b:f8:81 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.0.0.222/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::1010:31ff:fe5b:f881/64 scope link
valid_lft forever preferred_lft forever
kubeuser@k8smaster:~/yaml$ kubectl exec -it -n web webserver-5f9579b5b5-4vj77 -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
18: eth0@if19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 12:5c:13:dd:e9:09 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.0.0.150/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::105c:13ff:fedd:e909/64 scope link
valid_lft forever preferred_lft forever
$ kubectl exec -it -n web webserver-5f9579b5b5-qw2m4 -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
1858: eth0@if1859: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether b6:14:d1:6e:33:44 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.0.1.48/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::b414:d1ff:fe6e:3344/64 scope link
valid_lft forever preferred_lft forever
Apart from the loopback interface, each container normally has exactly one network interface. Take 1856: eth0@if1857 as an example: the interface inside the container has index 1856, and it is paired with the interface whose index is 1857 on the node where the container runs. In the diagram, this corresponds to two "doors" joined by a corridor. Note also that each Pod address carries a /32 mask: the Pod gets a host-specific address, so traffic to and from it is decided by routing rather than by a shared on-link subnet.
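The index pairing can be extracted mechanically from `ip a` output. The helper below is a hypothetical sketch, not part of any Cilium tooling; it parses lines of the form "N: name@ifM:" and returns both indices:

```python
import re

def veth_indices(line):
    # Match "N: name@ifM:" at the start of an `ip a` interface line;
    # N is the local interface index, M is the peer's index.
    m = re.match(r"(\d+): ([\w.]+)@if(\d+):", line)
    if not m:
        return None  # not a veth-style line (e.g. lo, eth0)
    idx, name, peer = m.groups()
    return name, int(idx), int(peer)

# The container's eth0 (index 1856) points at peer index 1857 ...
print(veth_indices("1856: eth0@if1857: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500"))
# ... and the node's lxc interface (index 1857) points back at 1856.
print(veth_indices("1857: lxc45ed99168f62@if1856: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500"))
```

Running this on the outputs above pairs each Pod's eth0 with one lxc* interface on its node.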
Next, let's examine the nodes' network interface configuration:
kubeuser@k8sworker1:~$ ip a
1857: lxc45ed99168f62@if1856: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 9a:a4:00:74:ef:bb brd ff:ff:ff:ff:ff:ff link-netns cni-635aef78-02ec-0461-5e37-830ca81c8812
inet6 fe80::98a4:ff:fe74:efbb/64 scope link
valid_lft forever preferred_lft forever
1859: lxc9a32fc44db3c@if1858: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether a6:bf:8b:47:ee:3e brd ff:ff:ff:ff:ff:ff link-netns cni-980c48cf-6b31-2ca3-4b39-fa0b409a1b66
inet6 fe80::a4bf:8bff:fe47:ee3e/64 scope link
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 4c:72:b9:4f:ac:9e brd ff:ff:ff:ff:ff:ff
inet 172.19.6.8/23 brd 172.19.7.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::4e72:b9ff:fe4f:ac9e/64 scope link
valid_lft forever preferred_lft forever
3: cilium_net@cilium_host: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 56:b0:da:67:29:a0 brd ff:ff:ff:ff:ff:ff
inet6 fe80::54b0:daff:fe67:29a0/64 scope link
valid_lft forever preferred_lft forever
4: cilium_host@cilium_net: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 3e:14:52:d3:3e:61 brd ff:ff:ff:ff:ff:ff
inet 10.0.1.70/32 scope global cilium_host
valid_lft forever preferred_lft forever
inet6 fe80::3c14:52ff:fed3:3e61/64 scope link
valid_lft forever preferred_lft forever
5: cilium_vxlan: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
link/ether a6:e8:c2:92:d6:e7 brd ff:ff:ff:ff:ff:ff
inet6 fe80::a4e8:c2ff:fe92:d6e7/64 scope link
valid_lft forever preferred_lft forever
......
kubeuser@k8smaster:~$ ip a
......
21: lxcfc273d878e56@if20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 12:b5:68:c0:13:36 brd ff:ff:ff:ff:ff:ff link-netns cni-57a58836-f5d4-9f78-921a-ac0d54e32c75
inet6 fe80::10b5:68ff:fec0:1336/64 scope link
valid_lft forever preferred_lft forever
19: lxce4df0ab23bb6@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 7a:93:b3:e1:6e:37 brd ff:ff:ff:ff:ff:ff link-netns cni-812a7b03-c987-6a9f-04f7-e1c58b0aebf9
inet6 fe80::7893:b3ff:fee1:6e37/64 scope link
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether f4:4d:30:f6:89:ce brd ff:ff:ff:ff:ff:ff
inet 172.19.6.5/23 brd 172.19.7.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::f64d:30ff:fef6:89ce/64 scope link
valid_lft forever preferred_lft forever
3: cilium_net@cilium_host: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 02:2c:a2:90:5c:e2 brd ff:ff:ff:ff:ff:ff
inet6 fe80::2c:a2ff:fe90:5ce2/64 scope link
valid_lft forever preferred_lft forever
4: cilium_host@cilium_net: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 2a:55:f4:bf:84:09 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.169/32 scope global cilium_host
valid_lft forever preferred_lft forever
inet6 fe80::2855:f4ff:febf:8409/64 scope link
valid_lft forever preferred_lft forever
5: cilium_vxlan: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
link/ether a6:6c:f6:f6:4f:d3 brd ff:ff:ff:ff:ff:ff
inet6 fe80::a46c:f6ff:fef6:4fd3/64 scope link
valid_lft forever preferred_lft forever
......
eth0: the node host's own interface, which can be seen as the main entrance of the "building".
1857: lxc45ed99168f62@if1856: the peer of container interface index 1856, connected to the left-hand container mentioned above.
1859: lxc9a32fc44db3c@if1858: the peer of container interface index 1858, connected to the right-hand container mentioned above.
cilium_host@cilium_net: drawn as a circle in the diagram; it provides routing between the nodes of the cluster.
cilium_vxlan: drawn as a triangle in the diagram; it is a tunnel interface that carries traffic between the cluster's nodes.
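As a side note, the fe80:: link-local addresses seen throughout the `ip a` output are not random: each is derived from the interface's MAC address via the EUI-64 scheme (flip the universal/local bit of the first octet and insert ff:fe in the middle). A small sketch verifies this against the output above:

```python
import ipaddress

def mac_to_link_local(mac: str) -> str:
    # Convert "aa:bb:cc:dd:ee:ff" to its EUI-64-based IPv6 link-local address.
    b = bytes(int(x, 16) for x in mac.split(":"))
    # Flip the universal/local bit of the first octet, insert ff:fe in the middle.
    eui64 = bytes([b[0] ^ 0x02]) + b[1:3] + b"\xff\xfe" + b[3:]
    # Prepend the fe80::/64 link-local prefix.
    return str(ipaddress.IPv6Address(b"\xfe\x80" + b"\x00" * 6 + eui64))

# lxc45ed99168f62 on k8sworker1:
print(mac_to_link_local("9a:a4:00:74:ef:bb"))  # fe80::98a4:ff:fe74:efbb
# cilium_host on k8sworker1:
print(mac_to_link_local("3e:14:52:d3:3e:61"))  # fe80::3c14:52ff:fed3:3e61
```

Both results match the inet6 scope-link addresses shown in the node output earlier.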
Now let's fill this information into our diagram to get the complete picture:
Conclusion
By analyzing the network interfaces, we have clarified the network topology between nodes and containers as well as between the nodes themselves, laying the groundwork for understanding how Cilium transports and routes data in a Kubernetes network.