15. Kubernetes Notes | CNI Network Plugins (1): Flannel

Preface

  • CNI (Container Network Interface) is a standard, general-purpose interface. Today's container platforms include Docker, Kubernetes, and Mesos; container network solutions include Flannel, Calico, and Weave. As long as a network solution implements the standard interface, it can provide networking to every container platform that speaks the same protocol, and CNI is exactly such a standard interface protocol.
  • CNI is the glue between the container management system and the network plugin. Given the network namespace a container lives in, the plugin inserts a network interface into that namespace (for example, one end of a veth pair), performs the necessary configuration on the host (for example, attaching the other end of the veth to a bridge), and finally configures the IP address and routes of the interface inside the namespace; the sketch after this list reproduces those host-side steps by hand.
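The host-side plumbing described above can be reproduced manually with iproute2. A minimal sketch, assuming a bridge named cni0 already exists (flannel's CNI delegate normally creates it); the namespace name demo-ns, the veth names, and the addresses are illustrative only:

# Create a network namespace, as the container runtime would for a new Pod
ip netns add demo-ns

# Create a veth pair: one end stays on the host, the other moves into the namespace
ip link add veth-host type veth peer name veth-pod
ip link set veth-pod netns demo-ns

# Host side: attach the host end to the bridge and bring it up
ip link set veth-host master cni0
ip link set veth-host up

# Namespace side: bring the interfaces up, then assign the Pod IP and default route
ip netns exec demo-ns ip link set lo up
ip netns exec demo-ns ip link set veth-pod up
ip netns exec demo-ns ip addr add 10.244.0.100/24 dev veth-pod
ip netns exec demo-ns ip route add default via 10.244.0.1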
Kubernetes involves four main types of communication:
  1. container-to-container: happens inside a Pod, over the lo loopback interface;
  2. Pod-to-Pod: Kubernetes does not solve this itself; it delegates to third-party solutions through the CNI interface (the interface that preceded CNI was called kubenet);
  3. Service-to-Pod: handled by the iptables or ipvs rules that kube-proxy generates;
  4. ExternalClients-to-Service: brings external traffic into the cluster via hostPort, hostNetwork, NodePort Services, LoadBalancer Services, externalIP Services, or Ingress; a minimal NodePort example follows this list.
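As a quick illustration of one of these external-traffic options, a NodePort Service can be created declaratively. A sketch only: the Service name, label selector, and port numbers below are hypothetical placeholders, not objects from this cluster:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: demoapp-nodeport      # hypothetical Service name
spec:
  type: NodePort              # expose the Service on every node's IP
  selector:
    app: demoapp              # hypothetical Pod label
  ports:
  - port: 80                  # ClusterIP port
    targetPort: 80            # container port
    nodePort: 30080           # reachable externally at <NodeIP>:30080
EOF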
Flannel Overview
  • Flannel is a network fabric for Kubernetes designed by the CoreOS team. In short, it makes sure the Docker containers created on different nodes of the cluster all receive cluster-wide unique virtual IP addresses.
  • In the default Docker configuration, the Docker daemon on each node assigns IPs to that node's containers independently. The resulting problem is that containers on different nodes may end up with the same IP address, and such containers cannot find and reach each other by IP (i.e., cannot ping each other) across nodes.
  • Flannel's purpose is to re-plan how IP addresses are used across all nodes in the cluster, so that containers on different nodes obtain addresses that "belong to one internal network" and "do not overlap", and containers on different nodes can communicate directly over those internal IPs.
  • Flannel is essentially an overlay network: it wraps the original packet inside another network packet for routing and forwarding. It currently supports udp, vxlan, host-gw, aws-vpc, gce, and alloc forwarding backends; historically UDP forwarding was the default inter-node method, though the stock deployment manifest now configures vxlan.
Flannel features in brief:
  1. Gives the Docker containers created on different Nodes cluster-wide unique virtual IP addresses.
  2. Builds an overlay network that delivers packets unmodified to the target container. An overlay network is a virtual network built on top of, and backed by, another network; it decouples the network service from the underlying infrastructure by encapsulating one packet inside another, and the packet is decapsulated after being forwarded to the endpoint.
  3. Creates a new virtual NIC (flannel0 in UDP mode, flannel.1 in VXLAN mode) that receives traffic from the docker bridge and, by maintaining a routing table, encapsulates and forwards the received data (e.g., via vxlan).
  4. etcd guarantees that flanneld on every node sees a consistent configuration; at the same time, flanneld on each node watches etcd for data changes and reacts to node membership changes in real time. The commands below show the per-node state flanneld maintains.
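On a running node, the lease flanneld holds can be inspected directly; the file paths below are the ones used by the stock kube-flannel deployment, and the values shown in comments are what a 10.244.0.0/16 cluster typically produces (in kubeadm clusters flanneld usually reaches this state through the Kubernetes API, which itself is backed by etcd):

# The subnet lease flanneld obtained for this node (read by the CNI plugin when wiring Pods)
cat /run/flannel/subnet.env
# Typical content on a node that leased 10.244.1.0/24:
#   FLANNEL_NETWORK=10.244.0.0/16
#   FLANNEL_SUBNET=10.244.1.1/24
#   FLANNEL_MTU=1450
#   FLANNEL_IPMASQ=true

# The CNI configuration rendered onto the node by the flannel DaemonSet
cat /etc/cni/net.d/10-flannel.conflist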
Flannel supports three Pod network models; each model is called a "backend" in flannel:
  • vxlan: Pod-to-Pod traffic is tunnel-encapsulated; the nodes only need to be able to reach each other, and are not required to share a layer-2 network. Drawback: the extra layer of encapsulation lowers throughput. Advantage: no layer-2 adjacency requirement.
  • vxlan with DirectRouting: Pods on different nodes that share a layer-2 network communicate without tunnel encapsulation, while Pods on nodes not on the same layer-2 network still go through the tunnel; the best of both options.
  • host-gw: Pod-to-Pod traffic is never tunnel-encapsulated; the highest throughput, but all nodes must sit on the same layer-2 network.
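To confirm which backend a running cluster actually uses, net-conf.json can be read back out of the ConfigMap; a read-only check (the key contains a dot, so it is escaped in the jsonpath expression):

# Print flannel's network configuration, including the Backend type
kubectl get cm kube-flannel-cfg -n kube-system \
  -o jsonpath='{.data.net-conf\.json}'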
Flannel download and installation:
https://github.com/flannel-io...
Example 1: Deploy flannel with the vxlan backend
# Check the network-type settings in the flannel deployment manifest
[root@k8s-master plugin]# cat kube-flannel.yml
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",      # builds the virtual network
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",      # port mapping, e.g. NodePort
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"           # vxlan is the default in this manifest
      }
    }
[root@k8s-master plugin]# kubectl apply -f kube-flannel.yml

  • In vxlan mode, the routes for remote Pod subnets point to flannel.1
[root@k8s-master plugin]# route -n
Kernel IP routing table
Destination     Gateway          Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.54.2     0.0.0.0         UG    101    0        0 eth4
10.244.0.0      0.0.0.0          255.255.255.0   U     0      0        0 cni0        # local virtual bridge
10.244.1.0      10.244.1.0       255.255.255.0   UG    0      0        0 flannel.1
10.244.2.0      10.244.2.0       255.255.255.0   UG    0      0        0 flannel.1
10.244.3.0      10.244.3.0       255.255.255.0   UG    0      0        0 flannel.1
172.17.0.0      0.0.0.0          255.255.0.0     U     0      0        0 docker0
192.168.4.0     0.0.0.0          255.255.255.0   U     100    0        0 eth0
192.168.54.0    0.0.0.0          255.255.255.0   U     101    0        0 eth4

[root@k8s-node1 ~]# route -n
Kernel IP routing table
Destination     Gateway          Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.54.2     0.0.0.0         UG    101    0        0 eth4
10.244.0.0      10.244.0.0       255.255.255.0   UG    0      0        0 flannel.1
10.244.1.0      0.0.0.0          255.255.255.0   U     0      0        0 cni0        # local virtual bridge
10.244.2.0      10.244.2.0       255.255.255.0   UG    0      0        0 flannel.1
10.244.3.0      10.244.3.0       255.255.255.0   UG    0      0        0 flannel.1
172.17.0.0      0.0.0.0          255.255.0.0     U     0      0        0 docker0
192.168.4.0     0.0.0.0          255.255.255.0   U     100    0        0 eth0
192.168.54.0    0.0.0.0          255.255.255.0   U     101    0        0 eth4

# Permanent neighbour entries generated by flannel, which improve forwarding efficiency
[root@k8s-master plugin]# ip neighbour | grep flannel.1
10.244.1.0 dev flannel.1 lladdr ba:98:1c:fa:3a:51 PERMANENT
10.244.3.0 dev flannel.1 lladdr da:29:42:38:29:55 PERMANENT
10.244.2.0 dev flannel.1 lladdr fa:48:c1:29:0b:dd PERMANENT

[root@k8s-master plugin]# bridge fdb show flannel.1 | grep flannel.1
ba:98:1c:fa:3a:51 dev flannel.1 dst 192.168.54.171 self permanent
22:85:29:77:e1:00 dev flannel.1 dst 192.168.54.173 self permanent
fa:48:c1:29:0b:dd dev flannel.1 dst 192.168.54.172 self permanent
da:29:42:38:29:55 dev flannel.1 dst 192.168.54.173 self permanent

# Capture the flannel traffic; UDP 8472 is flannel's default VXLAN port
[root@k8s-node3 ~]# tcpdump -i eth4 -nn udp port 8472
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth4, link-type EN10MB (Ethernet), capture size 262144 bytes
17:08:15.113389 IP 192.168.54.172.46879 > 192.168.54.173.8472: OTV, flags [I] (0x08), overlay 0, instance 1
IP 10.244.2.9 > 10.244.3.92: ICMP echo request, id 2816, seq 61, length 64
17:08:15.113498 IP 192.168.54.173.55553 > 192.168.54.172.8472: OTV, flags [I] (0x08), overlay 0, instance 1
IP 10.244.3.92 > 10.244.2.9: ICMP echo reply, id 2816, seq 61, length 64
17:08:16.114359 IP 192.168.54.172.46879 > 192.168.54.173.8472: OTV, flags [I] (0x08), overlay 0, instance 1
IP 10.244.2.9 > 10.244.3.92: ICMP echo request, id 2816, seq 62, length 64
17:08:16.114447 IP 192.168.54.173.55553 > 192.168.54.172.8472: OTV, flags [I] (0x08), overlay 0, instance 1
IP 10.244.3.92 > 10.244.2.9: ICMP echo reply, id 2816, seq 62, length 64
17:08:17.115558 IP 192.168.54.172.46879 > 192.168.54.173.8472: OTV, flags [I] (0x08), overlay 0, instance 1
IP 10.244.2.9 > 10.244.3.92: ICMP echo request, id 2816, seq 63, length 64
17:08:17.115717 IP 192.168.54.173.55553 > 192.168.54.172.8472: OTV, flags [I] (0x08), overlay 0, instance 1
IP 10.244.3.92 > 10.244.2.9: ICMP echo reply, id 2816, seq 63, length 64
17:08:18.117498 IP 192.168.54.172.46879 > 192.168.54.173.8472: OTV, flags [I] (0x08), overlay 0, instance 1
IP 10.244.2.9 > 10.244.3.92: ICMP echo request, id 2816, seq 64, length 64
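The VXLAN parameters behind those flannel.1 routes can be confirmed on the device itself; a quick check, assuming the interface names above (flannel uses VNI 1 and UDP dstport 8472 by default):

# -d (details) prints the vxlan-specific attributes of the interface
ip -d link show flannel.1
# Expect a line like: vxlan id 1 local 192.168.54.170 dev eth4 ... dstport 8472 nolearning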

  • Note that the Pod-to-Pod exchange 10.244.2.9 > 10.244.3.92 is carried from node 192.168.54.172 to node 192.168.54.173 on UDP port 8472, i.e., it passes through one extra layer of encapsulation.
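That extra layer is also why flannel.1 runs with a reduced MTU: the VXLAN headers (outer Ethernet 14 + IP 20 + UDP 8 + VXLAN 8 = 50 bytes) must fit inside the 1500-byte MTU of the node NIC, leaving 1450 bytes for the inner frame. Assuming the interface names used in this cluster, this is easy to verify:

# Compare the MTU of the physical NIC and the VXLAN device
ip link show eth4      | grep -o 'mtu [0-9]*'   # mtu 1500 on the underlay
ip link show flannel.1 | grep -o 'mtu [0-9]*'   # mtu 1450 = 1500 - 50 bytes of VXLAN overhead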
Example 2: Add DirectRouting to the flannel vxlan backend
  • With DirectRouting added, nodes on the same layer-2 network communicate directly over the host network interface, while nodes that must cross layer 3 still communicate through VXLAN tunnel encapsulation; this combination is flannel's most desirable mode.
  • Because every node in this test environment sits on the same layer-2 network, the routing table will show only direct routes, with no flannel.1 entries alongside them.
[root@k8s-master ~]# kubectl get cm -n kube-system
NAME                                 DATA   AGE
coredns                              1      57d
extension-apiserver-authentication   6      57d
kube-flannel-cfg                     2      57d
kube-proxy                           2      57d
kubeadm-config                       2      57d
kubelet-config-1.19                  1      57d
[root@k8s-master ~]# kubectl edit cm kube-flannel-cfg -n kube-system
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan",
        "DirectRouting": true    # add this key
      }
    }

  • Restart the flannel Pods to pick up the change (in production, prefer a blue-green style rollout; a rolling-restart alternative is shown after the listing below)
[root@k8s-master ~]# kubectl get pod -n kube-system --show-labels
NAME                                 READY   STATUS    RESTARTS   AGE     LABELS
coredns-f9fd979d6-l9zck              1/1     Running   16         57d     k8s-app=kube-dns,pod-template-hash=f9fd979d6
coredns-f9fd979d6-s8fp5              1/1     Running   15         57d     k8s-app=kube-dns,pod-template-hash=f9fd979d6
etcd-k8s-master                      1/1     Running   12         57d     component=etcd,tier=control-plane
kube-apiserver-k8s-master            1/1     Running   16         57d     component=kube-apiserver,tier=control-plane
kube-controller-manager-k8s-master   1/1     Running   40         57d     component=kube-controller-manager,tier=control-plane
kube-flannel-ds-6sppx                1/1     Running   1          7d23h   app=flannel,controller-revision-hash=585c88d56b,pod-template-generation=2,tier=node
kube-flannel-ds-j5g9s                1/1     Running   3          7d23h   app=flannel,controller-revision-hash=585c88d56b,pod-template-generation=2,tier=node
kube-flannel-ds-nfz77                1/1     Running   1          7d23h   app=flannel,controller-revision-hash=585c88d56b,pod-template-generation=2,tier=node
kube-flannel-ds-sqhq2                1/1     Running   1          7d23h   app=flannel,controller-revision-hash=585c88d56b,pod-template-generation=2,tier=node
kube-proxy-42vln                     1/1     Running   4          25d     controller-revision-hash=565786c69c,k8s-app=kube-proxy,pod-template-generation=1
kube-proxy-98gfb                     1/1     Running   3          21d     controller-revision-hash=565786c69c,k8s-app=kube-proxy,pod-template-generation=1
kube-proxy-nlnnw                     1/1     Running   4          17d     controller-revision-hash=565786c69c,k8s-app=kube-proxy,pod-template-generation=1
kube-proxy-qbsw2                     1/1     Running   4          25d     controller-revision-hash=565786c69c,k8s-app=kube-proxy,pod-template-generation=1
kube-scheduler-k8s-master            1/1     Running   38         57d     component=kube-scheduler,tier=control-plane
metrics-server-6849f98b-fsvf8        1/1     Running   1          58d     k8s-app=metrics-server,pod-template-hash=6849f98b
[root@k8s-master ~]# kubectl delete pod -n kube-system -l app=flannel
pod "kube-flannel-ds-6sppx" deleted
pod "kube-flannel-ds-j5g9s" deleted
pod "kube-flannel-ds-nfz77" deleted
pod "kube-flannel-ds-sqhq2" deleted
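Deleting by label as above works because the DaemonSet recreates the Pods, but on kubectl v1.15+ a rolling restart is a gentler alternative; the DaemonSet name below matches the stock manifest:

# Rolling restart of the flannel DaemonSet; Pods are replaced node by node
kubectl rollout restart daemonset kube-flannel-ds -n kube-system
kubectl rollout status daemonset kube-flannel-ds -n kube-system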

  • Check the routing tables on the master and the nodes again
[root@k8s-master ~]# route -n
Kernel IP routing table
Destination     Gateway          Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.54.2     0.0.0.0         UG    101    0        0 eth4
10.244.0.0      0.0.0.0          255.255.255.0   U     0      0        0 cni0
10.244.1.0      10.244.1.0       255.255.255.0   UG    0      0        0 eth4
10.244.2.0      192.168.54.172   255.255.255.0   UG    0      0        0 eth4
10.244.3.0      192.168.54.173   255.255.255.0   UG    0      0        0 eth4
172.17.0.0      0.0.0.0          255.255.0.0     U     0      0        0 docker0
192.168.4.0     0.0.0.0          255.255.255.0   U     100    0        0 eth0
192.168.54.0    0.0.0.0          255.255.255.0   U     101    0        0 eth4

[root@k8s-node1 ~]# route -n
Kernel IP routing table
Destination     Gateway          Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.54.2     0.0.0.0         UG    101    0        0 eth4
10.244.0.0      192.168.54.170   255.255.255.0   UG    0      0        0 eth4
10.244.1.0      0.0.0.0          255.255.255.0   U     0      0        0 cni0
10.244.2.0      192.168.54.172   255.255.255.0   UG    0      0        0 eth4
10.244.3.0      192.168.54.173   255.255.255.0   UG    0      0        0 eth4
172.17.0.0      0.0.0.0          255.255.0.0     U     0      0        0 docker0
192.168.4.0     0.0.0.0          255.255.255.0   U     100    0        0 eth0
192.168.54.0    0.0.0.0          255.255.255.0   U     101    0        0 eth4

# Pod traffic is now routed directly via the host network interface addresses
[root@k8s-master ~]# kubectl get pod -n kube-system -o wide
NAME                                 READY   STATUS    RESTARTS   AGE     IP              NODE         NOMINATED NODE   READINESS GATES
coredns-f9fd979d6-l9zck              1/1     Running   16         57d     10.244.0.42     k8s-master
coredns-f9fd979d6-s8fp5              1/1     Running   15         57d     10.244.0.41     k8s-master
etcd-k8s-master                      1/1     Running   12         57d     192.168.4.170   k8s-master
kube-apiserver-k8s-master            1/1     Running   16         57d     192.168.4.170   k8s-master
kube-controller-manager-k8s-master   1/1     Running   40         57d     192.168.4.170   k8s-master
kube-flannel-ds-d79nx                1/1     Running   0          2m12s   192.168.4.170   k8s-master
kube-flannel-ds-m48m7                1/1     Running   0          2m14s   192.168.4.172   k8s-node2
kube-flannel-ds-pxmnf                1/1     Running   0          2m14s   192.168.4.171   k8s-node1
kube-flannel-ds-vm9kt                1/1     Running   0          2m19s   192.168.4.173   k8s-node3
kube-proxy-42vln                     1/1     Running   4          25d     192.168.4.172   k8s-node2    # uses the host network interface
kube-proxy-98gfb                     1/1     Running   3          21d     192.168.4.173   k8s-node3
kube-proxy-nlnnw                     1/1     Running   4          17d     192.168.4.171   k8s-node1
kube-proxy-qbsw2                     1/1     Running   4          25d     192.168.4.170   k8s-master
kube-scheduler-k8s-master            1/1     Running   38         57d     192.168.4.170   k8s-master
metrics-server-6849f98b-fsvf8        1/1     Running   1          58d     10.244.2.250    k8s-node2

  • Capture packets to inspect the data encapsulation
[root@k8s-master plugin]# kubectl get pod -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
client-1639                  1/1     Running   0          52s   10.244.1.222   k8s-node1
replicaset-demo-v1.1-lgf6b   1/1     Running   0          59m   10.244.1.221   k8s-node1
replicaset-demo-v1.1-mvvfq   1/1     Running   0          59m   10.244.3.169   k8s-node3
replicaset-demo-v1.1-tn49t   1/1     Running   0          59m   10.244.2.136   k8s-node2
[root@k8s-master plugin]# kubectl exec replicaset-demo-v1.1-tn49t -it -- /bin/sh
# Access the Pod on node3
[root@replicaset-demo-v1 /]# curl 10.244.3.169
iKubernetes demoapp v1.1 !! ClientIP: 10.244.2.136, ServerName: replicaset-demo-v1.1-mvvfq, ServerIP: 10.244.3.169!
[root@replicaset-demo-v1 /]# curl 10.244.3.169

# Capture on node3
[root@k8s-node3 ~]# tcpdump -i eth4 -nn tcp port 80
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth4, link-type EN10MB (Ethernet), capture size 262144 bytes
11:03:57.508877 IP 10.244.2.136.49656 > 10.244.3.169.80: Flags [S], seq 1760692242, win 64860, options [mss 1410,sackOK,TS val 4266124446 ecr 0,nop,wscale 7], length 0
11:03:57.509245 IP 10.244.3.169.80 > 10.244.2.136.49656: Flags [S.], seq 3150629627, ack 1760692243, win 64308, options [mss 1410,sackOK,TS val 1453973317 ecr 4266124446,nop,wscale 7], length 0
11:03:57.510198 IP 10.244.2.136.49656 > 10.244.3.169.80: Flags [.], ack 1, win 507, options [nop,nop,TS val 4266124447 ecr 1453973317], length 0
11:03:57.510373 IP 10.244.2.136.49656 > 10.244.3.169.80: Flags [P.], seq 1:77, ack 1, win 507, options [nop,nop,TS val 4266124447 ecr 1453973317], length 76: HTTP: GET / HTTP/1.1
11:03:57.510427 IP 10.244.3.169.80 > 10.244.2.136.49656: Flags [.], ack 77, win 502, options [nop,nop,TS val 1453973318 ecr 4266124447], length 0
11:03:57.713241 IP 10.244.3.169.80 > 10.244.2.136.49656: Flags [P.], seq 1:18, ack 77, win 502, options [nop,nop,TS val 1453973521 ecr 4266124447], length 17: HTTP: HTTP/1.0 200 OK
11:03:57.713821 IP 10.244.2.136.49656 > 10.244.3.169.80: Flags [.], ack 18, win 507, options [nop,nop,TS val 4266124651 ecr 1453973521], length 0
11:03:57.733459 IP 10.244.3.169.80 > 10.244.2.136.49656: Flags [P.], seq 18:155, ack 77, win 502, options [nop,nop,TS val 1453973541 ecr 4266124651], length 137: HTTP
11:03:57.733720 IP 10.244.3.169.80 > 10.244.2.136.49656: Flags [FP.], seq 155:271, ack 77, win 502, options [nop,nop,TS val 1453973541 ecr 4266124651], length 116: HTTP
11:03:57.735862 IP 10.244.2.136.49656 > 10.244.3.169.80: Flags [.], ack 155, win 506, options [nop,nop,TS val 4266124671 ecr 1453973541], length 0
11:03:57.735883 IP 10.244.2.136.49656 > 10.244.3.169.80: Flags [F.], seq 77, ack 272, win 506, options [nop,nop,TS val 4266124672 ecr 1453973541], length 0
11:03:57.736063 IP 10.244.3.169.80 > 10.244.2.136.49656: Flags [.], ack 78, win 502, options [nop,nop,TS val 1453973543 ecr 4266124672], length 0
11:03:58.650891 IP 10.244.2.136.49662 > 10.244.3.169.80: Flags [S], seq 3494935965, win 64860, options [mss 1410,sackOK,TS val 4266125588 ecr 0,nop,wscale 7], length 0

  • The traffic is no longer tunnel-encapsulated in transit; it travels directly between the Pod IPs over the routes flannel programmed on the hosts.
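A negative check makes the point explicit: while the curl traffic above is flowing, the VXLAN port on node3 should stay silent, confirming that no tunnel encapsulation happens between these layer-2-adjacent nodes:

# With DirectRouting in effect, expect no output here during the curl
tcpdump -i eth4 -nn udp port 8472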
Example 3: Change the flannel backend to host-gw (note that host-gw only works across a single layer-2 network)
  • Because all the nodes here sit on the same layer-2 network, the result is in principle identical to vxlan with DirectRouting above, so it is not repeated; one extra prerequisite check is shown after the listing.
[root@k8s-master plugin]# vim kube-flannel.yml
...
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "host-gw"    # change the backend type to host-gw
      }
    }
...
[root@k8s-master plugin]# kubectl apply -f kube-flannel.yml
# Check the routing table
[root@k8s-master plugin]# route -n
Kernel IP routing table
Destination     Gateway          Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.54.2     0.0.0.0         UG    101    0        0 eth4
10.244.0.0      0.0.0.0          255.255.255.0   U     0      0        0 cni0
10.244.1.0      192.168.54.171   255.255.255.0   UG    0      0        0 eth4
10.244.2.0      192.168.54.172   255.255.255.0   UG    0      0        0 eth4
10.244.3.0      192.168.54.173   255.255.255.0   UG    0      0        0 eth4
172.17.0.0      0.0.0.0          255.255.0.0     U     0      0        0 docker0
192.168.4.0     0.0.0.0          255.255.255.0   U     102    0        0 eth0
192.168.54.0    0.0.0.0          255.255.255.0   U     101    0        0 eth4
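host-gw works by turning every node into a gateway (router) for the other nodes' Pod subnets, so kernel IP forwarding must be enabled on each node; kubeadm preflight normally ensures this, but it is worth confirming as a sanity check:

# Must report 1 on every node, otherwise forwarded Pod traffic is dropped
sysctl net.ipv4.ip_forward
# Enable persistently if it is not:
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.d/k8s.conf
sysctl --system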
