Diagramming VXLAN, ClusterIP, NodePort, and LoadBalancer While Deploying Kubernetes

Hello, this is Ohtsuka from the Class Act Infrastructure Business Division.

This time I tried to render the networking side of k8s as diagrams. It is not complete, but I hope it deepens my understanding of the k8s network environment. Lately I have come to feel that once you grasp roughly these four network components, you can form a basic picture of k8s networking. I suspect there is something like a Linux bridge underneath (or in front of) the VXLAN layer, but I was unable to confirm that, sorry. Note: I am still in the middle of learning and trying to visualize this, so the accuracy may not be high. Note: the diagrams include ClusterIP, NodePort, and LoadBalancer, but the Pod network that sits in front of them had not been drawn in, so I wanted to work that part into the diagrams as well.

Environment

The system is a k8s cluster built from four machines: one master node and three worker nodes.
All of these nodes run Ubuntu 22.04, and MicroK8s v1.26.4 is installed on them.

root@k8s-master:~/yaml# kubectl get node -o wide
NAME           STATUS   ROLES    AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
k8s-worker01   Ready    <none>   83m   v1.26.4   192.168.2.31   <none>        Ubuntu 22.04.2 LTS   5.15.0-71-generic   containerd://1.6.15
k8s-master     Ready    <none>   91m   v1.26.4   192.168.2.30   <none>        Ubuntu 22.04.2 LTS   5.15.0-71-generic   containerd://1.6.15
k8s-worker02   Ready    <none>   73m   v1.26.4   192.168.2.32   <none>        Ubuntu 22.04.2 LTS   5.15.0-71-generic   containerd://1.6.15
k8s-worker03   Ready    <none>   83m   v1.26.4   192.168.2.33   <none>        Ubuntu 22.04.2 LTS   5.15.0-71-generic   containerd://1.6.15

Terminology

ClusterIP

Exposes the Service on a cluster-internal IP. A Service of this type can only be reached from inside the cluster. This is the default Service type.

NodePort

Exposes the Service on a static port (the NodePort) on each node's IP. A NodePort Service automatically creates the ClusterIP Service it forwards to. A NodePort Service can be reached at "NodeIP:NodePort".

LoadBalancer

Exposes the Service externally using a cloud provider's load balancer. The load balancer sits outside the cluster, and the NodePort and ClusterIP Services it forwards to are created automatically.

 

Calico

Calico is the default CNI of MicroK8s, and a VXLAN overlay network is used to build the Pod network.
Calico is an open-source networking and network security solution for containers, virtual machines, and native host-based workloads. Calico supports a broad range of platforms including Kubernetes, OpenShift, Mirantis Kubernetes Engine (MKE), OpenStack, and bare-metal services.

In k8s (MicroK8s), it appears to provide both the IPAM and the VXLAN environment.
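To look a bit deeper, the details of the interface Calico creates can be dumped with ip -d link. This is just a sketch for reference; on a stock Calico VXLAN setup the interface is named vxlan.calico and by default uses VNI 4096 on UDP port 4789, though the values may differ per environment.

# Show the VXLAN parameters (VNI, UDP port, underlay device) of Calico's interface
ip -d link show vxlan.calico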

 

DaemonSet

A DaemonSet ensures that all (or some) nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them; as nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet cleans up the Pods it created.

 

Deploying DaemonSets and checking the Pod-side network environment

This time we prepared three DaemonSets: one for nginx, one for apache2, and one for redis. The YAML files are shown below. Please don't mind the toleration settings; they just allow the Pods to be scheduled onto the tainted master node as well.

apiVersion: apps/v1 
kind: DaemonSet 
metadata: 
  name: nginx-ds 
spec: 
  selector: 
    matchLabels: 
      name: nginx 
  template: 
    metadata: 
      labels: 
        name: nginx 
    spec: 
      tolerations: 
      - key: "env" 
        operator: "Equal" 
        value: "master" 
        effect: NoSchedule 
      containers: 
      - name: nginx-container 
        image: nginx:latest
---
apiVersion: apps/v1
kind: DaemonSet 
metadata: 
  name: apache-ds 
spec: 
  selector: 
    matchLabels: 
      name: apache 
  template: 
    metadata: 
      labels: 
        name: apache 
    spec: 
      tolerations: 
      - key: "env" 
        operator: "Equal" 
        value: "master" 
        effect: NoSchedule 
      containers: 
      - name: apache-container 
        image: shotaohtsuka/my-httpd-image
---
apiVersion: apps/v1
kind: DaemonSet 
metadata: 
  name: redis-ds 
spec: 
  selector: 
    matchLabels: 
      name: redis 
  template: 
    metadata: 
      labels: 
        name: redis 
    spec: 
      tolerations: 
      - key: "env" 
        operator: "Equal" 
        value: "master" 
        effect: NoSchedule 
      containers: 
      - name: redis-container 
        image: redis

Now let's deploy these.
You can see that the Pod IP addresses on each node are roughly consecutive. Where they are not consecutive, it is probably because a failed deployment had to be retried two or three times.
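(The create commands themselves were not captured. Assuming the three manifests above are saved as nginx-ds.yaml, apache-ds.yaml, and redis-ds.yaml — hypothetical file names, not from the original run — deploying them would look like this:)

# Hypothetical file names; adjust to wherever the manifests were saved
kubectl create -f nginx-ds.yaml
kubectl create -f apache-ds.yaml
kubectl create -f redis-ds.yaml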

root@k8s-master:~/yaml# kubectl get pod -o wide | grep -i k8s-master 
nginx-ds-tkhgg    1/1     Running   0          23m   10.1.235.211   k8s-master     <none>           <none> 
apache-ds-t6th6   1/1     Running   0          23m   10.1.235.212   k8s-master     <none>           <none> 
redis-ds-zw869    1/1     Running   0          10m   10.1.235.215   k8s-master     <none>           <none>

root@k8s-master:~/yaml# kubectl get pod -o wide | grep -i k8s-worker01 
nginx-ds-6nw9w    1/1     Running   0          23m   10.1.79.75     k8s-worker01   <none>           <none> 
apache-ds-xv25j   1/1     Running   0          23m   10.1.79.76     k8s-worker01   <none>           <none> 
redis-ds-bv2t9    1/1     Running   0          11m   10.1.79.79     k8s-worker01   <none>           <none>

root@k8s-master:~/yaml# kubectl get pod -o wide | grep -i k8s-worker02 
nginx-ds-k4ndq    1/1     Running   0          24m   10.1.69.201    k8s-worker02   <none>           <none> 
apache-ds-bmtwp   1/1     Running   0          24m   10.1.69.202    k8s-worker02   <none>           <none> 
redis-ds-d2ps6    1/1     Running   0          11m   10.1.69.205    k8s-worker02   <none>           <none>

root@k8s-master:~/yaml# kubectl get pod -o wide | grep -i k8s-worker03 
nginx-ds-gvx8m    1/1     Running   0          24m   10.1.39.199    k8s-worker03   <none>           <none> 
apache-ds-kzpns   1/1     Running   0          24m   10.1.39.200    k8s-worker03   <none>           <none> 
redis-ds-pfrpl    1/1     Running   0          11m   10.1.39.203    k8s-worker03   <none>           <none>

The IP address range allocated to k8s Pods can apparently be found in the following part of "/var/snap/microk8s/current/args/cni-network/cni.yaml". Each node appears to be evenly allocated a range out of it…

# The default IPv4 pool to create on startup if none exists. Pod IPs will be 
# chosen from this range. Changing this value after installation will have 
# no effect. This should fall within `--cluster-cidr`. 
- name: CALICO_IPV4POOL_CIDR 
  value: "10.1.0.0/16"

Next, let's check the VXLAN interface on each node.

root@k8s-master:~# ip a
11: vxlan.calico: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
    link/ether 66:d5:cf:d3:6b:7e brd ff:ff:ff:ff:ff:ff 
    inet 10.1.235.192/32 scope global vxlan.calico 
       valid_lft forever preferred_lft forever 
    inet6 fe80::64d5:cfff:fed3:6b7e/64 scope link 
       valid_lft forever preferred_lft forever
root@k8s-worker01:~# ip a
5: vxlan.calico: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
    link/ether 66:c1:17:77:92:d1 brd ff:ff:ff:ff:ff:ff 
    inet 10.1.79.64/32 scope global vxlan.calico 
       valid_lft forever preferred_lft forever 
    inet6 fe80::64c1:17ff:fe77:92d1/64 scope link 
       valid_lft forever preferred_lft forever
root@k8s-worker02:~# ip a
25: vxlan.calico: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
    link/ether 66:ff:2b:bc:03:a3 brd ff:ff:ff:ff:ff:ff 
    inet 10.1.69.192/32 scope global vxlan.calico 
       valid_lft forever preferred_lft forever 
    inet6 fe80::64ff:2bff:febc:3a3/64 scope link 
       valid_lft forever preferred_lft forever
root@k8s-worker03:~# ip a
5: vxlan.calico: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
    link/ether 66:fc:06:ba:9d:0e brd ff:ff:ff:ff:ff:ff 
    inet 10.1.39.192/32 scope global vxlan.calico 
       valid_lft forever preferred_lft forever 
    inet6 fe80::64fc:6ff:feba:9d0e/64 scope link 
       valid_lft forever preferred_lft forever
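Each vxlan.calico interface carries the first address of its node's block (10.1.235.192/32, 10.1.79.64/32, and so on), and traffic destined for another node's block is routed via that node's vxlan.calico address. To trace the overlay wiring, one can check the routing table and the VXLAN forwarding database; a sketch:

# Routes for the other nodes' Pod blocks, pointing at their vxlan.calico addresses
ip route | grep vxlan.calico
# MAC-to-VTEP mappings used when encapsulating traffic toward the other nodes
bridge fdb show dev vxlan.calico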
[Diagram aaaq2023050601: Pod network and per-node vxlan.calico interfaces]

Deploying ClusterIP

The YAML files prepared in advance are shown below.
The port field is the port the ClusterIP listens on, and the targetPort field is the port the Pod is listening on.

apiVersion: v1 
kind: Service 
metadata: 
  name: clusterip-nginx-ds 
spec: 
  selector: 
    name: nginx 
  type: ClusterIP 
  ports: 
  - name: nginx 
    port: 80 
    protocol: TCP 
    targetPort: 80
---
apiVersion: v1
kind: Service 
metadata: 
  name: clusterip-apache2-ds 
spec: 
  selector: 
    name: apache 
  type: ClusterIP 
  ports: 
  - name: apache 
    port: 90 
    protocol: TCP 
    targetPort: 90
---
apiVersion: v1
kind: Service 
metadata: 
  name: clusterip-redis-ds 
spec: 
  selector: 
    name: redis 
  type: ClusterIP 
  ports: 
  - name: redis 
    port: 6379 
    protocol: TCP 
    targetPort: 6379

Let's deploy them.
The IP address under CLUSTER-IP in kubectl get svc -o wide is the IP address actually assigned to each ClusterIP Service.

root@k8s-master:~/yaml# kubectl create -f clusterip-nginx-ds.yaml
service/clusterip-nginx-ds created 
root@k8s-master:~/yaml# kubectl create -f clusterip-apache2-ds.yaml 
service/clusterip-apache2-ds created
root@k8s-master:~/yaml# kubectl create -f clusterip-redis-ds.yaml 
service/clusterip-redis-ds created

root@k8s-master:~/yaml# kubectl get svc -o wide 
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE     SELECTOR 
kubernetes             ClusterIP   10.152.183.1     <none>        443/TCP    3d2h    <none> 
clusterip-nginx-ds     ClusterIP   10.152.183.123   <none>        80/TCP     84m     name=nginx 
clusterip-apache2-ds   ClusterIP   10.152.183.109   <none>        90/TCP     4m51s   name=apache 
clusterip-redis-ds     ClusterIP   10.152.183.179   <none>        6379/TCP   8s      name=redis 
[Diagram aaaq2023050602: ClusterIP Services]

For example, curling the nginx ClusterIP address with its port attached connects to the Pods while load-balancing across them, as shown in the curl output below. ClusterIP, however, is strictly internal: the same access is not possible from outside the k8s cluster.
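Incidentally, which Pod IPs the Service spreads those requests across can be confirmed from its Endpoints object; the list should match the nginx Pod IPs recorded earlier.

# Endpoints (the backing Pod IPs) behind the nginx ClusterIP Service
kubectl get endpoints clusterip-nginx-ds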

root@k8s-master:~/yaml# curl 10.152.183.123:80 
<!DOCTYPE html> 
<html> 
<head> 
<title>Welcome to nginx!</title> 
<style> 
html { color-scheme: light dark; } 
body { width: 35em; margin: 0 auto; 
font-family: Tahoma, Verdana, Arial, sans-serif; } 
</style> 
</head> 
<body> 
<h1>Welcome to nginx!</h1> 
<p>If you see this page, the nginx web server is successfully installed and 
working. Further configuration is required.</p> 
<p>For online documentation and support please refer to 
<a href="http://nginx.org/">nginx.org</a>.<br/> 
Commercial support is available at 
<a href="http://nginx.com/">nginx.com</a>.</p> 
<p><em>Thank you for using nginx.</em></p> 
</body> 
</html>
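From inside the cluster, the Service can also be reached by DNS name instead of IP. A sketch, assuming the MicroK8s dns addon is enabled (the test Pod name below is arbitrary):

# Fetch the Service by its cluster DNS name from a throwaway busybox Pod
kubectl run dns-test --rm -it --image=busybox --restart=Never -- \
  wget -qO- http://clusterip-nginx-ds.default.svc.cluster.local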

Deploying NodePort

The YAML files prepared are as follows.
port is the port the ClusterIP listens on, targetPort the port the Pod listens on, and nodePort the port each node listens on. Deploying a NodePort Service automatically deploys the related internal ClusterIP Service as well.

apiVersion: v1 
kind: Service 
metadata: 
  name: nodeport-nginx-ds 
spec: 
  type: NodePort 
  selector: 
    name: nginx 
  ports: 
    - port: 80 
      targetPort: 80 
      nodePort: 30080
---
apiVersion: v1
kind: Service 
metadata: 
  name: nodeport-apache2-ds 
spec: 
  type: NodePort 
  selector: 
    name: apache 
  ports: 
    - port: 90 
      targetPort: 90 
      nodePort: 30090
---
apiVersion: v1
kind: Service 
metadata: 
  name: nodeport-redis-ds 
spec: 
  type: NodePort 
  selector: 
    name: redis 
  ports: 
    - port: 6379 
      targetPort: 6379 
      nodePort: 30379

Let's deploy these.
The CLUSTER-IP in kubectl get svc -o wide is the IP address actually assigned to each Service. The CLUSTER-IP assigned to a NodePort Service differs from the one assigned to the earlier ClusterIP Service, which suggests that deploying a NodePort also deploys an associated ClusterIP of its own.

root@k8s-master:~/yaml# kubectl create -f nodeport-nginx-ds.yaml 
service/nodeport-nginx-ds created 
root@k8s-master:~/yaml# kubectl create -f nodeport-apache2-ds.yaml 
service/nodeport-apache2-ds created 
root@k8s-master:~/yaml# kubectl create -f nodeport-redis-ds.yaml 
service/nodeport-redis-ds created 

root@k8s-master:~/yaml# kubectl get svc -o wide 
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE     SELECTOR 
kubernetes             ClusterIP   10.152.183.1     <none>        443/TCP          3d3h    <none> 
clusterip-nginx-ds     ClusterIP   10.152.183.123   <none>        80/TCP           166m    name=nginx 
clusterip-apache2-ds   ClusterIP   10.152.183.109   <none>        90/TCP           86m     name=apache 
clusterip-redis-ds     ClusterIP   10.152.183.179   <none>        6379/TCP         81m     name=redis 
nodeport-nginx-ds      NodePort    10.152.183.23    <none>        80:30080/TCP     17m     name=nginx 
nodeport-apache2-ds    NodePort    10.152.183.61    <none>        90:30090/TCP     3m45s   name=apache 
nodeport-redis-ds      NodePort    10.152.183.165   <none>        6379:30379/TCP   8s      name=redis 
[Diagram aaaq2023050603: NodePort Services]
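Unlike ClusterIP, a NodePort is reachable from outside the cluster on any node's IP. This was not captured in the original run, but a quick check from an external machine would look like the following, using the node addresses from the node list at the top:

# Any node's IP works; kube-proxy forwards to a backing Pod either way
curl 192.168.2.31:30080   # nginx via k8s-worker01
curl 192.168.2.30:30080   # nginx via k8s-master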

Deploying LoadBalancer

Before deploying the LoadBalancer (LB), the memory of the test environment seemed to be nearing its limit, so I will first delete the Services deployed above.

root@k8s-master:~/yaml# kubectl delete svc clusterip-nginx-ds clusterip-apache2-ds clusterip-redis-ds nodeport-nginx-ds nodeport-apache2-ds nodeport-redis-ds 
service "clusterip-nginx-ds" deleted 
service "clusterip-apache2-ds" deleted 
service "clusterip-redis-ds" deleted 
service "nodeport-nginx-ds" deleted 
service "nodeport-apache2-ds" deleted 
service "nodeport-redis-ds" deleted 

root@k8s-master:~/yaml# kubectl get svc 
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE 
kubernetes   ClusterIP   10.152.183.1   <none>        443/TCP   3d4h

The YAML files prepared for deploying the LB are shown below.
port sets the port the LoadBalancer and the ClusterIP listen on, targetPort the port the Pod listens on, and nodePort the port each node listens on. Deploying a LoadBalancer Service apparently also deploys the related NodePort and ClusterIP.

apiVersion: v1 
kind: Service 
metadata: 
  name: lb-nginx-ds 
spec: 
  type: LoadBalancer 
  selector: 
    name: nginx 
  ports: 
    - name: nginx 
      port: 8080 
      targetPort: 80 
      nodePort: 30800
---
apiVersion: v1
kind: Service
metadata:
  name: lb-apache-ds
spec:
  type: LoadBalancer
  selector:
    name: apache
  ports:
    - name: apache
      port: 9090
      targetPort: 90
      nodePort: 30900
---
apiVersion: v1
kind: Service
metadata:
  name: lb-redis-ds
spec:
  type: LoadBalancer
  selector:
    name: redis
  ports:
    - name: redis
      port: 6379
      targetPort: 6379
      nodePort: 30379

Now let's deploy these.
The CLUSTER-IP in kubectl get svc -o wide is the IP address actually assigned to the ClusterIP Service, and the NodePort is exposed on each node's own IP address. EXTERNAL-IP should be the IP address assigned to the LoadBalancer, but for now it shows pending. That is because MetalLB is disabled, so we will enable MetalLB next.

root@k8s-master:~/yaml# kubectl create -f lb-nginx-ds.yaml 
service/lb-nginx-ds created
root@k8s-master:~/yaml# kubectl create -f lb-apache2-ds.yaml
service/lb-apache-ds created
root@k8s-master:~/yaml# kubectl create -f lb-redis-ds.yaml
service/lb-redis-ds created

root@k8s-master:~/yaml# kubectl get svc -o wide
NAME           TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE    SELECTOR
kubernetes     ClusterIP      10.152.183.1     <none>        443/TCP          41m    <none>
lb-nginx-ds    LoadBalancer   10.152.183.130   <pending>     8080:30800/TCP   6m3s   name=nginx
lb-apache-ds   LoadBalancer   10.152.183.172   <pending>     9090:30900/TCP   88s    name=apache
lb-redis-ds    LoadBalancer   10.152.183.250   <pending>     6379:30379/TCP   7s     name=redis

Enable MetalLB.
This time MetalLB is given the IP address range 192.168.2.36-192.168.2.40.

root@k8s-master:~/yaml# microk8s enable metallb
Infer repository core for addon metallb
Enabling MetalLB
Enter each IP address range delimited by comma (e.g. '10.64.140.43-10.64.140.49,192.168.0.105-192.168.0.111'): 192.168.2.36-192.168.2.40
Applying Metallb manifest
customresourcedefinition.apiextensions.k8s.io/addresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bfdprofiles.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgpadvertisements.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgppeers.metallb.io created
customresourcedefinition.apiextensions.k8s.io/communities.metallb.io created
customresourcedefinition.apiextensions.k8s.io/ipaddresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/l2advertisements.metallb.io created
namespace/metallb-system created
serviceaccount/controller created
serviceaccount/speaker created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
role.rbac.authorization.k8s.io/controller created
role.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/controller created
secret/webhook-server-cert created
service/webhook-service created
rolebinding.rbac.authorization.k8s.io/pod-lister created
daemonset.apps/speaker created
deployment.apps/controller created
validatingwebhookconfiguration.admissionregistration.k8s.io/validating-webhook-configuration created
Waiting for Metallb controller to be ready.
error: timed out waiting for the condition on deployments/controller
MetalLB controller is still not ready
deployment.apps/controller condition met
ipaddresspool.metallb.io/default-addresspool created
l2advertisement.metallb.io/default-advertise-all-pools created
MetalLB is enabled
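The address range entered at the prompt is stored as MetalLB custom resources (the ipaddresspool and l2advertisement created at the end of the output above), so it can be inspected after the fact:

# Default pool and L2 advertisement created by the MicroK8s addon
kubectl get ipaddresspools.metallb.io -n metallb-system
kubectl get l2advertisements.metallb.io -n metallb-system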

After enabling MetalLB, running kubectl get svc -o wide again shows that the entries that were pending have changed and now have IP addresses assigned.

root@k8s-master:~/yaml# kubectl get svc -o wide
NAME           TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)          AGE     SELECTOR
kubernetes     ClusterIP      10.152.183.1     <none>         443/TCP          44m     <none>
lb-apache-ds   LoadBalancer   10.152.183.172   192.168.2.36   9090:30900/TCP   3m58s   name=apache
lb-redis-ds    LoadBalancer   10.152.183.250   192.168.2.37   6379:30379/TCP   2m37s   name=redis
lb-nginx-ds    LoadBalancer   10.152.183.130   192.168.2.38   8080:30800/TCP   8m33s   name=nginx
[Diagram aaaq2023050604: LoadBalancer Services]
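As a final check (not captured in the original run), the EXTERNAL-IPs should now answer from outside the cluster, with MetalLB announcing them over L2. Whether apache actually responds depends on the container really listening on port 90, as its targetPort suggests:

curl 192.168.2.38:8080   # nginx via its MetalLB EXTERNAL-IP
curl 192.168.2.36:9090   # apache via its MetalLB EXTERNAL-IP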