Building a CRI-O-based k8s cluster with kubeadm

Starting with Kubernetes 1.24, dockershim was removed, and a CRI runtime must be used instead.
You can still use Docker to build the container images that run on top, but the cluster itself has to be built on cri-dockerd or CRI-O.
Incidentally, when upgrading a 1.23 cluster that used containerd, I tried switching to CRI-O at the same time, but it didn't work. (Cause unknown.)

This time I did a fresh setup on Rocky Linux 9, installing Kubernetes v1.25.5 and CRI-O.

Rather than using minikube or the like, I set up my own home-grown (custom) Kubernetes environment on a server with kubeadm, following the official guide.
https://kubernetes.io/ja/docs/setup/independent/install-kubeadm/

[root@localhost tmp]# uname -a
Linux localhost.localdomain 5.14.0-162.6.1.el9_1.0.1.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Nov 28 18:44:09 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
[root@localhost tmp]# cat /etc/os-release 
NAME="Rocky Linux"
VERSION="9.1 (Blue Onyx)"
ID="rocky"
ID_LIKE="rhel centos fedora"
VERSION_ID="9.1"
PLATFORM_ID="platform:el9"
PRETTY_NAME="Rocky Linux 9.1 (Blue Onyx)"
ANSI_COLOR="0;32"
LOGO="fedora-logo-icon"
CPE_NAME="cpe:/o:rocky:rocky:9::baseos"
HOME_URL="https://rockylinux.org/"
BUG_REPORT_URL="https://bugs.rockylinux.org/"
ROCKY_SUPPORT_PRODUCT="Rocky-Linux-9"
ROCKY_SUPPORT_PRODUCT_VERSION="9.1"
REDHAT_SUPPORT_PRODUCT="Rocky Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="9.1"

In short

Stand up my own (single-node) k8s cluster to the point where kubectl apply can be used.

Preparing the OS

Follow the official procedure.

Turning off swap

Before starting, turn swap off, as the guide says. Swap must remain off for the kubelet to work properly.

[root@localhost tmp]# sudo swapoff -a
[root@localhost tmp]# free -h
               total        used        free      shared  buff/cache   available
Mem:           7.5Gi       1.4Gi       3.9Gi        10Mi       2.5Gi       6.1Gi
Swap:             0B          0B          0B

Just run sudo swapoff -a and confirm that swap has been turned off.
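
Note that sudo swapoff -a only disables swap until the next reboot. To keep it off permanently, the swap entry in /etc/fstab can be commented out as well; a minimal sketch (check the file afterwards):

# Comment out any swap lines so swap stays off across reboots
sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab
grep swap /etc/fstab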

Other prerequisites

Also, the MAC address and hostname must be unique, and you should confirm that product_uuid does not overlap with other servers. Allow the required ports through the firewall; for now, just for testing, I ran systemctl stop firewalld. (Never do this on an externally exposed server!)
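
For reference, here is a sketch of how the uniqueness checks and the firewall openings could be done instead of stopping firewalld (the port list follows the kubeadm documentation for a control-plane node; adjust to your setup):

# Values that must be unique per node
ip link show                              # MAC addresses
sudo cat /sys/class/dmi/id/product_uuid   # product_uuid

# Open the main control-plane ports instead of stopping firewalld
sudo firewall-cmd --permanent --add-port=6443/tcp --add-port=10250/tcp
sudo firewall-cmd --permanent --add-port=2379-2380/tcp --add-port=10257/tcp --add-port=10259/tcp
sudo firewall-cmd --reload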

Installing CRI-O

I proceeded with this page as a reference.

https://kubernetes.io/ja/docs/setup/production-environment/container-runtimes/#cri-o

Bridge network settings

Copy and paste exactly as the docs say. This enables forwarding of network packets across the bridge and via routing.
Kubernetes uses iptables to forward packets.

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system
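
To double-check that the modules are loaded and the sysctl values took effect, something like the following can be run:

# Verify the kernel modules are loaded
lsmod | grep -e overlay -e br_netfilter

# Each of these should print 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward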

Adding the repositories

Rocky has no dedicated repository, so use the CentOS_8_Stream one instead. (Building from source might be better.)

export OS=CentOS_8_Stream
export VERSION=1.25
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/CentOS_8_Stream/devel:kubic:libcontainers:stable.repo
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo

dnf install cri-o
dnf install containernetworking-plugins
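
It's worth confirming that the installed package really tracks the intended minor version (1.25 here); a quick check:

# The CRI-O version should match the Kubernetes minor version
rpm -q cri-o
crio --version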

Editing crio.conf

Add the following to /etc/crio/crio.conf (you can simply uncomment the existing lines).

[crio.runtime.runtimes.runc]
runtime_path = "" 
runtime_type = "oci" 
runtime_root = "/run/runc" 

Starting CRI-O

Start it with systemctl.

sudo systemctl daemon-reload
sudo systemctl enable crio
sudo systemctl start crio

[root@k8s-node01 crio]# systemctl status crio
● crio.service - Container Runtime Interface for OCI (CRI-O)
     Loaded: loaded (/usr/lib/systemd/system/crio.service; enabled; vendor preset: disabled)
     Active: active (running) since Thu 2022-12-15 03:55:24 EST; 5s ago
       Docs: https://github.com/cri-o/cri-o
   Main PID: 12123 (crio)
      Tasks: 28
     Memory: 38.2M
        CPU: 351ms
     CGroup: /system.slice/crio.service
             └─12123 /usr/bin/crio

Dec 15 03:55:24 k8s-node01.ceres.local crio[12123]: time="2022-12-15 03:55:24.006674834-05:00" level=info msg="RDT not available in the host system" 
Dec 15 03:55:24 k8s-node01.ceres.local crio[12123]: time="2022-12-15 03:55:24.011036842-05:00" level=info msg="Conmon does support the --sync option" 
Dec 15 03:55:24 k8s-node01.ceres.local crio[12123]: time="2022-12-15 03:55:24.011088556-05:00" level=info msg="Conmon does support the --log-global-size-max option" 
Dec 15 03:55:24 k8s-node01.ceres.local crio[12123]: time="2022-12-15 03:55:24.018631810-05:00" level=info msg="Found CNI network crio (type=bridge) at /etc/cni/net.d/100-crio-bridge>
Dec 15 03:55:24 k8s-node01.ceres.local crio[12123]: time="2022-12-15 03:55:24.025073153-05:00" level=info msg="Found CNI network 200-loopback.conf (type=loopback) at /etc/cni/net.d/>
Dec 15 03:55:24 k8s-node01.ceres.local crio[12123]: time="2022-12-15 03:55:24.025121761-05:00" level=info msg="Updated default CNI network name to crio" 
Dec 15 03:55:24 k8s-node01.ceres.local crio[12123]: time="2022-12-15 03:55:24.044838541-05:00" level=info msg="Serving metrics on :9537 via HTTP" 
Dec 15 03:55:24 k8s-node01.ceres.local crio[12123]: time="2022-12-15 03:55:24.045259409-05:00" level=error msg="Writing clean shutdown supported file: open /var/lib/crio/clean.shutd>
Dec 15 03:55:24 k8s-node01.ceres.local crio[12123]: time="2022-12-15 03:55:24.045361471-05:00" level=error msg="Failed to sync parent directory of clean shutdown file: open /var/lib>
Dec 15 03:55:24 k8s-node01.ceres.local systemd[1]: Started Container Runtime Interface for OCI (CRI-O).
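
Optionally, once crictl is available (it comes from the cri-tools package, which the kubeadm installation below pulls in as a dependency), you can confirm the runtime answers on its socket. A sketch, assuming the default CRI-O socket path:

# Ask the runtime for its version over the CRI socket
sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version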

Installing kubeadm, kubelet, and kubectl

Following "Installing kubeadm, kubelet and kubectl" in the official docs, add the yum repository.

Again, just follow the official steps as-is.

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

Set SELinux to permissive and install the packages. The official docs use yum, but since this is a Fedora-family distro, install with dnf.

# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# dnf install -y kubelet-1.25.5-0 kubeadm-1.25.5-0 kubectl-1.25.5-0 --disableexcludes=kubernetes
systemctl enable --now kubelet

After installing the packages, reboot once so that the SELinux change (permissive) takes effect.
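
If you'd rather not reboot right away, SELinux can also be switched to permissive immediately (the sed above still keeps it permissive across reboots):

# Put SELinux into permissive mode without a reboot
sudo setenforce 0
getenforce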

Initial setup with kubeadm

Create a single control-plane cluster with kubeadm.

dnf update and similar preparation has already been done, so I skip that part.

Run kubeadm init; options can be added to the command. Since the Calico container network plugin will be used, add --pod-network-cidr=10.244.0.0/16. (Alternatively, you can change the CIDR in the Calico manifest described later instead.)

[root@localhost ~]# kubeadm init --pod-network-cidr=10.244.0.0/16
I0119 02:55:04.007666   12858 version.go:256] remote version is much newer: v1.26.1; falling back to: stable-1.25
[init] Using Kubernetes version: v1.25.6
[preflight] Running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
        [WARNING SystemVerification]: missing optional cgroups: blkio
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost.localdomain] and IPs [10.96.0.1 172.xx.12.62]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost localhost.localdomain] and IPs [172.xx.12.62 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost localhost.localdomain] and IPs [172.xx.12.62 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.501654 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 815j1e.wy5xkrhs0fkwkkcx
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.xx.12.62:6443 --token 815j1e.wy5xkrhs0fkwkkcx \
        --discovery-token-ca-cert-hash sha256:88abcd03f98035ef780d5a2455d89c6a3c8fc860bf2baf285396a78e349499f2 

As shown above, put the config file in the home directory.

# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config
# cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: (long)
    server: https://172.xx.12.62:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: (long)
    client-key-data: (long)

With this, you should now be able to inspect the cluster with kubectl.

[root@localhost lib]# kubectl get all -A
NAMESPACE     NAME                                                READY   STATUS    RESTARTS   AGE
kube-system   pod/coredns-565d847f94-b9p5t                        0/1     Pending   0          38s
kube-system   pod/coredns-565d847f94-tm6lw                        0/1     Pending   0          38s
kube-system   pod/etcd-localhost.localdomain                      1/1     Running   2          52s
kube-system   pod/kube-apiserver-localhost.localdomain            1/1     Running   2          54s
kube-system   pod/kube-controller-manager-localhost.localdomain   1/1     Running   2          53s
kube-system   pod/kube-proxy-64t26                                1/1     Running   0          38s
kube-system   pod/kube-scheduler-localhost.localdomain            1/1     Running   2          52s

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  54s
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   53s

NAMESPACE     NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/kube-proxy   1         1         1       1            1           kubernetes.io/os=linux   53s

NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns   0/2     2            0           53s

NAMESPACE     NAME                                 DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-565d847f94   2         2         0       39s

At this stage the coredns pods cannot start; they stay Pending until a CNI plugin is installed.
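
If you want to see why they are stuck, describing one of the coredns pods shows the scheduling events; a sketch using the coredns label:

# The Events section explains why the pods cannot be scheduled yet
kubectl -n kube-system describe pod -l k8s-app=kube-dns | tail -n 20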

Setting up a CNI

To wire up the network between pods, install Calico as a plugin. CNI is short for Container Network Interface; several such plugins exist, and Calico is the one used here.

If you follow these steps, it just works.
https://projectcalico.docs.tigera.io/getting-started/kubernetes/self-managed-onprem/onpremises
There are two ways to load the manifests: apply them directly, or have an operator pod do it; this time I used the former.

* 2023/04/11: The method using tigera-operator.yaml also works without problems; in that case, set ipPools in custom-resources.yaml. (Calico v3.25.1, Kubernetes v1.26.3.)
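
For reference, the operator-based route mentioned in that note looks roughly like this (a sketch based on the Calico docs, using the versions noted above):

# Install the Tigera operator, then supply a custom resource with the desired ipPools
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/tigera-operator.yaml
curl -O https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/custom-resources.yaml
# Edit the cidr under spec.calicoNetwork.ipPools in custom-resources.yaml, then:
kubectl create -f custom-resources.yaml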

There is also a combination of Calico and Flannel called "canal", but since Calico already has built-in VXLAN support, there is no need for it unless you have a specific reason.

calico.yaml

Download calico.yaml and edit the CIDR part.
I remember older examples used 10.244.0.0/16, but it is now 192.168.0.0/16, so download the manifest and edit it.

# curl https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml -O

Uncomment CALICO_IPV4POOL_CIDR and change it from 192.168.0.0/16 to 10.244.0.0/16.

            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            # no effect. This should fall within `--cluster-cidr`.
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"

Change it to look like this.

            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            # no effect. This should fall within `--cluster-cidr`.
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"
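
If you prefer to make the edit non-interactively, a sed one-liner along these lines can do it (a sketch that assumes the default commented-out block shown above; diff the result before applying):

# Uncomment CALICO_IPV4POOL_CIDR and switch the pool to 10.244.0.0/16
sed -i.bak \
  -e 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' \
  -e 's|#   value: "192.168.0.0/16"|  value: "10.244.0.0/16"|' calico.yaml
diff calico.yaml.bak calico.yaml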

Apply the edited manifest.

[root@localhost ~]# kubectl apply -f ./calico.yaml
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created

The pods are being created.

[root@localhost ~]# kubectl get all -A
NAMESPACE     NAME                                                READY   STATUS              RESTARTS   AGE
kube-system   pod/calico-kube-controllers-74677b4c5f-7mm8p        0/1     ContainerCreating   0          17s
kube-system   pod/calico-node-bpznj                               0/1     Init:2/3            0          17s
kube-system   pod/coredns-565d847f94-b9p5t                        0/1     ContainerCreating   0          8m51s
kube-system   pod/coredns-565d847f94-tm6lw                        0/1     ContainerCreating   0          8m51s
kube-system   pod/etcd-localhost.localdomain                      1/1     Running             2          9m5s
kube-system   pod/kube-apiserver-localhost.localdomain            1/1     Running             2          9m7s
kube-system   pod/kube-controller-manager-localhost.localdomain   1/1     Running             2          9m6s
kube-system   pod/kube-proxy-64t26                                1/1     Running             0          8m51s
kube-system   pod/kube-scheduler-localhost.localdomain            1/1     Running             2          9m5s

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  9m7s
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   9m6s

NAMESPACE     NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/calico-node   1         1         0       1            0           kubernetes.io/os=linux   17s
kube-system   daemonset.apps/kube-proxy    1         1         1       1            1           kubernetes.io/os=linux   9m6s

NAMESPACE     NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/calico-kube-controllers   0/1     1            0           17s
kube-system   deployment.apps/coredns                   0/2     2            0           9m6s

NAMESPACE     NAME                                                 DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/calico-kube-controllers-74677b4c5f   1         1         0       17s
kube-system   replicaset.apps/coredns-565d847f94                   2         2         0       8m52s

After a short while, all pods reach Running.

[root@localhost ~]# kubectl get all -A
NAMESPACE     NAME                                                READY   STATUS    RESTARTS   AGE
kube-system   pod/calico-kube-controllers-74677b4c5f-7mm8p        1/1     Running   0          45s
kube-system   pod/calico-node-bpznj                               1/1     Running   0          45s
kube-system   pod/coredns-565d847f94-b9p5t                        1/1     Running   0          9m19s
kube-system   pod/coredns-565d847f94-tm6lw                        1/1     Running   0          9m19s
kube-system   pod/etcd-localhost.localdomain                      1/1     Running   2          9m33s
kube-system   pod/kube-apiserver-localhost.localdomain            1/1     Running   2          9m35s
kube-system   pod/kube-controller-manager-localhost.localdomain   1/1     Running   2          9m34s
kube-system   pod/kube-proxy-64t26                                1/1     Running   0          9m19s
kube-system   pod/kube-scheduler-localhost.localdomain            1/1     Running   2          9m33s

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  9m35s
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   9m34s

NAMESPACE     NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/calico-node   1         1         1       1            1           kubernetes.io/os=linux   45s
kube-system   daemonset.apps/kube-proxy    1         1         1       1            1           kubernetes.io/os=linux   9m34s

NAMESPACE     NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/calico-kube-controllers   1/1     1            1           45s
kube-system   deployment.apps/coredns                   2/2     2            2           9m34s

NAMESPACE     NAME                                                 DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/calico-kube-controllers-74677b4c5f   1         1         1       45s
kube-system   replicaset.apps/coredns-565d847f94                   2         2         2       9m20s

Done.

Running ip a shows that new NICs have been added: tunl0 and the cali* interfaces are the ones Calico created.

[root@localhost etc]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:ae:20:df brd ff:ff:ff:ff:ff:ff
    inet 172.xx.12.62/24 brd 172.xx.12.255 scope global noprefixroute enp1s0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:feae:20df/64 scope link 
       valid_lft forever preferred_lft forever
3: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 10.244.102.128/32 scope global tunl0
       valid_lft forever preferred_lft forever
6: cali9cc271e60ca@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1480 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns 38b565cd-5099-43f6-a21e-82ccd68eda6c
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
7: cali015e1fca632@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1480 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns 86d781d2-7ad6-4f84-9a1b-febd17732b48
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
8: cali2db521aeade@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1480 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns c1ac37af-9a8d-45a0-93ae-4f6a0cdda07f
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever

To run pods on the control plane, remove the node taint so that pods can be scheduled there. (By default, pods are not scheduled on the control plane.)

[root@localhost ~]# kubectl taint nodes --all node-role.kubernetes.io/control-plane-
node/localhost.localdomain untainted
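
You can confirm the taint is gone (and that the node is Ready) with:

# The Taints field should now show <none>
kubectl describe node localhost.localdomain | grep -i taints
kubectl get nodes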

Checking that it works

Let's try deploying nginx.

cat <<EOF | kubectl apply -f -
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
        - name: nginx-test
          image: nginx:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
              protocol: TCP

---
kind: Service
apiVersion: v1
metadata:
  name: nginx-test-svc
spec:
  ports:
  - name: "http-port"
    protocol: TCP
    port: 8080
    targetPort: 80
  selector:
    app: nginx-test

EOF
deployment.apps/nginx-test created
service/nginx-test-svc created

It's deployed.

[root@localhost ~]# kubectl get all 
NAME                              READY   STATUS    RESTARTS   AGE
pod/nginx-test-54cdc496f7-zbg6p   1/1     Running   0          40s

NAME                     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/kubernetes       ClusterIP   10.96.0.1      <none>        443/TCP    17m
service/nginx-test-svc   ClusterIP   10.96.28.133   <none>        8080/TCP   5m45s

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-test   1/1     1            1           5m45s

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-test-54cdc496f7   1         1         1       40s
replicaset.apps/nginx-test-64d5bd95d7   0         0         0       5m45s

Checking the response

[murata@localhost ~]$ curl 10.96.28.133:8080 -I
HTTP/1.1 200 OK
Server: nginx/1.23.3
Date: Thu, 19 Jan 2023 09:41:05 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 13 Dec 2022 15:53:53 GMT
Connection: keep-alive
ETag: "6398a011-267"
Accept-Ranges: bytes

A proper response comes back.

Conclusion

Since no Ingress (load balancer) has been set up yet, the cluster can't be reached from outside, but this is roughly what building a cluster with kubeadm looks like. When adding a node (server) to the cluster, you just run the kubeadm join command on it and it connects automatically.
