I Tried Automating the Setup of a Kubernetes Environment

Introduction

I'm a rookie SE (systems engineer) who has lately been struggling with building Kubernetes environments. Container technology is popular these days, and Kubernetes has become something of a standard for container orchestration. But even if you want to try Kubernetes out, it's hard to test it with nothing but a single laptop. So in this article I'll show how to use Vagrant and Ansible to quickly stand up a three-VM Kubernetes environment on VirtualBox: one Master Node and two Worker Nodes.
The code from this article is also available on GitHub.

Environment

macOS Catalina 10.15.2
Vagrant 2.2.6
VirtualBox 6.0.12

Directory Structure

The files created in this article are laid out as follows (sync is a directory):

.
├── Vagrantfile
├── ansible-playbook
│   ├── hosts
│   ├── k8s_master.yml
│   ├── k8s_workers.yml
│   └── kubernetes.yml
├── sync
└── ansible.cfg

Installing the Required Plugins

To enable directory sharing between Vagrant and the virtual machines, the required Vagrant plugins must be installed.

$ vagrant plugin install vagrant-vbguest
# Only if you need proxy settings
$ vagrant plugin install vagrant-proxyconf
$ vagrant reload 
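
If you want to confirm the plugins went in, vagrant plugin list prints everything installed (the versions shown here are just illustrative):

$ vagrant plugin list
vagrant-proxyconf (2.0.10, global)
vagrant-vbguest (0.21.0, global)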

Creating the Vagrantfile

First, move into your working directory and generate a Vagrantfile template.

$ vagrant init

This creates a Vagrantfile. Start by emptying out its contents in your editor, and before writing any configuration reduce it to the following skeleton.

# coding: utf-8
# -*- mode: ruby -*-
Vagrant.configure(2) do |config|

end

Configuring the Master Node

Let's write the Master Node configuration first. Here is the finished form.

# coding: utf-8
# -*- mode: ruby -*-
Vagrant.configure(2) do |config|
  # Master: start the virtual machine
  config.vm.define 'master' do |machine|
    machine.vm.box = "centos/7"
    machine.vm.hostname = 'master'
    machine.vm.network :private_network,ip: "172.16.20.11"
    machine.vm.provider "virtualbox" do |vbox|
      vbox.gui = false
      vbox.cpus = 2
      vbox.memory = 1024
    end
    machine.vm.synced_folder "./sync", "/home/vagrant/sync", owner: "vagrant",
      group: "vagrant", mount_options: ["dmode=777", "fmode=777"]

    # Master: install Docker & k8s
    machine.vm.provision "ansible_local" do |ansible|
      ansible.playbook = "ansible-playbook/kubernetes.yml"
      ansible.version = "latest"
      ansible.verbose = false
      ansible.limit = "master"
      ansible.inventory_path = "ansible-playbook/hosts"
    end

    # Master: setup
    machine.vm.provision "ansible_local" do |ansible|
      ansible.playbook = "ansible-playbook/k8s_master.yml"
      ansible.version = "latest"
      ansible.verbose = false
      ansible.install = false
      ansible.limit = "master"
      ansible.inventory_path = "ansible-playbook/hosts"
    end
  end
end

First, the basic settings that create the virtual machine, shown below.

  # Master: start the virtual machine
  config.vm.define 'master' do |machine|
    machine.vm.box = "centos/7"
    machine.vm.hostname = 'master'
    machine.vm.network :private_network,ip: "172.16.20.11"
    machine.vm.provider "virtualbox" do |vbox|
      vbox.gui = false
      vbox.cpus = 2
      vbox.memory = 1024
    end
    machine.vm.synced_folder "./sync", "/home/vagrant/sync", owner: "vagrant",
      group: "vagrant", mount_options: ["dmode=777", "fmode=777"]

Here is what each setting does.

    • machine.vm.box
      The box to use; roughly what an image name is to Docker.

    • machine.vm.hostname
      The VM's hostname.

    • machine.vm.network, ip
      The VM's IP address on the private network.

    • machine.vm.provider
      The virtualization software to use; that software's settings go inside this block.

    • machine.vm.synced_folder
      Sets up directory sharing between the host OS and the guest OS. The owner and group can also be specified; the default is the SSH user. (A quick check of the shared folder follows this list.)
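Once the master VM defined here is up, a quick way to confirm the synced folder works is to drop a file in on the host and list it from the guest (the file name is just an example):

$ touch sync/from_host.txt
$ vagrant ssh master -c 'ls /home/vagrant/sync'
from_host.txt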

Next comes the configuration that installs Docker and k8s on the virtual machine.

    # Master: install Docker & k8s
    machine.vm.provision "ansible_local" do |ansible|
      ansible.playbook = "ansible-playbook/kubernetes.yml"
      ansible.version = "latest"
      ansible.verbose = false
      ansible.limit = "master"
      ansible.inventory_path = "ansible-playbook/hosts"
    end

    • machine.vm.provision
      Specifying "ansible_local" here makes Vagrant install Ansible on the VM itself.

    • ansible.playbook
      The playbook to run inside the VM.

    • ansible.version
      Which version of Ansible to install.

    • ansible.verbose
      Whether ansible-playbook produces verbose output; to enable it, specify v, vv, or vvv (more v's, more detail).

    • ansible.limit
      The host or group that ansible-playbook runs against.

    • ansible.inventory_path
      The path to the hosts (inventory) file.
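A handy property of these provisioner blocks is that they can be re-run without rebuilding the VM: after editing a playbook, vagrant provision re-applies it to a running machine, here limited to the master:

$ vagrant provision master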

The master setup provisioner (the one running k8s_master.yml) is written in exactly the same way.

Creating the Worker Nodes

Now the configuration for the two worker nodes. The finished form is below.

# coding: utf-8
# -*- mode: ruby -*-
Vagrant.configure(2) do |config|
...
(Master Node configuration omitted)
...
  # Worker Node 1: start the virtual machine
  config.vm.define 'worker1' do |machine|
    machine.vm.box = "centos/7"
    machine.vm.hostname = 'worker1'
    machine.vm.network :private_network,ip: "172.16.20.12"
    machine.vm.provider "virtualbox" do |vbox|
      vbox.gui = false
      vbox.cpus = 1
      vbox.memory = 1024
    end
    machine.vm.synced_folder "./sync", "/home/vagrant/sync", owner: "vagrant",
      group: "vagrant", mount_options: ["dmode=777", "fmode=777"]

    # Worker Node 1: install Docker & k8s
    machine.vm.provision "ansible_local" do |ansible|
      ansible.playbook = "ansible-playbook/kubernetes.yml"
      ansible.version = "latest"
      ansible.verbose = false
      ansible.limit = "worker1"
      ansible.inventory_path = "ansible-playbook/hosts"
    end

    # Node setup
    machine.vm.provision "ansible_local" do |ansible|
      ansible.playbook = "ansible-playbook/k8s_workers.yml"
      ansible.version = "latest"
      ansible.verbose = false
      ansible.install = false
      ansible.limit = "worker1"
      ansible.inventory_path = "ansible-playbook/hosts"
    end
  end

  # Worker Node 2: start the virtual machine
  config.vm.define 'worker2' do |machine|
    machine.vm.box = "centos/7"
    machine.vm.hostname = 'worker2'
    machine.vm.network :private_network,ip: "172.16.20.13"
    machine.vm.provider "virtualbox" do |vbox|
      vbox.gui = false
      vbox.cpus = 1
      vbox.memory = 1024
    end
    machine.vm.synced_folder "./sync", "/home/vagrant/sync", owner: "vagrant",
      group: "vagrant", mount_options: ["dmode=777", "fmode=777"]

    # Worker Node 2: install Docker & k8s
    machine.vm.provision "ansible_local" do |ansible|
      ansible.playbook = "ansible-playbook/kubernetes.yml"
      ansible.version = "latest"
      ansible.verbose = false
      ansible.limit = "worker2"
      ansible.inventory_path = "ansible-playbook/hosts"
    end

    # Node setup
    machine.vm.provision "ansible_local" do |ansible|
      ansible.playbook = "ansible-playbook/k8s_workers.yml"
      ansible.version = "latest"
      ansible.verbose = false
      ansible.install = false
      ansible.limit = "worker2"
      ansible.inventory_path = "ansible-playbook/hosts"
    end
  end
end

The worker node configuration uses the same syntax as the master's.

The Completed Vagrantfile

The final Vagrantfile looks like this.

# coding: utf-8
# -*- mode: ruby -*-
Vagrant.configure(2) do |config|
  # Proxy settings
  #if Vagrant.has_plugin?("vagrant-proxyconf")
  # config.proxy.enabled  = true  # => true; all applications enabled, false; all applications disabled
  # config.proxy.http     = ""
  # config.proxy.https    = ""
  #  config.proxy.no_proxy = "localhost,127.0.0.1,172.16.20.11,172.16.20.12,172.16.20.13,10.96.0.0/12,10.244.0.0/16,10.32.0.10" 
  #end

  #if Vagrant.has_plugin?("vagrant-vbguest")
  #  config.vbguest.auto_update = true
  #end

  # Master: start the virtual machine
  config.vm.define 'master' do |machine|
    machine.vm.box = "centos/7"
    machine.vm.hostname = 'master'
    machine.vm.network :private_network,ip: "172.16.20.11"
    machine.vm.provider "virtualbox" do |vbox|
      vbox.gui = false
      vbox.cpus = 2
      vbox.memory = 1024
    end
    machine.vm.synced_folder "./sync", "/home/vagrant/sync", owner: "vagrant",
      group: "vagrant", mount_options: ["dmode=777", "fmode=777"]

    # Master: install Docker & k8s
    machine.vm.provision "ansible_local" do |ansible|
      ansible.playbook = "ansible-playbook/kubernetes.yml"
      ansible.version = "latest"
      ansible.verbose = false
      ansible.limit = "master"
      ansible.inventory_path = "ansible-playbook/hosts"
    end

    # Master: setup
    machine.vm.provision "ansible_local" do |ansible|
      ansible.playbook = "ansible-playbook/k8s_master.yml"
      ansible.version = "latest"
      ansible.verbose = false
      ansible.install = false
      ansible.limit = "master"
      ansible.inventory_path = "ansible-playbook/hosts"
    end
  end

  # Worker Node 1: start the virtual machine
  config.vm.define 'worker1' do |machine|
    machine.vm.box = "centos/7"
    machine.vm.hostname = 'worker1'
    machine.vm.network :private_network,ip: "172.16.20.12"
    machine.vm.provider "virtualbox" do |vbox|
      vbox.gui = false
      vbox.cpus = 1
      vbox.memory = 1024
    end
    machine.vm.synced_folder "./sync", "/home/vagrant/sync", owner: "vagrant",
      group: "vagrant", mount_options: ["dmode=777", "fmode=777"]

    # Worker Node 1: install Docker & k8s
    machine.vm.provision "ansible_local" do |ansible|
      ansible.playbook = "ansible-playbook/kubernetes.yml"
      ansible.version = "latest"
      ansible.verbose = false
      ansible.limit = "worker1"
      ansible.inventory_path = "ansible-playbook/hosts"
    end

    # Node setup
    machine.vm.provision "ansible_local" do |ansible|
      ansible.playbook = "ansible-playbook/k8s_workers.yml"
      ansible.version = "latest"
      ansible.verbose = false
      ansible.install = false
      ansible.limit = "worker1"
      ansible.inventory_path = "ansible-playbook/hosts"
    end
  end

  # Worker Node 2: start the virtual machine
  config.vm.define 'worker2' do |machine|
    machine.vm.box = "centos/7"
    machine.vm.hostname = 'worker2'
    machine.vm.network :private_network,ip: "172.16.20.13"
    machine.vm.provider "virtualbox" do |vbox|
      vbox.gui = false
      vbox.cpus = 1
      vbox.memory = 1024
    end
    machine.vm.synced_folder "./sync", "/home/vagrant/sync", owner: "vagrant",
      group: "vagrant", mount_options: ["dmode=777", "fmode=777"]

    # Worker Node 2: install Docker & k8s
    machine.vm.provision "ansible_local" do |ansible|
      ansible.playbook = "ansible-playbook/kubernetes.yml"
      ansible.version = "latest"
      ansible.verbose = false
      ansible.limit = "worker2"
      ansible.inventory_path = "ansible-playbook/hosts"
    end

    # Node setup
    machine.vm.provision "ansible_local" do |ansible|
      ansible.playbook = "ansible-playbook/k8s_workers.yml"
      ansible.version = "latest"
      ansible.verbose = false
      ansible.install = false
      ansible.limit = "worker2"
      ansible.inventory_path = "ansible-playbook/hosts"
    end
  end
end

Creating the Playbooks

Next, the playbooks. Because the Vagrantfile specifies machine.vm.provision "ansible_local", Ansible is installed on each VM and the playbooks run there, so Ansible does not need to be installed on the host OS.

First, create the configuration file and the hosts file as follows.

ansible.cfg:

[defaults]
inventory = /vagrant/ansible-playbook/hosts
host_key_checking = no

[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes

ansible-playbook/hosts:

master     ansible_connection=local
worker1    ansible_connection=local
worker2    ansible_connection=local

[workers]
worker[1:2]

The hosts file is an inventory that defines the remote hosts Ansible runs against; within an inventory, hosts can be organized into groups.
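Since Ansible lives inside the VMs, you can exercise the inventory from one of them once it is up, for example with an ad-hoc ping against the master entry (/vagrant is Vagrant's default synced project folder, which is where this hosts file ends up):

$ vagrant ssh master
[vagrant@master ~]$ ansible -i /vagrant/ansible-playbook/hosts master -m ping
master | SUCCESS => {
    "changed": false,
    "ping": "pong"
}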

Installing Docker and Kubernetes

Here is the configuration (kubernetes.yml) that installs Docker and Kubernetes.

- hosts: all
  become: yes
  gather_facts: True

  tasks:
  ## Configure the yum repositories
  - name: Install yum-utils
    yum:
      name: "{{ item }}"
      state: latest
    with_list:
      - yum-utils
      - device-mapper-persistent-data
      - lvm2

  - name: Add Docker repo
    get_url:
      url: https://download.docker.com/linux/centos/docker-ce.repo
      dest: /etc/yum.repos.d/docker-ce.repo
    become: yes

  - name: Add kubernetes repo
    yum_repository:
      name: kubernetes
      description: kubernetes repo
      baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
      gpgcheck: no
      enabled: yes

  ## Install Docker
  - name: Install docker
    yum:
      name: docker-ce
      state: present

  - name: Docker setting mkdir
    file:
      path: /etc/systemd/system/docker.service.d
      state: directory
      owner: root
      group: root
      mode: 0755

  - name: Docker setting option file
    copy:
      src: option.conf
      dest: /etc/systemd/system/docker.service.d/option.conf

  - name: Start and enable the docker service
    systemd:
      name: docker.service
      state: started
      daemon_reload: yes
      enabled: yes

  ## Add the vagrant user to the docker group
  - name: Usermod -aG docker vagrant
    user:
      name: vagrant
      groups: docker

  ## Change kernel settings
  - name: Modprobe br_netfilter
    command: modprobe br_netfilter

  - name: Set sysctl
    sysctl:
      name: net.bridge.bridge-nf-call-iptables
      value: "1"
      sysctl_set: yes
      sysctl_file: /etc/sysctl.conf
      state: present
      reload: yes

  ## Preliminary settings
  - name: Disable SELinux
    command: setenforce 0

  - name: Disable SELinux on reboot
    selinux:
      state: disabled

  - name: Ensure net.bridge.bridge-nf-call-iptables is set to 1
    sysctl:
      name: net.bridge.bridge-nf-call-iptables
      value: 1
      state: present

  - name: Swap off
    command: swapoff -a

  - name: Disable firewalld
    systemd:
      name: firewalld
      state: stopped
      enabled: no

  ## Install Kubernetes
  - name: Install kubelet and kubeadm
    yum:
      name: "{{ packages }}"
      state: present
    vars:
      packages:
        - kubelet
        - kubeadm

  - name: Start kubelet
    systemd:
      name: kubelet.service
      state: started
      enabled: yes

  - name: Install kubectl
    yum:
      name: kubectl
      state: present
      allow_downgrade: yes

  ## Get the IP address of the host-only interface
  - name: Install net-tools
    yum:
      name: net-tools
      state: present

  - name: Getting hostonly ip address
    command: ifconfig eth1
    register: ip
  - debug: var=ip.stdout_lines[1].split('inet')[1].split(' ')[1]

  ## Add --node-ip to 10-kubeadm.conf
  - name: Copy /usr/lib/systemd/system/kubelet.service.d to /etc/systemd/system
    copy:
      src: /usr/lib/systemd/system/kubelet.service.d/
      dest: /etc/systemd/system/kubelet.service.d/
      owner: root
      group: root
      mode: 0755

  - name: Change 10-kubeadm.conf for v1.11 or later
    replace:
      dest: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
      regexp: 'KUBELET_EXTRA_ARGS$'
      replace: KUBELET_EXTRA_ARGS --node-ip={{ ip.stdout_lines[1].split('inet')[1].split(' ')[1] }} --cluster-dns=10.32.0.10

  ## Apply the changes
  - name: Daemon-reload and restart kubelet
    systemd:
      name: kubelet.service
      state: restarted
      daemon_reload: yes

  ## Create the directory shared with the host
  - name: Make sync directory
    file:
      path: /home/vagrant/sync
      state: directory
      owner: vagrant
      group: vagrant
      mode: '0755'
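A note on the --node-ip trick above: on these boxes, ifconfig eth1 prints the host-only address on its second line, which is why the registered ip variable can be picked apart with the Jinja expression in the replace task. Abbreviated output for illustration:

[vagrant@master ~]$ ifconfig eth1
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.20.11  netmask 255.255.255.0  broadcast 172.16.20.255
        ...

ip.stdout_lines[1] is the inet line, split('inet')[1] keeps everything after "inet", and split(' ')[1] is the address itself, e.g. 172.16.20.11 on the master.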

Setting Up the Master Node

Here is the setup playbook for the master (k8s_master.yml).

- hosts: master
  become: yes
  become_user: root
  gather_facts: True

  tasks:
    ## Reset the k8s master
    - name: Kubeadm reset v1.11 or later
      command: kubeadm reset -f

    ## Get the IP address of the host-only interface
    - name: Getting hostonly ip address
      command: ifconfig eth1
      register: ip

    - name: Kubeadm init
      command: kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address={{ ip.stdout_lines[1].split('inet')[1].split(' ')[1] }} --service-cidr=10.32.0.0/24
      register: join

    ## Save the join command for the k8s nodes
    - name: Generate join command
      command: kubeadm token create --print-join-command
      register: join_command

    - name: Copy join command to sync directory
      copy:
        content: "{{ join_command.stdout_lines[0] }}"
        dest: /home/vagrant/sync/join-command.sh
        mode: 0777

    ## Change the kubelet DNS IP address
    - name: Change config.yaml
      replace:
        dest: /var/lib/kubelet/config.yaml
        regexp: '10.96.0.10'
        replace: 10.32.0.10

    ## Restart kubelet to apply the change
    - name: Daemon-reload and restart kubelet
      systemd:
        state: restarted
        daemon_reload: yes
        name: kubelet

    ## Create the kubeconfig directory
    - name: Mkdir kubeconfig
      file:
        path:  /home/vagrant/.kube
        state: directory
        owner: vagrant
        group: vagrant
        mode:  '0755'

    ## Copy the config file
    - name: Chmod admin.conf
      file:
        path:  /etc/kubernetes/admin.conf
        owner: vagrant
        group: vagrant
        mode:  '0600'

    - name: Copy config to home dir
      copy:
        src:  /etc/kubernetes/admin.conf
        dest: /home/vagrant/.kube/config
        owner: vagrant
        group: vagrant
        mode:  '0600'

    ## Install wget
    - name: Install wget
      yum:
        name: wget
        state: latest

    ## Download the Flannel manifest
    - name: Install flannel
      get_url:
        url: "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml"
        dest: /home/vagrant/kube-flannel.yml

    ## Deploy Flannel
    - name: Deploy Flannel
      become_user: vagrant
      command: kubectl apply -f /home/vagrant/kube-flannel.yml

    ## Install git
    - name: Install git
      yum:
        name: git
        state: latest

    ## Install the Metrics Server
    - name: Install Metrics Server
      git:
        repo: 'https://github.com/kubernetes-sigs/metrics-server'
        dest: /home/vagrant/metrics-server

    - name: Add kubelet flags to the Metrics Server deployment
      blockinfile:
        path: /home/vagrant/metrics-server/deploy/kubernetes/metrics-server-deployment.yaml
        insertafter: '        args:'
        block: |
           # added lines
                     - --kubelet-insecure-tls
                     - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname

    - name: Deploy Metrics Server
      become_user: vagrant
      command: kubectl apply -f /home/vagrant/metrics-server/deploy/kubernetes

    ## Install the Dashboard UI
    - name: Download Dashboard Manifest
      get_url:
        url: https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta6/aio/deploy/recommended.yaml
        dest: /home/vagrant/
        mode: '0644'
        owner: vagrant
        group: vagrant

    - name: Change Dashboard RBAC
      replace:
        path: /home/vagrant/recommended.yaml
        after: '  kind: ClusterRole'
        regexp: '^  name: kubernetes-dashboard'
        replace: '  name: cluster-admin'

    - name: Deploy Dashboard UI
      become_user: vagrant
      command: kubectl apply -f /home/vagrant/recommended.yaml

    - name: Setup kubeconfig
      become_user: vagrant
      shell: |
        TOKEN=$(kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret |grep kubernetes-dashboard-token-* | awk '{print $1}') |awk '$1=="token:"{print $2}')
        kubectl config set-credentials kubernetes-admin --token="${TOKEN}"
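After provisioning finishes (give the Metrics Server a minute or two to scrape), a quick way to confirm both kubectl access and the Metrics Server is kubectl top; the numbers below are purely illustrative:

[vagrant@master ~]$ kubectl top nodes
NAME      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master    231m         11%    741Mi           77%
worker1   98m          9%     402Mi           42%
worker2   95m          9%     398Mi           41%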

Setting Up the Worker Nodes

And here is the setup playbook for the worker nodes (k8s_workers.yml).

- hosts: workers
  become: yes
  become_user: root
  gather_facts: True

  tasks:
    ## Reset the k8s node
    - name: Kubeadm reset v1.11 or later
      command: kubeadm reset -f

    ## Join the k8s cluster
    - name: Join the node to cluster
      command: sh /home/vagrant/sync/join-command.sh

    ## Change the kubelet DNS IP address
    - name: Change config.yaml
      replace:
        dest: /var/lib/kubelet/config.yaml
        regexp: '10.96.0.10'
        replace: 10.32.0.10

    ## Restart kubelet to apply the change
    - name: Daemon-reload and restart kubelet
      systemd:
        state: restarted
        daemon_reload: yes
        name: kubelet
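
For reference, the join-command.sh that the master playbook dropped into the shared directory is a single kubeadm join line; you can inspect it from the host (token and hash redacted here):

$ cat sync/join-command.sh
kubeadm join 172.16.20.11:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>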

Running It

That's every file we need. Arrange them according to the directory structure shown at the beginning, move into the directory containing the Vagrantfile, and run vagrant up: the Kubernetes environment gets built for you! (The first run takes about an hour.)
It also drains quite a lot of battery, so shut the VMs down with vagrant halt when you're not using them.
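In other words, the whole build boils down to:

$ cd path/to/project    # the directory containing the Vagrantfile
$ vagrant up
$ vagrant ssh master
[vagrant@master ~]$ kubectl get nodes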

Notes

    • Provisioning can fail under high CPU load, so I recommend running it with no other applications open. (No problem if your machine is high-spec, of course.)

    • If the build fails or is interrupted for any reason, delete join-command.sh from the sync directory before re-running.

Steps After a Restart

If you shut the VMs down with vagrant halt and later bring them back with vagrant up, a few steps are needed before the Kubernetes cluster is usable again.

1: Log in to the master

$ vagrant ssh master

2: Disable swap

[vagrant@master ~]$ sudo swapoff -a

3: Wait for the system Pods other than coredns to start

[vagrant@master ~]$ kubectl get pods -n kube-system -o wide
NAME                             READY   STATUS    RESTARTS   AGE     IP             NODE      NOMINATED NODE   READINESS GATES
coredns-6955765f44-2qmjs         0/1     Completed   0          23m     10.244.0.10    master    <none>           <none>
coredns-6955765f44-dtmr8         0/1     Completed   0          3d12h   10.244.0.8     master    <none>           <none>
etcd-master                      1/1     Running   8          3d12h   172.16.20.11   master    <none>           <none>
kube-apiserver-master            1/1     Running   7          3d12h   172.16.20.11   master    <none>           <none>
kube-controller-manager-master   1/1     Running   11         3d12h   172.16.20.11   master    <none>           <none>
kube-flannel-ds-amd64-9p8wd      1/1     Running   7          3d12h   172.16.20.11   master    <none>           <none>
kube-flannel-ds-amd64-d6tqx      1/1     Running   0          3d11h   172.16.20.13   worker2   <none>           <none>
kube-flannel-ds-amd64-hwbcp      1/1     Running   0          3d12h   172.16.20.12   worker1   <none>           <none>
kube-proxy-j88r7                 1/1     Running   0          3d11h   172.16.20.13   worker2   <none>           <none>
kube-proxy-kmlpr                 1/1     Running   4          3d12h   172.16.20.11   master    <none>           <none>
kube-proxy-ptrp2                 1/1     Running   0          3d12h   172.16.20.12   worker1   <none>           <none>
kube-scheduler-master            1/1     Running   10         3d12h   172.16.20.11   master    <none>           <none>
metrics-server-988549d7f-bn92x   1/1     Running   0          12m     10.244.1.2     worker1   <none>           <none>

The Pods restart over and over until they come up cleanly, so everything is very slow for a while. Expect to wait around 15 minutes.
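Rather than polling kubectl get pods by hand, you can stream the changes with -w and stop with Ctrl-C:

[vagrant@master ~]$ kubectl get pods -n kube-system -w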

4: Get the command for joining the cluster

[vagrant@master ~]$ kubeadm token create --print-join-command
kubeadm join 172.16.20.11:6443 --token dvucb2.lekv9gr0xdppi8gj     --discovery-token-ca-cert-hash sha256:4eec307c5a512ef4fb88e11d90ee99fdcab18b52cfe945ba607fc15f64562358 
[vagrant@master ~]$ exit

You'll use it shortly, so copy it somewhere.

5: Join the worker nodes to the Kubernetes cluster

Run this step on every worker node; worker1 is shown as the example.

$ vagrant ssh worker1
[vagrant@worker1 ~]$ sudo swapoff -a
[vagrant@worker1 ~]$ sudo kubeadm reset -f
[vagrant@worker1 ~]$ sudo kubeadm join 172.16.20.11:6443 --token dvucb2.lekv9gr0xdppi8gj     --discovery-token-ca-cert-hash sha256:4eec307c5a512ef4fb88e11d90ee99fdcab18b52cfe945ba607fc15f64562358
[vagrant@worker1 ~]$ exit

6: Confirm that the Kubernetes cluster is healthy

$ vagrant ssh master
[vagrant@master ~]$ kubectl get nodes
NAME      STATUS   ROLES    AGE     VERSION
master    Ready    master   3d13h   v1.17.2
worker1   Ready    <none>   3d12h   v1.17.2
worker2   Ready    <none>   3d12h   v1.17.2
[vagrant@master ~]$ kubectl get pods -n kube-system -o wide
NAME                             READY   STATUS    RESTARTS   AGE     IP             NODE      NOMINATED NODE   READINESS GATES
coredns-6955765f44-2qmjs         1/1     Running   1          57m     10.244.0.11    master    <none>           <none>
coredns-6955765f44-dtmr8         1/1     Running   2          3d13h   10.244.0.13    master    <none>           <none>
etcd-master                      1/1     Running   11         3d13h   172.16.20.11   master    <none>           <none>
kube-apiserver-master            1/1     Running   10         3d13h   172.16.20.11   master    <none>           <none>
kube-controller-manager-master   1/1     Running   14         3d13h   172.16.20.11   master    <none>           <none>
kube-flannel-ds-amd64-9p8wd      1/1     Running   10         3d13h   172.16.20.11   master    <none>           <none>
kube-flannel-ds-amd64-d6tqx      1/1     Running   0          3d12h   172.16.20.13   worker2   <none>           <none>
kube-flannel-ds-amd64-hwbcp      1/1     Running   0          3d12h   172.16.20.12   worker1   <none>           <none>
kube-proxy-j88r7                 1/1     Running   0          3d12h   172.16.20.13   worker2   <none>           <none>
kube-proxy-kmlpr                 1/1     Running   6          3d13h   172.16.20.11   master    <none>           <none>
kube-proxy-ptrp2                 1/1     Running   0          3d12h   172.16.20.12   worker1   <none>           <none>
kube-scheduler-master            1/1     Running   12         3d13h   172.16.20.11   master    <none>           <none>
metrics-server-988549d7f-bn92x   1/1     Running   0          46m     10.244.1.2     worker1   <none>           <none>

Restarting the Kubernetes cluster takes time, but far less than recreating the VMs. (I'd like to automate the cluster restart too; a rough sketch follows.)
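The following script is an untested sketch of that automation, run on the host after vagrant up. It simply replays steps 1 to 5 above; it assumes the VM names used in this article and does not wait for the control-plane Pods of step 3, so give the master a few minutes first:

#!/bin/bash
# rejoin.sh: re-join the workers after `vagrant halt` / `vagrant up` (sketch).
set -eu
# Step 2: disable swap on the master.
vagrant ssh master -c 'sudo swapoff -a'
# Step 4: regenerate the join command on the master (tr strips the CR that
# vagrant ssh appends to remote output).
JOIN=$(vagrant ssh master -c 'kubeadm token create --print-join-command' | tr -d '\r')
# Step 5: reset each worker and join it back to the cluster.
for w in worker1 worker2; do
  vagrant ssh "$w" -c "sudo swapoff -a && sudo kubeadm reset -f && sudo ${JOIN}"
done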
