Installing Kubernetes 1.15.1 from Binaries (Worker Part)

The worker part is published first, ahead of the master part.
Here we install Kubernetes on the worker nodes (hereafter just "nodes").
Kubeadm is not used.

Editing the node's hosts file

Add the same entries to each node's /etc/hosts as in the master's /etc/hosts.

I simply added the contents of the master's /etc/hosts to each node.

[master3]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.1.201 master1
192.168.1.202 master2
192.168.1.203 master3

192.168.1.201 k8svip.com

Since 127.0.0.1 and ::1 should already be present in the node's hosts file, you only need to copy over the 192.168.1.201/202/203 lines and the VIP entry.
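One way to append them in a single step (same entries as above; run on each node):

cat >>/etc/hosts <<"EOF"
192.168.1.201 master1
192.168.1.202 master2
192.168.1.203 master3
192.168.1.201 k8svip.com
EOF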

In short, after adding them the node's hosts file should look like this:

[root@node1 ~]# cat /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.1.201 master1
192.168.1.202 master2
192.168.1.203 master3

192.168.1.201 k8svip.com

Copying certificates from the master

Create an SSH key on the master and set up passwordless access so rsync does not prompt for authentication (run on the master).

[master]# ssh-keygen
# press Enter at every prompt

[master]# ssh-copy-id node1
# enter node1's password; replace node1 with your own node name or IP

Create the target directory (run on the worker).

mkdir -p /etc/kubernetes/ssl

(Run on the master. Replace node01 with each node's name or IP address.)

[master]# rsync -av /etc/kubernetes/ssl/ca* node01:/etc/kubernetes/ssl/
[master]# rsync -av /etc/kubernetes/config node01:/etc/kubernetes/
[master]# rsync -av /etc/kubernetes/bootstrap.kubeconfig node01:/etc/kubernetes/
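If you have several workers, a small loop on the master saves retyping; node1 and node2 below are placeholders for your own node names:

[master]# for n in node1 node2; do rsync -av /etc/kubernetes/ssl/ca* ${n}:/etc/kubernetes/ssl/; rsync -av /etc/kubernetes/config /etc/kubernetes/bootstrap.kubeconfig ${n}:/etc/kubernetes/; done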

Installing and starting Docker

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce

systemctl enable docker
systemctl daemon-reload
systemctl start docker
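A quick sanity check that the daemon is actually running before we start pulling images:

docker version --format '{{.Server.Version}}'
systemctl is-active docker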

Calico

Registering and starting the calico-node service

Note: replace the hostnames in Environment=ETCD_ENDPOINTS= with your own.

cat >/usr/lib/systemd/system/calico-node.service <<"EOF"
[Unit]
Description=calico-node
After=docker.service
Requires=docker.service
[Service]
User=root
Environment=ETCD_ENDPOINTS=http://master1:2379,http://master2:2379,http://master3:2379
PermissionsStartOnly=true
ExecStart=/usr/bin/docker run --net=host --privileged --name=calico-node \
  -e ETCD_ENDPOINTS=${ETCD_ENDPOINTS} \
  -e NODENAME=${HOSTNAME} \
  -e IP= \
  -e IP6= \
  -e AS= \
  -e NO_DEFAULT_POOLS= \
  -e CALICO_NETWORKING_BACKEND=bird \
  -e FELIX_DEFAULTENDPOINTTOHOSTACTION=ACCEPT \
  -e CALICO_LIBNETWORK_ENABLED=true \
  -v /lib/modules:/lib/modules \
  -v /run/docker/plugins:/run/docker/plugins \
  -v /var/run/calico:/var/run/calico \
  -v /var/log/calico:/var/log/calico \
  -v /var/lib/calico:/var/lib/calico \
  calico/node:v3.8.0
ExecStop=/usr/bin/docker rm -f calico-node
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
EOF


systemctl daemon-reload
systemctl start calico-node
systemctl enable calico-node
systemctl status calico-node

Run systemctl status calico-node a few more times and make sure the process is not crash-looping.
Behind a proxy, the docker pull fails and the service crashes,
so in a proxy environment you need to configure a proxy for Docker.
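If it does keep crashing, the systemd journal and the container log (while the container exists) are the first places to look:

journalctl -u calico-node --no-pager | tail -n 20
docker logs calico-node 2>&1 | tail -n 20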

Configuring a proxy for Docker (only needed behind a proxy)

mkdir -p /etc/systemd/system/docker.service.d

cat >/etc/systemd/system/docker.service.d/http-proxy.conf <<"EOF"
[Service]
Environment="HTTP_PROXY=http://<user>:<pass>@<proxy_host>:<proxy_port>" "HTTPS_PROXY=http://<user>:<pass>@<proxy_host>:<proxy_port>" "NO_PROXY=localhost"
EOF

systemctl daemon-reload
systemctl restart docker
systemctl start calico-node
systemctl status calico-node

For reference, the Environment line from my own environment:

Environment="HTTP_PROXY=http://10.0.0.254:8080" "HTTPS_PROXY=http://"10.0.0.254:8080 "NO_PROXY=localhost"

Configuring the Calico CNI plugin

Adjust "etcd_endpoints": "http://master1:2379,http://master2:2379,http://master3:2379" to match your own environment.

[ -d /opt/cni/bin ] || mkdir -p /opt/cni/bin && cd /opt/cni/bin
wget -N https://github.com/projectcalico/cni-plugin/releases/download/v3.8.0/calico-amd64
wget -N https://github.com/projectcalico/cni-plugin/releases/download/v3.8.0/calico-ipam-amd64
mv calico-ipam-amd64 calico-ipam
mv calico-amd64 calico
chmod +x /opt/cni/bin/calico /opt/cni/bin/calico-ipam


[ -d /var/src ] || mkdir /var/src && cd /var/src
wget https://github.com/containernetworking/plugins/releases/download/v0.7.1/cni-plugins-amd64-v0.7.1.tgz
mkdir cni-plugins-amd64-v0.7.1
tar -xzvf cni-plugins-amd64-v0.7.1.tgz -C cni-plugins-amd64-v0.7.1
rsync -av cni-plugins-amd64-v0.7.1/* /opt/cni/bin/
mkdir -p /etc/cni/net.d
rm -f /etc/cni/net.d/*


cat >/etc/cni/net.d/10-calico.conflist <<EOF
{
  "name": "calico-k8s-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      "log_level": "info",
      "etcd_endpoints": "http://master1:2379,http://master2:2379,http://master3:2379",
      "ipam": {
        "type": "calico-ipam"
      },
      "policy": {
        "type": "k8s"
      },
      "kubernetes": {
        "kubeconfig": "/etc/kubernetes/kubelet.kubeconfig"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
EOF
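A typo in this file will keep pods from getting network interfaces, so it is worth validating the JSON; for example, if Python is available:

python -m json.tool /etc/cni/net.d/10-calico.conflist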

Copy the file referenced by /etc/kubernetes/kubelet.kubeconfig from the master (run on the master).

[master]# rsync -av /root/.kube/config node1:/etc/kubernetes/kubelet.kubeconfig
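On the node, it is worth checking that the copied kubeconfig points at the API server (the k8svip.com VIP or a master address in this example) rather than 127.0.0.1:

grep server: /etc/kubernetes/kubelet.kubeconfig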

Installing the kubelet

Fetching the binaries and setting up the files

cd /var/src
wget https://dl.k8s.io/v1.15.2/kubernetes-node-linux-amd64.tar.gz -O kubernetes-node-linux-amd64-v1.15.2.tar.gz
tar -xzvf kubernetes-node-linux-amd64-v1.15.2.tar.gz
mv kubernetes /usr/local/

cat > /etc/profile.d/kubernetes.sh << EOF
export PATH=/usr/local/kubernetes/node/bin:\$PATH
EOF

source /etc/profile

mkdir -p /var/lib/kubelet
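To confirm the binaries are on the PATH, check the version; with the archive above it should report v1.15.2:

kubelet --version
kube-proxy --version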

Registering the kubelet service

cat > /usr/lib/systemd/system/kubelet.service <<"EOF"
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
User=root
Group=root
EnvironmentFile=/etc/kubernetes/kubelet.conf
ExecStart=/usr/local/kubernetes/node/bin/kubelet $KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

Creating the required directories

mkdir -p /var/log/kubernetes
mkdir -p /var/lib/kubelet

Creating kubelet.conf

Adjust --hostname-override=node1 to your own node name.

cat > /etc/kubernetes/kubelet.conf <<"EOF"
KUBELET_OPTS="\
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=4 \
  --hostname-override=node1 \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
  --config=/etc/kubernetes/kubelet.yaml \
  --cert-dir=/etc/kubernetes/ssl \
  --network-plugin=cni \
  --container-runtime=docker \
  --container-runtime-endpoint=unix:///var/run/dockershim.sock \
  --pod-infra-container-image=gcr.io/google-containers/pause:3.1"
EOF
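When rolling this out to several nodes, one way to avoid hand-editing is to substitute the real hostname, assuming each node's hostname is the name it should register with:

sed -i "s/--hostname-override=node1/--hostname-override=$(hostname)/" /etc/kubernetes/kubelet.conf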

Creating kubelet.yaml

Adjust clusterDNS: ["10.254.0.2"] to match your own environment.

cat > /etc/kubernetes/kubelet.yaml <<"EOF"
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:  ["10.254.0.2"]
clusterDomain: cluster.local
failSwapOn: false
hairpinMode: promiscuous-bridge
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: false
  x509:
    clientCAFile: "/etc/kubernetes/ssl/ca.pem"
EOF

Starting the kubelet at this point will fail, so hold off for now.

Installing kube-proxy

Preparation

yum install conntrack-tools -y

Copying certificates and other files from the master (run on the master)

Replace node1 with each node's name.

[master]# rsync -av /etc/kubernetes/kube-proxy.kubeconfig node1:/etc/kubernetes/
[master]# rsync -av /etc/kubernetes/ssl/kube-proxy* node1:/etc/kubernetes/ssl/
[master]# rsync -av /etc/kubernetes/config node1:/etc/kubernetes/

Registering the kube-proxy service

cat > /usr/lib/systemd/system/kube-proxy.service <<"EOF"
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy.conf
ExecStart=/usr/local/kubernetes/node/bin/kube-proxy \
        --config=/etc/kubernetes/proxy.yaml \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOGDIR \
        $KUBE_LOG_LEVEL \
        $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

Creating proxy.yaml

Adjust hostnameOverride: node1 to each node's actual name.

cat > /etc/kubernetes/proxy.yaml <<"EOF"
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  burst: 200
  kubeconfig: "/etc/kubernetes/kube-proxy.kubeconfig"
  qps: 100
bindAddress: 0.0.0.0
healthzBindAddress: 0.0.0.0:10256
metricsBindAddress: 0.0.0.0:10249
clusterCIDR: 192.168.0.0/16
hostnameOverride: node1
mode: ipvs
ipvs:
  syncPeriod: 5s
  minSyncPeriod: 1s
  scheduler: lc
EOF

By the way, what on earth is this v1alpha1 API version?
I don't fully understand this part, and I'm not sure which of the two configs kube-proxy actually reads its proxy settings from.
So I set up the following as well.

Creating proxy.conf

Replace node1 in KUBE_PROXY_ARGS="--hostname-override=node1" with each node's name.

cat >/etc/kubernetes/proxy.conf <<"EOF"
KUBE_PROXY_ARGS="--hostname-override=node1 \
                 --cluster-cidr=192.168.0.0/16 \
                 --ipvs-min-sync-period=5s \
                 --ipvs-sync-period=5s \
                 --ipvs-scheduler=rr"
EOF

Setting up IPVS (NAT / load balancing)

IPVS is introduced to route traffic between the inside and outside of the containers and to provide load balancing.

Creating ipvs.modules

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack_ipv4"
for kernel_module in \${ipvs_modules}; do
    /sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
    if [ \$? -eq 0 ]; then
        /sbin/modprobe \${kernel_module}
    fi
done
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs

Creating ip_vs.conf

cat > /etc/modprobe.d/ip_vs.conf << EOF
options ip_vs conn_tab_bits=20
EOF

modprobe -r ip_vs
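Unloading ip_vs here makes the conn_tab_bits option take effect the next time the module loads. To reload right away and confirm the option was picked up (the sysfs path assumes the module exposes its parameters, which ip_vs does on stock kernels):

modprobe ip_vs
cat /sys/module/ip_vs/parameters/conn_tab_bits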

Let the battle begin (starting kubelet and kube-proxy)

systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet

systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy
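Back on the master, the node should now be registered; it will show STATUS Ready once the kubelet and the CNI plugin are both healthy (run on the master):

[master]# kubectl get nodes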

On the master

(Master) Creating kube-proxy-rbac.yaml (run on one of the masters)

[master]# cat >kube-proxy-rbac.yaml <<"EOF"
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-proxy
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:kube-proxy
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
  - kind: ServiceAccount
    name: kube-proxy
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: system:node-proxier
  apiGroup: rbac.authorization.k8s.io
EOF

Deploying kube-proxy-rbac (master)

[master]# kubectl apply -f kube-proxy-rbac.yaml
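To confirm the objects were created:

[master]# kubectl get serviceaccount kube-proxy -n kube-system
[master]# kubectl get clusterrolebinding system:kube-proxy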

Testing with nginx (master)

Let's try running the usual nginx.

[master]# cat > nginx-ds.yml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-ds
  labels:
    app: nginx-ds
spec:
  type: NodePort
  selector:
    app: nginx-ds
  ports:
  - name: http
    port: 80
    targetPort: 80

---

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ds
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80

EOF

[master]# kubectl create -f nginx-ds.yml

[master]# kubectl get pods
#NAME             READY   STATUS    RESTARTS   AGE
#nginx-ds-bqc6t   1/1     Running   0          1m
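To check the service end to end, look up the NodePort that was allocated and curl it; the port below is whatever kubectl reports, not a fixed value:

[master]# kubectl get svc nginx-ds
[master]# curl http://node1:<nodePort>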
