How to set Feature Gates with Kind (Kubernetes IN Docker)
TL;DR
Replace FeatureGateName=true with whatever feature gate and value you need.
Also, remove any components you don't need to configure from the manifest.
$ cat config.yaml
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
kubeadmConfigPatches:
- |
  apiVersion: kubeadm.k8s.io/v1beta2
  kind: ClusterConfiguration
  metadata:
    name: config
  apiServer:
    extraArgs:
      "feature-gates": "FeatureGateName=true"
  scheduler:
    extraArgs:
      "feature-gates": "FeatureGateName=true"
  controllerManager:
    extraArgs:
      "feature-gates": "FeatureGateName=true"
- |
  apiVersion: kubeadm.k8s.io/v1beta2
  kind: InitConfiguration
  metadata:
    name: config
  nodeRegistration:
    kubeletExtraArgs:
      "feature-gates": "FeatureGateName=true"
$ kind create cluster --config config.yaml
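As noted above, components you don't need can simply be dropped from the patch. For example, if the gate only matters to the API server, the ClusterConfiguration patch shrinks to the following sketch:

```yaml
kubeadmConfigPatches:
- |
  apiVersion: kubeadm.k8s.io/v1beta2
  kind: ClusterConfiguration
  metadata:
    name: config
  apiServer:
    extraArgs:
      "feature-gates": "FeatureGateName=true"
```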
Introduction
This article shows how to set Feature Gates in Kind (Kubernetes IN Docker). You can find more information here: https://github.com/kubernetes-sigs/kind
Verifying the behavior
Create a config file that passes the settings to kube-apiserver and kubelet. This example enables the CSIInlineVolume=true feature gate.
$ cat config.yaml
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
kubeadmConfigPatches:
- |
  apiVersion: kubeadm.k8s.io/v1beta2
  kind: ClusterConfiguration
  metadata:
    name: config
  apiServer:
    extraArgs:
      "feature-gates": "CSIInlineVolume=true"
- |
  apiVersion: kubeadm.k8s.io/v1beta2
  kind: InitConfiguration
  metadata:
    name: config
  nodeRegistration:
    kubeletExtraArgs:
      "feature-gates": "CSIInlineVolume=true"
Create a cluster in kind using this config file.
$ kind create cluster --config config.yaml
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.15.0) 🖼
✓ Preparing nodes 📦
✓ Creating kubeadm config 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Cluster creation complete. You can now use the cluster with:
export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
kubectl cluster-info
Check the name of the kind container.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0588eb52ef30 kindest/node:v1.15.0 "/usr/local/bin/entr…" 12 minutes ago Up 12 minutes 62050/tcp, 127.0.0.1:62050->6443/tcp kind-control-plane
Run ps inside the kind container to see the arguments passed to kube-apiserver and kubelet.
$ docker exec -it kind-control-plane ps -efww | grep kubelet
root 220 1 3 16:01 ? 00:00:24 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/run/containerd/containerd.sock --fail-swap-on=false --feature-gates=CSIInlineVolume=true --node-ip=172.17.0.2 --fail-swap-on=false
root 614 597 6 16:02 ? 00:00:43 kube-apiserver --advertise-address=172.17.0.2 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=CSIInlineVolume=true --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
Those long lines are tiring to scroll through horizontally on Qiita, so a simple shell command makes it easy to confirm that the expected feature-gate argument was passed.
$ docker exec -it kind-control-plane ps -efww | grep kubelet | tr ' ' '\n'
...
/usr/bin/kubelet
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf
--kubeconfig=/etc/kubernetes/kubelet.conf
--config=/var/lib/kubelet/config.yaml
--container-runtime=remote
--container-runtime-endpoint=/run/containerd/containerd.sock
--fail-swap-on=false
--feature-gates=CSIInlineVolume=true # <--- the Feature Gates argument is passed as expected
--node-ip=172.17.0.2
--fail-swap-on=false
...
kube-apiserver
--advertise-address=172.17.0.2
--allow-privileged=true
--authorization-mode=Node,RBAC
--client-ca-file=/etc/kubernetes/pki/ca.crt
--enable-admission-plugins=NodeRestriction
--enable-bootstrap-token-auth=true
--etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
--etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
--etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
--etcd-servers=https://127.0.0.1:2379
--feature-gates=CSIInlineVolume=true # <--- the Feature Gates argument is passed as expected
--insecure-port=0
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
--requestheader-allowed-names=front-proxy-client
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
--secure-port=6443
--service-account-key-file=/etc/kubernetes/pki/sa.pub
--service-cluster-ip-range=10.96.0.0/12
--tls-cert-file=/etc/kubernetes/pki/apiserver.crt
--tls-private-key-file=/etc/kubernetes/pki/apiserver.key
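The same check can also be done without scrolling at all: since each flag is a single token, `grep -o` can pull just the `--feature-gates` arguments out of the process list. A minimal sketch against a captured command line (for the live cluster, the same pipeline works with `docker exec kind-control-plane ps -efww` as the source):

```shell
# Hypothetical captured kubelet command line, shortened from the ps output above.
line='/usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --feature-gates=CSIInlineVolume=true --node-ip=172.17.0.2'

# -o prints only the matching part of each line; the bare -- stops option
# parsing so the pattern's leading dashes are not treated as grep flags.
echo "$line" | grep -o -- '--feature-gates=[^ ]*'
# prints: --feature-gates=CSIInlineVolume=true
```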
Notes
In this example the config file set the feature gate on kube-apiserver and kubelet. Feature gates can be configured individually on kube-apiserver, kube-scheduler, kube-controller-manager, and kubelet, so set whichever components you need.
To enable multiple feature gates, separate them with commas, e.g. "feature-gates": "CSIInlineVolume=true,ServerSideApply=true".
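For example, the apiServer patch from above with two gates enabled would look like this (ServerSideApply is just an illustrative second gate):

```yaml
apiServer:
  extraArgs:
    "feature-gates": "CSIInlineVolume=true,ServerSideApply=true"
```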
References:
- https://github.com/kubernetes-sigs/kind/issues/563
- https://github.com/kubernetes-sigs/kind/blob/master/site/content/docs/user/quick-start.md#enable-feature-gates-in-your-cluster