[Extra] DigitalOcean Managed Kubernetes 101
Since then I've tried tweaking the resource settings in the manifest files and experimenting with various minikube start options, but with no success. So, as a bonus experiment, I decided to try setting up DigitalOcean's Managed Kubernetes.
I've never used it before, so I'm a little nervous.
Let's get started.
Note: this is not a free service, so following these steps will incur charges.
Getting Started
Starting from DigitalOcean's Kubernetes product page (https://www.digitalocean.com/products/kubernetes/), we click Get Started.
Creating the Cluster
On the "Create Cluster" page, configure the region, nodes, and other settings.

Finally, clicking the green "Create Cluster" button at the bottom of the page kicks off cluster provisioning.
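As an aside, cluster creation can also be scripted with doctl; a rough sketch (the cluster name, region, node size, and node count here are just example values, not what I actually used):
$ doctl kubernetes cluster create my-test-cluster \
    --region lon1 \
    --count 3 \
    --size s-2vcpu-4gb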
Setting Up the Cluster
First, there is a step for installing a management tool. Besides kubectl, DigitalOcean's own doctl tool is apparently also an option, but this time I'll use kubectl.
Next comes downloading the cluster config file.
Automatic renewal via the doctl command is recommended, but since this is just a trial I chose to simply download the file (which means the certificate has to be renewed manually).
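For reference, the doctl route would look roughly like this (assuming doctl is installed and authenticated with an API token); it merges the cluster credentials into the default kubeconfig and takes care of renewing them:
$ doctl auth init
$ doctl kubernetes cluster kubeconfig save <cluster name>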

Once the download is done, we can confirm access to the cluster with the following command.
$ kubectl --kubeconfig="<file name>" get nodes
NAME                  STATUS   ROLES    AGE     VERSION
pool-y2hks24e6-l3t2   Ready    <none>   2m58s   v1.16.2
pool-y2hks24e6-l3tl   Ready    <none>   2m39s   v1.16.2
pool-y2hks24e6-l3tt   Ready    <none>   2m59s   v1.16.2
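By the way, passing --kubeconfig on every invocation gets tedious. kubectl also honors the KUBECONFIG environment variable, so exporting it once per shell session works too; in the rest of this post I'll keep the explicit flag as downloaded:
$ export KUBECONFIG="$HOME/.kube/k8s-1-16-2-do-0-lon1-1575482940993-kubeconfig.yaml"
$ kubectl get nodes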
Deploying
Time to deploy.
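The first file going in, kubernetes/namespace.yaml, isn't reproduced in this post, but judging from the output below it is presumably a minimal manifest along these lines:
apiVersion: v1
kind: Namespace
metadata:
  name: qiita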
$ kubectl --kubeconfig="$HOME/.kube/k8s-1-16-2-do-0-lon1-1575482940993-kubeconfig.yaml" apply -f kubernetes/namespace.yaml
namespace/qiita created
$ kubectl --kubeconfig="$HOME/.kube/k8s-1-16-2-do-0-lon1-1575482940993-kubeconfig.yaml" get namespaces
NAME              STATUS   AGE
default           Active   28m
kube-node-lease   Active   28m
kube-public       Active   28m
kube-system       Active   28m
qiita             Active   15s
Looks good. The remaining resources get kubectl apply'd in the same way.
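Incidentally, rather than applying files one at a time, kubectl can also take a whole directory in one go (assuming all the manifests live under kubernetes/); resources that are already up to date simply come back as "unchanged":
$ kubectl --kubeconfig="$HOME/.kube/k8s-1-16-2-do-0-lon1-1575482940993-kubeconfig.yaml" -n qiita apply -f kubernetes/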
Once everything has been applied, it looks like this.
$ kubectl --kubeconfig="$HOME/.kube/k8s-1-16-2-do-0-lon1-1575482940993-kubeconfig.yaml" -n qiita get pods
NAME                                          READY   STATUS      RESTARTS   AGE
cockroach-init-fwjld                          0/1     Completed   0          14h
cockroachdb-0                                 1/1     Running     1          14h
cockroachdb-1                                 1/1     Running     80         14h
cockroachdb-2                                 1/1     Running     0          14h
kafka-0                                       2/2     Running     0          14h
kafka-1                                       2/2     Running     0          14h
kafka-2                                       1/2     Running     0          112s
kafka-etcd-0                                  1/1     Running     29         14h
kafka-etcd-1                                  1/1     Running     0          14h
kafka-etcd-2                                  1/1     Running     1          14h
qiita-advent-calendar-2019-7d7d7b44b6-rgf9g   1/1     Running     0          14h
qiita-advent-calendar-2019-r6pkv              0/1     Completed   0          14h
zetcd-cc957f748-7nlxg                         1/1     Running     0          14h
zetcd-cc957f748-lksmf                         1/1     Running     0          14h
Hmm...? Something doesn't look ready yet...
$ kubectl --kubeconfig="$HOME/.kube/k8s-1-16-2-do-0-lon1-1575482940993-kubeconfig.yaml" -n qiita logs kafka-0 broker
...
[2019-12-05 09:16:00,372] WARN [RequestSendThread controllerId=0] Controller 0's connection to broker kafka-1.broker.qiita.svc.cluster.local:9092 (id: 1 rack: null) was unsuccessful (kafka.controller.RequestSendThread)
java.io.IOException: Connection to kafka-1.broker.qiita.svc.cluster.local:9092 (id: 1 rack: null) failed.
    at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70)
    at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:279)
    at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:233)
    at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
[2019-12-05 09:16:00,433] WARN [Controller id=0, targetBrokerId=2] Error connecting to node kafka-2.broker.qiita.svc.cluster.local:9092 (id: 2 rack: null) (org.apache.kafka.clients.NetworkClient)
java.io.IOException: Can't resolve address: kafka-2.broker.qiita.svc.cluster.local:9092
    at org.apache.kafka.common.network.Selector.doConnect(Selector.java:235)
    at org.apache.kafka.common.network.Selector.connect(Selector.java:214)
    at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:864)
    at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:265)
    at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64)
    at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:279)
    at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:233)
    at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
Caused by: java.nio.channels.UnresolvedAddressException
    at java.base/sun.nio.ch.Net.checkAddress(Net.java:112)
    at java.base/sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:622)
    at org.apache.kafka.common.network.Selector.doConnect(Selector.java:233)
    ... 7 more
...
Could this be it? The brokers address one another as kafka-N.broker.qiita.svc.cluster.local, yet there was no headless Service named broker in my manifests to provide those records. Here is the Service definition I added:
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/path: /metrics
    prometheus.io/port: "5555"
    prometheus.io/scrape: "true"
  name: broker
  namespace: qiita
spec:
  clusterIP: None
  publishNotReadyAddresses: true
  ports:
  - port: 9092
    protocol: TCP
    targetPort: 9092
  selector:
    app: kafka
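For context, a StatefulSet only gets those stable per-pod DNS records when it points at a headless Service (clusterIP: None) through spec.serviceName. The full kafka.yaml isn't reproduced here, but the relevant part of the StatefulSet presumably looks like this fragment:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
  namespace: qiita
spec:
  serviceName: broker  # must match the headless Service for kafka-N.broker records to exist
  replicas: 3
  selector:
    matchLabels:
      app: kafka
  # (pod template omitted)
Let's apply the fixed manifest.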
$ kubectl --kubeconfig="$HOME/.kube/k8s-1-16-2-do-0-lon1-1575482940993-kubeconfig.yaml" -n qiita apply -f kubernetes/kafka.yaml
service/broker created
service/kafka unchanged
poddisruptionbudget.policy/kafka unchanged
statefulset.apps/kafka configured
configmap/kafka-configmap unchanged
configmap/jmx-exporter-configmap unchanged
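As soon as the headless Service exists, the cluster DNS should publish one record per kafka pod. A quick sanity check is to list the Service's endpoints, which should show one address per broker (thanks to publishNotReadyAddresses: true, even pods still starting up appear):
$ kubectl --kubeconfig="$HOME/.kube/k8s-1-16-2-do-0-lon1-1575482940993-kubeconfig.yaml" -n qiita get endpoints broker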
$ kubectl --kubeconfig="$HOME/.kube/k8s-1-16-2-do-0-lon1-1575482940993-kubeconfig.yaml" -n qiita get pods
NAME                                          READY   STATUS      RESTARTS   AGE
cockroach-init-fwjld                          0/1     Completed   0          14h
cockroachdb-0                                 1/1     Running     1          14h
cockroachdb-1                                 1/1     Running     80         14h
cockroachdb-2                                 1/1     Running     0          14h
kafka-0                                       2/2     Running     0          6m7s
kafka-1                                       2/2     Running     0          5m30s
kafka-2                                       2/2     Running     0          5m16s
kafka-etcd-0                                  1/1     Running     29         14h
kafka-etcd-1                                  1/1     Running     0          14h
kafka-etcd-2                                  1/1     Running     1          14h
qiita-advent-calendar-2019-7d7d7b44b6-rgf9g   1/1     Running     0          14h
qiita-advent-calendar-2019-r6pkv              0/1     Completed   0          14h
zetcd-cc957f748-7nlxg                         1/1     Running     0          14h
zetcd-cc957f748-lksmf                         1/1     Running     0          14h
Sigh...
So in the end, the whole problem was a mistake in my own Service resource definitions. My apologies.
And that was a quick trial of DigitalOcean's entry in the space now known by the buzzword KaaS (Kubernetes as a Service).
It was only a brief test, but setup was remarkably simple, and it seems very handy whenever I need a small environment.
Today was a nice change of pace; tomorrow I plan to get back to using Kafka from the application. Alright!