Running Confluent Platform on Azure Kubernetes Service (AKS)

Summary


I'll walk through this in two steps.
Step 1: push the container images to ACR (a quick sketch follows below).
Step 2: run the container application on AKS.
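
Step 1 was covered in an earlier article; for reference, pushing one of the images to ACR looks roughly like this (a minimal sketch, assuming the registry acr0ituru used throughout this article and a working local Docker CLI):

## Log in to ACR (assumes an existing registry named acr0ituru)
$ az acr login --name acr0ituru

## Pull the upstream image, retag it for ACR, and push
$ docker pull confluentinc/cp-zookeeper:6.0.0
$ docker tag confluentinc/cp-zookeeper:6.0.0 acr0ituru.azurecr.io/cp-zookeeper:6.0.0
$ docker push acr0ituru.azurecr.io/cp-zookeeper:6.0.0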


Local environment

macOS Big Sur 11.3, Python 3.8.3.

Prerequisites

An AKS cluster using Azure CNI has already been built, as described in a previous article.
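
If kubectl is not yet pointed at that cluster, the credentials can be fetched with the Azure CLI (the same shell variables as in the appendix at the end of this article are assumed):

$ az aks get-credentials --name $AKS_CLUSTER_NAME -g $AKS_RES_GROUP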


Converting to Kubernetes manifests

Installing Kompose

$ curl -L https://github.com/kubernetes/kompose/releases/download/v1.22.0/kompose-darwin-amd64 -o kompose
$ chmod +x kompose
$ sudo mv ./kompose /usr/local/bin/kompose
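
Verify that the binary is on the PATH:

$ kompose version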

Editing the docker-compose.yml file

The docker-compose.yml file to be edited is shown below.

The edits are as follows:

    • Comment out the depends_on entries to avoid errors during conversion
    • Prefix the image definitions with acr0ituru.azurecr.io (except for control-center / schema-registry)
    • Comment out ksqldb-cli, since it is not a service
    • Add a LoadBalancer definition to rabbitmq / grafana, since they accept connections from outside

---
version: '2'
services:
  zookeeper:
    image: acr0ituru.azurecr.io/cp-zookeeper:6.0.0     <-- changed
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  broker:
    image: acr0ituru.azurecr.io/cp-server:6.0.0     <-- changed
    hostname: broker
    container_name: broker
    # depends_on:
    #   - zookeeper
    ports:
      - "29092:29092"
      - "9092:9092"
      - "9101:9101"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9101
      KAFKA_JMX_HOSTNAME: localhost
      KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:29092
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'

  schema-registry:
    image: confluentinc/cp-schema-registry:6.0.0
    # image: acr0ituru.azurecr.io/cp-schema-registry:6.0.0
    hostname: schema-registry
    container_name: schema-registry
    # depends_on:
    #   - broker
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'broker:29092'
      SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081

  connect:
    image: acr0ituru.azurecr.io/cp-connect-base:6.0.0     <-- changed
    hostname: connect
    container_name: connect
    # depends_on:
    #   - broker
    #   - schema-registry
    ports:
      - "8083:8083"
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 'broker:29092'
      CONNECT_REST_ADVERTISED_HOST_NAME: connect
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: compose-connect-group
      CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
      CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
      CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CLASSPATH: /usr/share/java/monitoring-interceptors/monitoring-interceptors-6.0.0.jar
      CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
      CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"
      CONNECT_LOG4J_LOGGERS: org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.apache.kafka.connect.runtime.rest=WARN,reflections=ERROR

  control-center:
    image: confluentinc/cp-enterprise-control-center:6.0.0
    hostname: control-center
    container_name: control-center
    # depends_on:
    #   - broker
    #   - schema-registry
    #   - connect
    #   - ksqldb-server
    ports:
      - "9021:9021"
    environment:
      CONTROL_CENTER_BOOTSTRAP_SERVERS: 'broker:29092'
      CONTROL_CENTER_CONNECT_CLUSTER: 'connect:8083'
      CONTROL_CENTER_KSQL_KSQLDB1_URL: "http://ksqldb-server:8088"
      CONTROL_CENTER_KSQL_KSQLDB1_ADVERTISED_URL: "http://localhost:8088"
      CONTROL_CENTER_REPLICATION_FACTOR: 1
      CONTROL_CENTER_INTERNAL_TOPICS_PARTITIONS: 1
      CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_PARTITIONS: 1
      CONFLUENT_METRICS_TOPIC_REPLICATION: 1
      PORT: 9021

  ksqldb-server:
    image: acr0ituru.azurecr.io/cp-ksqldb-server:6.0.0      <-- changed
    hostname: ksqldb-server
    container_name: ksqldb-server
    # depends_on:
    #   - broker
    #   - connect
    ports:
      - "8088:8088"
    environment:
      KSQL_CONFIG_DIR: "/etc/ksql"
      KSQL_BOOTSTRAP_SERVERS: "broker:29092"
      KSQL_HOST_NAME: ksqldb-server
      KSQL_LISTENERS: "http://0.0.0.0:8088"
      KSQL_CACHE_MAX_BYTES_BUFFERING: 0
      KSQL_KSQL_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      KSQL_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      KSQL_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
      KSQL_KSQL_CONNECT_URL: "http://connect:8083"
      KSQL_KSQL_LOGGING_PROCESSING_TOPIC_REPLICATION_FACTOR: 1
      KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE: 'true'
      KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE: 'true'

  # ksqldb-cli:
  #   image: acr0ituru.azurecr.io/cp-ksqldb-cli:6.0.0
  #   container_name: ksqldb-cli
  #   # depends_on:
  #   #   - broker
  #   #   - connect
  #   #   - ksqldb-server
  #   entrypoint: /bin/sh
  #   tty: true

  rabbitmq:
    image: acr0ituru.azurecr.io/rabbitmq:3.8.17     <-- changed
    restart: always
    ports:
      - '5672:5672'
      - '15672:15672'
    hostname: rabbitmq
    container_name: rabbitmq
    environment:
      RABBITMQ_DEFAULT_USER: guest
      RABBITMQ_DEFAULT_PASS: guest
    labels:                                        <-- added
      kompose.service.type: LoadBalancer           <-- added

  influxdb:
    image: acr0ituru.azurecr.io/influxdb:1.8.6     <-- changed
    ports:
      - 8086:8086
    hostname: influxdb
    container_name: influxdb

  grafana:
    image: acr0ituru.azurecr.io/grafana:8.0.6      <-- changed
    ports:
      - 3000:3000
    hostname: grafana
    container_name: grafana
    environment:
      - GF_SERVER_ROOT_URL=http://grafana:3000
      - GF_INSTALL_PLUGINS=grafana-polystat-panel,bessler-pictureit-panel,marcuscalidus-svg-panel
      - GF_SECURITY_ADMIN_PASSWORD=admin
    # depends_on:
    #   - influxdb
    labels:                                        <-- added
      kompose.service.type: LoadBalancer           <-- added

Generating the Kubernetes manifests (running the conversion)

$ kompose convert -f docker-compose.yml   

INFO Kubernetes file "broker-service.yaml" created 
INFO Kubernetes file "connect-service.yaml" created 
INFO Kubernetes file "control-center-service.yaml" created 
INFO Kubernetes file "grafana-service.yaml" created 
INFO Kubernetes file "influxdb-service.yaml" created 
INFO Kubernetes file "ksqldb-server-service.yaml" created 
INFO Kubernetes file "rabbitmq-service.yaml" created 
INFO Kubernetes file "schema-registry-service.yaml" created 
INFO Kubernetes file "zookeeper-service.yaml" created 
INFO Kubernetes file "broker-deployment.yaml" created 
INFO Kubernetes file "connect-deployment.yaml" created 
INFO Kubernetes file "control-center-deployment.yaml" created 
INFO Kubernetes file "grafana-deployment.yaml" created 
INFO Kubernetes file "influxdb-deployment.yaml" created 
INFO Kubernetes file "ksqldb-server-deployment.yaml" created 
INFO Kubernetes file "rabbitmq-deployment.yaml" created 
INFO Kubernetes file "schema-registry-deployment.yaml" created 
INFO Kubernetes file "zookeeper-deployment.yaml" created 

Editing the manifests

Editing the schema-registry manifest

Add a workaround for a known issue (the command: section below).

apiVersion: apps/v1
kind: Deployment
metadata:
     
    (snip)
     
spec:
  replicas: 1
     
    (snip)
     
    spec:
      containers:
        - command:                                                    <-- added
            - bash                                                    <-- added
            - -c                                                      <-- added
            - unset SCHEMA_REGISTRY_PORT; /etc/confluent/docker/run   <-- added
          env:
             
            (snip)
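
For reference, the issue being worked around here is that Kubernetes injects service-link environment variables such as SCHEMA_REGISTRY_PORT into Pods once the schema-registry Service exists, and the Confluent image treats anything named SCHEMA_REGISTRY_* as configuration. The injected variable can be inspected once the Pods are running (a sketch; assumes the akscp02 namespace created below):

$ kubectl exec deploy/schema-registry -n akscp02 -- env | grep SCHEMA_REGISTRY_PORT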
             

Editing the RabbitMQ / Grafana manifests

Add the internal load balancer definition

apiVersion: v1
kind: Service
metadata:
  annotations:
      
      (snip)
      
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"    <-- added
  creationTimestamp: null
  labels:
    io.kompose.service: rabbitmq
  name: rabbitmq
       
      (snip)
       
apiVersion: v1
kind: Service
metadata:
  annotations:
      
      (snip)
      
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"    <-- added
  creationTimestamp: null
  labels:
    io.kompose.service: grafana
  name: grafana
       
      (snip)
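
If the Services have already been created without this annotation, it can also be added in place with kubectl; AKS should then reconcile the Service onto an internal load balancer (a sketch, assuming the akscp02 namespace used below):

$ kubectl annotate svc rabbitmq -n akscp02 service.beta.kubernetes.io/azure-load-balancer-internal=true --overwrite
$ kubectl annotate svc grafana -n akscp02 service.beta.kubernetes.io/azure-load-balancer-internal=true --overwrite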
       

Starting the Pods on AKS


Check the nodes running in the cluster.

$ kubectl get node -o wide

aks-nodepool1-83327242-vmss000000   Ready    agent   29h   v1.20.9   10.0.1.4      <none>        Ubuntu 18.04.5 LTS   5.4.0-1055-azure   containerd://1.4.8+azure
aks-nodepool1-83327242-vmss000001   Ready    agent   29h   v1.20.9   10.0.1.35     <none>        Ubuntu 18.04.5 LTS   5.4.0-1055-azure   containerd://1.4.8+azure
aks-nodepool1-83327242-vmss000002   Ready    agent   29h   v1.20.9   10.0.1.66     <none>        Ubuntu 18.04.5 LTS   5.4.0-1055-azure   containerd://1.4.8+azure

Defining the namespace

$ kubectl create namespace akscp02
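
Optionally, make akscp02 the default namespace for the current context so that -n akscp02 can be omitted from later commands:

$ kubectl config set-context --current --namespace=akscp02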

Creating the Pods

kubectl apply -f rabbitmq-service.yaml,rabbitmq-deployment.yaml -n akscp02
kubectl apply -f influxdb-service.yaml,influxdb-deployment.yaml -n akscp02
kubectl apply -f grafana-service.yaml,grafana-deployment.yaml -n akscp02
kubectl apply -f zookeeper-service.yaml,zookeeper-deployment.yaml -n akscp02
kubectl apply -f broker-service.yaml,broker-deployment.yaml -n akscp02
kubectl apply -f schema-registry-service.yaml,schema-registry-deployment.yaml -n akscp02
kubectl apply -f connect-service.yaml,connect-deployment.yaml -n akscp02
kubectl apply -f ksqldb-server-service.yaml,ksqldb-server-deployment.yaml -n akscp02
kubectl apply -f control-center-service.yaml,control-center-deployment.yaml -n akscp02
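
Because the depends_on entries were commented out, the Pods start in no particular order and may restart a few times until their dependencies come up. One way to block until everything is Ready (a sketch):

$ kubectl wait --for=condition=Ready pod --all -n akscp02 --timeout=300s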

Checking the Pods

$ kubectl get pods -n akscp02 -o wide

NAME                               READY   STATUS    RESTARTS   AGE     IP          NODE                                NOMINATED NODE   READINESS GATES
broker-84b88df749-4gnwc            1/1     Running   0          3m28s   10.0.1.45   aks-nodepool1-83327242-vmss000001   <none>           <none>
connect-6457f5587c-9qj5w           1/1     Running   0          3m27s   10.0.1.11   aks-nodepool1-83327242-vmss000000   <none>           <none>
control-center-7f6fdd6957-kvk5p    1/1     Running   0          3m27s   10.0.1.74   aks-nodepool1-83327242-vmss000002   <none>           <none>
grafana-54d5cf546c-kmx9k           1/1     Running   0          3m29s   10.0.1.12   aks-nodepool1-83327242-vmss000000   <none>           <none>
influxdb-79cb78575-vhhmn           1/1     Running   0          3m29s   10.0.1.77   aks-nodepool1-83327242-vmss000002   <none>           <none>
ksqldb-server-5df9df4f54-dtv27     1/1     Running   0          3m27s   10.0.1.55   aks-nodepool1-83327242-vmss000001   <none>           <none>
rabbitmq-85f7fddfb-z5sxj           1/1     Running   0          3m30s   10.0.1.37   aks-nodepool1-83327242-vmss000001   <none>           <none>
schema-registry-65c9fbbc4b-rmmks   1/1     Running   0          3m28s   10.0.1.93   aks-nodepool1-83327242-vmss000002   <none>           <none>
zookeeper-69655fbfcf-p5hgw         1/1     Running   0          3m29s   10.0.1.38   aks-nodepool1-83327242-vmss000001   <none>           <none>

Checking the Services

$ kubectl get svc -n akscp02 -o wide

broker            ClusterIP      10.1.0.60    <none>        29092/TCP,9092/TCP,9101/TCP      3m34s   io.kompose.service=broker
connect           ClusterIP      10.1.0.106   <none>        8083/TCP                         3m33s   io.kompose.service=connect
control-center    ClusterIP      10.1.0.123   <none>        9021/TCP                         3m33s   io.kompose.service=control-center
grafana           LoadBalancer   10.1.0.119   10.0.1.98     3000:32300/TCP                   3m35s   io.kompose.service=grafana
influxdb          ClusterIP      10.1.0.61    <none>        8086/TCP                         3m35s   io.kompose.service=influxdb
ksqldb-server     ClusterIP      10.1.0.83    <none>        8088/TCP                         3m33s   io.kompose.service=ksqldb-server
rabbitmq          LoadBalancer   10.1.0.157   10.0.1.97     5672:31096/TCP,15672:31606/TCP   3m36s   io.kompose.service=rabbitmq
schema-registry   ClusterIP      10.1.0.175   <none>        8081/TCP                         3m34s   io.kompose.service=schema-registry
zookeeper         ClusterIP      10.1.0.112   <none>        2181/TCP                         3m35s   io.kompose.service=zookeeper

Viewing Service / Pod details

## Detailed check of a Service (example: grafana)
$ kubectl describe svc grafana -n akscp02

## Detailed check of a Pod (example: grafana)
$ kubectl describe pod grafana -n akscp02 

Checking connectivity to the Pods


Connecting to a VM on the same subnet

Following the steps in a previous article, create a Windows 10 Pro VM on the same subnet and connect to it via RDP from the local device.

Verifying from the VM

Open a console, check the network information, and then use curl to verify connectivity to grafana (IP: 10.0.1.98, port: 3000).

## Network information
C:\Users\nmcadmin> ipconfig
Windows IP Configuration
Ethernet adapter Ethernet 2:
   Connection-specific DNS Suffix  . : hogehogehogehogehoge.lx.internal.cloudapp.net
   Link-local IPv6 Address . . . . . : ff88::a55a:e4e4:9363:4a4a%7
   IPv4 Address. . . . . . . . . . . : 10.0.1.99
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . : 10.0.1.1

## Connectivity check to grafana
C:\Users\nmcadmin> curl http://10.0.1.98:3000
<a href="/login">Found</a>.
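
The same kind of check works for the RabbitMQ management UI, using the LoadBalancer IP from the service list above:

## Connectivity check to rabbitmq (management UI)
C:\Users\nmcadmin> curl http://10.0.1.97:15672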

Conclusion

Converting the docker-compose.yml file with Kompose generated the Deployment and Service manifests, and applying them with kubectl apply in the AKS with Azure CNI environment confirmed that the containers run correctly.


Appendix

Viewing Pod logs

kubectl logs schema-registry-65c9fbbc4b-rmmks -n akscp02
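
Because the Pod name suffix changes on every rollout, the kompose-generated label is a more stable handle (a sketch):

kubectl logs -l io.kompose.service=schema-registry -n akscp02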

Deleting the Pods

kubectl delete -f control-center-service.yaml,control-center-deployment.yaml -n akscp02
kubectl delete -f ksqldb-server-service.yaml,ksqldb-server-deployment.yaml -n akscp02
kubectl delete -f connect-service.yaml,connect-deployment.yaml -n akscp02
kubectl delete -f schema-registry-service.yaml,schema-registry-deployment.yaml -n akscp02
kubectl delete -f broker-service.yaml,broker-deployment.yaml -n akscp02
kubectl delete -f zookeeper-service.yaml,zookeeper-deployment.yaml -n akscp02
kubectl delete -f grafana-service.yaml,grafana-deployment.yaml -n akscp02
kubectl delete -f influxdb-service.yaml,influxdb-deployment.yaml -n akscp02
kubectl delete -f rabbitmq-service.yaml,rabbitmq-deployment.yaml -n akscp02

Stopping, starting, and deleting the AKS cluster

az aks stop --name $AKS_CLUSTER_NAME -g $AKS_RES_GROUP
az aks start --name $AKS_CLUSTER_NAME -g $AKS_RES_GROUP
az aks delete --name $AKS_CLUSTER_NAME -g $AKS_RES_GROUP
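
After a stop or start, the cluster's power state can be confirmed with (a sketch; powerState is part of the az aks show output):

az aks show --name $AKS_CLUSTER_NAME -g $AKS_RES_GROUP --query powerState.code -o tsv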

Deleting the AKS resource group

az group delete --name $AKS_RES_GROUP --yes --no-wait