Deploying Confluent Platform with Podman pods

Introduction

As background: we decided to deploy Confluent Platform on RHEL (Red Hat Enterprise Linux); the details of how that decision was made are omitted here. We used the Docker container images published by Confluent. For operability and portability, we wanted to manage the multiple containers together with Docker Compose or Podman pods. Since RHEL 8 dropped support for Docker in favor of Podman, we went with Podman pods.

This article records, as a memo, the log of deploying Confluent Platform with Podman pods. Even if you don't use Confluent Platform, the notes on working with Podman pods may still be useful.

References

    • Moving from docker-compose to Podman pods
    • Podmanのポッドとコンテナ作成手順の覚書 (a memo on Podman pod and container creation steps)

Environment

    • Red Hat Enterprise Linux release 8.5 (Ootpa)
    • Confluent Platform 7.0.1

Installation Log

Preparation

Check port availability

First, check that the ports each component plans to use are actually free.

    Example: check ZooKeeper's default port, 2181
$ curl http://localhost:2181
curl: (7) Failed to connect to localhost port 2181: Connection refused
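
If there are many ports to check, it may be quicker to list all TCP listeners at once instead of curling each port. A minimal sketch using ss; the port list matches the components deployed below:

$ ss -tln | grep -E ':(2181|8081|8083|8088|9021|9092|10091)'
# no output means none of the planned ports are currently in use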

Create external volumes (directories on the host for ZooKeeper and Kafka data)

We initially skipped external volumes. After working with the pod for a while, we needed to change some component parameters and found that Podman does not allow a pod to be modified after creation; the pod has to be recreated. When a pod is recreated without external volumes mounted, all of its data is lost. So even for a proof of concept, I recommend mounting external volumes from the start. For how to create them, see here; reference information is also provided at the bottom of this article.
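
Besides host directories (the approach used later in this article), Podman named volumes are another way to keep data outside the pod. A minimal sketch, with illustrative volume names:

$ sudo podman volume create zk-data
$ sudo podman volume create kafka-data
# mount them at container start, e.g.: podman run -v zk-data:/var/lib/zookeeper/data ...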

Installation

The workflow for Podman pods is as follows (a minimal end-to-end sketch follows below).

    • Create an empty pod with podman pod create
    • Start the containers inside that pod with podman run
    • Export the pod configuration to a YAML file with podman generate kube

Once the configuration is saved as YAML, the pod can be started right away next time, and the file can be reused to deploy to other environments: podman play kube runs the pod from the specified YAML file.

If you are comfortable writing YAML, it may be faster to skip podman run and write the YAML file from scratch.
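
To make the flow concrete before applying it to Confluent Platform, here is a minimal sketch with a throwaway pod; the pod name, container name, image, and port are illustrative:

$ sudo podman pod create --name demo -p 8080:80/tcp        # empty pod
$ sudo podman run -d --pod demo --name demo-web docker.io/library/nginx:alpine
$ sudo podman generate kube demo > demo.yaml               # export the configuration
$ sudo podman pod stop demo && sudo podman pod rm demo
$ sudo podman play kube demo.yaml                          # recreate from YAML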

Create an empty pod with podman pod create

    • Create the pod: confluent

All ports that the containers will use must be specified here, since a pod's published ports cannot be changed after creation.

$ sudo podman pod create --name confluent -p 2181:2181/tcp,9092:9092/tcp,8081:8081/tcp,8088:8088/tcp,9021:9021/tcp,8083:8083/tcp,10091:10091/tcp
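
At this point the pod only contains its infra container; you can verify the pod and its published ports before adding the components:

$ sudo podman pod ps
$ sudo podman pod inspect confluent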

Start each component's container in the pod with podman run

We used the Docker container images published by Confluent. For Kafka Connect, we built a separate image whose Dockerfile runs the confluent-hub install commands for the JDBC Source/Sink Connector and the Elasticsearch Sink Connector (shown below for reference).

Each component's parameters can be specified dynamically as environment variables, so pass the ones you need. See here for details on the parameters.

ZooKeeper
$ sudo podman run -d --pod=confluent -e ZOOKEEPER_CLIENT_PORT="2181" -e KAFKA_OPTS="-Dlog4j.configuration=file:/etc/kafka/log4j.properties" --name=ice-zookeeper docker.io/confluentinc/cp-zookeeper:7.0.1
Kafka
$ sudo podman run -d --pod confluent -e KAFKA_ZOOKEEPER_CONNECT="localhost:2181" -e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP="PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT" -e KAFKA_ADVERTISED_LISTENERS="PLAINTEXT://localhost:9092,EXTERNAL://xx.xx.xx.xx:10091" -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR="1" -e KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR="1" -e KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR="1" -e KAFKA_TRANSACTION_STATE_LOG_MIN_ISR="1" -e KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR="1" -e CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS="1" -e KAFKA_OPTS="-Dlog4j.configuration=file:/etc/kafka/log4j.properties" --name ice-broker docker.io/confluentinc/cp-server:7.0.1
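
Once ZooKeeper and the broker are up, one quick smoke test is listing topics from inside the broker container (the kafka-topics CLI ships in the cp-server image):

$ sudo podman exec ice-broker kafka-topics --bootstrap-server localhost:9092 --list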
Kafka Connect
$ sudo podman run -d --name ice-kafka-connect --pod confluent -e CONNECT_BOOTSTRAP_SERVERS="localhost:9092" -e CONNECT_LISTENERS="http://0.0.0.0:8083" -e CONNECT_GROUP_ID="ice-connect-cluster" -e CONNECT_CONFIG_STORAGE_TOPIC="connect-configs" -e CONNECT_OFFSET_STORAGE_TOPIC="connect-offsets" -e CONNECT_STATUS_STORAGE_TOPIC="connect-statuses" -e CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR="1" -e CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR="1" -e CONNECT_STATUS_STORAGE_REPLICATION_FACTOR="1" -e CONNECT_KEY_CONVERTER="io.confluent.connect.avro.AvroConverter" -e CONNECT_VALUE_CONVERTER="io.confluent.connect.avro.AvroConverter" -e CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL="http://localhost:8081" -e CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL="http://localhost:8081" -e CONNECT_INTERNAL_KEY_CONVERTER="io.confluent.connect.avro.AvroConverter" -e CONNECT_INTERNAL_VALUE_CONVERTER="io.confluent.connect.avro.AvroConverter" -e CONNECT_REST_ADVERTISED_HOST_NAME="connect" -e CONNECT_PLUGIN_PATH="/usr/share/confluent-hub-components" -e KAFKA_OPTS="-Dlog4j.configuration=file:/etc/kafka/connect-log4j.properties" docker.io/xxxxxxxxx/cp-connector-jdbc-elasticsearch:latest
    (Reference) The Dockerfile we created (docker.io/xxxxxxxxx/cp-connector-jdbc-elasticsearch:latest)
FROM docker.io/confluentinc/cp-server-connect:7.0.1
RUN confluent-hub install --no-prompt confluentinc/kafka-connect-jdbc:10.3.3 \
  && confluent-hub install --no-prompt confluentinc/kafka-connect-elasticsearch:11.1.8
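
For completeness, the connector image can be built and pushed with Podman itself; the repository name here is the same placeholder as above:

$ sudo podman build -t docker.io/xxxxxxxxx/cp-connector-jdbc-elasticsearch:latest .
$ sudo podman push docker.io/xxxxxxxxx/cp-connector-jdbc-elasticsearch:latest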
Schema Registry
$ sudo podman run -d --pod confluent -e SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS="PLAINTEXT://localhost:9092" -e SCHEMA_REGISTRY_LISTENERS="http://0.0.0.0:8081" -e SCHEMA_REGISTRY_HOST_NAME="schema-registry" -e KAFKA_OPTS="-Dlog4j.configuration=file:/etc/schema-registry/log4j.properties" --name ice-schema-registry docker.io/confluentinc/cp-schema-registry:7.0.1
ksqlDB
$ sudo podman run -d --name ice-ksqldb-server --pod confluent -e KSQL_BOOTSTRAP_SERVERS="localhost:9092" -e KSQL_HOST_NAME="ksqldb-server" -e KSQL_LISTENERS="http://0.0.0.0:8088" -e KSQL_KSQL_SCHEMA_REGISTRY_URL="http://localhost:8081" -e KSQL_KSQL_CONNECT_URL="http://localhost:8083" -e KAFKA_OPTS="-Dlog4j.configuration=file:/etc/ksqldb-server/log4j.properties" docker.io/confluentinc/cp-ksqldb-server:7.0.1
Control Center
$ sudo podman run -d --name ice-control-center --pod confluent -e CONTROL_CENTER_BOOTSTRAP_SERVERS="localhost:9092" -e CONTROL_CENTER_REPLICATION_FACTOR="1" -e CONTROL_CENTER_CONNECT_CONNECT1_CLUSTER="http://localhost:8083" -e CONTROL_CENTER_KSQL_KSQLDB1_URL="http://localhost:8088" -e CONTROL_CENTER_SCHEMA_REGISTRY_URL="http://localhost:8081" -e KAFKA_OPTS="-Dlog4j.configuration=file:/etc/confluent-control-center/log4j.properties" docker.io/confluentinc/cp-enterprise-control-center:7.0.1

Check the containers' status (podman ps) and logs (podman logs <container ID/name>); if nothing looks wrong, the installation is done.
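
Beyond podman ps and podman logs, the services' standard REST endpoints make for a quick smoke test:

$ curl http://localhost:8083/connectors   # Kafka Connect: [] on a fresh install
$ curl http://localhost:8081/subjects     # Schema Registry: [] on a fresh install
$ curl http://localhost:8088/info         # ksqlDB: server info as JSON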

Export the YAML file

Export the pod configuration to a YAML file with podman generate kube.

    Export the YAML file for pod: confluent
$ sudo podman generate kube confluent > ice-confluent.yaml
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-3.4.2
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2022-03-07T13:18:53Z"
  labels:
    app: confluent
  name: confluent
spec:
  containers:
  - image: docker.io/confluentinc/cp-zookeeper:7.0.1
    name: confluent-ice-zookeeper
    ports:
    - containerPort: 2181
      hostPort: 2181
    - containerPort: 9092
      hostPort: 9092
    - containerPort: 10091
      hostPort: 10091
    - containerPort: 8083
      hostPort: 8083
    - containerPort: 8081
      hostPort: 8081
    - containerPort: 8088
      hostPort: 8088
    - containerPort: 9021
      hostPort: 9021
    resources: {}
    securityContext:
      capabilities:
        drop:
        - CAP_MKNOD
        - CAP_AUDIT_WRITE
    volumeMounts:
    - mountPath: /etc/zookeeper/secrets
      name: edab71c22fdaed4a4df998340874e1a44c813378c7e761336578206298cfa7b2-pvc
    - mountPath: /var/lib/zookeeper/data
      name: eba9189ae8e18ed4746b735e86c794c950ee4c732ecc361f8e9f78571a6091cb-pvc
    - mountPath: /var/lib/zookeeper/log
      name: 074de8171cd8e478c8b2248552333cd1e5d01d66bd8caa49e58c69ac4451962f-pvc
  - image: docker.io/confluentinc/cp-server:7.0.1
    name: confluent-ice-broker
    resources: {}
    securityContext:
      capabilities:
        drop:
        - CAP_MKNOD
        - CAP_AUDIT_WRITE
    volumeMounts:
    - mountPath: /etc/kafka/secrets
      name: c80e733ce6255244040cd1829d68fec6a7c23485624e881b797ee26f82689ee0-pvc
    - mountPath: /var/lib/kafka/data
      name: eed423b5242e3d5bb35117493c8933ff2db7096ec45fc53f9f88024207aefbe1-pvc
  - image: docker.io/xxxxxxxxx/cp-connector-jdbc-elasticsearch:latest
    name: confluent-ice-kafka-connect
    resources: {}
    securityContext:
      capabilities:
        drop:
        - CAP_MKNOD
        - CAP_AUDIT_WRITE
    volumeMounts:
    - mountPath: /etc/kafka/secrets
      name: 279968e5e8d44904508a460433e6017f3680c9ddcff096341900f7da536eda2b-pvc
    - mountPath: /var/lib/kafka/data
      name: 67a7342ed0e77fd58a6c85433bff8656ea91c4c94edbbb250c66b4a4d8a29bd2-pvc
    - mountPath: /etc/kafka-connect/jars
      name: 649e2e7e7a7b30b7955e1c210daef2776e6c226520855108eddf470326578f53-pvc
    - mountPath: /etc/kafka-connect/secrets
      name: eba230fb252f453fdaf383c27b260eb5026c32b5d31883a9bc11996c15b89bc8-pvc
  - image: docker.io/confluentinc/cp-schema-registry:7.0.1
    name: confluent-ice-schema-registry
    resources: {}
    securityContext:
      capabilities:
        drop:
        - CAP_MKNOD
        - CAP_AUDIT_WRITE
    volumeMounts:
    - mountPath: /etc/schema-registry/secrets
      name: 0364489a7dbd67de77b1cbbae2f0a33d135a7d28168c5d04eb47fce3cdac1daa-pvc
  - image: docker.io/confluentinc/cp-ksqldb-server:7.0.1
    name: confluent-ice-ksqldb-server
    resources: {}
    securityContext:
      capabilities:
        drop:
        - CAP_MKNOD
        - CAP_AUDIT_WRITE
  - image: docker.io/confluentinc/cp-enterprise-control-center:7.0.1
    name: confluent-ice-control-center
    resources: {}
    securityContext:
      capabilities:
        drop:
        - CAP_MKNOD
        - CAP_AUDIT_WRITE
  restartPolicy: Always
  volumes:
  - name: eed423b5242e3d5bb35117493c8933ff2db7096ec45fc53f9f88024207aefbe1-pvc
    persistentVolumeClaim:
      claimName: eed423b5242e3d5bb35117493c8933ff2db7096ec45fc53f9f88024207aefbe1
  - name: 649e2e7e7a7b30b7955e1c210daef2776e6c226520855108eddf470326578f53-pvc
    persistentVolumeClaim:
      claimName: 649e2e7e7a7b30b7955e1c210daef2776e6c226520855108eddf470326578f53
  - name: eba230fb252f453fdaf383c27b260eb5026c32b5d31883a9bc11996c15b89bc8-pvc
    persistentVolumeClaim:
      claimName: eba230fb252f453fdaf383c27b260eb5026c32b5d31883a9bc11996c15b89bc8
  - name: edab71c22fdaed4a4df998340874e1a44c813378c7e761336578206298cfa7b2-pvc
    persistentVolumeClaim:
      claimName: edab71c22fdaed4a4df998340874e1a44c813378c7e761336578206298cfa7b2
  - name: eba9189ae8e18ed4746b735e86c794c950ee4c732ecc361f8e9f78571a6091cb-pvc
    persistentVolumeClaim:
      claimName: eba9189ae8e18ed4746b735e86c794c950ee4c732ecc361f8e9f78571a6091cb
  - name: 074de8171cd8e478c8b2248552333cd1e5d01d66bd8caa49e58c69ac4451962f-pvc
    persistentVolumeClaim:
      claimName: 074de8171cd8e478c8b2248552333cd1e5d01d66bd8caa49e58c69ac4451962f
  - name: c80e733ce6255244040cd1829d68fec6a7c23485624e881b797ee26f82689ee0-pvc
    persistentVolumeClaim:
      claimName: c80e733ce6255244040cd1829d68fec6a7c23485624e881b797ee26f82689ee0
  - name: 279968e5e8d44904508a460433e6017f3680c9ddcff096341900f7da536eda2b-pvc
    persistentVolumeClaim:
      claimName: 279968e5e8d44904508a460433e6017f3680c9ddcff096341900f7da536eda2b
  - name: 67a7342ed0e77fd58a6c85433bff8656ea91c4c94edbbb250c66b4a4d8a29bd2-pvc
    persistentVolumeClaim:
      claimName: 67a7342ed0e77fd58a6c85433bff8656ea91c4c94edbbb250c66b4a4d8a29bd2
  - name: 0364489a7dbd67de77b1cbbae2f0a33d135a7d28168c5d04eb47fce3cdac1daa-pvc
    persistentVolumeClaim:
      claimName: 0364489a7dbd67de77b1cbbae2f0a33d135a7d28168c5d04eb47fce3cdac1daa
status: {}

Unlike the YAML files shown in the reference articles (Moving from docker-compose to Podman pods, Podmanのポッドとコンテナ作成手順の覚書), the generated file did not contain the environment variables and other settings specified at container creation, so they could not be checked from the YAML.

I couldn't figure out why… so I decided to fix the file by hand.

    Fix the YAML file (add the environment variables, etc.)
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-3.4.2
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2022-03-03T07:50:29Z"
  labels:
    app: confluent
  name: confluent
spec:
  containers:
  - image: docker.io/confluentinc/cp-zookeeper:7.0.1
    name: ice-zookeeper
    ports:
    - containerPort: 2181
      hostPort: 2181
    env:
    - name: ZOOKEEPER_CLIENT_PORT
      value: "2181"
    - name: KAFKA_OPTS
      value: -Dlog4j.configuration=file:/etc/kafka/log4j.properties
  - image: docker.io/confluentinc/cp-server:7.0.1
    name: ice-broker
    ports:
    - containerPort: 9092
      hostPort: 9092
    - containerPort: 10091
      hostPort: 10091
    env:
    - name: KAFKA_ZOOKEEPER_CONNECT
      value: localhost:2181
    - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
      value: PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT
    - name: KAFKA_ADVERTISED_LISTENERS
      value: PLAINTEXT://localhost:9092,EXTERNAL://xx.xx.xx.xx:10091
    - name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR
      value: "1"
    - name: KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR
      value: "1"
    - name: KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR
      value: "1"
    - name: KAFKA_TRANSACTION_STATE_LOG_MIN_ISR
      value: "1"
    - name: KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR
      value: "1"
    - name: CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS
      value: "1"
    - name: KAFKA_OPTS
      value: -Dlog4j.configuration=file:/etc/kafka/log4j.properties
  - image: docker.io/xxxxxxxxx/cp-connector-jdbc-elasticsearch:latest
    name: ice-kafka-connect
    ports:
    - containerPort: 8083
      hostPort: 8083
    env:
    - name: CONNECT_BOOTSTRAP_SERVERS
      value: localhost:9092
    - name: CONNECT_LISTENERS
      value: http://0.0.0.0:8083
    - name: CONNECT_GROUP_ID
      value: ice-connect-cluster
    - name: CONNECT_CONFIG_STORAGE_TOPIC
      value: connect-configs
    - name: CONNECT_OFFSET_STORAGE_TOPIC
      value: connect-offsets
    - name: CONNECT_STATUS_STORAGE_TOPIC
      value: connect-statuses
    - name: CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR
      value: "1"
    - name: CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR
      value: "1"
    - name: CONNECT_STATUS_STORAGE_REPLICATION_FACTOR
      value: "1"
    - name: CONNECT_KEY_CONVERTER
      value: io.confluent.connect.avro.AvroConverter
    - name: CONNECT_VALUE_CONVERTER
      value: io.confluent.connect.avro.AvroConverter
    - name: CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL
      value: http://localhost:8081
    - name: CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL
      value: http://localhost:8081
    - name: CONNECT_INTERNAL_KEY_CONVERTER
      value: io.confluent.connect.avro.AvroConverter
    - name: CONNECT_INTERNAL_VALUE_CONVERTER
      value: io.confluent.connect.avro.AvroConverter
    - name: CONNECT_REST_ADVERTISED_HOST_NAME
      value: connect
    - name: CONNECT_PLUGIN_PATH
      value: /usr/share/confluent-hub-components
    - name: KAFKA_OPTS
      value: -Dlog4j.configuration=file:/etc/kafka/connect-log4j.properties 
  - image: docker.io/confluentinc/cp-schema-registry:7.0.1
    name: ice-schema-registry
    ports:
    - containerPort: 8081
      hostPort: 8081
    env:
    - name: SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS
      value: localhost:9092
    - name: SCHEMA_REGISTRY_LISTENERS
      value: http://0.0.0.0:8081
    - name: SCHEMA_REGISTRY_HOST_NAME
      value: schema-registry
    - name: KAFKA_OPTS
      value: -Dlog4j.configuration=file:/etc/schema-registry/log4j.properties
  - image: docker.io/confluentinc/cp-ksqldb-server:7.0.1
    name: ice-ksqldb-server
    ports:
    - containerPort: 8088
      hostPort: 8088
    env:
    - name: KSQL_BOOTSTRAP_SERVERS
      value: localhost:9092
    - name: KSQL_HOST_NAME
      value: ksqldb-server
    - name: KSQL_LISTENERS
      value: http://0.0.0.0:8088
    - name: KSQL_KSQL_SCHEMA_REGISTRY_URL
      value: http://localhost:8081
    - name: KSQL_KSQL_CONNECT_URL
      value: http://localhost:8083
    - name: KAFKA_OPTS
      value: -Dlog4j.configuration=file:/etc/ksqldb-server/log4j.properties
  - image: docker.io/confluentinc/cp-enterprise-control-center:7.0.1
    name: ice-control-center
    ports:
    - containerPort: 9021
      hostPort: 9021
    env:
    - name: CONTROL_CENTER_BOOTSTRAP_SERVERS
      value: localhost:9092
    - name: CONTROL_CENTER_REPLICATION_FACTOR
      value: "1"
    - name: CONTROL_CENTER_CONNECT_CONNECT1_CLUSTER
      value: http://localhost:8083
    - name: CONTROL_CENTER_KSQL_KSQLDB1_URL
      value: http://localhost:8088
    - name: CONTROL_CENTER_SCHEMA_REGISTRY_URL
      value: http://localhost:8081
    - name: KAFKA_OPTS
      value: -Dlog4j.configuration=file:/etc/confluent-control-center/log4j.properties

Run the pod from the YAML file with podman play kube

    Delete the current pod and recreate it from the YAML file
$ sudo podman pod stop confluent
$ sudo podman pod rm confluent
$ sudo podman play kube ice-confluent-fix.yaml

Check the containers' status (podman ps) and logs (podman logs <container ID/name>); if everything looks fine, you're done.
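
To spot-check that the environment variables in the YAML actually reached the containers, you can inspect a container's environment. Note that podman play kube prefixes container names with the pod name (confirm the exact name with podman ps):

$ sudo podman exec confluent-ice-zookeeper env | grep ZOOKEEPER_CLIENT_PORT
ZOOKEEPER_CLIENT_PORT=2181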

(Reference) Mounting external volumes

I followed the steps described here. Those steps specify the external volumes when starting the containers, but since I had already created the YAML file, I added the volumes to the YAML instead.

    Create the directories on the host (under /mnt, matching the hostPath entries below)
# mkdir -p vol1/zk-data
# mkdir -p vol2/zk-txn-logs
# mkdir -p vol3/kafka-data
    Change the permissions so the containers' user can read and write (777 here, since this is just for testing)
# chmod -R 777 vol1/zk-data
# chmod -R 777 vol2/zk-txn-logs
# chmod -R 777 vol3/kafka-data
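
On RHEL, SELinux can still block container access to plain host directories even with 777 permissions. If the mounts fail with permission errors, relabeling the directories for container use is one option (chcon shown here; the :Z suffix on podman run -v mounts is the equivalent when not going through YAML):

# chcon -R -t container_file_t /mnt/vol1 /mnt/vol2 /mnt/vol3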
    Edit the YAML file

I added .spec.containers[].volumeMounts (to ZooKeeper and Kafka) and .spec.volumes to the YAML file.

# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-3.4.2
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2022-03-03T07:50:29Z"
  labels:
    app: confluent
  name: confluent
spec:
  containers:
  - image: docker.io/confluentinc/cp-zookeeper:7.0.1
    name: ice-zookeeper
    ports:
    - containerPort: 2181
      hostPort: 2181
    env:
    - name: ZOOKEEPER_CLIENT_PORT
      value: "2181"
    - name: KAFKA_OPTS
      value: -Dlog4j.configuration=file:/etc/kafka/log4j.properties
    # added
    volumeMounts: 
    - mountPath: /var/lib/zookeeper/data
      name: zk-data
    - mountPath: /var/lib/zookeeper/log
      name: zk-txn-logs
  - image: docker.io/confluentinc/cp-server:7.0.1
    name: ice-broker
    ports:
    - containerPort: 9092
      hostPort: 9092
    - containerPort: 10091
      hostPort: 10091
    env:
    - name: KAFKA_ZOOKEEPER_CONNECT
      value: localhost:2181
    - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
      value: PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT
    - name: KAFKA_ADVERTISED_LISTENERS
      value: PLAINTEXT://localhost:9092,EXTERNAL://xx.xx.xx.xx:10091
    - name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR
      value: "1"
    - name: KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR
      value: "1"
    - name: KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR
      value: "1"
    - name: KAFKA_TRANSACTION_STATE_LOG_MIN_ISR
      value: "1"
    - name: KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR
      value: "1"
    - name: CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS
      value: "1"
    - name: KAFKA_OPTS
      value: -Dlog4j.configuration=file:/etc/kafka/log4j.properties
    # added
    volumeMounts: 
    - mountPath: /var/lib/kafka/data
      name: kafka-data
  - image: docker.io/xxxxxxxxx/cp-connector-jdbc-elasticsearch:latest
    name: ice-kafka-connect
    ports:
    - containerPort: 8083
      hostPort: 8083
    env:
    - name: CONNECT_BOOTSTRAP_SERVERS
      value: localhost:9092
    - name: CONNECT_LISTENERS
      value: http://0.0.0.0:8083
    - name: CONNECT_GROUP_ID
      value: ice-connect-cluster
    - name: CONNECT_CONFIG_STORAGE_TOPIC
      value: connect-configs
    - name: CONNECT_OFFSET_STORAGE_TOPIC
      value: connect-offsets
    - name: CONNECT_STATUS_STORAGE_TOPIC
      value: connect-statuses
    - name: CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR
      value: "1"
    - name: CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR
      value: "1"
    - name: CONNECT_STATUS_STORAGE_REPLICATION_FACTOR
      value: "1"
    - name: CONNECT_KEY_CONVERTER
      value: io.confluent.connect.avro.AvroConverter
    - name: CONNECT_VALUE_CONVERTER
      value: io.confluent.connect.avro.AvroConverter
    - name: CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL
      value: http://localhost:8081
    - name: CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL
      value: http://localhost:8081
    - name: CONNECT_INTERNAL_KEY_CONVERTER
      value: io.confluent.connect.avro.AvroConverter
    - name: CONNECT_INTERNAL_VALUE_CONVERTER
      value: io.confluent.connect.avro.AvroConverter
    - name: CONNECT_REST_ADVERTISED_HOST_NAME
      value: connect
    - name: CONNECT_PLUGIN_PATH
      value: /usr/share/confluent-hub-components
    - name: KAFKA_OPTS
      value: -Dlog4j.configuration=file:/etc/kafka/connect-log4j.properties 
  - image: docker.io/confluentinc/cp-schema-registry:7.0.1
    name: ice-schema-registry
    ports:
    - containerPort: 8081
      hostPort: 8081
    env:
    - name: SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS
      value: localhost:9092
    - name: SCHEMA_REGISTRY_LISTENERS
      value: http://0.0.0.0:8081
    - name: SCHEMA_REGISTRY_HOST_NAME
      value: schema-registry
    - name: KAFKA_OPTS
      value: -Dlog4j.configuration=file:/etc/schema-registry/log4j.properties
  - image: docker.io/confluentinc/cp-ksqldb-server:7.0.1
    name: ice-ksqldb-server
    ports:
    - containerPort: 8088
      hostPort: 8088
    env:
    - name: KSQL_BOOTSTRAP_SERVERS
      value: localhost:9092
    - name: KSQL_HOST_NAME
      value: ksqldb-server
    - name: KSQL_LISTENERS
      value: http://0.0.0.0:8088
    - name: KSQL_KSQL_SCHEMA_REGISTRY_URL
      value: http://localhost:8081
    - name: KSQL_KSQL_CONNECT_URL
      value: http://localhost:8083
    - name: KSQL_KSQL_LOGGING_PROCESSING_TOPIC_REPLICATION_FACTOR
      value: "1"
    - name: KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE
      value: "true"
    - name: KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE
      value: "true"
    - name: KAFKA_OPTS
      value: -Dlog4j.configuration=file:/etc/ksqldb-server/log4j.properties
  - image: docker.io/confluentinc/cp-enterprise-control-center:7.0.1
    name: ice-control-center
    ports:
    - containerPort: 9021
      hostPort: 9021
    env:
    - name: CONTROL_CENTER_BOOTSTRAP_SERVERS
      value: localhost:9092
    - name: CONTROL_CENTER_REPLICATION_FACTOR
      value: "1"
    - name: CONTROL_CENTER_INTERNAL_TOPICS_PARTITIONS
      value: "1"
    - name: CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_PARTITIONS
      value: "1"
    - name: CONFLUENT_METRICS_TOPIC_REPLICATION
      value: "1"
    - name: CONTROL_CENTER_CONNECT_CONNECT1_CLUSTER
      value: http://localhost:8083
    - name: CONTROL_CENTER_KSQL_KSQLDB1_URL
      value: http://localhost:8088
    - name: CONTROL_CENTER_SCHEMA_REGISTRY_URL
      value: http://localhost:8081
    - name: KAFKA_OPTS
      value: -Dlog4j.configuration=file:/etc/confluent-control-center/log4j.properties
  # added
  volumes:
  - name: zk-data
    hostPath:
      # When mapping a volume from the host, specify the full path
      path: /mnt/vol1/zk-data
      type: Directory
  - name: zk-txn-logs
    hostPath:
      path: /mnt/vol2/zk-txn-logs
      type: Directory
  - name: kafka-data
    hostPath:
      path: /mnt/vol3/kafka-data
      type: Directory
    Recreate the pod
$ sudo podman pod stop confluent
$ sudo podman pod rm confluent
$ sudo podman play kube ice-confluent-fix-add-pv.yaml
    • Confirm that data is being written to the mounted directories

With this, data is no longer lost when the pod is recreated.

$ ls -l
total 3104
drwxr-xr-x. 2 cloudadmin cloudadmin  4096 Mar 22 13:10 __consumer_offsets-0
drwxr-xr-x. 2 cloudadmin cloudadmin  4096 Mar 22 13:10 __consumer_offsets-1
drwxr-xr-x. 2 cloudadmin cloudadmin  4096 Mar 22 13:10 __consumer_offsets-10
:
-rw-r--r--. 1 cloudadmin cloudadmin     4 Mar 22 13:21 log-start-offset-checkpoint
-rw-r--r--. 1 cloudadmin cloudadmin    91 Mar 22 13:10 meta.properties
-rw-r--r--. 1 cloudadmin cloudadmin 48931 Mar 22 13:21 recovery-point-offset-checkpoint
-rw-r--r--. 1 cloudadmin cloudadmin 49084 Mar 22 13:22 replication-offset-checkpoint
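
As an end-to-end check of persistence, one can create a topic, recreate the pod, and confirm the topic survives; the topic name is illustrative, and the container name is the pod-prefixed one assigned by podman play kube (confirm with podman ps):

$ sudo podman exec confluent-ice-broker kafka-topics --bootstrap-server localhost:9092 --create --topic persistence-test
$ sudo podman pod stop confluent && sudo podman pod rm confluent
$ sudo podman play kube ice-confluent-fix-add-pv.yaml
$ sudo podman exec confluent-ice-broker kafka-topics --bootstrap-server localhost:9092 --list   # persistence-test should still be listed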