{"id":46741,"date":"2023-02-09T05:23:04","date_gmt":"2023-08-12T04:32:06","guid":{"rendered":"https:\/\/www.silicloud.com\/zh\/blog\/46741-2\/"},"modified":"2024-04-29T13:47:43","modified_gmt":"2024-04-29T05:47:43","slug":"46741-2","status":"publish","type":"post","link":"https:\/\/www.silicloud.com\/zh\/blog\/46741-2\/","title":{"rendered":""},"content":{"rendered":"<h1>Introduction<\/h1>\n<p>These are my notes on Confluent Platform, the commercial distribution of Apache Kafka. Since this is my first time working with it, I am keeping a log that also organizes what I found while researching.<br \/>\nAlthough it is a commercial product, there is also the Confluent Community License, under which some of the features can be used free of charge, so I am trying things out within that free scope.<\/p>\n<h2>Related Articles<\/h2>\n<p>Confluent Platform Notes &#8211; (1) Environment Setup<br \/>\nConfluent Platform Notes &#8211; (2) Simple Message Send\/Receive Test<br \/>\nConfluent Platform Notes &#8211; (3) Schema Registry Quick Test<\/p>\n<h1>References<\/h1>\n<p>Overview and Architecture of Apache Kafka<br \/>\nWhat Is Confluent Platform?<br \/>\nInstalling Confluent Platform (Community Edition) and Trying Kafka Connect<br \/>\nHow Apache Kafka&#8217;s Producer\/Broker\/Consumer Work, with a List of Settings<br \/>\nManual Install using ZIP and TAR Archives<\/p>\n<h1>About the Confluent Platform Community License<\/h1>\n<div><img decoding=\"async\" 
class=\"post-images\" title=\"\" src=\"https:\/\/cdn.silicloud.com\/blog-img\/blog\/img\/657d664937434c4406d09c45\/7-0.png\" alt=\"image.png\" \/><\/div>\n<p>The core components of Confluent Platform, Kafka and ZooKeeper, appear to be available under the APACHE 2.0 LICENSE, and Schema Registry and REST Proxy look usable within the scope of the COMMUNITY LICENSE.<br \/>\nTo use Control Center or the cluster-related features, an ENTERPRISE LICENSE (paid) appears to be required.<\/p>\n<p>References:<br \/>\nConfluent Platform Licenses<br \/>\nConfluent Community License Version 1.0<\/p>\n<h1>Environment<\/h1>\n<p>RHEL V8.2<br \/>\nConfluent Community V6.2.0<\/p>\n<p>I have RHEL V8.2 running as a guest OS on VirtualBox on Windows 10, so that is where I will install it.<br \/>\nKafka can be deployed as a cluster distributed across multiple nodes, but since the goal here is a simple test environment, I will configure just one Broker on a single node.<\/p>\n<p>The prerequisites are here:<br \/>\nConfluent System Requirements<\/p>\n<p>Java is already installed:<\/p>\n<pre class=\"post-pre\"><code>[root@test12 ~]# java -version\r\nopenjdk version \"1.8.0_242\"\r\nOpenJDK Runtime Environment (build 1.8.0_242-b08)\r\nOpenJDK 64-Bit Server VM (build 25.242-b08, mixed 
mode)\r\n<\/code><\/pre>\n<h1>Installing the Confluent Platform Community Components<\/h1>\n<p>There is an offline installation method, so I will follow these instructions:<br \/>\nManual Install using ZIP and TAR Archives<\/p>\n<p>Download the confluent-community-6.2.0.tar.gz archive (about 350 MB) linked from the guide above, copy it to the target machine, and extract it. Here I will extract it under \/opt\/.<\/p>\n<pre class=\"post-pre\"><code>[root@test12 \/Local_Inst_Image\/Confluent]# ls -la\r\n\u5408\u8a08 348552\r\ndrwxrwx---. 1 root vboxsf         0  8\u6708 13 18:00 .\r\ndrwxrwx---. 1 root vboxsf     16384  8\u6708 13 17:59 ..\r\n-rwxrwx---. 1 root vboxsf 356898902  8\u6708 13 18:00 confluent-community-6.2.0.tar.gz\r\n\r\n[root@test12 \/Local_Inst_Image\/Confluent]# tar xzf confluent-community-6.2.0.tar.gz -C \/opt\r\n<\/code><\/pre>\n<p>The files were extracted under \/opt\/confluent-6.2.0.<br \/>\nNext, change the owner and group:<\/p>\n<pre class=\"post-pre\"><code>[root@test12 \/opt]# chown -R root:root confluent-6.2.0\r\n\r\n[root@test12 \/opt]# ls -la confluent-6.2.0\/\r\n\u5408\u8a08 8\r\ndrwxr-xr-x. 7 root root   77  6\u6708  6 08:51 .\r\ndrwxr-xr-x. 7 root root   95  8\u6708 13 18:05 ..\r\n-rw-r--r--. 1 root root  871  6\u6708  6 08:51 README\r\ndrwxr-xr-x. 3 root root 4096  6\u6708  6 07:11 bin\r\ndrwxr-xr-x. 8 root root  116  6\u6708  6 07:11 etc\r\ndrwxr-xr-x. 3 root root   21  6\u6708  6 07:11 lib\r\ndrwxr-xr-x. 6 root root   71  6\u6708  6 07:11 share\r\ndrwxr-xr-x. 
2 root root  178  6\u6708  6 08:51 src\r\n<\/code><\/pre>\n<p>It seems that simply extracting the tar archive is all that is needed. Easy.<\/p>\n<h1>Managing the Confluent Components<\/h1>\n<p>Here I will work with three components: ZooKeeper, Kafka (Broker), and Schema Registry.<br \/>\nTo start with, I will bring all of them up in the default configuration.<\/p>\n<p>(A component called Control Center apparently provides a GUI management interface, which looks like it would make things easier to follow, but unfortunately it does not seem to be available under the Community License&#8230;)<\/p>\n<h2>ZooKeeper<\/h2>\n<h3>Configuration<\/h3>\n<p>Configure Confluent Platform &#8211; ZooKeeper<br \/>\nFirst, check the property file provided by default.<\/p>\n<pre class=\"post-pre\"><code><span class=\"c\"># Licensed to the Apache Software Foundation (ASF) under one or more\r\n# contributor license agreements.  See the NOTICE file distributed with\r\n# this work for additional information regarding copyright ownership.\r\n# The ASF licenses this file to You under the Apache License, Version 2.0\r\n# (the \"License\"); you may not use this file except in compliance with\r\n# the License.  
You may obtain a copy of the License at\r\n#\r\n#    http:\/\/www.apache.org\/licenses\/LICENSE-2.0\r\n#\r\n# Unless required by applicable law or agreed to in writing, software\r\n# distributed under the License is distributed on an \"AS IS\" BASIS,\r\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r\n# See the License for the specific language governing permissions and\r\n# limitations under the License.\r\n# the directory where the snapshot is stored.\r\n<\/span><span class=\"py\">dataDir<\/span><span class=\"p\">=<\/span><span class=\"s\">\/tmp\/zookeeper<\/span>\r\n<span class=\"c\"># the port at which the clients will connect\r\n<\/span><span class=\"py\">clientPort<\/span><span class=\"p\">=<\/span><span class=\"s\">2181<\/span>\r\n<span class=\"c\"># disable the per-ip limit on the number of connections since this is a non-production config\r\n<\/span><span class=\"py\">maxClientCnxns<\/span><span class=\"p\">=<\/span><span class=\"s\">0<\/span>\r\n<span class=\"c\"># Disable the adminserver by default to avoid port conflicts.\r\n# Set the port to something non-conflicting if choosing to enable this\r\n<\/span><span class=\"py\">admin.enableServer<\/span><span class=\"p\">=<\/span><span class=\"s\">false<\/span>\r\n<span class=\"c\"># admin.serverPort=8080\r\n<\/span><\/code><\/pre>\n<p>There is something that looks like a parameter reference around here&#8230;<br \/>\nRunning ZooKeeper in Production &#8211; Configuration Options<br \/>\nFor the complete list it told me to look elsewhere and sent me off to the Apache site&#8230;<br \/>\nApache &#8211; ZooKeeper Administrator&#8217;s Guide<br 
\/>\nadmin.enableServer, however, is nowhere to be found there. The link above seems to be the V3.4.10 documentation, so it is probably outdated.<br \/>\nThe documentation for a newer version does describe admin.enableServer:<br \/>\nApache &#8211; ZooKeeper Administrator&#8217;s Guide (3.5.9)<br \/>\n(Checking the ZooKeeper startup log later showed the version was in fact V3.5.9.)<\/p>\n<h3>Start\/Stop<\/h3>\n<p>Start Confluent Platform<\/p>\n<ul class=\"post-ul\">\n<li>Start command: bin\/zookeeper-server-start -daemon<\/li>\n<li>Stop command: bin\/zookeeper-server-stop<\/li>\n<li>Log: logs\/zookeeper.out<\/li>\n<li>Listen port: 2181<\/li>\n<\/ul>\n<p>Note: I could not find anything like a command reference in the manual&#8230; Looking inside the file above, it turned out to be a shell script, and judging from that, adding the -daemon option makes it start in the background. (See the list of shell scripts at the end.)<\/p>\n<p>Using the provided property file as-is, I will try starting it with the following command:<br 
\/>\n\/opt\/confluent-6.2.0\/bin\/zookeeper-server-start -daemon \/opt\/confluent-6.2.0\/etc\/kafka\/zookeeper.properties<\/p>\n<p>Messages are written to logs\/zookeeper.out.<br \/>\n(The same messages are also written to server.log, but server.log appears to receive the messages from Kafka, described later, as well.)<\/p>\n<details>Startup log: logs\/zookeeper.out<br \/>\n[2021-08-14 09:24:44,670] INFO Reading configuration from: \/opt\/confluent-6.2.0\/etc\/kafka\/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)<br \/>\n[2021-08-14 09:24:44,695] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)<br \/>\n[2021-08-14 09:24:44,695] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)<br \/>\n[2021-08-14 09:24:44,697] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)<br \/>\n[2021-08-14 09:24:44,697] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)<br \/>\n[2021-08-14 09:24:44,697] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)<br \/>\n[2021-08-14 09:24:44,697] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)<br \/>\n[2021-08-14 09:24:44,700] INFO Log4j 1.2 jmx support found and enabled. 
(org.apache.zookeeper.jmx.ManagedUtil)<br \/>\n[2021-08-14 09:24:44,714] INFO Reading configuration from: \/opt\/confluent-6.2.0\/etc\/kafka\/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)<br \/>\n[2021-08-14 09:24:44,714] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)<br \/>\n[2021-08-14 09:24:44,714] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)<br \/>\n[2021-08-14 09:24:44,715] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)<br \/>\n[2021-08-14 09:24:44,717] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)<br \/>\n[2021-08-14 09:24:44,734] INFO Server environment:zookeeper.version=3.5.9-83df9301aa5c2a5d284a9940177808c01bc35cef, built on 01\/06\/2021 20:03 GMT (org.apache.zookeeper.server.ZooKeeperServer)<br \/>\n[2021-08-14 09:24:44,734] INFO Server environment:host.name=localhost (org.apache.zookeeper.server.ZooKeeperServer)<br \/>\n[2021-08-14 09:24:44,734] INFO Server environment:java.version=1.8.0_242 (org.apache.zookeeper.server.ZooKeeperServer)<br \/>\n[2021-08-14 09:24:44,734] INFO Server environment:java.vendor=Oracle Corporation (org.apache.zookeeper.server.ZooKeeperServer)<br \/>\n[2021-08-14 09:24:44,734] INFO Server environment:java.home=\/usr\/lib\/jvm\/java-1.8.0-openjdk-1.8.0.242.b08-4.el8.x86_64\/jre (org.apache.zookeeper.server.ZooKeeperServer)<br \/>\n[2021-08-14 09:24:44,735] INFO Server 
environment:java.class.path=\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/netty-transport-4.1.62.Final.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/netty-buffer-4.1.62.Final.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/connect-mirror-6.2.0-ccs.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/kafka-streams-scala_2.13-6.2.0-ccs.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jetty-util-ajax-9.4.40.v20210413.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jakarta.inject-2.6.1.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/audience-annotations-0.5.0.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/scala-java8-compat_2.13-0.9.1.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/kafka-metadata-6.2.0-ccs.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jakarta.activation-api-1.2.1.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/slf4j-api-1.7.30.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/scala-logging_2.13-3.9.2.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/zookeeper-jute-3.5.9.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/netty-codec-4.1.62.Final.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/connect-json-6.2.0-ccs.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jetty-security-9.4.40.v20210413.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/scala-reflect-2.13.5.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/scala-library-2.13.5.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jackson-jaxrs-base-2.10.5.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/kafka_2.13-6.2.0-ccs-sources.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/netty-handler-4.1.62.Final.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jakarta.ws.rs-api-2.1.6.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/zstd-jni-1.4.9-1.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jetty-util-9.4.40.v2
0210413.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jetty-server-9.4.40.v20210413.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/zookeeper-3.5.9.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/javassist-3.27.0-GA.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/kafka-shell-6.2.0-ccs.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jetty-client-9.4.40.v20210413.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/commons-cli-1.4.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jackson-core-2.10.5.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jetty-servlet-9.4.40.v20210413.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/connect-runtime-6.2.0-ccs.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/scala-collection-compat_2.13-2.3.0.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jackson-jaxrs-json-provider-2.10.5.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/kafka-streams-6.2.0-ccs.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jakarta.validation-api-2.0.2.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jackson-module-scala_2.13-2.10.5.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jakarta.annotation-api-1.3.5.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jersey-server-2.34.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/hk2-utils-2.6.1.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/lz4-java-1.7.1.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/maven-artifact-3.8.1.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/connect-transforms-6.2.0-ccs.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/metrics-core-2.2.0.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jackson-annotations-2.10.5.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/kafka_2.13-6.2.0-ccs-test-sources.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/connect-mirror-client-6.2.0-ccs.jar:\/opt\/confl
uent-6.2.0\/bin\/..\/share\/java\/kafka\/jakarta.xml.bind-api-2.3.2.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/reflections-0.9.12.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/argparse4j-0.7.0.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/netty-transport-native-epoll-4.1.62.Final.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/activation-1.1.1.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/kafka-raft-6.2.0-ccs.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/kafka-tools-6.2.0-ccs.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jetty-http-9.4.40.v20210413.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/kafka_2.13-6.2.0-ccs-javadoc.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/connect-api-6.2.0-ccs.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/connect-file-6.2.0-ccs.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jersey-container-servlet-2.34.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jersey-container-servlet-core-2.34.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/netty-resolver-4.1.62.Final.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jaxb-api-2.3.0.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/kafka-streams-test-utils-6.2.0-ccs.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/kafka-log4j-appender-6.2.0-ccs.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jersey-hk2-2.34.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/javax.servlet-api-3.1.0.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jackson-datatype-jdk8-2.10.5.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/slf4j-log4j12-1.7.30.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/commons-lang3-3.8.1.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/plexus-utils-3.2.1.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/javax.ws.rs-api-2.1.1.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\
/jersey-common-2.34.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/kafka_2.13-6.2.0-ccs.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jackson-module-paranamer-2.10.5.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/netty-common-4.1.62.Final.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/netty-transport-native-unix-common-4.1.62.Final.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/kafka_2.13-6.2.0-ccs-test.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/kafka-streams-examples-6.2.0-ccs.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jetty-io-9.4.40.v20210413.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/osgi-resource-locator-1.0.3.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/rocksdbjni-5.18.4.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/aopalliance-repackaged-2.6.1.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jackson-dataformat-csv-2.10.5.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/snappy-java-1.1.8.1.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/kafka.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jackson-databind-2.10.5.1.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jetty-continuation-9.4.40.v20210413.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/hk2-locator-2.6.1.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/confluent-log4j-1.2.17-cp2.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/hk2-api-2.6.1.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jopt-simple-5.0.4.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/paranamer-2.8.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jackson-module-jaxb-annotations-2.10.5.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/kafka-clients-6.2.0-ccs.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jetty-servlets-9.4.40.v20210413.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/connect-basic-auth-ex
tension-6.2.0-ccs.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jline-3.12.1.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jersey-client-2.34.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/confluent-telemetry\/* (org.apache.zookeeper.server.ZooKeeperServer)<br \/>\n[2021-08-14 09:24:44,737] INFO Server environment:java.library.path=\/opt\/ibm\/cics\/lib:\/opt\/ibm\/cicssm\/lib:\/usr\/java\/packages\/lib\/amd64:\/usr\/lib64:\/lib64:\/lib:\/usr\/lib (org.apache.zookeeper.server.ZooKeeperServer)<br \/>\n[2021-08-14 09:24:44,737] INFO Server environment:java.io.tmpdir=\/tmp (org.apache.zookeeper.server.ZooKeeperServer)<br \/>\n[2021-08-14 09:24:44,737] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer)<br \/>\n[2021-08-14 09:24:44,737] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)<br \/>\n[2021-08-14 09:24:44,738] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)<br \/>\n[2021-08-14 09:24:44,738] INFO Server environment:os.version=4.18.0-193.el8.x86_64 (org.apache.zookeeper.server.ZooKeeperServer)<br \/>\n[2021-08-14 09:24:44,738] INFO Server environment:user.name=root (org.apache.zookeeper.server.ZooKeeperServer)<br \/>\n[2021-08-14 09:24:44,738] INFO Server environment:user.home=\/root (org.apache.zookeeper.server.ZooKeeperServer)<br \/>\n[2021-08-14 09:24:44,738] INFO Server environment:user.dir=\/opt\/confluent-6.2.0\/bin (org.apache.zookeeper.server.ZooKeeperServer)<br \/>\n[2021-08-14 09:24:44,738] INFO Server environment:os.memory.free=497MB (org.apache.zookeeper.server.ZooKeeperServer)<br \/>\n[2021-08-14 09:24:44,738] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer)<br \/>\n[2021-08-14 09:24:44,738] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer)<br \/>\n[2021-08-14 09:24:44,740] INFO minSessionTimeout set to 6000 
(org.apache.zookeeper.server.ZooKeeperServer)<br \/>\n[2021-08-14 09:24:44,740] INFO maxSessionTimeout set to 60000 (org.apache.zookeeper.server.ZooKeeperServer)<br \/>\n[2021-08-14 09:24:44,740] INFO Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir \/tmp\/zookeeper\/version-2 snapdir \/tmp\/zookeeper\/version-2 (org.apache.zookeeper.server.ZooKeeperServer)<br \/>\n[2021-08-14 09:24:44,749] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)<br \/>\n[2021-08-14 09:24:44,753] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 1 selector thread(s), 8 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)<br \/>\n[2021-08-14 09:24:44,757] INFO binding to port 0.0.0.0\/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)<br \/>\n[2021-08-14 09:24:44,800] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)<br \/>\n[2021-08-14 09:24:44,803] INFO Reading snapshot \/tmp\/zookeeper\/version-2\/snapshot.0 (org.apache.zookeeper.server.persistence.FileSnap)<br \/>\n[2021-08-14 09:24:44,810] INFO Snapshotting: 0x0 to \/tmp\/zookeeper\/version-2\/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)<br \/>\n[2021-08-14 09:24:44,822] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)<br \/>\n[2021-08-14 09:24:44,828] INFO Using checkIntervalMs=60000 maxPerMinute=10000 (org.apache.zookeeper.server.ContainerManager)<\/details>\n<p>It looks like it started. Port 2181 is now in the listening state.<br 
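\/><\/p>\n<p>As an extra sanity check (my own sketch, not a command from the Confluent or ZooKeeper docs), the client port can be probed directly from bash using its built-in \/dev\/tcp redirection. It assumes the default localhost:2181 from zookeeper.properties above:<\/p>\n<pre class=\"post-pre\"><code># Probe the ZooKeeper client port (assumes the default localhost:2181).\r\n# bash resolves \/dev\/tcp\/HOST\/PORT internally; no extra tools required.\r\nif (echo > \/dev\/tcp\/localhost\/2181) 2>\/dev\/null; then\r\n  echo \"zookeeper: port 2181 reachable\"\r\nelse\r\n  echo \"zookeeper: port 2181 not reachable\"\r\nfi\r\n<\/code><\/pre>\n<p>(ZooKeeper 3.5 apparently restricts the classic four-letter-word status commands via 4lw.commands.whitelist, so a plain port probe like this is the least intrusive check.)<\/p>\n<p><br 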
\/>\nLooking at the startup log, the following message appeared, so the ZooKeeper version seems to be 3.5.9.<\/p>\n<pre class=\"post-pre\"><code>Server environment:zookeeper.version=3.5.9-83df9301aa5c2a5d284a9940177808c01bc35cef, built on 01\/06\/2021 20:03 GMT (org.apache.zookeeper.server.ZooKeeperServer)\r\n<\/code><\/pre>\n<h2>Kafka(Broker)<\/h2>\n<h3>Configuration<\/h3>\n<p>Configure Confluent Platform &#8211; Kafka<\/p>\n<p>The property file shipped with the product is shown below.<\/p>\n<details>\/opt\/confluent-6.2.0\/etc\/kafka\/server.properties<br \/>\n# Licensed to the Apache Software Foundation (ASF) under one or more<br \/>\n# contributor license agreements. See the NOTICE file distributed with<br \/>\n# this work for additional information regarding copyright ownership.<br \/>\n# The ASF licenses this file to You under the Apache License, Version 2.0<br \/>\n# (the &#8220;License&#8221;); you may not use this file except in compliance with<br \/>\n# the License. You may obtain a copy of the License at<br \/>\n#<br \/>\n# http:\/\/www.apache.org\/licenses\/LICENSE-2.0<br \/>\n#<br \/>\n# Unless required by applicable law or agreed to in writing, software<br \/>\n# distributed under the License is distributed on an &#8220;AS IS&#8221; BASIS,<br \/>\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.<br \/>\n# See the License for the specific language governing permissions and<br \/>\n# limitations under the License.<\/p>\n<p># see kafka.server.KafkaConfig for additional details and defaults<\/p>\n<p>############################# Server Basics #############################<\/p>\n<p># The id of the broker. 
This must be set to a unique integer for each broker.<br \/>\nbroker.id=0<\/p>\n<p>############################# Socket Server Settings #############################<\/p>\n<p># The address the socket server listens on. It will get the value returned from<br \/>\n# java.net.InetAddress.getCanonicalHostName() if not configured.<br \/>\n# FORMAT:<br \/>\n# listeners = listener_name:\/\/host_name:port<br \/>\n# EXAMPLE:<br \/>\n# listeners = PLAINTEXT:\/\/your.host.name:9092<br \/>\n#listeners=PLAINTEXT:\/\/:9092<\/p>\n<p># Hostname and port the broker will advertise to producers and consumers. If not set,<br \/>\n# it uses the value for &#8220;listeners&#8221; if configured. Otherwise, it will use the value<br \/>\n# returned from java.net.InetAddress.getCanonicalHostName().<br \/>\n#advertised.listeners=PLAINTEXT:\/\/your.host.name:9092<\/p>\n<p># Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details<br \/>\n#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL<\/p>\n<p># The number of threads that the server uses for receiving requests from the network and sending responses to the network<br \/>\nnum.network.threads=3<\/p>\n<p># The number of threads that the server uses for processing requests, which may include disk I\/O<br \/>\nnum.io.threads=8<\/p>\n<p># The send buffer (SO_SNDBUF) used by the socket server<br \/>\nsocket.send.buffer.bytes=102400<\/p>\n<p># The receive buffer (SO_RCVBUF) used by the socket server<br \/>\nsocket.receive.buffer.bytes=102400<\/p>\n<p># The maximum size of a request that the socket server will accept (protection against OOM)<br \/>\nsocket.request.max.bytes=104857600<\/p>\n<p>############################# Log Basics #############################<\/p>\n<p># A comma separated list of directories under which to store log files<br \/>\nlog.dirs=\/tmp\/kafka-logs<\/p>\n<p># The default number of log partitions 
per topic. More partitions allow greater<br \/>\n# parallelism for consumption, but this will also result in more files across<br \/>\n# the brokers.<br \/>\nnum.partitions=1<\/p>\n<p># The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.<br \/>\n# This value is recommended to be increased for installations with data dirs located in RAID array.<br \/>\nnum.recovery.threads.per.data.dir=1<\/p>\n<p>############################# Internal Topic Settings #############################<br \/>\n# The replication factor for the group metadata internal topics &#8220;__consumer_offsets&#8221; and &#8220;__transaction_state&#8221;<br \/>\n# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.<br \/>\noffsets.topic.replication.factor=1<br \/>\ntransaction.state.log.replication.factor=1<br \/>\ntransaction.state.log.min.isr=1<\/p>\n<p>############################# Log Flush Policy #############################<\/p>\n<p># Messages are immediately written to the filesystem but by default we only fsync() to sync<br \/>\n# the OS cache lazily. The following configurations control the flush of data to disk.<br \/>\n# There are a few important trade-offs here:<br \/>\n# 1. Durability: Unflushed data may be lost if you are not using replication.<br \/>\n# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.<br \/>\n# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.<br \/>\n# The settings below allow one to configure the flush policy to flush data after a period of time or<br \/>\n# every N messages (or both). 
This can be done globally and overridden on a per-topic basis.<\/p>\n<p># The number of messages to accept before forcing a flush of data to disk<br \/>\n#log.flush.interval.messages=10000<\/p>\n<p># The maximum amount of time a message can sit in a log before we force a flush<br \/>\n#log.flush.interval.ms=1000<\/p>\n<p>############################# Log Retention Policy #############################<\/p>\n<p># The following configurations control the disposal of log segments. The policy can<br \/>\n# be set to delete segments after a period of time, or after a given size has accumulated.<br \/>\n# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens<br \/>\n# from the end of the log.<\/p>\n<p># The minimum age of a log file to be eligible for deletion due to age<br \/>\nlog.retention.hours=168<\/p>\n<p># A size-based retention policy for logs. Segments are pruned from the log unless the remaining<br \/>\n# segments drop below log.retention.bytes. Functions independently of log.retention.hours.<br \/>\n#log.retention.bytes=1073741824<\/p>\n<p># The maximum size of a log segment file. When this size is reached a new log segment will be created.<br \/>\nlog.segment.bytes=1073741824<\/p>\n<p># The interval at which log segments are checked to see if they can be deleted according<br \/>\n# to the retention policies<br \/>\nlog.retention.check.interval.ms=300000<\/p>\n<p>############################# Zookeeper #############################<\/p>\n<p># Zookeeper connection string (see zookeeper docs for details).<br \/>\n# This is a comma separated host:port pairs, each corresponding to a zk<br \/>\n# server. e.g. 
&#8220;127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002&#8221;.<br \/>\n# You can also append an optional chroot string to the urls to specify the<br \/>\n# root directory for all kafka znodes.<br \/>\nzookeeper.connect=localhost:2181<\/p>\n<p># Timeout in ms for connecting to zookeeper<br \/>\nzookeeper.connection.timeout.ms=18000<\/p>\n<p>##################### Confluent Metrics Reporter #######################<br \/>\n# Confluent Control Center and Confluent Auto Data Balancer integration<br \/>\n#<br \/>\n# Uncomment the following lines to publish monitoring data for<br \/>\n# Confluent Control Center and Confluent Auto Data Balancer<br \/>\n# If you are using a dedicated metrics cluster, also adjust the settings<br \/>\n# to point to your metrics kafka cluster.<br \/>\n#metric.reporters=io.confluent.metrics.reporter.ConfluentMetricsReporter<br \/>\n#confluent.metrics.reporter.bootstrap.servers=localhost:9092<br \/>\n#<br \/>\n# Uncomment the following line if the metrics cluster has a single broker<br \/>\n#confluent.metrics.reporter.topic.replicas=1<\/p>\n<p>############################# Group Coordinator Settings #############################<\/p>\n<p># The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.<br \/>\n# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.<br \/>\n# The default value for this is 3 seconds.<br \/>\n# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.<br \/>\n# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.<br 
\/>\ngroup.initial.rebalance.delay.ms=0<\/p>\n<\/details>\n<p>Since the file is long and mostly comments, extracting just the active parameters gives the following.<\/p>\n<pre class="post-pre"><code>[root@test12 \/opt\/confluent-6.2.0\/etc\/kafka]# cat server.properties  | grep -v -e \"^#\" | grep -v -e \"^\\s*$\"\r\nbroker.id=0\r\nnum.network.threads=3\r\nnum.io.threads=8\r\nsocket.send.buffer.bytes=102400\r\nsocket.receive.buffer.bytes=102400\r\nsocket.request.max.bytes=104857600\r\nlog.dirs=\/tmp\/kafka-logs\r\nnum.partitions=1\r\nnum.recovery.threads.per.data.dir=1\r\noffsets.topic.replication.factor=1\r\ntransaction.state.log.replication.factor=1\r\ntransaction.state.log.min.isr=1\r\nlog.retention.hours=168\r\nlog.segment.bytes=1073741824\r\nlog.retention.check.interval.ms=300000\r\nzookeeper.connect=localhost:2181\r\nzookeeper.connection.timeout.ms=18000\r\ngroup.initial.rebalance.delay.ms=0\r\n<\/code><\/pre>\n<p>The parameter reference is available here:<br \/>\nKafka Broker 
Configurations<\/p>\n<p>\u203b No listener port settings are configured here, but looking at the reference, it seems that if listeners is not specified, the port parameter is used instead, and its default is 9092.<\/p>\n<p>\u203b The data held in a topic is physically stored as files on the OS. The output location is specified by the log.dirs parameter, which as shown above defaults to \/tmp\/kafka-logs. Since files under the \/tmp\/ directory are usually subject to automatic cleanup, to keep the data from being deleted automatically you need to either change this parameter to point to a different output directory or change the \/tmp cleanup settings.<\/p>\n<p>References:<br \/>\nStack Overflow &#8211; Which directory does apache kafka store the data in broker nodes<br \/>\nKafka Broker Configurations &#8211; log.dirs<br \/>\n[CentOS] Why files under \/tmp disappear<\/p>\n<h3>Start\/Stop<\/h3>\n<p>Start Confluent Platform<\/p>\n<ul class="post-ul">\n<li>Start command: bin\/kafka-server-start -daemon<\/li>\n<li>Stop command: bin\/kafka-server-stop<\/li>\n<li>Log: logs\/kafkaServer.out<\/li>\n<li>Listen port: 9092<\/li>\n<\/ul>\n<p>Start the broker with the following command:<br \/>\n\/opt\/confluent-6.2.0\/bin\/kafka-server-start -daemon \/opt\/confluent-6.2.0\/etc\/kafka\/server.properties<\/p>\n<details>Startup log: logs\/kafkaServer.out<br \/>\n[2021-08-14 11:17:16,090] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)<br \/>\n[2021-08-14 11:17:16,584] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)<br \/>\n[2021-08-14 11:17:16,712] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)<br \/>\n[2021-08-14 11:17:16,716] INFO starting (kafka.server.KafkaServer)<br \/>\n[2021-08-14 11:17:16,717] INFO Connecting to zookeeper on localhost:2181 (kafka.server.KafkaServer)<br \/>\n[2021-08-14 11:17:16,747] INFO [ZooKeeperClient Kafka server] Initializing a new session to localhost:2181. 
(kafka.zookeeper.ZooKeeperClient)<br \/>\n[2021-08-14 11:17:16,752] INFO Client environment:zookeeper.version=3.5.9-83df9301aa5c2a5d284a9940177808c01bc35cef, built on 01\/06\/2021 20:03 GMT (org.apache.zookeeper.ZooKeeper)<br \/>\n[2021-08-14 11:17:16,752] INFO Client environment:host.name=localhost (org.apache.zookeeper.ZooKeeper)<br \/>\n[2021-08-14 11:17:16,752] INFO Client environment:java.version=1.8.0_242 (org.apache.zookeeper.ZooKeeper)<br \/>\n[2021-08-14 11:17:16,752] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)<br \/>\n[2021-08-14 11:17:16,752] INFO Client environment:java.home=\/usr\/lib\/jvm\/java-1.8.0-openjdk-1.8.0.242.b08-4.el8.x86_64\/jre (org.apache.zookeeper.ZooKeeper)<br \/>\n[2021-08-14 11:17:16,761] INFO Client environment:java.class.path=\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/netty-transport-4.1.62.Final.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/netty-buffer-4.1.62.Final.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/connect-mirror-6.2.0-ccs.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/kafka-streams-scala_2.13-6.2.0-ccs.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jetty-util-ajax-9.4.40.v20210413.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jakarta.inject-2.6.1.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/audience-annotations-0.5.0.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/scala-java8-compat_2.13-0.9.1.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/kafka-metadata-6.2.0-ccs.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jakarta.activation-api-1.2.1.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/slf4j-api-1.7.30.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/scala-logging_2.13-3.9.2.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/zookeeper-jute-3.5.9.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/netty-codec-4.1.62.Final.jar:\/opt\/confluent-6.2.
0\/bin\/..\/share\/java\/kafka\/connect-json-6.2.0-ccs.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jetty-security-9.4.40.v20210413.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/scala-reflect-2.13.5.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/scala-library-2.13.5.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jackson-jaxrs-base-2.10.5.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/kafka_2.13-6.2.0-ccs-sources.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/netty-handler-4.1.62.Final.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jakarta.ws.rs-api-2.1.6.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/zstd-jni-1.4.9-1.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jetty-util-9.4.40.v20210413.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jetty-server-9.4.40.v20210413.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/zookeeper-3.5.9.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/javassist-3.27.0-GA.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/kafka-shell-6.2.0-ccs.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jetty-client-9.4.40.v20210413.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/commons-cli-1.4.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jackson-core-2.10.5.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jetty-servlet-9.4.40.v20210413.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/connect-runtime-6.2.0-ccs.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/scala-collection-compat_2.13-2.3.0.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jackson-jaxrs-json-provider-2.10.5.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/kafka-streams-6.2.0-ccs.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jakarta.validation-api-2.0.2.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jackson-module-scala_2.13-2.10.5.jar:\/opt\/confluent-6.2.0\/bin\/..\/share
\/java\/kafka\/jakarta.annotation-api-1.3.5.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jersey-server-2.34.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/hk2-utils-2.6.1.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/lz4-java-1.7.1.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/maven-artifact-3.8.1.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/connect-transforms-6.2.0-ccs.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/metrics-core-2.2.0.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jackson-annotations-2.10.5.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/kafka_2.13-6.2.0-ccs-test-sources.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/connect-mirror-client-6.2.0-ccs.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jakarta.xml.bind-api-2.3.2.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/reflections-0.9.12.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/argparse4j-0.7.0.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/netty-transport-native-epoll-4.1.62.Final.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/activation-1.1.1.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/kafka-raft-6.2.0-ccs.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/kafka-tools-6.2.0-ccs.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jetty-http-9.4.40.v20210413.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/kafka_2.13-6.2.0-ccs-javadoc.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/connect-api-6.2.0-ccs.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/connect-file-6.2.0-ccs.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jersey-container-servlet-2.34.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jersey-container-servlet-core-2.34.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/netty-resolver-4.1.62.Final.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jaxb-api-2.3.0.j
ar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/kafka-streams-test-utils-6.2.0-ccs.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/kafka-log4j-appender-6.2.0-ccs.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jersey-hk2-2.34.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/javax.servlet-api-3.1.0.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jackson-datatype-jdk8-2.10.5.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/slf4j-log4j12-1.7.30.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/commons-lang3-3.8.1.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/plexus-utils-3.2.1.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/javax.ws.rs-api-2.1.1.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jersey-common-2.34.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/kafka_2.13-6.2.0-ccs.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jackson-module-paranamer-2.10.5.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/netty-common-4.1.62.Final.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/netty-transport-native-unix-common-4.1.62.Final.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/kafka_2.13-6.2.0-ccs-test.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/kafka-streams-examples-6.2.0-ccs.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jetty-io-9.4.40.v20210413.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/osgi-resource-locator-1.0.3.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/rocksdbjni-5.18.4.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/aopalliance-repackaged-2.6.1.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jackson-dataformat-csv-2.10.5.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/snappy-java-1.1.8.1.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/kafka.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jackson-databind-2.10.5.1.jar:\/opt\/confluent-6.2.0\/
bin\/..\/share\/java\/kafka\/jetty-continuation-9.4.40.v20210413.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/hk2-locator-2.6.1.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/confluent-log4j-1.2.17-cp2.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/hk2-api-2.6.1.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jopt-simple-5.0.4.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/paranamer-2.8.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jackson-module-jaxb-annotations-2.10.5.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/kafka-clients-6.2.0-ccs.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jetty-servlets-9.4.40.v20210413.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/connect-basic-auth-extension-6.2.0-ccs.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jline-3.12.1.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/kafka\/jersey-client-2.34.jar:\/opt\/confluent-6.2.0\/bin\/..\/share\/java\/confluent-telemetry\/* (org.apache.zookeeper.ZooKeeper)<br \/>\n[2021-08-14 11:17:16,763] INFO Client environment:java.library.path=\/opt\/ibm\/cics\/lib:\/opt\/ibm\/cicssm\/lib:\/usr\/java\/packages\/lib\/amd64:\/usr\/lib64:\/lib64:\/lib:\/usr\/lib (org.apache.zookeeper.ZooKeeper)<br \/>\n[2021-08-14 11:17:16,763] INFO Client environment:java.io.tmpdir=\/tmp (org.apache.zookeeper.ZooKeeper)<br \/>\n[2021-08-14 11:17:16,763] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)<br \/>\n[2021-08-14 11:17:16,763] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)<br \/>\n[2021-08-14 11:17:16,763] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)<br \/>\n[2021-08-14 11:17:16,763] INFO Client environment:os.version=4.18.0-193.el8.x86_64 (org.apache.zookeeper.ZooKeeper)<br \/>\n[2021-08-14 11:17:16,764] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)<br \/>\n[2021-08-14 11:17:16,764] INFO Client 
environment:user.home=\/root (org.apache.zookeeper.ZooKeeper)<br \/>\n[2021-08-14 11:17:16,764] INFO Client environment:user.dir=\/opt\/confluent-6.2.0\/bin (org.apache.zookeeper.ZooKeeper)<br \/>\n[2021-08-14 11:17:16,764] INFO Client environment:os.memory.free=980MB (org.apache.zookeeper.ZooKeeper)<br \/>\n[2021-08-14 11:17:16,764] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)<br \/>\n[2021-08-14 11:17:16,764] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)<br \/>\n[2021-08-14 11:17:16,766] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@5b8dfcc1 (org.apache.zookeeper.ZooKeeper)<br \/>\n[2021-08-14 11:17:16,770] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)<br \/>\n[2021-08-14 11:17:16,785] INFO zookeeper.request.timeout value is 0. feature enabled= (org.apache.zookeeper.ClientCnxn)<br \/>\n[2021-08-14 11:17:16,796] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)<br \/>\n[2021-08-14 11:17:16,798] INFO Opening socket connection to server localhost\/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)<br \/>\n[2021-08-14 11:17:16,802] INFO Socket connection established, initiating session, client: \/0:0:0:0:0:0:0:1:37008, server: localhost\/0:0:0:0:0:0:0:1:2181 (org.apache.zookeeper.ClientCnxn)<br \/>\n[2021-08-14 11:17:16,831] INFO Session establishment complete on server localhost\/0:0:0:0:0:0:0:1:2181, sessionid = 0x1000070216a0000, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)<br \/>\n[2021-08-14 11:17:16,834] INFO [ZooKeeperClient Kafka server] Connected. 
(kafka.zookeeper.ZooKeeperClient)<br \/>\n[2021-08-14 11:17:17,053] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)<br \/>\n[2021-08-14 11:17:17,065] INFO Feature ZK node at path: \/feature does not exist (kafka.server.FinalizedFeatureChangeListener)<br \/>\n[2021-08-14 11:17:17,065] INFO Cleared cache (kafka.server.FinalizedFeatureCache)<br \/>\n[2021-08-14 11:17:17,311] INFO Cluster ID = Mp9ekWqlR2-jIyzo3tDKMg (kafka.server.KafkaServer)<br \/>\n[2021-08-14 11:17:17,314] WARN No meta.properties file under dir \/tmp\/kafka-logs\/meta.properties (kafka.server.BrokerMetadataCheckpoint)<br \/>\n[2021-08-14 11:17:17,381] INFO KafkaConfig values:<br \/>\nadvertised.host.name = null<br \/>\nadvertised.listeners = null<br \/>\nadvertised.port = null<br \/>\nalter.config.policy.class.name = null<br \/>\nalter.log.dirs.replication.quota.window.num = 11<br \/>\nalter.log.dirs.replication.quota.window.size.seconds = 1<br \/>\nauthorizer.class.name =<br \/>\nauto.create.topics.enable = true<br \/>\nauto.leader.rebalance.enable = true<br \/>\nbackground.threads = 10<br \/>\nbroker.heartbeat.interval.ms = 2000<br \/>\nbroker.id = 0<br \/>\nbroker.id.generation.enable = true<br \/>\nbroker.rack = null<br \/>\nbroker.session.timeout.ms = 9000<br \/>\nclient.quota.callback.class = null<br \/>\ncompression.type = producer<br \/>\nconnection.failed.authentication.delay.ms = 100<br \/>\nconnections.max.idle.ms = 600000<br \/>\nconnections.max.reauth.ms = 0<br \/>\ncontrol.plane.listener.name = null<br \/>\ncontrolled.shutdown.enable = true<br \/>\ncontrolled.shutdown.max.retries = 3<br \/>\ncontrolled.shutdown.retry.backoff.ms = 5000<br \/>\ncontroller.listener.names = null<br \/>\ncontroller.quorum.append.linger.ms = 25<br \/>\ncontroller.quorum.election.backoff.max.ms = 1000<br \/>\ncontroller.quorum.election.timeout.ms = 1000<br \/>\ncontroller.quorum.fetch.timeout.ms = 2000<br 
\/>\ncontroller.quorum.request.timeout.ms = 2000<br \/>\ncontroller.quorum.retry.backoff.ms = 20<br \/>\ncontroller.quorum.voters = []<br \/>\ncontroller.quota.window.num = 11<br \/>\ncontroller.quota.window.size.seconds = 1<br \/>\ncontroller.socket.timeout.ms = 30000<br \/>\ncreate.topic.policy.class.name = null<br \/>\ndefault.replication.factor = 1<br \/>\ndelegation.token.expiry.check.interval.ms = 3600000<br \/>\ndelegation.token.expiry.time.ms = 86400000<br \/>\ndelegation.token.master.key = null<br \/>\ndelegation.token.max.lifetime.ms = 604800000<br \/>\ndelegation.token.secret.key = null<br \/>\ndelete.records.purgatory.purge.interval.requests = 1<br \/>\ndelete.topic.enable = true<br \/>\nfetch.max.bytes = 57671680<br \/>\nfetch.purgatory.purge.interval.requests = 1000<br \/>\ngroup.initial.rebalance.delay.ms = 0<br \/>\ngroup.max.session.timeout.ms = 1800000<br \/>\ngroup.max.size = 2147483647<br \/>\ngroup.min.session.timeout.ms = 6000<br \/>\nhost.name =<br \/>\ninitial.broker.registration.timeout.ms = 60000<br \/>\ninter.broker.listener.name = null<br \/>\ninter.broker.protocol.version = 2.8-IV1<br \/>\nkafka.metrics.polling.interval.secs = 10<br \/>\nkafka.metrics.reporters = []<br \/>\nleader.imbalance.check.interval.seconds = 300<br \/>\nleader.imbalance.per.broker.percentage = 10<br \/>\nlistener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL<br \/>\nlisteners = null<br \/>\nlog.cleaner.backoff.ms = 15000<br \/>\nlog.cleaner.dedupe.buffer.size = 134217728<br \/>\nlog.cleaner.delete.retention.ms = 86400000<br \/>\nlog.cleaner.enable = true<br \/>\nlog.cleaner.io.buffer.load.factor = 0.9<br \/>\nlog.cleaner.io.buffer.size = 524288<br \/>\nlog.cleaner.io.max.bytes.per.second = 1.7976931348623157E308<br \/>\nlog.cleaner.max.compaction.lag.ms = 9223372036854775807<br \/>\nlog.cleaner.min.cleanable.ratio = 0.5<br \/>\nlog.cleaner.min.compaction.lag.ms = 0<br \/>\nlog.cleaner.threads = 1<br 
\/>\nlog.cleanup.policy = [delete]<br \/>\nlog.dir = \/tmp\/kafka-logs<br \/>\nlog.dirs = \/tmp\/kafka-logs<br \/>\nlog.flush.interval.messages = 9223372036854775807<br \/>\nlog.flush.interval.ms = null<br \/>\nlog.flush.offset.checkpoint.interval.ms = 60000<br \/>\nlog.flush.scheduler.interval.ms = 9223372036854775807<br \/>\nlog.flush.start.offset.checkpoint.interval.ms = 60000<br \/>\nlog.index.interval.bytes = 4096<br \/>\nlog.index.size.max.bytes = 10485760<br \/>\nlog.message.downconversion.enable = true<br \/>\nlog.message.format.version = 2.8-IV1<br \/>\nlog.message.timestamp.difference.max.ms = 9223372036854775807<br \/>\nlog.message.timestamp.type = CreateTime<br \/>\nlog.preallocate = false<br \/>\nlog.retention.bytes = -1<br \/>\nlog.retention.check.interval.ms = 300000<br \/>\nlog.retention.hours = 168<br \/>\nlog.retention.minutes = null<br \/>\nlog.retention.ms = null<br \/>\nlog.roll.hours = 168<br \/>\nlog.roll.jitter.hours = 0<br \/>\nlog.roll.jitter.ms = null<br \/>\nlog.roll.ms = null<br \/>\nlog.segment.bytes = 1073741824<br \/>\nlog.segment.delete.delay.ms = 60000<br \/>\nmax.connection.creation.rate = 2147483647<br \/>\nmax.connections = 2147483647<br \/>\nmax.connections.per.ip = 2147483647<br \/>\nmax.connections.per.ip.overrides =<br \/>\nmax.incremental.fetch.session.cache.slots = 1000<br \/>\nmessage.max.bytes = 1048588<br \/>\nmetadata.log.dir = null<br \/>\nmetric.reporters = []<br \/>\nmetrics.num.samples = 2<br \/>\nmetrics.recording.level = INFO<br \/>\nmetrics.sample.window.ms = 30000<br \/>\nmin.insync.replicas = 1<br \/>\nnode.id = -1<br \/>\nnum.io.threads = 8<br \/>\nnum.network.threads = 3<br \/>\nnum.partitions = 1<br \/>\nnum.recovery.threads.per.data.dir = 1<br \/>\nnum.replica.alter.log.dirs.threads = null<br \/>\nnum.replica.fetchers = 1<br \/>\noffset.metadata.max.bytes = 4096<br \/>\noffsets.commit.required.acks = -1<br \/>\noffsets.commit.timeout.ms = 5000<br \/>\noffsets.load.buffer.size = 5242880<br 
\/>\noffsets.retention.check.interval.ms = 600000<br \/>\noffsets.retention.minutes = 10080<br \/>\noffsets.topic.compression.codec = 0<br \/>\noffsets.topic.num.partitions = 50<br \/>\noffsets.topic.replication.factor = 1<br \/>\noffsets.topic.segment.bytes = 104857600<br \/>\npassword.encoder.cipher.algorithm = AES\/CBC\/PKCS5Padding<br \/>\npassword.encoder.iterations = 4096<br \/>\npassword.encoder.key.length = 128<br \/>\npassword.encoder.keyfactory.algorithm = null<br \/>\npassword.encoder.old.secret = null<br \/>\npassword.encoder.secret = null<br \/>\nport = 9092<br \/>\nprincipal.builder.class = null<br \/>\nprocess.roles = []<br \/>\nproducer.purgatory.purge.interval.requests = 1000<br \/>\nqueued.max.request.bytes = -1<br \/>\nqueued.max.requests = 500<br \/>\nquota.consumer.default = 9223372036854775807<br \/>\nquota.producer.default = 9223372036854775807<br \/>\nquota.window.num = 11<br \/>\nquota.window.size.seconds = 1<br \/>\nreplica.fetch.backoff.ms = 1000<br \/>\nreplica.fetch.max.bytes = 1048576<br \/>\nreplica.fetch.min.bytes = 1<br \/>\nreplica.fetch.response.max.bytes = 10485760<br \/>\nreplica.fetch.wait.max.ms = 500<br \/>\nreplica.high.watermark.checkpoint.interval.ms = 5000<br \/>\nreplica.lag.time.max.ms = 30000<br \/>\nreplica.selector.class = null<br \/>\nreplica.socket.receive.buffer.bytes = 65536<br \/>\nreplica.socket.timeout.ms = 30000<br \/>\nreplication.quota.window.num = 11<br \/>\nreplication.quota.window.size.seconds = 1<br \/>\nrequest.timeout.ms = 30000<br \/>\nreserved.broker.max.id = 1000<br \/>\nsasl.client.callback.handler.class = null<br \/>\nsasl.enabled.mechanisms = [GSSAPI]<br \/>\nsasl.jaas.config = null<br \/>\nsasl.kerberos.kinit.cmd = \/usr\/bin\/kinit<br \/>\nsasl.kerberos.min.time.before.relogin = 60000<br \/>\nsasl.kerberos.principal.to.local.rules = [DEFAULT]<br \/>\nsasl.kerberos.service.name = null<br \/>\nsasl.kerberos.ticket.renew.jitter = 0.05<br \/>\nsasl.kerberos.ticket.renew.window.factor = 0.8<br 
\/>\nsasl.login.callback.handler.class = null<br \/>\nsasl.login.class = null<br \/>\nsasl.login.refresh.buffer.seconds = 300<br \/>\nsasl.login.refresh.min.period.seconds = 60<br \/>\nsasl.login.refresh.window.factor = 0.8<br \/>\nsasl.login.refresh.window.jitter = 0.05<br \/>\nsasl.mechanism.controller.protocol = GSSAPI<br \/>\nsasl.mechanism.inter.broker.protocol = GSSAPI<br \/>\nsasl.server.callback.handler.class = null<br \/>\nsecurity.inter.broker.protocol = PLAINTEXT<br \/>\nsecurity.providers = null<br \/>\nsocket.connection.setup.timeout.max.ms = 30000<br \/>\nsocket.connection.setup.timeout.ms = 10000<br \/>\nsocket.receive.buffer.bytes = 102400<br \/>\nsocket.request.max.bytes = 104857600<br \/>\nsocket.send.buffer.bytes = 102400<br \/>\nssl.cipher.suites = []<br \/>\nssl.client.auth = none<br \/>\nssl.enabled.protocols = [TLSv1.2]<br \/>\nssl.endpoint.identification.algorithm = https<br \/>\nssl.engine.factory.class = null<br \/>\nssl.key.password = null<br \/>\nssl.keymanager.algorithm = SunX509<br \/>\nssl.keystore.certificate.chain = null<br \/>\nssl.keystore.key = null<br \/>\nssl.keystore.location = null<br \/>\nssl.keystore.password = null<br \/>\nssl.keystore.type = JKS<br \/>\nssl.principal.mapping.rules = DEFAULT<br \/>\nssl.protocol = TLSv1.2<br \/>\nssl.provider = null<br \/>\nssl.secure.random.implementation = null<br \/>\nssl.trustmanager.algorithm = PKIX<br \/>\nssl.truststore.certificates = null<br \/>\nssl.truststore.location = null<br \/>\nssl.truststore.password = null<br \/>\nssl.truststore.type = JKS<br \/>\ntransaction.abort.timed.out.transaction.cleanup.interval.ms = 10000<br \/>\ntransaction.max.timeout.ms = 900000<br \/>\ntransaction.remove.expired.transaction.cleanup.interval.ms = 3600000<br \/>\ntransaction.state.log.load.buffer.size = 5242880<br \/>\ntransaction.state.log.min.isr = 1<br \/>\ntransaction.state.log.num.partitions = 50<br \/>\ntransaction.state.log.replication.factor = 1<br 
\/>\ntransaction.state.log.segment.bytes = 104857600<br \/>\ntransactional.id.expiration.ms = 604800000<br \/>\nunclean.leader.election.enable = false<br \/>\nzookeeper.clientCnxnSocket = null<br \/>\nzookeeper.connect = localhost:2181<br \/>\nzookeeper.connection.timeout.ms = 18000<br \/>\nzookeeper.max.in.flight.requests = 10<br \/>\nzookeeper.session.timeout.ms = 18000<br \/>\nzookeeper.set.acl = false<br \/>\nzookeeper.ssl.cipher.suites = null<br \/>\nzookeeper.ssl.client.enable = false<br \/>\nzookeeper.ssl.crl.enable = false<br \/>\nzookeeper.ssl.enabled.protocols = null<br \/>\nzookeeper.ssl.endpoint.identification.algorithm = HTTPS<br \/>\nzookeeper.ssl.keystore.location = null<br \/>\nzookeeper.ssl.keystore.password = null<br \/>\nzookeeper.ssl.keystore.type = null<br \/>\nzookeeper.ssl.ocsp.enable = false<br \/>\nzookeeper.ssl.protocol = TLSv1.2<br \/>\nzookeeper.ssl.truststore.location = null<br \/>\nzookeeper.ssl.truststore.password = null<br \/>\nzookeeper.ssl.truststore.type = null<br \/>\nzookeeper.sync.time.ms = 2000<br \/>\n(kafka.server.KafkaConfig)<br \/>\n[2021-08-14 11:17:17,394] INFO KafkaConfig values:<br \/>\nadvertised.host.name = null<br \/>\nadvertised.listeners = null<br \/>\nadvertised.port = null<br \/>\nalter.config.policy.class.name = null<br \/>\nalter.log.dirs.replication.quota.window.num = 11<br \/>\nalter.log.dirs.replication.quota.window.size.seconds = 1<br \/>\nauthorizer.class.name =<br \/>\nauto.create.topics.enable = true<br \/>\nauto.leader.rebalance.enable = true<br \/>\nbackground.threads = 10<br \/>\nbroker.heartbeat.interval.ms = 2000<br \/>\nbroker.id = 0<br \/>\nbroker.id.generation.enable = true<br \/>\nbroker.rack = null<br \/>\nbroker.session.timeout.ms = 9000<br \/>\nclient.quota.callback.class = null<br \/>\ncompression.type = producer<br \/>\nconnection.failed.authentication.delay.ms = 100<br \/>\nconnections.max.idle.ms = 600000<br \/>\nconnections.max.reauth.ms = 0<br \/>\ncontrol.plane.listener.name = null<br 
\/>\ncontrolled.shutdown.enable = true<br \/>\ncontrolled.shutdown.max.retries = 3<br \/>\ncontrolled.shutdown.retry.backoff.ms = 5000<br \/>\ncontroller.listener.names = null<br \/>\ncontroller.quorum.append.linger.ms = 25<br \/>\ncontroller.quorum.election.backoff.max.ms = 1000<br \/>\ncontroller.quorum.election.timeout.ms = 1000<br \/>\ncontroller.quorum.fetch.timeout.ms = 2000<br \/>\ncontroller.quorum.request.timeout.ms = 2000<br \/>\ncontroller.quorum.retry.backoff.ms = 20<br \/>\ncontroller.quorum.voters = []<br \/>\ncontroller.quota.window.num = 11<br \/>\ncontroller.quota.window.size.seconds = 1<br \/>\ncontroller.socket.timeout.ms = 30000<br \/>\ncreate.topic.policy.class.name = null<br \/>\ndefault.replication.factor = 1<br \/>\ndelegation.token.expiry.check.interval.ms = 3600000<br \/>\ndelegation.token.expiry.time.ms = 86400000<br \/>\ndelegation.token.master.key = null<br \/>\ndelegation.token.max.lifetime.ms = 604800000<br \/>\ndelegation.token.secret.key = null<br \/>\ndelete.records.purgatory.purge.interval.requests = 1<br \/>\ndelete.topic.enable = true<br \/>\nfetch.max.bytes = 57671680<br \/>\nfetch.purgatory.purge.interval.requests = 1000<br \/>\ngroup.initial.rebalance.delay.ms = 0<br \/>\ngroup.max.session.timeout.ms = 1800000<br \/>\ngroup.max.size = 2147483647<br \/>\ngroup.min.session.timeout.ms = 6000<br \/>\nhost.name =<br \/>\ninitial.broker.registration.timeout.ms = 60000<br \/>\ninter.broker.listener.name = null<br \/>\ninter.broker.protocol.version = 2.8-IV1<br \/>\nkafka.metrics.polling.interval.secs = 10<br \/>\nkafka.metrics.reporters = []<br \/>\nleader.imbalance.check.interval.seconds = 300<br \/>\nleader.imbalance.per.broker.percentage = 10<br \/>\nlistener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL<br \/>\nlisteners = null<br \/>\nlog.cleaner.backoff.ms = 15000<br \/>\nlog.cleaner.dedupe.buffer.size = 134217728<br \/>\nlog.cleaner.delete.retention.ms = 86400000<br 
\/>\nlog.cleaner.enable = true<br \/>\nlog.cleaner.io.buffer.load.factor = 0.9<br \/>\nlog.cleaner.io.buffer.size = 524288<br \/>\nlog.cleaner.io.max.bytes.per.second = 1.7976931348623157E308<br \/>\nlog.cleaner.max.compaction.lag.ms = 9223372036854775807<br \/>\nlog.cleaner.min.cleanable.ratio = 0.5<br \/>\nlog.cleaner.min.compaction.lag.ms = 0<br \/>\nlog.cleaner.threads = 1<br \/>\nlog.cleanup.policy = [delete]<br \/>\nlog.dir = \/tmp\/kafka-logs<br \/>\nlog.dirs = \/tmp\/kafka-logs<br \/>\nlog.flush.interval.messages = 9223372036854775807<br \/>\nlog.flush.interval.ms = null<br \/>\nlog.flush.offset.checkpoint.interval.ms = 60000<br \/>\nlog.flush.scheduler.interval.ms = 9223372036854775807<br \/>\nlog.flush.start.offset.checkpoint.interval.ms = 60000<br \/>\nlog.index.interval.bytes = 4096<br \/>\nlog.index.size.max.bytes = 10485760<br \/>\nlog.message.downconversion.enable = true<br \/>\nlog.message.format.version = 2.8-IV1<br \/>\nlog.message.timestamp.difference.max.ms = 9223372036854775807<br \/>\nlog.message.timestamp.type = CreateTime<br \/>\nlog.preallocate = false<br \/>\nlog.retention.bytes = -1<br \/>\nlog.retention.check.interval.ms = 300000<br \/>\nlog.retention.hours = 168<br \/>\nlog.retention.minutes = null<br \/>\nlog.retention.ms = null<br \/>\nlog.roll.hours = 168<br \/>\nlog.roll.jitter.hours = 0<br \/>\nlog.roll.jitter.ms = null<br \/>\nlog.roll.ms = null<br \/>\nlog.segment.bytes = 1073741824<br \/>\nlog.segment.delete.delay.ms = 60000<br \/>\nmax.connection.creation.rate = 2147483647<br \/>\nmax.connections = 2147483647<br \/>\nmax.connections.per.ip = 2147483647<br \/>\nmax.connections.per.ip.overrides =<br \/>\nmax.incremental.fetch.session.cache.slots = 1000<br \/>\nmessage.max.bytes = 1048588<br \/>\nmetadata.log.dir = null<br \/>\nmetric.reporters = []<br \/>\nmetrics.num.samples = 2<br \/>\nmetrics.recording.level = INFO<br \/>\nmetrics.sample.window.ms = 30000<br \/>\nmin.insync.replicas = 1<br \/>\nnode.id = -1<br 
\/>\nnum.io.threads = 8<br \/>\nnum.network.threads = 3<br \/>\nnum.partitions = 1<br \/>\nnum.recovery.threads.per.data.dir = 1<br \/>\nnum.replica.alter.log.dirs.threads = null<br \/>\nnum.replica.fetchers = 1<br \/>\noffset.metadata.max.bytes = 4096<br \/>\noffsets.commit.required.acks = -1<br \/>\noffsets.commit.timeout.ms = 5000<br \/>\noffsets.load.buffer.size = 5242880<br \/>\noffsets.retention.check.interval.ms = 600000<br \/>\noffsets.retention.minutes = 10080<br \/>\noffsets.topic.compression.codec = 0<br \/>\noffsets.topic.num.partitions = 50<br \/>\noffsets.topic.replication.factor = 1<br \/>\noffsets.topic.segment.bytes = 104857600<br \/>\npassword.encoder.cipher.algorithm = AES\/CBC\/PKCS5Padding<br \/>\npassword.encoder.iterations = 4096<br \/>\npassword.encoder.key.length = 128<br \/>\npassword.encoder.keyfactory.algorithm = null<br \/>\npassword.encoder.old.secret = null<br \/>\npassword.encoder.secret = null<br \/>\nport = 9092<br \/>\nprincipal.builder.class = null<br \/>\nprocess.roles = []<br \/>\nproducer.purgatory.purge.interval.requests = 1000<br \/>\nqueued.max.request.bytes = -1<br \/>\nqueued.max.requests = 500<br \/>\nquota.consumer.default = 9223372036854775807<br \/>\nquota.producer.default = 9223372036854775807<br \/>\nquota.window.num = 11<br \/>\nquota.window.size.seconds = 1<br \/>\nreplica.fetch.backoff.ms = 1000<br \/>\nreplica.fetch.max.bytes = 1048576<br \/>\nreplica.fetch.min.bytes = 1<br \/>\nreplica.fetch.response.max.bytes = 10485760<br \/>\nreplica.fetch.wait.max.ms = 500<br \/>\nreplica.high.watermark.checkpoint.interval.ms = 5000<br \/>\nreplica.lag.time.max.ms = 30000<br \/>\nreplica.selector.class = null<br \/>\nreplica.socket.receive.buffer.bytes = 65536<br \/>\nreplica.socket.timeout.ms = 30000<br \/>\nreplication.quota.window.num = 11<br \/>\nreplication.quota.window.size.seconds = 1<br \/>\nrequest.timeout.ms = 30000<br \/>\nreserved.broker.max.id = 1000<br \/>\nsasl.client.callback.handler.class = null<br 
\/>\nsasl.enabled.mechanisms = [GSSAPI]<br \/>\nsasl.jaas.config = null<br \/>\nsasl.kerberos.kinit.cmd = \/usr\/bin\/kinit<br \/>\nsasl.kerberos.min.time.before.relogin = 60000<br \/>\nsasl.kerberos.principal.to.local.rules = [DEFAULT]<br \/>\nsasl.kerberos.service.name = null<br \/>\nsasl.kerberos.ticket.renew.jitter = 0.05<br \/>\nsasl.kerberos.ticket.renew.window.factor = 0.8<br \/>\nsasl.login.callback.handler.class = null<br \/>\nsasl.login.class = null<br \/>\nsasl.login.refresh.buffer.seconds = 300<br \/>\nsasl.login.refresh.min.period.seconds = 60<br \/>\nsasl.login.refresh.window.factor = 0.8<br \/>\nsasl.login.refresh.window.jitter = 0.05<br \/>\nsasl.mechanism.controller.protocol = GSSAPI<br \/>\nsasl.mechanism.inter.broker.protocol = GSSAPI<br \/>\nsasl.server.callback.handler.class = null<br \/>\nsecurity.inter.broker.protocol = PLAINTEXT<br \/>\nsecurity.providers = null<br \/>\nsocket.connection.setup.timeout.max.ms = 30000<br \/>\nsocket.connection.setup.timeout.ms = 10000<br \/>\nsocket.receive.buffer.bytes = 102400<br \/>\nsocket.request.max.bytes = 104857600<br \/>\nsocket.send.buffer.bytes = 102400<br \/>\nssl.cipher.suites = []<br \/>\nssl.client.auth = none<br \/>\nssl.enabled.protocols = [TLSv1.2]<br \/>\nssl.endpoint.identification.algorithm = https<br \/>\nssl.engine.factory.class = null<br \/>\nssl.key.password = null<br \/>\nssl.keymanager.algorithm = SunX509<br \/>\nssl.keystore.certificate.chain = null<br \/>\nssl.keystore.key = null<br \/>\nssl.keystore.location = null<br \/>\nssl.keystore.password = null<br \/>\nssl.keystore.type = JKS<br \/>\nssl.principal.mapping.rules = DEFAULT<br \/>\nssl.protocol = TLSv1.2<br \/>\nssl.provider = null<br \/>\nssl.secure.random.implementation = null<br \/>\nssl.trustmanager.algorithm = PKIX<br \/>\nssl.truststore.certificates = null<br \/>\nssl.truststore.location = null<br \/>\nssl.truststore.password = null<br \/>\nssl.truststore.type = JKS<br 
\/>\ntransaction.abort.timed.out.transaction.cleanup.interval.ms = 10000<br \/>\ntransaction.max.timeout.ms = 900000<br \/>\ntransaction.remove.expired.transaction.cleanup.interval.ms = 3600000<br \/>\ntransaction.state.log.load.buffer.size = 5242880<br \/>\ntransaction.state.log.min.isr = 1<br \/>\ntransaction.state.log.num.partitions = 50<br \/>\ntransaction.state.log.replication.factor = 1<br \/>\ntransaction.state.log.segment.bytes = 104857600<br \/>\ntransactional.id.expiration.ms = 604800000<br \/>\nunclean.leader.election.enable = false<br \/>\nzookeeper.clientCnxnSocket = null<br \/>\nzookeeper.connect = localhost:2181<br \/>\nzookeeper.connection.timeout.ms = 18000<br \/>\nzookeeper.max.in.flight.requests = 10<br \/>\nzookeeper.session.timeout.ms = 18000<br \/>\nzookeeper.set.acl = false<br \/>\nzookeeper.ssl.cipher.suites = null<br \/>\nzookeeper.ssl.client.enable = false<br \/>\nzookeeper.ssl.crl.enable = false<br \/>\nzookeeper.ssl.enabled.protocols = null<br \/>\nzookeeper.ssl.endpoint.identification.algorithm = HTTPS<br \/>\nzookeeper.ssl.keystore.location = null<br \/>\nzookeeper.ssl.keystore.password = null<br \/>\nzookeeper.ssl.keystore.type = null<br \/>\nzookeeper.ssl.ocsp.enable = false<br \/>\nzookeeper.ssl.protocol = TLSv1.2<br \/>\nzookeeper.ssl.truststore.location = null<br \/>\nzookeeper.ssl.truststore.password = null<br \/>\nzookeeper.ssl.truststore.type = null<br \/>\nzookeeper.sync.time.ms = 2000<br \/>\n(kafka.server.KafkaConfig)<br \/>\n[2021-08-14 11:17:17,454] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)<br \/>\n[2021-08-14 11:17:17,455] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)<br \/>\n[2021-08-14 11:17:17,456] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)<br \/>\n[2021-08-14 11:17:17,458] INFO [ThrottledChannelReaper-ControllerMutation]: Starting 
(kafka.server.ClientQuotaManager$ThrottledChannelReaper)<br \/>\n[2021-08-14 11:17:17,484] INFO Log directory \/tmp\/kafka-logs not found, creating it. (kafka.log.LogManager)<br \/>\n[2021-08-14 11:17:17,500] INFO Loading logs from log dirs ArraySeq(\/tmp\/kafka-logs) (kafka.log.LogManager)<br \/>\n[2021-08-14 11:17:17,506] INFO Attempting recovery for all logs in \/tmp\/kafka-logs since no clean shutdown file was found (kafka.log.LogManager)<br \/>\n[2021-08-14 11:17:17,532] INFO Loaded 0 logs in 33ms. (kafka.log.LogManager)<br \/>\n[2021-08-14 11:17:17,533] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)<br \/>\n[2021-08-14 11:17:17,536] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)<br \/>\n[2021-08-14 11:17:18,486] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)<br \/>\n[2021-08-14 11:17:18,490] INFO Awaiting socket connections on 0.0.0.0:9092. 
(kafka.network.Acceptor)<br \/>\n[2021-08-14 11:17:18,568] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)<br \/>\n[2021-08-14 11:17:18,614] INFO [broker-0-to-controller-send-thread]: Starting (kafka.server.BrokerToControllerRequestThread)<br \/>\n[2021-08-14 11:17:18,647] INFO [ExpirationReaper-0-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)<br \/>\n[2021-08-14 11:17:18,648] INFO [ExpirationReaper-0-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)<br \/>\n[2021-08-14 11:17:18,648] INFO [ExpirationReaper-0-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)<br \/>\n[2021-08-14 11:17:18,649] INFO [ExpirationReaper-0-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)<br \/>\n[2021-08-14 11:17:18,670] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)<br \/>\n[2021-08-14 11:17:18,701] INFO Creating \/brokers\/ids\/0 (is it secure? 
false) (kafka.zk.KafkaZkClient)<br \/>\n[2021-08-14 11:17:18,720] INFO Stat of the created znode at \/brokers\/ids\/0 is: 25,25,1628907438713,1628907438713,1,0,0,72058075634860032,202,0,25<br \/>\n(kafka.zk.KafkaZkClient)<br \/>\n[2021-08-14 11:17:18,721] INFO Registered broker 0 at path \/brokers\/ids\/0 with addresses: PLAINTEXT:\/\/localhost:9092, czxid (broker epoch): 25 (kafka.zk.KafkaZkClient)<br \/>\n[2021-08-14 11:17:18,788] INFO [ExpirationReaper-0-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)<br \/>\n[2021-08-14 11:17:18,799] INFO [ExpirationReaper-0-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)<br \/>\n[2021-08-14 11:17:18,801] INFO [ExpirationReaper-0-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)<br \/>\n[2021-08-14 11:17:18,803] INFO Successfully created \/controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)<br \/>\n[2021-08-14 11:17:18,815] INFO [GroupCoordinator 0]: Starting up. (kafka.coordinator.group.GroupCoordinator)<br \/>\n[2021-08-14 11:17:18,849] INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.group.GroupCoordinator)<br \/>\n[2021-08-14 11:17:18,850] INFO Feature ZK node created at path: \/feature (kafka.server.FinalizedFeatureChangeListener)<br \/>\n[2021-08-14 11:17:18,897] INFO Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Features{}, epoch=0). (kafka.server.FinalizedFeatureCache)<br \/>\n[2021-08-14 11:17:18,902] INFO [ProducerId Manager 0]: Acquired new producerId block (brokerId:0,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1 (kafka.coordinator.transaction.ProducerIdManager)<br \/>\n[2021-08-14 11:17:18,902] INFO [TransactionCoordinator id=0] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)<br \/>\n[2021-08-14 11:17:18,906] INFO [TransactionCoordinator id=0] Startup complete. 
(kafka.coordinator.transaction.TransactionCoordinator)<br \/>\n[2021-08-14 11:17:18,908] INFO [Transaction Marker Channel Manager 0]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)<br \/>\n[2021-08-14 11:17:18,946] INFO [ExpirationReaper-0-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)<br \/>\n[2021-08-14 11:17:18,974] INFO [\/config\/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)<br \/>\n[2021-08-14 11:17:19,052] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Starting socket server acceptors and processors (kafka.network.SocketServer)<br \/>\n[2021-08-14 11:17:19,069] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Started data-plane acceptor and processor(s) for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)<br \/>\n[2021-08-14 11:17:19,070] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Started socket server acceptors and processors (kafka.network.SocketServer)<br \/>\n[2021-08-14 11:17:19,074] INFO Kafka version: 6.2.0-ccs (org.apache.kafka.common.utils.AppInfoParser)<br \/>\n[2021-08-14 11:17:19,074] INFO Kafka commitId: 1a5755cf9401c84f (org.apache.kafka.common.utils.AppInfoParser)<br \/>\n[2021-08-14 11:17:19,074] INFO Kafka startTimeMs: 1628907439070 (org.apache.kafka.common.utils.AppInfoParser)<br \/>\n[2021-08-14 11:17:19,076] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)<br \/>\n[2021-08-14 11:17:19,134] INFO [broker-0-to-controller-send-thread]: Recorded new controller, from now on will use broker localhost:9092 (id: 0 rack: null) (kafka.server.BrokerToControllerRequestThread)<\/p>\n<p>Port 9092 is now in the listening state.<\/p>\n<p>Schema Registry<\/p>\n<p>Configuration<br \/>\nConfigure Confluent Platform &#8211; Schema Registry<br 
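\/>
Before diving into the Schema Registry configuration, it can help to confirm that the broker really is reachable on port 9092 from the machine that will run Schema Registry. A minimal sketch using only the Python standard library (the host and port match the single-node setup above):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers connection refused, unreachable host, and timeouts
        return False

# The broker in this walkthrough listens on localhost:9092.
print("broker reachable:", port_open("localhost", 9092))
```

The same check can be reused later for the Schema Registry listener on port 8081.<br 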
\/>\nThe property file shipped with the product is here:<br \/>\netc\/schema-registry\/schema-registry.properties<\/p>\n<p>\/opt\/confluent-6.2.0\/etc\/schema-registry\/schema-registry.properties<br \/>\n#<br \/>\n# Copyright 2018 Confluent Inc.<br \/>\n#<br \/>\n# Licensed under the Apache License, Version 2.0 (the &quot;License&quot;);<br \/>\n# you may not use this file except in compliance with the License.<br \/>\n# You may obtain a copy of the License at<br \/>\n#<br \/>\n# http:\/\/www.apache.org\/licenses\/LICENSE-2.0<br \/>\n#<br \/>\n# Unless required by applicable law or agreed to in writing, software<br \/>\n# distributed under the License is distributed on an &quot;AS IS&quot; BASIS,<br \/>\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.<br \/>\n# See the License for the specific language governing permissions and<br \/>\n# limitations under the License.<br \/>\n#<\/p>\n<p># The address the socket server listens on.<br \/>\n# FORMAT:<br \/>\n# listeners = listener_name:\/\/host_name:port<br \/>\n# EXAMPLE:<br \/>\n# listeners = PLAINTEXT:\/\/your.host.name:9092<br \/>\nlisteners=http:\/\/0.0.0.0:8081<\/p>\n<p># Zookeeper connection string for the Zookeeper cluster used by your Kafka cluster<br \/>\n# (see zookeeper docs for details).<br \/>\n# This is a comma separated host:port pairs, each corresponding to a zk<br \/>\n# server. e.g. &quot;127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002&quot;.<br \/>\n# Note: use of this property is deprecated.<br \/>\n#kafkastore.connection.url=localhost:2181<\/p>\n<p># Alternatively, Schema Registry can now operate without Zookeeper, handling all coordination via<br \/>\n# Kafka brokers. 
Use this setting to specify the bootstrap servers for your Kafka cluster and it<br \/>\n# will be used both for selecting the leader schema registry instance and for storing the data for<br \/>\n# registered schemas.<br \/>\n# (Note that you cannot mix the two modes; use this mode only on new deployments or by shutting down<br \/>\n# all instances, switching to the new configuration, and then starting the schema registry<br \/>\n# instances again.)<br \/>\nkafkastore.bootstrap.servers=PLAINTEXT:\/\/localhost:9092<\/p>\n<p># The name of the topic to store schemas in<br \/>\nkafkastore.topic=_schemas<\/p>\n<p># If true, API requests that fail will include extra debugging information, including stack traces<br \/>\ndebug=false<\/p>\n<p>The parameter reference is here:<br \/>\nSchema Registry Configuration Options<\/p>\n<p>Start\/Stop<br \/>\nStart Confluent Platform<\/p>\n<p>Start command: bin\/schema-registry-start -daemon<\/p>\n<p>Stop command: bin\/schema-registry-stop<\/p>\n<p>Log: logs\/schema-registry.log<\/p>\n<p>Listen port: 8081<\/p>\n<p>Start it with the following command:<br \/>\n\/opt\/confluent-6.2.0\/bin\/schema-registry-start -daemon \/opt\/confluent-6.2.0\/etc\/schema-registry\/schema-registry.properties<br \/>\nStartup log: logs\/schema-registry.log<br \/>\n[2021-08-14 11:57:16,923] INFO SchemaRegistryConfig values:<br \/>\naccess.control.allow.headers =<br \/>\naccess.control.allow.methods =<br \/>\naccess.control.allow.origin =<br \/>\naccess.control.skip.options = true<br \/>\nauthentication.method = NONE<br \/>\nauthentication.realm =<br \/>\nauthentication.roles = [*]<br \/>\nauthentication.skip.paths = []<br \/>\navro.compatibility.level =<br \/>\ncompression.enable = true<br \/>\ncsrf.prevention.enable = false<br \/>\ncsrf.prevention.token.endpoint = 
\/csrf<br \/>\ncsrf.prevention.token.expiration.minutes = 30<br \/>\ncsrf.prevention.token.max.entries = 10000<br \/>\ndebug = false<br \/>\nhost.name = localhost<br \/>\nidle.timeout.ms = 30000<br \/>\ninter.instance.headers.whitelist = []<br \/>\ninter.instance.protocol = http<br \/>\nkafkastore.bootstrap.servers = [PLAINTEXT:\/\/localhost:9092]<br \/>\nkafkastore.checkpoint.dir = \/tmp<br \/>\nkafkastore.checkpoint.version = 0<br \/>\nkafkastore.connection.url =<br \/>\nkafkastore.group.id =<br \/>\nkafkastore.init.timeout.ms = 60000<br \/>\nkafkastore.sasl.kerberos.kinit.cmd = \/usr\/bin\/kinit<br \/>\nkafkastore.sasl.kerberos.min.time.before.relogin = 60000<br \/>\nkafkastore.sasl.kerberos.service.name =<br \/>\nkafkastore.sasl.kerberos.ticket.renew.jitter = 0.05<br \/>\nkafkastore.sasl.kerberos.ticket.renew.window.factor = 0.8<br \/>\nkafkastore.sasl.mechanism = GSSAPI<br \/>\nkafkastore.security.protocol = PLAINTEXT<br \/>\nkafkastore.ssl.cipher.suites =<br \/>\nkafkastore.ssl.enabled.protocols = TLSv1.2,TLSv1.1,TLSv1<br \/>\nkafkastore.ssl.endpoint.identification.algorithm =<br \/>\nkafkastore.ssl.key.password = [hidden]<br \/>\nkafkastore.ssl.keymanager.algorithm = SunX509<br \/>\nkafkastore.ssl.keystore.location =<br \/>\nkafkastore.ssl.keystore.password = [hidden]<br \/>\nkafkastore.ssl.keystore.type = JKS<br \/>\nkafkastore.ssl.protocol = TLS<br \/>\nkafkastore.ssl.provider =<br \/>\nkafkastore.ssl.trustmanager.algorithm = PKIX<br \/>\nkafkastore.ssl.truststore.location =<br \/>\nkafkastore.ssl.truststore.password = [hidden]<br \/>\nkafkastore.ssl.truststore.type = JKS<br \/>\nkafkastore.timeout.ms = 500<br \/>\nkafkastore.topic = _schemas<br \/>\nkafkastore.topic.replication.factor = 3<br \/>\nkafkastore.topic.skip.validation = false<br \/>\nkafkastore.update.handlers = []<br \/>\nkafkastore.write.max.retries = 5<br \/>\nkafkastore.zk.session.timeout.ms = 30000<br \/>\nleader.eligibility = true<br \/>\nlisteners = [http:\/\/0.0.0.0:8081]<br 
\/>\nmaster.eligibility = null<br \/>\nmetric.reporters = []<br \/>\nmetrics.jmx.prefix = kafka.schema.registry<br \/>\nmetrics.num.samples = 2<br \/>\nmetrics.sample.window.ms = 30000<br \/>\nmetrics.tag.map = []<br \/>\nmode.mutability = true<br \/>\nport = 8081<br \/>\nrequest.logger.name = io.confluent.rest-utils.requests<br \/>\nrequest.queue.capacity = 2147483647<br \/>\nrequest.queue.capacity.growby = 64<br \/>\nrequest.queue.capacity.init = 128<br \/>\nresource.extension.class = []<br \/>\nresource.extension.classes = []<br \/>\nresource.static.locations = []<br \/>\nresponse.http.headers.config =<br \/>\nresponse.mediatype.default = application\/vnd.schemaregistry.v1+json<br \/>\nresponse.mediatype.preferred = [application\/vnd.schemaregistry.v1+json, application\/vnd.schemaregistry+json, application\/json]<br \/>\nrest.servlet.initializor.classes = []<br \/>\nschema.cache.expiry.secs = 300<br \/>\nschema.cache.size = 1000<br \/>\nschema.compatibility.level = backward<br \/>\nschema.providers = []<br \/>\nschema.registry.group.id = schema-registry<br \/>\nschema.registry.inter.instance.protocol =<br \/>\nschema.registry.resource.extension.class = []<br \/>\nschema.registry.zk.namespace = schema_registry<br \/>\nshutdown.graceful.ms = 1000<br \/>\nssl.cipher.suites = []<br \/>\nssl.client.auth = false<br \/>\nssl.client.authentication = NONE<br \/>\nssl.enabled.protocols = []<br \/>\nssl.endpoint.identification.algorithm = null<br \/>\nssl.key.password = [hidden]<br \/>\nssl.keymanager.algorithm =<br \/>\nssl.keystore.location =<br \/>\nssl.keystore.password = [hidden]<br \/>\nssl.keystore.reload = false<br \/>\nssl.keystore.type = JKS<br \/>\nssl.keystore.watch.location =<br \/>\nssl.protocol = TLS<br \/>\nssl.provider =<br \/>\nssl.trustmanager.algorithm =<br \/>\nssl.truststore.location =<br \/>\nssl.truststore.password = [hidden]<br \/>\nssl.truststore.type = JKS<br \/>\nthread.pool.max = 200<br \/>\nthread.pool.min = 8<br \/>\nwebsocket.path.prefix = 
\/ws<br \/>\nwebsocket.servlet.initializor.classes = []<br \/>\nzookeeper.set.acl = false<br \/>\n(io.confluent.kafka.schemaregistry.rest.SchemaRegistryConfig)<br \/>\n[2021-08-14 11:57:16,998] INFO Logging initialized @1039ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)<br \/>\n[2021-08-14 11:57:17,007] INFO Initial capacity 128, increased by 64, maximum capacity 2147483647. (io.confluent.rest.ApplicationServer)<br \/>\n[2021-08-14 11:57:17,117] INFO Adding listener: http:\/\/0.0.0.0:8081 (io.confluent.rest.ApplicationServer)<br \/>\n[2021-08-14 11:57:17,999] INFO Registering schema provider for AVRO: io.confluent.kafka.schemaregistry.avro.AvroSchemaProvider (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry)<br \/>\n[2021-08-14 11:57:17,999] INFO Registering schema provider for JSON: io.confluent.kafka.schemaregistry.json.JsonSchemaProvider (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry)<br \/>\n[2021-08-14 11:57:17,999] INFO Registering schema provider for PROTOBUF: io.confluent.kafka.schemaregistry.protobuf.ProtobufSchemaProvider (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry)<br \/>\n[2021-08-14 11:57:18,063] INFO Initializing KafkaStore with broker endpoints: PLAINTEXT:\/\/localhost:9092 (io.confluent.kafka.schemaregistry.storage.KafkaStore)<br \/>\n[2021-08-14 11:57:18,088] INFO Creating schemas topic _schemas (io.confluent.kafka.schemaregistry.storage.KafkaStore)<br \/>\n[2021-08-14 11:57:18,090] WARN Creating the schema topic _schemas using a replication factor of 1, which is less than the desired one of 3. If this is a production environment, it&#8217;s crucial to add more brokers and increase the replication factor of the topic. 
(io.confluent.kafka.schemaregistry.storage.KafkaStore)<br \/>\n[2021-08-14 11:57:18,442] INFO Kafka store reader thread starting consumer (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)<br \/>\n[2021-08-14 11:57:18,603] INFO Seeking to beginning for all partitions (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)<br \/>\n[2021-08-14 11:57:18,604] INFO Initialized last consumed offset to -1 (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)<br \/>\n[2021-08-14 11:57:18,606] INFO [kafka-store-reader-thread-_schemas]: Starting (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)<br \/>\n[2021-08-14 11:57:18,851] INFO Wait to catch up until the offset at 0 (io.confluent.kafka.schemaregistry.storage.KafkaStore)<br \/>\n[2021-08-14 11:57:18,963] INFO Reached offset at 0 (io.confluent.kafka.schemaregistry.storage.KafkaStore)<br \/>\n[2021-08-14 11:57:18,964] INFO Joining schema registry with Kafka-based coordination (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry)<br \/>\n[2021-08-14 11:57:19,843] INFO Finished rebalance with leader election result: Assignment{version=1, error=0, leader=&#039;sr-1-76d7d323-be6d-42fa-b17e-d6d60fd14ac6&#039;, leaderIdentity=version=1,host=localhost,port=8081,scheme=http,leaderEligibility=true} (io.confluent.kafka.schemaregistry.leaderelector.kafka.KafkaGroupLeaderElector)<br \/>\n[2021-08-14 11:57:19,877] INFO Wait to catch up until the offset at 1 (io.confluent.kafka.schemaregistry.storage.KafkaStore)<br \/>\n[2021-08-14 11:57:19,881] INFO Reached offset at 1 (io.confluent.kafka.schemaregistry.storage.KafkaStore)<br \/>\n[2021-08-14 11:57:20,084] INFO jetty-9.4.40.v20210413; built: 2021-04-13T20:42:42.668Z; git: b881a572662e1943a14ae12e7e1207989f218b74; jvm 1.8.0_242-b08 (org.eclipse.jetty.server.Server)<br \/>\n[2021-08-14 11:57:20,185] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)<br \/>\n[2021-08-14 11:57:20,186] INFO No 
SessionScavenger set, using defaults (org.eclipse.jetty.server.session)<br \/>\n[2021-08-14 11:57:20,187] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session)<br \/>\n[2021-08-14 11:57:20,992] INFO HV000001: Hibernate Validator 6.1.7.Final (org.hibernate.validator.internal.util.Version)<br \/>\n[2021-08-14 11:57:21,380] INFO Started o.e.j.s.ServletContextHandler@3e7dd664{\/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)<br \/>\n[2021-08-14 11:57:21,401] INFO Started o.e.j.s.ServletContextHandler@71c27ee8{\/ws,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)<br \/>\n[2021-08-14 11:57:21,431] INFO Started NetworkTrafficServerConnector@4c762604{HTTP\/1.1, (http\/1.1)}{0.0.0.0:8081} (org.eclipse.jetty.server.AbstractConnector)<br \/>\n[2021-08-14 11:57:21,432] INFO Started @5478ms (org.eclipse.jetty.server.Server)<br \/>\n[2021-08-14 11:57:21,432] INFO Server started, listening for requests&#8230; (io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain)<\/p>\n<p>Port 8081 is now in the listening state.<\/p>\n<p>Reference: the related start\/stop scripts<br \/>\n\/opt\/confluent-6.2.0\/bin\/zookeeper-server-start<\/p>\n<p>zookeeper-server-start<br \/>\n#!\/bin\/bash<br \/>\n# Licensed to the Apache Software Foundation (ASF) under one or more<br \/>\n# contributor license agreements. See the NOTICE file distributed with<br \/>\n# this work for additional information regarding copyright ownership.<br \/>\n# The ASF licenses this file to You under the Apache License, Version 2.0<br \/>\n# (the &quot;License&quot;); you may not use this file except in compliance with<br \/>\n# the License. 
You may obtain a copy of the License at<br \/>\n#<br \/>\n# http:\/\/www.apache.org\/licenses\/LICENSE-2.0<br \/>\n#<br \/>\n# Unless required by applicable law or agreed to in writing, software<br \/>\n# distributed under the License is distributed on an &quot;AS IS&quot; BASIS,<br \/>\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.<br \/>\n# See the License for the specific language governing permissions and<br \/>\n# limitations under the License.<\/p>\n<p>if [ $# -lt 1 ];<br \/>\nthen<br \/>\necho &quot;USAGE: $0 [-daemon] zookeeper.properties&quot;<br \/>\nexit 1<br \/>\nfi<br \/>\nbase_dir=$(dirname $0)<\/p>\n<p>if [ &quot;x$KAFKA_LOG4J_OPTS&quot; = &quot;x&quot; ]; then<br \/>\nLOG4J_CONFIG_NORMAL_INSTALL=&quot;\/etc\/kafka\/log4j.properties&quot;<br \/>\nLOG4J_CONFIG_ZIP_INSTALL=&quot;$base_dir\/..\/etc\/kafka\/log4j.properties&quot;<br \/>\nif [ -e &quot;$LOG4J_CONFIG_NORMAL_INSTALL&quot; ]; then # Normal install layout<br \/>\nKAFKA_LOG4J_OPTS=&quot;-Dlog4j.configuration=file:${LOG4J_CONFIG_NORMAL_INSTALL}&quot;<br \/>\nelif [ -e &quot;${LOG4J_CONFIG_ZIP_INSTALL}&quot; ]; then # Simple zip file layout<br \/>\nKAFKA_LOG4J_OPTS=&quot;-Dlog4j.configuration=file:${LOG4J_CONFIG_ZIP_INSTALL}&quot;<br \/>\nelse # Fallback to normal default<br \/>\nKAFKA_LOG4J_OPTS=&quot;-Dlog4j.configuration=file:$base_dir\/..\/config\/log4j.properties&quot;<br \/>\nfi<br \/>\nfi<br \/>\nexport KAFKA_LOG4J_OPTS<\/p>\n<p>if [ &quot;x$KAFKA_HEAP_OPTS&quot; = &quot;x&quot; ]; then<br \/>\nexport KAFKA_HEAP_OPTS=&quot;-Xmx512M -Xms512M&quot;<br \/>\nfi<\/p>\n<p>EXTRA_ARGS=${EXTRA_ARGS-&#039;-name zookeeper -loggc&#039;}<\/p>\n<p>COMMAND=$1<br \/>\ncase $COMMAND in<br \/>\n-daemon)<br \/>\nEXTRA_ARGS=&quot;-daemon &quot;$EXTRA_ARGS<br \/>\nshift<br \/>\n;;<br \/>\n*)<br \/>\n;;<br \/>\nesac<\/p>\n<p>exec $base_dir\/kafka-run-class $EXTRA_ARGS org.apache.zookeeper.server.quorum.QuorumPeerMain 
&quot;$@&quot;<\/p>\n<p>\/opt\/confluent-6.2.0\/bin\/zookeeper-server-stop<\/p>\n<p>zookeeper-server-stop<br \/>\n#!\/bin\/bash<br \/>\n# Licensed to the Apache Software Foundation (ASF) under one or more<br \/>\n# contributor license agreements. See the NOTICE file distributed with<br \/>\n# this work for additional information regarding copyright ownership.<br \/>\n# The ASF licenses this file to You under the Apache License, Version 2.0<br \/>\n# (the &quot;License&quot;); you may not use this file except in compliance with<br \/>\n# the License. You may obtain a copy of the License at<br \/>\n#<br \/>\n# http:\/\/www.apache.org\/licenses\/LICENSE-2.0<br \/>\n#<br \/>\n# Unless required by applicable law or agreed to in writing, software<br \/>\n# distributed under the License is distributed on an &quot;AS IS&quot; BASIS,<br \/>\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.<br \/>\n# See the License for the specific language governing permissions and<br \/>\n# limitations under the License.<br \/>\nSIGNAL=${SIGNAL:-TERM}<\/p>\n<p>OSNAME=$(uname -s)<br \/>\nif [[ &quot;$OSNAME&quot; == &quot;OS\/390&quot; ]]; then<br \/>\nif [ -z $JOBNAME ]; then<br \/>\nJOBNAME=&quot;ZKEESTRT&quot;<br \/>\nfi<br \/>\nPIDS=$(ps -A -o pid,jobname,comm | grep -i $JOBNAME | grep java | grep -v grep | awk &#039;{print $1}&#039;)<br \/>\nelif [[ &quot;$OSNAME&quot; == &quot;OS400&quot; ]]; then<br \/>\nPIDS=$(ps -Af | grep java | grep -i QuorumPeerMain | grep -v grep | awk &#039;{print $2}&#039;)<br \/>\nelse<br \/>\nPIDS=$(ps ax | grep java | grep -i QuorumPeerMain | grep -v grep | awk &#039;{print $1}&#039;)<br \/>\nfi<\/p>\n<p>if [ -z &quot;$PIDS&quot; ]; then<br \/>\necho &quot;No zookeeper server to stop&quot;<br \/>\nexit 1<br \/>\nelse<br \/>\nkill -s $SIGNAL $PIDS<br \/>\nfi<\/p>\n<p>\/opt\/confluent-6.2.0\/bin\/kafka-server-start<\/p>\n<p>kafka-server-start<br \/>\n#!\/bin\/bash<br \/>\n# Licensed to the Apache Software 
Foundation (ASF) under one or more<br \/>\n# contributor license agreements. See the NOTICE file distributed with<br \/>\n# this work for additional information regarding copyright ownership.<br \/>\n# The ASF licenses this file to You under the Apache License, Version 2.0<br \/>\n# (the &quot;License&quot;); you may not use this file except in compliance with<br \/>\n# the License. You may obtain a copy of the License at<br \/>\n#<br \/>\n# http:\/\/www.apache.org\/licenses\/LICENSE-2.0<br \/>\n#<br \/>\n# Unless required by applicable law or agreed to in writing, software<br \/>\n# distributed under the License is distributed on an &quot;AS IS&quot; BASIS,<br \/>\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.<br \/>\n# See the License for the specific language governing permissions and<br \/>\n# limitations under the License.<\/p>\n<p>if [ $# -lt 1 ];<br \/>\nthen<br \/>\necho &quot;USAGE: $0 [-daemon] server.properties [--override property=value]*&quot;<br \/>\nexit 1<br \/>\nfi<br \/>\nbase_dir=$(dirname $0)<\/p>\n<p>if [ &quot;x$KAFKA_LOG4J_OPTS&quot; = &quot;x&quot; ]; then<br \/>\nLOG4J_CONFIG_NORMAL_INSTALL=&quot;\/etc\/kafka\/log4j.properties&quot;<br \/>\nLOG4J_CONFIG_ZIP_INSTALL=&quot;$base_dir\/..\/etc\/kafka\/log4j.properties&quot;<br \/>\nif [ -e &quot;$LOG4J_CONFIG_NORMAL_INSTALL&quot; ]; then # Normal install layout<br \/>\nKAFKA_LOG4J_OPTS=&quot;-Dlog4j.configuration=file:${LOG4J_CONFIG_NORMAL_INSTALL}&quot;<br \/>\nelif [ -e &quot;${LOG4J_CONFIG_ZIP_INSTALL}&quot; ]; then # Simple zip file layout<br \/>\nKAFKA_LOG4J_OPTS=&quot;-Dlog4j.configuration=file:${LOG4J_CONFIG_ZIP_INSTALL}&quot;<br \/>\nelse # Fallback to normal default<br \/>\nKAFKA_LOG4J_OPTS=&quot;-Dlog4j.configuration=file:$base_dir\/..\/config\/log4j.properties&quot;<br \/>\nfi<br \/>\nfi<br \/>\nexport KAFKA_LOG4J_OPTS<\/p>\n<p>if [ &quot;x$KAFKA_HEAP_OPTS&quot; = &quot;x&quot; ]; then<br \/>\nexport 
KAFKA_HEAP_OPTS=\"-Xmx1G -Xms1G\"<br \/>\nfi<\/p>\n<p>EXTRA_ARGS=${EXTRA_ARGS-'-name kafkaServer -loggc'}<\/p>\n<p>COMMAND=$1<br \/>\ncase $COMMAND in<br \/>\n-daemon)<br \/>\nEXTRA_ARGS=\"-daemon \"$EXTRA_ARGS<br \/>\nshift<br \/>\n;;<br \/>\n*)<br \/>\n;;<br \/>\nesac<\/p>\n<p>exec $base_dir\/kafka-run-class $EXTRA_ARGS kafka.Kafka \"$@\"<\/p>\n<p>\/opt\/confluent-6.2.0\/bin\/kafka-server-stop<\/p>\n<p>kafka-server-stop<br \/>\n#!\/bin\/bash<br \/>\n# Licensed to the Apache Software Foundation (ASF) under one or more<br \/>\n# contributor license agreements. See the NOTICE file distributed with<br \/>\n# this work for additional information regarding copyright ownership.<br \/>\n# The ASF licenses this file to You under the Apache License, Version 2.0<br \/>\n# (the \"License\"); you may not use this file except in compliance with<br \/>\n# the License. You may obtain a copy of the License at<br \/>\n#<br \/>\n# http:\/\/www.apache.org\/licenses\/LICENSE-2.0<br \/>\n#<br \/>\n# Unless required by applicable law or agreed to in writing, software<br \/>\n# distributed under the License is distributed on an \"AS IS\" BASIS,<br \/>\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.<br \/>\n# See the License for the specific language governing permissions and<br \/>\n# limitations under the License.<br \/>\nSIGNAL=${SIGNAL:-TERM}<\/p>\n<p>OSNAME=$(uname -s)<br \/>\nif [[ \"$OSNAME\" == \"OS\/390\" ]]; then<br \/>\nif [ -z $JOBNAME ]; then<br \/>\nJOBNAME=\"KAFKSTRT\"<br \/>\nfi<br \/>\nPIDS=$(ps -A -o pid,jobname,comm | grep -i $JOBNAME | grep java | grep -v grep | awk '{print $1}')<br \/>\nelif [[ \"$OSNAME\" == \"OS400\" ]]; then<br \/>\nPIDS=$(ps -Af | grep -i 'kafka\\.Kafka' | grep java | grep -v grep | awk '{print $2}')<br \/>\nelse<br \/>\nPIDS=$(ps ax | grep ' kafka\\.Kafka ' | 
grep java | grep -v grep | awk '{print $1}')<br \/>\nPIDS_SUPPORT=$(ps ax | grep -i 'io\\.confluent\\.support\\.metrics\\.SupportedKafka' | grep java | grep -v grep | awk '{print $1}')<br \/>\nfi<\/p>\n<p>if [ -z \"$PIDS\" ]; then<br \/>\n# Normal Kafka is not running, but maybe we are running the support wrapper?<br \/>\nif [ -z \"${PIDS_SUPPORT}\" ]; then<br \/>\necho \"No kafka server to stop\"<br \/>\nexit 1<br \/>\nelse<br \/>\nkill -s $SIGNAL $PIDS_SUPPORT<br \/>\nfi<br \/>\nelse<br \/>\nkill -s $SIGNAL $PIDS<br \/>\nfi<\/p>\n<p>\/opt\/confluent-6.2.0\/bin\/kafka-run-class<\/p>\n<p>kafka-run-class<br \/>\n#!\/bin\/bash<br \/>\n# Licensed to the Apache Software Foundation (ASF) under one or more<br \/>\n# contributor license agreements. See the NOTICE file distributed with<br \/>\n# this work for additional information regarding copyright ownership.<br \/>\n# The ASF licenses this file to You under the Apache License, Version 2.0<br \/>\n# (the \"License\"); you may not use this file except in compliance with<br \/>\n# the License. 
You may obtain a copy of the License at<br \/>\n#<br \/>\n# http:\/\/www.apache.org\/licenses\/LICENSE-2.0<br \/>\n#<br \/>\n# Unless required by applicable law or agreed to in writing, software<br \/>\n# distributed under the License is distributed on an \"AS IS\" BASIS,<br \/>\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.<br \/>\n# See the License for the specific language governing permissions and<br \/>\n# limitations under the License.<\/p>\n<p>if [ $# -lt 1 ];<br \/>\nthen<br \/>\necho \"USAGE: $0 [-daemon] [-name servicename] [-loggc] classname [opts]\"<br \/>\nexit 1<br \/>\nfi<\/p>\n<p># CYGWIN == 1 if Cygwin is detected, else 0.<br \/>\nif [[ $(uname -a) =~ \"CYGWIN\" ]]; then<br \/>\nCYGWIN=1<br \/>\nelse<br \/>\nCYGWIN=0<br \/>\nfi<\/p>\n<p>if [ -z \"$INCLUDE_TEST_JARS\" ]; then<br \/>\nINCLUDE_TEST_JARS=false<br \/>\nfi<\/p>\n<p># Exclude jars not necessary for running commands.<br \/>\nregex=\"(-(test|test-sources|src|scaladoc|javadoc)\\.jar|jar.asc)$\"<br \/>\nshould_include_file() {<br \/>\nif [ \"$INCLUDE_TEST_JARS\" = true ]; then<br \/>\nreturn 0<br \/>\nfi<br \/>\nfile=$1<br \/>\nif [ -z \"$(echo \"$file\" | egrep \"$regex\")\" ] ; then<br \/>\nreturn 0<br \/>\nelse<br \/>\nreturn 1<br \/>\nfi<br \/>\n}<\/p>\n<p>base_dir=$(dirname $0)\/..<\/p>\n<p>if [ -z \"$SCALA_VERSION\" ]; then<br \/>\nSCALA_VERSION=2.13.5<br \/>\nif [[ -f \"$base_dir\/gradle.properties\" ]]; then<br \/>\nSCALA_VERSION=`grep \"^scalaVersion=\" \"$base_dir\/gradle.properties\" | cut -d= -f 2`<br \/>\nfi<br \/>\nfi<\/p>\n<p>if [ -z \"$SCALA_BINARY_VERSION\" ]; then<br \/>\nSCALA_BINARY_VERSION=$(echo $SCALA_VERSION | cut -f 1-2 -d '.')<br \/>\nfi<\/p>\n<p># run .\/gradlew copyDependantLibs to get all dependant jars in a local dir<br \/>\nshopt -s nullglob<br \/>\nif [ -z 
\"$UPGRADE_KAFKA_STREAMS_TEST_VERSION\" ]; then<br \/>\nfor dir in \"$base_dir\"\/core\/build\/dependant-libs-${SCALA_VERSION}*;<br \/>\ndo<br \/>\nCLASSPATH=\"$CLASSPATH:$dir\/*\"<br \/>\ndone<br \/>\nfi<\/p>\n<p>for file in \"$base_dir\"\/examples\/build\/libs\/kafka-examples*.jar;<br \/>\ndo<br \/>\nif should_include_file \"$file\"; then<br \/>\nCLASSPATH=\"$CLASSPATH\":\"$file\"<br \/>\nfi<br \/>\ndone<\/p>\n<p>if [ -z \"$UPGRADE_KAFKA_STREAMS_TEST_VERSION\" ]; then<br \/>\nclients_lib_dir=$(dirname $0)\/..\/clients\/build\/libs<br \/>\nstreams_lib_dir=$(dirname $0)\/..\/streams\/build\/libs<br \/>\nstreams_dependant_clients_lib_dir=$(dirname $0)\/..\/streams\/build\/dependant-libs-${SCALA_VERSION}<br \/>\nelse<br \/>\nclients_lib_dir=\/opt\/kafka-$UPGRADE_KAFKA_STREAMS_TEST_VERSION\/libs<br \/>\nstreams_lib_dir=$clients_lib_dir<br \/>\nstreams_dependant_clients_lib_dir=$streams_lib_dir<br \/>\nfi<\/p>\n<p>for file in \"$clients_lib_dir\"\/kafka-clients*.jar;<br \/>\ndo<br \/>\nif should_include_file \"$file\"; then<br \/>\nCLASSPATH=\"$CLASSPATH\":\"$file\"<br \/>\nfi<br \/>\ndone<\/p>\n<p>for file in \"$streams_lib_dir\"\/kafka-streams*.jar;<br \/>\ndo<br \/>\nif should_include_file \"$file\"; then<br \/>\nCLASSPATH=\"$CLASSPATH\":\"$file\"<br \/>\nfi<br \/>\ndone<\/p>\n<p>if [ -z \"$UPGRADE_KAFKA_STREAMS_TEST_VERSION\" ]; then<br \/>\nfor file in \"$base_dir\"\/streams\/examples\/build\/libs\/kafka-streams-examples*.jar;<br \/>\ndo<br \/>\nif should_include_file \"$file\"; then<br \/>\nCLASSPATH=\"$CLASSPATH\":\"$file\"<br \/>\nfi<br \/>\ndone<br \/>\nelse<br \/>\nVERSION_NO_DOTS=`echo $UPGRADE_KAFKA_STREAMS_TEST_VERSION | sed 's\/\\.\/\/g'`<br \/>\nSHORT_VERSION_NO_DOTS=${VERSION_NO_DOTS:0:((${#VERSION_NO_DOTS} - 1))} # remove last 
char, ie, bug-fix number<br \/>\nfor file in \"$base_dir\"\/streams\/upgrade-system-tests-$SHORT_VERSION_NO_DOTS\/build\/libs\/kafka-streams-upgrade-system-tests*.jar;<br \/>\ndo<br \/>\nif should_include_file \"$file\"; then<br \/>\nCLASSPATH=\"$file\":\"$CLASSPATH\"<br \/>\nfi<br \/>\ndone<br \/>\nif [ \"$SHORT_VERSION_NO_DOTS\" = \"0100\" ]; then<br \/>\nCLASSPATH=\"\/opt\/kafka-$UPGRADE_KAFKA_STREAMS_TEST_VERSION\/libs\/zkclient-0.8.jar\":\"$CLASSPATH\"<br \/>\nCLASSPATH=\"\/opt\/kafka-$UPGRADE_KAFKA_STREAMS_TEST_VERSION\/libs\/zookeeper-3.4.6.jar\":\"$CLASSPATH\"<br \/>\nfi<br \/>\nif [ \"$SHORT_VERSION_NO_DOTS\" = \"0101\" ]; then<br \/>\nCLASSPATH=\"\/opt\/kafka-$UPGRADE_KAFKA_STREAMS_TEST_VERSION\/libs\/zkclient-0.9.jar\":\"$CLASSPATH\"<br \/>\nCLASSPATH=\"\/opt\/kafka-$UPGRADE_KAFKA_STREAMS_TEST_VERSION\/libs\/zookeeper-3.4.8.jar\":\"$CLASSPATH\"<br \/>\nfi<br \/>\nfi<\/p>\n<p>for file in \"$streams_dependant_clients_lib_dir\"\/rocksdb*.jar;<br \/>\ndo<br \/>\nCLASSPATH=\"$CLASSPATH\":\"$file\"<br \/>\ndone<\/p>\n<p>for file in \"$streams_dependant_clients_lib_dir\"\/*hamcrest*.jar;<br \/>\ndo<br \/>\nCLASSPATH=\"$CLASSPATH\":\"$file\"<br \/>\ndone<\/p>\n<p>for file in \"$base_dir\"\/shell\/build\/libs\/kafka-shell*.jar;<br \/>\ndo<br \/>\nif should_include_file \"$file\"; then<br \/>\nCLASSPATH=\"$CLASSPATH\":\"$file\"<br \/>\nfi<br \/>\ndone<\/p>\n<p>for dir in \"$base_dir\"\/shell\/build\/dependant-libs-${SCALA_VERSION}*;<br \/>\ndo<br \/>\nCLASSPATH=\"$CLASSPATH:$dir\/*\"<br \/>\ndone<\/p>\n<p>for file in \"$base_dir\"\/tools\/build\/libs\/kafka-tools*.jar;<br \/>\ndo<br \/>\nif should_include_file \"$file\"; then<br 
\/>\nCLASSPATH=\"$CLASSPATH\":\"$file\"<br \/>\nfi<br \/>\ndone<\/p>\n<p>for dir in \"$base_dir\"\/tools\/build\/dependant-libs-${SCALA_VERSION}*;<br \/>\ndo<br \/>\nCLASSPATH=\"$CLASSPATH:$dir\/*\"<br \/>\ndone<\/p>\n<p>for cc_pkg in \"api\" \"transforms\" \"runtime\" \"file\" \"mirror\" \"mirror-client\" \"json\" \"tools\" \"basic-auth-extension\"<br \/>\ndo<br \/>\nfor file in \"$base_dir\"\/connect\/${cc_pkg}\/build\/libs\/connect-${cc_pkg}*.jar;<br \/>\ndo<br \/>\nif should_include_file \"$file\"; then<br \/>\nCLASSPATH=\"$CLASSPATH\":\"$file\"<br \/>\nfi<br \/>\ndone<br \/>\nif [ -d \"$base_dir\/connect\/${cc_pkg}\/build\/dependant-libs\" ] ; then<br \/>\nCLASSPATH=\"$CLASSPATH:$base_dir\/connect\/${cc_pkg}\/build\/dependant-libs\/*\"<br \/>\nfi<br \/>\ndone<\/p>\n<p># classpath addition for release<br \/>\nfor file in \"$base_dir\"\/libs\/*;<br \/>\ndo<br \/>\nif should_include_file \"$file\"; then<br \/>\nCLASSPATH=\"$CLASSPATH\":\"$file\"<br \/>\nfi<br \/>\ndone<\/p>\n<p># CONFLUENT: classpath addition for releases with LSB-style layout<br \/>\nCLASSPATH=\"$CLASSPATH\":\"$base_dir\/share\/java\/kafka\/*\"<\/p>\n<p># classpath for telemetry<br \/>\nCLASSPATH=\"$CLASSPATH\":\"$base_dir\/share\/java\/confluent-telemetry\/*\"<\/p>\n<p>for file in \"$base_dir\"\/core\/build\/libs\/kafka_${SCALA_BINARY_VERSION}*.jar;<br \/>\ndo<br \/>\nif should_include_file \"$file\"; then<br \/>\nCLASSPATH=\"$CLASSPATH\":\"$file\"<br \/>\nfi<br \/>\ndone<br \/>\nshopt -u nullglob<\/p>\n<p>if [ -z \"$CLASSPATH\" ] ; then<br \/>\necho \"Classpath is empty. Please build the project first e.g. 
by running '.\/gradlew jar -PscalaVersion=$SCALA_VERSION'\"<br \/>\nexit 1<br \/>\nfi<\/p>\n<p># JMX settings<br \/>\nif [ -z \"$KAFKA_JMX_OPTS\" ]; then<br \/>\nKAFKA_JMX_OPTS=\"-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false \"<br \/>\nfi<\/p>\n<p># JMX port to use<br \/>\nif [ $JMX_PORT ]; then<br \/>\nKAFKA_JMX_OPTS=\"$KAFKA_JMX_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT \"<br \/>\nfi<\/p>\n<p># Log directory to use<br \/>\nif [ \"x$LOG_DIR\" = \"x\" ]; then<br \/>\nLOG_DIR=\"$base_dir\/logs\"<br \/>\nfi<\/p>\n<p># Log4j settings<br \/>\nif [ -z \"$KAFKA_LOG4J_OPTS\" ]; then<br \/>\n# Log to console. This is a tool.<br \/>\nLOG4J_CONFIG_NORMAL_INSTALL=\"\/etc\/kafka\/tools-log4j.properties\"<br \/>\nLOG4J_CONFIG_ZIP_INSTALL=\"$base_dir\/etc\/kafka\/tools-log4j.properties\"<br \/>\nif [ -e \"$LOG4J_CONFIG_NORMAL_INSTALL\" ]; then # Normal install layout<br \/>\nLOG4J_DIR=\"${LOG4J_CONFIG_NORMAL_INSTALL}\"<br \/>\nelif [ -e \"${LOG4J_CONFIG_ZIP_INSTALL}\" ]; then # Simple zip file layout<br \/>\nLOG4J_DIR=\"${LOG4J_CONFIG_ZIP_INSTALL}\"<br \/>\nelse # Fallback to normal default<br \/>\nLOG4J_DIR=\"$base_dir\/config\/tools-log4j.properties\"<br \/>\nfi<br \/>\n# If Cygwin is detected, LOG4J_DIR is converted to Windows format.<br \/>\n(( CYGWIN )) &amp;&amp; LOG4J_DIR=$(cygpath --path --mixed \"${LOG4J_DIR}\")<br \/>\nKAFKA_LOG4J_OPTS=\"-Dlog4j.configuration=file:${LOG4J_DIR}\"<br \/>\nelse<br \/>\n# create logs directory<br \/>\nif [ ! 
-d \"$LOG_DIR\" ]; then<br \/>\nmkdir -p \"$LOG_DIR\"<br \/>\nfi<br \/>\nfi<\/p>\n<p># If Cygwin is detected, LOG_DIR is converted to Windows format.<br \/>\n(( CYGWIN )) &amp;&amp; LOG_DIR=$(cygpath --path --mixed \"${LOG_DIR}\")<br \/>\nKAFKA_LOG4J_OPTS=\"-Dkafka.logs.dir=$LOG_DIR $KAFKA_LOG4J_OPTS\"<\/p>\n<p># Generic jvm settings you want to add<br \/>\nif [ -z \"$KAFKA_OPTS\" ]; then<br \/>\nKAFKA_OPTS=\"\"<br \/>\nfi<\/p>\n<p># Set Debug options if enabled<br \/>\nif [ \"x$KAFKA_DEBUG\" != \"x\" ]; then<\/p>\n<p># Use default ports<br \/>\nDEFAULT_JAVA_DEBUG_PORT=\"5005\"<\/p>\n<p>if [ -z \"$JAVA_DEBUG_PORT\" ]; then<br \/>\nJAVA_DEBUG_PORT=\"$DEFAULT_JAVA_DEBUG_PORT\"<br \/>\nfi<\/p>\n<p># Use the defaults if JAVA_DEBUG_OPTS was not set<br \/>\nDEFAULT_JAVA_DEBUG_OPTS=\"-agentlib:jdwp=transport=dt_socket,server=y,suspend=${DEBUG_SUSPEND_FLAG:-n},address=$JAVA_DEBUG_PORT\"<br \/>\nif [ -z \"$JAVA_DEBUG_OPTS\" ]; then<br \/>\nJAVA_DEBUG_OPTS=\"$DEFAULT_JAVA_DEBUG_OPTS\"<br \/>\nfi<\/p>\n<p>echo \"Enabling Java debug options: $JAVA_DEBUG_OPTS\"<br \/>\nKAFKA_OPTS=\"$JAVA_DEBUG_OPTS $KAFKA_OPTS\"<br \/>\nfi<\/p>\n<p># Which java to use<br \/>\nif [ -z \"$JAVA_HOME\" ]; then<br \/>\nJAVA=\"java\"<br \/>\nelse<br \/>\nJAVA=\"$JAVA_HOME\/bin\/java\"<br \/>\nfi<\/p>\n<p># Memory options<br \/>\nif [ -z \"$KAFKA_HEAP_OPTS\" ]; then<br \/>\nKAFKA_HEAP_OPTS=\"-Xmx256M\"<br \/>\nfi<\/p>\n<p># JVM performance options<br \/>\n# MaxInlineLevel=15 is the default since JDK 14 and can be removed once older JDKs are no longer supported<br \/>\nif [ -z \"$KAFKA_JVM_PERFORMANCE_OPTS\" ]; then<br \/>\nKAFKA_JVM_PERFORMANCE_OPTS=\"-server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent 
-XX:MaxInlineLevel=15 -Djava.awt.headless=true\"<br \/>\nfi<\/p>\n<p>while [ $# -gt 0 ]; do<br \/>\nCOMMAND=$1<br \/>\ncase $COMMAND in<br \/>\n-name)<br \/>\nDAEMON_NAME=$2<br \/>\nCONSOLE_OUTPUT_FILE=$LOG_DIR\/$DAEMON_NAME.out<br \/>\nshift 2<br \/>\n;;<br \/>\n-loggc)<br \/>\nif [ -z \"$KAFKA_GC_LOG_OPTS\" ]; then<br \/>\nGC_LOG_ENABLED=\"true\"<br \/>\nfi<br \/>\nshift<br \/>\n;;<br \/>\n-daemon)<br \/>\nDAEMON_MODE=\"true\"<br \/>\nshift<br \/>\n;;<br \/>\n*)<br \/>\nbreak<br \/>\n;;<br \/>\nesac<br \/>\ndone<\/p>\n<p># GC options<br \/>\nGC_FILE_SUFFIX='-gc.log'<br \/>\nGC_LOG_FILE_NAME=\"\"<br \/>\nif [ \"x$GC_LOG_ENABLED\" = \"xtrue\" ]; then<br \/>\nGC_LOG_FILE_NAME=$DAEMON_NAME$GC_FILE_SUFFIX<\/p>\n<p># The first segment of the version number, which is '1' for releases before Java 9<br \/>\n# it then becomes '9', '10', ...<br \/>\n# Some examples of the first line of `java --version`:<br \/>\n# 8 -&gt; java version \"1.8.0_152\"<br \/>\n# 9.0.4 -&gt; java version \"9.0.4\"<br \/>\n# 10 -&gt; java version \"10\" 2018-03-20<br \/>\n# 10.0.1 -&gt; java version \"10.0.1\" 2018-04-17<br \/>\n# We need to match to the end of the line to prevent sed from printing the characters that do not match<br \/>\nJAVA_MAJOR_VERSION=$(\"$JAVA\" -version 2&gt;&amp;1 | sed -E -n 's\/.* version \"([0-9]*).*$\/\\1\/p')<br \/>\nif [[ \"$JAVA_MAJOR_VERSION\" -ge \"9\" ]] ; then<br \/>\nKAFKA_GC_LOG_OPTS=\"-Xlog:gc*:file=$LOG_DIR\/$GC_LOG_FILE_NAME:time,tags:filecount=10,filesize=100M\"<br \/>\nelse<br \/>\nKAFKA_GC_LOG_OPTS=\"-Xloggc:$LOG_DIR\/$GC_LOG_FILE_NAME -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M\"<br \/>\nfi<br \/>\nfi<\/p>\n<p># Remove a 
possible colon prefix from the classpath (happens at lines like `CLASSPATH=\"$CLASSPATH:$file\"` when CLASSPATH is blank)<br \/>\n# Syntax used on the right side is native Bash string manipulation; for more details see<br \/>\n# http:\/\/tldp.org\/LDP\/abs\/html\/string-manipulation.html, specifically the section titled \"Substring Removal\"<br \/>\nCLASSPATH=${CLASSPATH#:}<\/p>\n<p># If Cygwin is detected, classpath is converted to Windows format.<br \/>\n(( CYGWIN )) &amp;&amp; CLASSPATH=$(cygpath --path --mixed \"${CLASSPATH}\")<\/p>\n<p># Launch mode<br \/>\nif [ \"x$DAEMON_MODE\" = \"xtrue\" ]; then<br \/>\nnohup \"$JAVA\" $KAFKA_HEAP_OPTS $KAFKA_JVM_PERFORMANCE_OPTS $KAFKA_GC_LOG_OPTS $KAFKA_JMX_OPTS $KAFKA_LOG4J_OPTS -cp \"$CLASSPATH\" $KAFKA_OPTS \"$@\" &gt; \"$CONSOLE_OUTPUT_FILE\" 2&gt;&amp;1 &lt; \/dev\/null &amp;<br \/>\nelse<br \/>\nexec \"$JAVA\" $KAFKA_HEAP_OPTS $KAFKA_JVM_PERFORMANCE_OPTS $KAFKA_GC_LOG_OPTS $KAFKA_JMX_OPTS $KAFKA_LOG4J_OPTS -cp \"$CLASSPATH\" $KAFKA_OPTS \"$@\"<br \/>\nfi<\/p>\n<p>\/opt\/confluent-6.2.0\/bin\/schema-registry-start<\/p>\n<p>schema-registry-start<br \/>\n#!\/bin\/bash<br \/>\n#<br \/>\n# Copyright 2018 Confluent Inc.<br \/>\n#<br \/>\n# Licensed under the Apache License, Version 2.0 (the \"License\");<br \/>\n# you may not use this file except in compliance with the License.<br \/>\n# You may obtain a copy of the License at<br \/>\n#<br \/>\n# http:\/\/www.apache.org\/licenses\/LICENSE-2.0<br \/>\n#<br \/>\n# Unless required by applicable law or agreed to in writing, software<br \/>\n# distributed under the License is distributed on an \"AS IS\" BASIS,<br \/>\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.<br \/>\n# See the License for the specific language governing permissions and<br \/>\n# limitations under the License.<br \/>\n#<\/p>\n<p>print_synopsis() {<br \/>\necho \"USAGE: $0 [-daemon] schema-registry.properties\"<br \/>\n}<\/p>\n<p>EXTRA_ARGS=${EXTRA_ARGS-'-name schemaRegistry'}<br \/>\n# EXTRA_ARGS=${EXTRA_ARGS-'-name schemaRegistry -loggc'}<\/p>\n<p>ARG=$1<br \/>\ncase $ARG in<br \/>\n-daemon)<br \/>\nEXTRA_ARGS=\"-daemon \"$EXTRA_ARGS<br \/>\nshift<br \/>\n;;<br \/>\n*)<br \/>\n;;<br \/>\nesac<\/p>\n<p>if [[ $# -lt 1 ]]; then<br \/>\nprint_synopsis<br \/>\nexit 1<br \/>\nfi<br \/>\nif [[ ! -e \"$1\" ]]; then<br \/>\necho \"Property file $1 does not exist\"<br \/>\nprint_synopsis<br \/>\nexit 1<br \/>\nfi<br \/>\nexec $(dirname $0)\/schema-registry-run-class ${EXTRA_ARGS} io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain \"$@\"<\/p>\n<p>\/opt\/confluent-6.2.0\/bin\/schema-registry-stop<\/p>\n<p>schema-registry-stop<br \/>\n#!\/bin\/bash<br \/>\n#<br \/>\n# Copyright 2018 Confluent Inc.<br \/>\n#<br \/>\n# Licensed under the Apache License, Version 2.0 (the \"License\");<br \/>\n# you may not use this file except in compliance with the License.<br \/>\n# You may obtain a copy of the License at<br \/>\n#<br \/>\n# http:\/\/www.apache.org\/licenses\/LICENSE-2.0<br \/>\n#<br \/>\n# Unless required by applicable law or agreed to in writing, software<br \/>\n# distributed under the License is distributed on an \"AS IS\" BASIS,<br \/>\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.<br \/>\n# See the License for the specific language governing permissions and<br \/>\n# limitations under the License.<br \/>\n#<\/p>\n<p># When stopping, search for both the current SchemaRegistryMain class and the deprecated Main class.<br \/>\nexec $(dirname $0)\/schema-registry-stop-service \"(io.confluent.kafka.schemaregistry.rest.Main)|(io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain)\"<\/p>\n<p>\/opt\/confluent-6.2.0\/bin\/schema-registry-run-class<\/p>\n<p>schema-registry-run-class<br \/>\n#!\/bin\/bash<br \/>\n#<br \/>\n# Copyright 2018 Confluent Inc.<br \/>\n#<br \/>\n# Licensed under the Apache License, Version 2.0 (the \"License\");<br \/>\n# you may not use this file except in compliance with the License.<br \/>\n# You may obtain a copy of the License at<br \/>\n#<br \/>\n# http:\/\/www.apache.org\/licenses\/LICENSE-2.0<br \/>\n#<br \/>\n# Unless required by applicable law or agreed to in writing, software<br \/>\n# distributed under the License is distributed on an \"AS IS\" BASIS,<br \/>\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.<br \/>\n# See the License for the specific language governing permissions and<br \/>\n# limitations under the License.<br \/>\n#<\/p>\n<p>if [ $# -lt 1 ]; then<br \/>\necho \"USAGE: $0 [-daemon] [-name servicename] [-loggc] classname [opts]\"<br \/>\nexit 1<br \/>\nfi<\/p>\n<p>base_dir=$(dirname $0)\/..<\/p>\n<p># CYGWIN == 1 if Cygwin is detected, else 0.<br \/>\nif [[ $(uname -a) =~ \"CYGWIN\" ]]; then<br \/>\nCYGWIN=1<br \/>\nelse<br \/>\nCYGWIN=0<br \/>\nfi<\/p>\n<p># Development jars. `mvn package` should collect all the required dependency jars here<br \/>\nfor dir in $base_dir\/package-schema-registry\/target\/kafka-schema-registry-package-*-development; do<br \/>\nCLASSPATH=$CLASSPATH:$dir\/share\/java\/schema-registry\/*<br \/>\ndone<\/p>\n<p># Production jars, including kafka, rest-utils, and schema-registry<br \/>\nfor library in \"confluent-security\/schema-registry\" \"confluent-common\" \"confluent-telemetry\" \"rest-utils\" \"schema-registry\"; do<br \/>\nCLASSPATH=$CLASSPATH:$base_dir\/share\/java\/$library\/*<br \/>\ndone<\/p>\n<p># Log directory to use<br \/>\nif [ \"x$LOG_DIR\" = \"x\" ]; then<br \/>\nLOG_DIR=\"$base_dir\/logs\"<br \/>\nfi<\/p>\n<p># create logs directory<br \/>\nif [ ! -d \"$LOG_DIR\" ]; then<br \/>\nmkdir -p \"$LOG_DIR\"<br \/>\nfi<\/p>\n<p># log4j settings<br \/>\nif [ \"x$SCHEMA_REGISTRY_LOG4J_OPTS\" = \"x\" ]; then<br \/>\n# Test for files from dev -&gt; packages so this will work as expected in dev if you have packages<br \/>\n# installed<br \/>\nif [ -e \"$base_dir\/config\/log4j.properties\" ]; then # Dev environment<br \/>\nLOG4J_DIR=\"$base_dir\/config\/log4j.properties\"<br \/>\nelif [ -e \"$base_dir\/etc\/schema-registry\/log4j.properties\" ]; then # Simple zip file layout<br \/>\nLOG4J_DIR=\"$base_dir\/etc\/schema-registry\/log4j.properties\"<br \/>\nelif [ -e \"\/etc\/schema-registry\/log4j.properties\" ]; then # Normal install layout<br \/>\nLOG4J_DIR=\"\/etc\/schema-registry\/log4j.properties\"<br \/>\nfi<\/p>\n<p># If Cygwin is detected, LOG4J_DIR is converted to Windows format.<br \/>\n(( CYGWIN )) &amp;&amp; LOG4J_DIR=$(cygpath --path --mixed \"${LOG4J_DIR}\")<\/p>\n<p>SCHEMA_REGISTRY_LOG4J_OPTS=\"-Dlog4j.configuration=file:${LOG4J_DIR}\"<br \/>\nfi<\/p>\n<p># If Cygwin is detected, LOG_DIR is converted to Windows format.<br \/>\n(( CYGWIN )) &amp;&amp; LOG_DIR=$(cygpath --path --mixed \"${LOG_DIR}\")<\/p>\n<p>SCHEMA_REGISTRY_LOG4J_OPTS=\"-Dschema-registry.log.dir=$LOG_DIR $SCHEMA_REGISTRY_LOG4J_OPTS\"<\/p>\n<p># JMX settings<br \/>\nif [ -z \"$SCHEMA_REGISTRY_JMX_OPTS\" ]; then<br \/>\nSCHEMA_REGISTRY_JMX_OPTS=\"-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false \"<br \/>\nfi<\/p>\n<p># JMX port to use<br \/>\nif [ $JMX_PORT ]; then<br \/>\nSCHEMA_REGISTRY_JMX_OPTS=\"$SCHEMA_REGISTRY_JMX_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT \"<br \/>\nfi<\/p>\n<p># Generic jvm settings you want to add<br \/>\nif [ -z \"$SCHEMA_REGISTRY_OPTS\" ]; then<br 
\/>\nSCHEMA_REGISTRY_OPTS=\"\"<br \/>\nfi<\/p>\n<p># Which java to use<br \/>\nif [ -z \"$JAVA_HOME\" ]; then<br \/>\nJAVA=\"java\"<br \/>\nelse<br \/>\nJAVA=\"$JAVA_HOME\/bin\/java\"<br \/>\nfi<\/p>\n<p># Memory options<br \/>\nif [ -z \"$SCHEMA_REGISTRY_HEAP_OPTS\" ]; then<br \/>\nSCHEMA_REGISTRY_HEAP_OPTS=\"-Xmx512M\"<br \/>\nfi<\/p>\n<p># JVM performance options<br \/>\nif [ -z \"$SCHEMA_REGISTRY_JVM_PERFORMANCE_OPTS\" ]; then<br \/>\nSCHEMA_REGISTRY_JVM_PERFORMANCE_OPTS=\"-server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -Djava.awt.headless=true\"<br \/>\nfi<\/p>\n<p>while [ $# -gt 0 ]; do<br \/>\nCOMMAND=$1<br \/>\ncase $COMMAND in<br \/>\n-help)<br \/>\nHELP=\"true\"<br \/>\nbreak<br \/>\n;;<br \/>\n-name)<br \/>\nDAEMON_NAME=$2<br \/>\nCONSOLE_OUTPUT_FILE=$LOG_DIR\/$DAEMON_NAME.out<br \/>\nshift 2<br \/>\n;;<br \/>\n-loggc)<br \/>\nif [ -z \"$SCHEMA_REGISTRY_GC_LOG_OPTS\" ]; then<br \/>\nGC_LOG_ENABLED=\"true\"<br \/>\nfi<br \/>\nshift<br \/>\n;;<br \/>\n-daemon)<br \/>\nDAEMON_MODE=\"true\"<br \/>\nshift<br \/>\n;;<br \/>\n*)<br \/>\nbreak<br \/>\n;;<br \/>\nesac<br \/>\ndone<\/p>\n<p>if [ \"x$HELP\" = \"xtrue\" ]; then<br \/>\necho \"USAGE: $0 [-daemon] [-name servicename] [-loggc] classname [opts]\"<br \/>\nexit 0<br \/>\nfi<\/p>\n<p>MAIN=$1<br \/>\nshift<\/p>\n<p># GC options<br \/>\nGC_FILE_SUFFIX='-gc.log'<br \/>\nGC_LOG_FILE_NAME=\"\"<br \/>\nif [ \"x$GC_LOG_ENABLED\" = \"xtrue\" ]; then<br \/>\nGC_LOG_FILE_NAME=$DAEMON_NAME$GC_FILE_SUFFIX<\/p>\n<p># The first segment of the version number, which is '1' for releases before Java 9<br \/>\n# it then becomes '9', '10', ...<br \/>\n# Some examples of the first line of `java --version`:<br \/>\n# 8 -&gt; java 
version \"1.8.0_152\"<br \/>\n# 9.0.4 -&gt; java version \"9.0.4\"<br \/>\n# 10 -&gt; java version \"10\" 2018-03-20<br \/>\n# 10.0.1 -&gt; java version \"10.0.1\" 2018-04-17<br \/>\n# We need to match to the end of the line to prevent sed from printing the characters that do not match<br \/>\nJAVA_MAJOR_VERSION=$($JAVA -version 2&gt;&amp;1 | sed -E -n 's\/.* version \"([0-9]*).*$\/\\1\/p')<br \/>\nif [[ \"$JAVA_MAJOR_VERSION\" -ge \"9\" ]] ; then<br \/>\nSCHEMA_REGISTRY_GC_LOG_OPTS=\"-Xlog:gc*:file=$LOG_DIR\/$GC_LOG_FILE_NAME:time,tags:filecount=10,filesize=102400\"<br \/>\nelse<br \/>\nSCHEMA_REGISTRY_GC_LOG_OPTS=\"-Xloggc:$LOG_DIR\/$GC_LOG_FILE_NAME -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M\"<br \/>\nfi<br \/>\nfi<\/p>\n<p># If Cygwin is detected, classpath is converted to Windows format.<br \/>\n(( CYGWIN )) &amp;&amp; CLASSPATH=$(cygpath --path --mixed \"${CLASSPATH}\")<\/p>\n<p># Launch mode<br \/>\nif [ \"x$DAEMON_MODE\" = \"xtrue\" ]; then<br \/>\nCONSOLE_OUTPUT_FILE=${CONSOLE_OUTPUT_FILE:-${LOG_DIR}\/schema-registry-console.out}<br \/>\nnohup $JAVA $SCHEMA_REGISTRY_HEAP_OPTS $SCHEMA_REGISTRY_JVM_PERFORMANCE_OPTS $SCHEMA_REGISTRY_GC_LOG_OPTS $SCHEMA_REGISTRY_JMX_OPTS $SCHEMA_REGISTRY_LOG4J_OPTS -cp $CLASSPATH $SCHEMA_REGISTRY_OPTS \"$MAIN\" \"$@\" &gt; \"${CONSOLE_OUTPUT_FILE}\" 2&gt;&amp;1 &lt; \/dev\/null &amp;<br \/>\nelse<br \/>\nexec \"$JAVA\" $SCHEMA_REGISTRY_HEAP_OPTS $SCHEMA_REGISTRY_JVM_PERFORMANCE_OPTS $SCHEMA_REGISTRY_GC_LOG_OPTS $SCHEMA_REGISTRY_JMX_OPTS $SCHEMA_REGISTRY_LOG4J_OPTS -cp $CLASSPATH $SCHEMA_REGISTRY_OPTS \"$MAIN\" \"$@\"<br \/>\nfi<\/p>\n<\/details>\n","protected":false},"excerpt":{"rendered":"<p>\u306f\u3058\u3081\u306b Apache 
Kafka\u306e\u5546\u7528\u7248\u3067\u3042\u308bConfluent Platform\u306b\u3064\u3044\u3066\u306e\u30e1\u30e2\u66f8\u304d\u3067\u3059\u3002\u521d [&hellip;]<\/p>\n","protected":false},"author":11,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-46741","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v21.5 (Yoast SEO v21.5) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>- Blog - Silicon Cloud<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.silicloud.com\/zh\/blog\/46741-2\/\" \/>\n<meta property=\"og:locale\" content=\"zh_CN\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:description\" content=\"\u306f\u3058\u3081\u306b Apache Kafka\u306e\u5546\u7528\u7248\u3067\u3042\u308bConfluent Platform\u306b\u3064\u3044\u3066\u306e\u30e1\u30e2\u66f8\u304d\u3067\u3059\u3002\u521d [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.silicloud.com\/zh\/blog\/46741-2\/\" \/>\n<meta property=\"og:site_name\" content=\"Blog - Silicon Cloud\" \/>\n<meta property=\"article:published_time\" content=\"2023-08-12T04:32:06+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-04-29T05:47:43+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/cdn.silicloud.com\/blog-img\/blog\/img\/657d664937434c4406d09c45\/7-0.png\" \/>\n<meta name=\"author\" content=\"\u65b0, \u97f5\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"\u4f5c\u8005\" \/>\n\t<meta name=\"twitter:data1\" content=\"\u65b0, \u97f5\" \/>\n\t<meta name=\"twitter:label2\" content=\"\u9884\u8ba1\u9605\u8bfb\u65f6\u95f4\" 
\/>\n\t<meta name=\"twitter:data2\" content=\"62 \u5206\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.silicloud.com\/zh\/blog\/46741-2\/\",\"url\":\"https:\/\/www.silicloud.com\/zh\/blog\/46741-2\/\",\"name\":\"- Blog - Silicon Cloud\",\"isPartOf\":{\"@id\":\"https:\/\/www.silicloud.com\/zh\/blog\/#website\"},\"datePublished\":\"2023-08-12T04:32:06+00:00\",\"dateModified\":\"2024-04-29T05:47:43+00:00\",\"author\":{\"@id\":\"https:\/\/www.silicloud.com\/zh\/blog\/#\/schema\/person\/4ba4019495123db3038fd0809e6959c9\"},\"inLanguage\":\"zh-Hans\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.silicloud.com\/zh\/blog\/46741-2\/\"]}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.silicloud.com\/zh\/blog\/#website\",\"url\":\"https:\/\/www.silicloud.com\/zh\/blog\/\",\"name\":\"Blog - Silicon Cloud\",\"description\":\"\",\"inLanguage\":\"zh-Hans\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.silicloud.com\/zh\/blog\/#\/schema\/person\/4ba4019495123db3038fd0809e6959c9\",\"name\":\"\u65b0, \u97f5\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"zh-Hans\",\"@id\":\"https:\/\/www.silicloud.com\/zh\/blog\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/d484b6c6e4ae82e8a9efea989e1d2af46d9b6ef128101e63b18f559fca0ae627?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/d484b6c6e4ae82e8a9efea989e1d2af46d9b6ef128101e63b18f559fca0ae627?s=96&d=mm&r=g\",\"caption\":\"\u65b0, \u97f5\"},\"url\":\"https:\/\/www.silicloud.com\/zh\/blog\/author\/yunxin\/\"},{\"@type\":\"ImageObject\",\"inLanguage\":\"zh-Hans\",\"@id\":\"https:\/\/www.silicloud.com\/zh\/blog\/46741-2\/#local-main-organization-logo\",\"url\":\"\",\"contentUrl\":\"\",\"caption\":\"Blog - Silicon Cloud\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. 
-->","yoast_head_json":{"title":"- Blog - Silicon Cloud","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.silicloud.com\/zh\/blog\/46741-2\/","og_locale":"zh_CN","og_type":"article","og_description":"\u306f\u3058\u3081\u306b Apache Kafka\u306e\u5546\u7528\u7248\u3067\u3042\u308bConfluent Platform\u306b\u3064\u3044\u3066\u306e\u30e1\u30e2\u66f8\u304d\u3067\u3059\u3002\u521d [&hellip;]","og_url":"https:\/\/www.silicloud.com\/zh\/blog\/46741-2\/","og_site_name":"Blog - Silicon Cloud","article_published_time":"2023-08-12T04:32:06+00:00","article_modified_time":"2024-04-29T05:47:43+00:00","og_image":[{"url":"https:\/\/cdn.silicloud.com\/blog-img\/blog\/img\/657d664937434c4406d09c45\/7-0.png"}],"author":"\u65b0, \u97f5","twitter_card":"summary_large_image","twitter_misc":{"\u4f5c\u8005":"\u65b0, \u97f5","\u9884\u8ba1\u9605\u8bfb\u65f6\u95f4":"62 \u5206"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/www.silicloud.com\/zh\/blog\/46741-2\/","url":"https:\/\/www.silicloud.com\/zh\/blog\/46741-2\/","name":"- Blog - Silicon Cloud","isPartOf":{"@id":"https:\/\/www.silicloud.com\/zh\/blog\/#website"},"datePublished":"2023-08-12T04:32:06+00:00","dateModified":"2024-04-29T05:47:43+00:00","author":{"@id":"https:\/\/www.silicloud.com\/zh\/blog\/#\/schema\/person\/4ba4019495123db3038fd0809e6959c9"},"inLanguage":"zh-Hans","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.silicloud.com\/zh\/blog\/46741-2\/"]}]},{"@type":"WebSite","@id":"https:\/\/www.silicloud.com\/zh\/blog\/#website","url":"https:\/\/www.silicloud.com\/zh\/blog\/","name":"Blog - Silicon Cloud","description":"","inLanguage":"zh-Hans"},{"@type":"Person","@id":"https:\/\/www.silicloud.com\/zh\/blog\/#\/schema\/person\/4ba4019495123db3038fd0809e6959c9","name":"\u65b0, 
\u97f5","image":{"@type":"ImageObject","inLanguage":"zh-Hans","@id":"https:\/\/www.silicloud.com\/zh\/blog\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/d484b6c6e4ae82e8a9efea989e1d2af46d9b6ef128101e63b18f559fca0ae627?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/d484b6c6e4ae82e8a9efea989e1d2af46d9b6ef128101e63b18f559fca0ae627?s=96&d=mm&r=g","caption":"\u65b0, \u97f5"},"url":"https:\/\/www.silicloud.com\/zh\/blog\/author\/yunxin\/"},{"@type":"ImageObject","inLanguage":"zh-Hans","@id":"https:\/\/www.silicloud.com\/zh\/blog\/46741-2\/#local-main-organization-logo","url":"","contentUrl":"","caption":"Blog - Silicon Cloud"}]}},"_links":{"self":[{"href":"https:\/\/www.silicloud.com\/zh\/blog\/wp-json\/wp\/v2\/posts\/46741","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.silicloud.com\/zh\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.silicloud.com\/zh\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.silicloud.com\/zh\/blog\/wp-json\/wp\/v2\/users\/11"}],"replies":[{"embeddable":true,"href":"https:\/\/www.silicloud.com\/zh\/blog\/wp-json\/wp\/v2\/comments?post=46741"}],"version-history":[{"count":2,"href":"https:\/\/www.silicloud.com\/zh\/blog\/wp-json\/wp\/v2\/posts\/46741\/revisions"}],"predecessor-version":[{"id":85640,"href":"https:\/\/www.silicloud.com\/zh\/blog\/wp-json\/wp\/v2\/posts\/46741\/revisions\/85640"}],"wp:attachment":[{"href":"https:\/\/www.silicloud.com\/zh\/blog\/wp-json\/wp\/v2\/media?parent=46741"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.silicloud.com\/zh\/blog\/wp-json\/wp\/v2\/categories?post=46741"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.silicloud.com\/zh\/blog\/wp-json\/wp\/v2\/tags?post=46741"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}