Replication with MongoDB 6.0 on Rocky Linux
I had originally planned to use MongoDB 5.0 (there are plenty of Japanese articles explaining it), but by the time I got to the build stage, 6.0 had already been released, so this post is simply a first try at that.
The OS is the latest Rocky Linux available as of November 17, 2022.
$ cat /etc/redhat-release
Rocky Linux release 8.7 (Green Obsidian)
The machine is a minimal install with yum update applied, but since I usually put my standard set of packages on first, some dependencies may already be satisfied. Also, for convenience, SELinux is disabled.
Installation
Rocky's default repositories don't carry MongoDB, so add MongoDB's repository for RHEL (there is no budget, so the Community edition it is).
$ cat /etc/yum.repos.d/mongodb-org-6.0.repo
[mongodb-org-6.0]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/6.0/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-6.0.asc
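Once the repo file is in place, dnf picks up its metadata on the next run; if you want to prime it explicitly, something like this works:
$ sudo dnf makecache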
I want to pin the version when installing via Ansible, so let's look up the package names including their version strings.
$ sudo dnf search --showduplicates mongodb-org
メタデータの期限切れの最終確認: 0:11:13 時間前の 2022年11月17日 03時52分33秒 に実施しました。
=================== 名前 完全一致: mongodb-org ====================
mongodb-org-6.0.3-1.el8.x86_64 : MongoDB open source
...: document-oriented database system (metapackage)
mongodb-org-6.0.0-1.el8.x86_64 : MongoDB open source
...: document-oriented database system (metapackage)
mongodb-org-6.0.1-1.el8.x86_64 : MongoDB open source
...: document-oriented database system (metapackage)
mongodb-org-6.0.2-1.el8.x86_64 : MongoDB open source
...: document-oriented database system (metapackage)
mongodb-org-6.0.3-1.el8.x86_64 : MongoDB open source
...: document-oriented database system (metapackage)
The latest at this point is mongodb-org-6.0.3-1.el8.x86_64.
First, install it and try starting it up.
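The install command itself is not shown above; a minimal sketch, pinning the metapackage version found by the dnf search:
$ sudo dnf install -y mongodb-org-6.0.3-1.el8.x86_64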
$ sudo systemctl start mongod
$ sudo systemctl status mongod
● mongod.service - MongoDB Database Server
Loaded: loaded (/usr/lib/systemd/system/mongod.service; enabled>
Active: active (running) since Thu 2022-11-17 04:26:28 EST; 3s >
Docs: https://docs.mongodb.org/manual
Process: 106174 ExecStart=/usr/bin/mongod $OPTIONS (code=exited,>
Process: 106171 ExecStartPre=/usr/bin/chmod 0755 /var/run/mongod>
Process: 106170 ExecStartPre=/usr/bin/chown mongod:mongod /var/r>
Process: 106168 ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb >
Main PID: 106177 (mongod)
Memory: 144.3M
CGroup: /system.slice/mongod.service
└─106177 /usr/bin/mongod -f /etc/mongod.conf
11月 17 04:26:27 STMNG001 systemd[1]: Starting MongoDB Database Se>
11月 17 04:26:27 STMNG001 mongod[106174]: about to fork child proc>
11月 17 04:26:27 STMNG001 mongod[106177]: forked process: 106177
11月 17 04:26:28 STMNG001 mongod[106174]: child process started su>
11月 17 04:26:28 STMNG001 systemd[1]: Started MongoDB Database Ser>
At any rate, it runs.
This is beside the point, but from version 5.0 onward MongoDB reportedly only runs on Sandy Bridge or later CPUs (Bulldozer or later for AMD). I once tried to run it on an old test box, it refused to start at all, and I wasted a fair amount of time on that.
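If you are not sure whether a machine qualifies, the requirement effectively comes down to AVX support; a quick check on Linux looks like this:
$ grep -m1 -o avx /proc/cpuinfo || echo 'no AVX - MongoDB 5.0+ will not start'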
Configuring replication
Network setup
The setup is a three-member replica set: one primary and two secondaries.
inet 192.168.163.41/24 brd 192.168.163.255 scope global noprefixroute ens224
valid_lft forever preferred_lft forever
inet 192.168.163.42/24 brd 192.168.163.255 scope global noprefixroute ens224
valid_lft forever preferred_lft forever
inet 192.168.163.43/24 brd 192.168.163.255 scope global noprefixroute ens224
valid_lft forever preferred_lft forever
To keep things simple, this network segment is completely cut off from the outside world and carries only database traffic, so the firewall simply allows everything from that segment.
$ sudo firewall-cmd --list-all --zone=drop
drop (active)
target: DROP
icmp-block-inversion: no
interfaces: ens224
sources:
services:
ports:
protocols:
forward: no
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:
rule family="ipv4" source address="192.168.163.0/24" accep
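The commands used to get to that state are not shown above; roughly speaking, something along these lines reproduces it (assuming ens224 is the replication-only NIC):
$ sudo firewall-cmd --permanent --zone=drop --change-interface=ens224
$ sudo firewall-cmd --permanent --zone=drop --add-rich-rule='rule family="ipv4" source address="192.168.163.0/24" accept'
$ sudo firewall-cmd --reload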
Configuration
I set this up while reading the official documentation (access control is ignored for now).
$ cat /etc/mongod.conf
# mongod.conf
# for documentation of all options, see:
# http://docs.mongodb.org/manual/reference/configuration-options/
# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# Where and how to store data.
storage:
  dbPath: /var/lib/mongo
  journal:
    enabled: true
#  engine:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo

# network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.

#security:

#operationProfiling:

replication:
  replSetName: "staging_rs"

#sharding:

## Enterprise-Only Options

#auditLog:

#snmp:
That said, only two things differ from the default file.
$ diff /etc/mongod.conf /tmp/mongod.conf
29c29,30
<   bindIp: 0.0.0.0  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.
---
>   bindIp: 127.0.0.1  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.
>
35,36c36
< replication:
<   replSetName: "staging_rs"
---
> #replication:
The official documentation says to put localhost, the host name, and so on in bindIp, but with that setting mongod kept exiting with code 48. Since this segment is physically (isolated, as described above), I just opened it up completely with 0.0.0.0. If anyone knows the cause, please let me know.
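After editing the config, mongod needs to be restarted on every node; something like the following, with ss used to confirm it is really listening on 0.0.0.0:27017:
$ sudo systemctl restart mongod
$ sudo ss -tlnp | grep 27017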
Checking connectivity
For now, host names are resolved via the hosts file.
$ cat /etc/hosts
127.0.0.1 node1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 node1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.163.41 node1
192.168.163.42 node2
192.168.163.43 node3
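Name resolution can be sanity-checked before involving mongod at all, for example:
$ getent hosts node2
$ ping -c 1 node3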
Try connecting.
$ mongosh --host node2
Current Mongosh Log ID: 63760d87b8ca90e34cfd2773
Connecting to: mongodb://node2:27017/?directConnection=true&appName=mongosh+1.6.0
Using MongoDB: 6.0.3
Using Mongosh: 1.6.0
For mongosh info see: https://docs.mongodb.com/mongodb-shell/
To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy).
You can opt-out by running the disableTelemetry() command.
------
The server generated these startup warnings when booting
2022-11-17T05:31:28.076-05:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted
2022-11-17T05:31:28.076-05:00: /sys/kernel/mm/transparent_hugepage/enabled is 'always'. We suggest setting it to 'never'
2022-11-17T05:31:28.076-05:00: vm.max_map_count is too low
------
------
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).
The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.
To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
------
test> exit
$ mongosh --host node3
Current Mongosh Log ID: 63760e92516565ee42298ecc
Connecting to: mongodb://node3:27017/?directConnection=true&appName=mongosh+1.6.0
Using MongoDB: 6.0.3
Using Mongosh: 1.6.0
For mongosh info see: https://docs.mongodb.com/mongodb-shell/
------
The server generated these startup warnings when booting
2022-11-17T05:31:28.106-05:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted
2022-11-17T05:31:28.107-05:00: /sys/kernel/mm/transparent_hugepage/enabled is 'always'. We suggest setting it to 'never'
2022-11-17T05:31:28.107-05:00: vm.max_map_count is too low
------
------
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).
The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.
To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
------
test> exit
Connectivity looks fine.
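Incidentally, the startup warnings about transparent hugepages and vm.max_map_count are easy to quiet if you care; a rough, non-persistent sketch (the 262144 below is just an example value, check the MongoDB production notes for a proper recommendation):
$ echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
$ sudo sysctl -w vm.max_map_count=262144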
Trying out replication
Connecting to the local node.
$ mongosh
Current Mongosh Log ID: 63760f4babfcbf3520a2f501
Connecting to: mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.6.0
Using MongoDB: 6.0.3
Using Mongosh: 1.6.0
For mongosh info see: https://docs.mongodb.com/mongodb-shell/
------
The server generated these startup warnings when booting
2022-11-17T05:31:28.082-05:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted
2022-11-17T05:31:28.082-05:00: /sys/kernel/mm/transparent_hugepage/enabled is 'always'. We suggest setting it to 'never'
2022-11-17T05:31:28.082-05:00: vm.max_map_count is too low
------
------
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).
The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.
To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
------
test>
rs.initiate() (which got the member name wrong)
test> rs.initiate()
{
info2: 'no configuration specified. Using a default configuration for the set',
me: 'STMNG001:27017',
ok: 1
}
staging_rs [direct: other] test> rs.conf()
{
_id: 'staging_rs',
version: 1,
term: 1,
members: [
{
_id: 0,
host: 'STMNG001:27017',
arbiterOnly: false,
buildIndexes: true,
hidden: false,
priority: 1,
tags: {},
secondaryDelaySecs: Long("0"),
votes: 1
}
],
protocolVersion: Long("1"),
writeConcernMajorityJournalDefault: true,
settings: {
chainingAllowed: true,
heartbeatIntervalMillis: 2000,
heartbeatTimeoutSecs: 10,
electionTimeoutMillis: 10000,
catchUpTimeoutMillis: -1,
catchUpTakeoverDelayMillis: 30000,
getLastErrorModes: {},
getLastErrorDefaults: { w: 1, wtimeout: 0 },
replicaSetId: ObjectId("63760f6dcc9599202db21af9")
}
}
※ This was a mistake.
I should have passed the configuration to rs.initiate() and initialized everything in one go; once the set has been initialized, initiate won't run a second time.
staging_rs [direct: primary] test> rs.initiate( {
... _id : "staging_rs",
... members: [
... { _id: 0, host: "node1:27017" },
... { _id: 1, host: "node2:27017" },
... { _id: 2, host: "node3:27017" }
... ]
... })
MongoServerError: already initialized
Having already botched it, the only option left was to press on with rs.add() (removing the replica set configuration looked like a real pain).
staging_rs [direct: primary] test> rs.add(
... { _id: 1, host: "node2:27017" }
... )
{
ok: 1,
'$clusterTime': {
clusterTime: Timestamp({ t: 1668682278, i: 1 }),
signature: {
hash: Binary(Buffer.from("0000000000000000000000000000000000000000", "hex"), 0),
keyId: Long("0")
}
},
operationTime: Timestamp({ t: 1668682278, i: 1 })
}
staging_rs [direct: primary] test> rs.add(
... { _id: 2, host: "node3:27017" }
... )
{
ok: 1,
'$clusterTime': {
clusterTime: Timestamp({ t: 1668682321, i: 1 }),
signature: {
hash: Binary(Buffer.from("0000000000000000000000000000000000000000", "hex"), 0),
keyId: Long("0")
}
},
operationTime: Timestamp({ t: 1668682321, i: 1 })
}
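As an aside, in the rs.status() output below member 0 appears as node1:27017 rather than STMNG001:27017 (and the config version has climbed to 5), so the host name registered by the bare rs.initiate() was evidently fixed up at some point; that step is not captured here, but a reconfig along these lines would do it:
$ mongosh --quiet --eval 'cfg = rs.conf(); cfg.members[0].host = "node1:27017"; rs.reconfig(cfg)'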
Check the state with rs.status().
staging_rs [direct: primary] test> rs.status()
{
set: 'staging_rs',
date: ISODate("2022-11-17T10:53:07.205Z"),
myState: 1,
term: Long("2"),
syncSourceHost: '',
syncSourceId: -1,
heartbeatIntervalMillis: Long("2000"),
majorityVoteCount: 2,
writeMajorityCount: 2,
votingMembersCount: 3,
writableVotingMembersCount: 3,
optimes: {
lastCommittedOpTime: { ts: Timestamp({ t: 1668682379, i: 1 }), t: Long("2") },
lastCommittedWallTime: ISODate("2022-11-17T10:52:59.238Z"),
readConcernMajorityOpTime: { ts: Timestamp({ t: 1668682379, i: 1 }), t: Long("2") },
appliedOpTime: { ts: Timestamp({ t: 1668682379, i: 1 }), t: Long("2") },
durableOpTime: { ts: Timestamp({ t: 1668682379, i: 1 }), t: Long("2") },
lastAppliedWallTime: ISODate("2022-11-17T10:52:59.238Z"),
lastDurableWallTime: ISODate("2022-11-17T10:52:59.238Z")
},
lastStableRecoveryTimestamp: Timestamp({ t: 1668682339, i: 1 }),
electionCandidateMetrics: {
lastElectionReason: 'electionTimeout',
lastElectionDate: ISODate("2022-11-17T10:43:19.198Z"),
electionTerm: Long("2"),
lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 0, i: 0 }), t: Long("-1") },
lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1668681761, i: 1 }), t: Long("1") },
numVotesNeeded: 1,
priorityAtElection: 1,
electionTimeoutMillis: Long("10000"),
newTermStartDate: ISODate("2022-11-17T10:43:19.207Z"),
wMajorityWriteAvailabilityDate: ISODate("2022-11-17T10:43:19.225Z")
},
members: [
{
_id: 0,
name: 'node1:27017',
health: 1,
state: 1,
stateStr: 'PRIMARY',
uptime: 590,
optime: { ts: Timestamp({ t: 1668682379, i: 1 }), t: Long("2") },
optimeDate: ISODate("2022-11-17T10:52:59.000Z"),
lastAppliedWallTime: ISODate("2022-11-17T10:52:59.238Z"),
lastDurableWallTime: ISODate("2022-11-17T10:52:59.238Z"),
syncSourceHost: '',
syncSourceId: -1,
infoMessage: '',
electionTime: Timestamp({ t: 1668681799, i: 1 }),
electionDate: ISODate("2022-11-17T10:43:19.000Z"),
configVersion: 5,
configTerm: 2,
self: true,
lastHeartbeatMessage: ''
},
{
_id: 1,
name: 'node2:27017',
health: 1,
state: 2,
stateStr: 'SECONDARY',
uptime: 108,
optime: { ts: Timestamp({ t: 1668682379, i: 1 }), t: Long("2") },
optimeDurable: { ts: Timestamp({ t: 1668682379, i: 1 }), t: Long("2") },
optimeDate: ISODate("2022-11-17T10:52:59.000Z"),
optimeDurableDate: ISODate("2022-11-17T10:52:59.000Z"),
lastAppliedWallTime: ISODate("2022-11-17T10:52:59.238Z"),
lastDurableWallTime: ISODate("2022-11-17T10:52:59.238Z"),
lastHeartbeat: ISODate("2022-11-17T10:53:05.733Z"),
lastHeartbeatRecv: ISODate("2022-11-17T10:53:05.734Z"),
pingMs: Long("0"),
lastHeartbeatMessage: '',
syncSourceHost: 'node1:27017',
syncSourceId: 0,
infoMessage: '',
configVersion: 5,
configTerm: 2
},
{
_id: 2,
name: 'node3:27017',
health: 1,
state: 2,
stateStr: 'SECONDARY',
uptime: 65,
optime: { ts: Timestamp({ t: 1668682379, i: 1 }), t: Long("2") },
optimeDurable: { ts: Timestamp({ t: 1668682379, i: 1 }), t: Long("2") },
optimeDate: ISODate("2022-11-17T10:52:59.000Z"),
optimeDurableDate: ISODate("2022-11-17T10:52:59.000Z"),
lastAppliedWallTime: ISODate("2022-11-17T10:52:59.238Z"),
lastDurableWallTime: ISODate("2022-11-17T10:52:59.238Z"),
lastHeartbeat: ISODate("2022-11-17T10:53:05.735Z"),
lastHeartbeatRecv: ISODate("2022-11-17T10:53:06.255Z"),
pingMs: Long("0"),
lastHeartbeatMessage: '',
syncSourceHost: 'node2:27017',
syncSourceId: 1,
infoMessage: '',
configVersion: 5,
configTerm: 2
}
],
ok: 1,
'$clusterTime': {
clusterTime: Timestamp({ t: 1668682379, i: 1 }),
signature: {
hash: Binary(Buffer.from("0000000000000000000000000000000000000000", "hex"), 0),
keyId: Long("0")
}
},
operationTime: Timestamp({ t: 1668682379, i: 1 })
}
It looks like an ordinary MongoDB replica set with one primary and two secondaries has been formed.
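For a quicker glance than the full rs.status() dump, a one-liner like this (a sketch, run from any node's shell) prints just the member names and states:
$ mongosh --quiet --eval 'rs.status().members.forEach(m => print(m.name, m.stateStr))'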
Checking that it works
Check the databases.
staging_rs [direct: primary] test> show dbs
admin 80.00 KiB
config 176.00 KiB
local 436.00 KiB
Create a new database.
staging_rs [direct: primary] test> use hogedb
switched to db hogedb
Insert a document.
staging_rs [direct: primary] hogedb> db.hoge.insert({fuga: "FUGA"})
DeprecationWarning: Collection.insert() is deprecated. Use insertOne, insertMany, or bulkWrite.
{
acknowledged: true,
insertedIds: { '0': ObjectId("6376142de6809436052f37ce") }
}
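Incidentally, the non-deprecated form the warning points at would look something like this (a sketch, run from the shell):
$ mongosh --quiet --eval 'db.getSiblingDB("hogedb").hoge.insertOne({fuga: "FUGA"})'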
I got told off, but never mind; let's just confirm the write is there.
staging_rs [direct: primary] hogedb> show dbs
admin 80.00 KiB
config 208.00 KiB
hogedb 40.00 KiB
local 444.00 KiB
staging_rs [direct: primary] hogedb> db
hogedb
staging_rs [direct: primary] hogedb> db.hoge.find()
[ { _id: ObjectId("6376142de6809436052f37ce"), fuga: 'FUGA' } ]
It looks like the data should have made it to the SECONDARY as well, so let's confirm on that side.
staging_rs [direct: secondary] test> show dbs
admin 80.00 KiB
config 248.00 KiB
hogedb 40.00 KiB
local 436.00 KiB
Looks like it is replicating nicely.
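One note: show dbs happened to work here, but to actually read documents from a secondary you generally have to allow secondary reads first; a sketch from the shell, assuming node2 is currently a secondary:
$ mongosh --host node2 --quiet --eval 'db.getMongo().setReadPref("secondaryPreferred"); db.getSiblingDB("hogedb").hoge.find().toArray()'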
This has gotten fairly long, so connecting from an actual client and reading from the secondaries will be checked separately, time permitting.