MongoDB 3.0.6 Replica Sets + Sharding Cluster in Practice
Preface
MongoDB's sharding mechanism solves the problems of massive data storage and dynamic scaling, but on its own it still falls short of the high reliability and high availability a production environment requires; for example, a single point of failure on a shard server is not handled. This is what the "Replica Sets + Sharding" approach addresses. This article walks through a real company deployment that combines MongoDB replica sets and sharding for high availability, using MongoDB 3.0.6.
MongoDB 3.0 and later claims 7 to 10 times higher write performance, up to 80% data compression, and up to a 95% reduction in operational overhead.
The main new features in MongoDB 3.0 include a pluggable storage engine API, support for the WiredTiger storage engine, MMAPv1 improvements, major replica set improvements, sharded cluster improvements, and better security.
Because MongoDB 3.0 is such a large step forward, this article uses the then-latest 3.0.6 release, together with the current YAML configuration file format documented on the official website.
1. Replica Sets + Sharding Architecture
The Replica Sets + Sharding solution consists of:
- Shard servers: each shard is a replica set, so every data node has a replica, automatic failover, and automatic recovery.
- Config servers: three config servers guarantee the integrity of the cluster metadata.
- Router processes: three mongos routers provide load balancing and better client connection performance.
The completed Replica Sets + Sharding environment is shown in the figure below.
2. Building a High-Availability Architecture
Using the Replica Sets + Sharding architecture avoids the shard-server single point of failure of a plain sharding setup; the combination is what gives the sharded cluster its high availability.
The ports each server listens on are listed in the table below.
Host      | IP             | Service and port
----------|----------------|------------------------
mongodb01 | 172.16.202.201 | mongod shard1_1   11731
          |                | mongod shard2_1   11732
          |                | mongod shard3_1   11733
          |                | mongod config     30000
          |                | mongos 1          60000
mongodb02 | 172.16.202.202 | mongod shard1_2   11731
          |                | mongod shard2_2   11732
          |                | mongod shard3_2   11733
          |                | mongod config     30000
          |                | mongos 2          60000
mongodb03 | 172.16.202.203 | mongod shard1_3   11731
          |                | mongod shard2_3   11732
          |                | mongod shard3_3   11733
          |                | mongod config     30000
          |                | mongos 3          60000
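If a firewall is running on these hosts, the listed ports must be reachable between the three servers. As a sketch only, assuming a CentOS 6 style iptables setup (adjust to whatever firewall and security policy you actually use), the ports could be opened like this:
[root@mongodb01 ~]# iptables -I INPUT -p tcp -m multiport --dports 11731,11732,11733,30000,60000 -j ACCEPT
[root@mongodb01 ~]# service iptables save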
2.1. Create the mongo User
Create a mongo user on each of the three servers, as shown below:
[root@mongodb01 ~]# useradd mongo
[root@mongodb01 ~]# passwd mongo
[root@mongodb01 ~]# su - mongo
[mongo@mongodb01 ~]$
2.2. Create the Data Directories
First, as the mongo user, create the data directories for the shard servers and the config server, a logs directory for log files, and a config directory for the configuration files.
On mongodb01, create the shard server and config server data directories, the logs directory, and the config directory:
[mongo@mongodb01 ~]$ mkdir -p /home/mongo/data/shard1_1
[mongo@mongodb01 ~]$ mkdir -p /home/mongo/data/shard2_1
[mongo@mongodb01 ~]$ mkdir -p /home/mongo/data/shard3_1
[mongo@mongodb01 ~]$ mkdir -p /home/mongo/data/config
[mongo@mongodb01 ~]$ mkdir -p /home/mongo/data/logs
[mongo@mongodb01 ~]$ mkdir -p /home/mongo/config
As shown above, /home/mongo/data/shard1_1 is used by the shard1 primary, /home/mongo/data/shard2_1 by the shard2 arbiter, /home/mongo/data/shard3_1 by the shard3 secondary, /home/mongo/data/config by the one config server this host runs in the Replica Sets + Sharding cluster, /home/mongo/data/logs by the logs, and /home/mongo/config by the configuration files.
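If your shell supports brace expansion (bash does), the same layout can be created in a single command, equivalent to the mkdir calls above:
[mongo@mongodb01 ~]$ mkdir -p /home/mongo/data/{shard1_1,shard2_1,shard3_1,config,logs} /home/mongo/config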
On mongodb02, create the shard server and config server data directories, the logs directory, and the config directory:
[mongo@mongodb02 ~]$ mkdir -p /home/mongo/data/shard1_2
[mongo@mongodb02 ~]$ mkdir -p /home/mongo/data/shard2_2
[mongo@mongodb02 ~]$ mkdir -p /home/mongo/data/shard3_2
[mongo@mongodb02 ~]$ mkdir -p /home/mongo/data/config
[mongo@mongodb02 ~]$ mkdir -p /home/mongo/data/logs
[mongo@mongodb02 ~]$ mkdir -p /home/mongo/config
As shown above, /home/mongo/data/shard1_2 is used by the shard1 secondary, /home/mongo/data/shard2_2 by the shard2 primary, /home/mongo/data/shard3_2 by the shard3 arbiter, /home/mongo/data/config by the config server on this host, /home/mongo/data/logs by the logs, and /home/mongo/config by the configuration files.
On mongodb03, create the shard server and config server data directories, the logs directory, and the config directory:
[mongo@mongodb03 ~]$ mkdir -p /home/mongo/data/shard1_3
[mongo@mongodb03 ~]$ mkdir -p /home/mongo/data/shard2_3
[mongo@mongodb03 ~]$ mkdir -p /home/mongo/data/shard3_3
[mongo@mongodb03 ~]$ mkdir -p /home/mongo/data/config
[mongo@mongodb03 ~]$ mkdir -p /home/mongo/data/logs
[mongo@mongodb03 ~]$ mkdir -p /home/mongo/config
As shown above, /home/mongo/data/shard1_3 is used by the shard1 arbiter, /home/mongo/data/shard2_3 by the shard2 secondary, /home/mongo/data/shard3_3 by the shard3 primary, /home/mongo/data/config by the config server on this host, /home/mongo/data/logs by the logs, and /home/mongo/config by the configuration files.
2.3. Configure the Replica Sets
Extract mongodb-linux-x86_64-3.0.6.tgz on all three servers:
[mongo@mongodb01 ~]$ tar zxvf mongodb-linux-x86_64-3.0.6.tgz
[mongo@mongodb01 ~]$ mv mongodb-linux-x86_64-3.0.6 mongodb
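Optionally, you can add the bin directory to the mongo user's PATH so the binaries can be run without the full path; the rest of this article keeps the full paths, so this is purely a convenience:
[mongo@mongodb01 ~]$ echo 'export PATH=$PATH:/home/mongo/mongodb/bin' >> ~/.bash_profile
[mongo@mongodb01 ~]$ source ~/.bash_profile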
2.3.1. Configure Replica Set 1 for shard1
# Note: the YAML configuration files below are indentation-sensitive.
On mongodb01, run the following:
[mongo@mongodb01 ~]$ cd config/
[mongo@mongodb01 config]$ cat shard1_1.conf
systemLog:
 destination: file
 ##Log
 path: /home/mongo/data/logs/shard1_1.log
 logAppend: true
storage:
 journal:
  enabled: true
 dbPath: /home/mongo/data/shard1_1
 directoryPerDB: true
 engine: wiredTiger
 wiredTiger:
  engineConfig:
   cacheSizeGB: 1
   directoryForIndexes: true
  collectionConfig:
   blockCompressor: snappy
processManagement:
 fork: true
net:
 bindIp: 172.16.202.201
 port: 11731
replication:
 oplogSizeMB: 500
 replSetName: shard1
sharding:
 clusterRole: shardsvr
#security:
 #authorization: enabled
 #keyFile: /home/mongo/key/security

[mongo@mongodb01 config]$ /home/mongo/mongodb/bin/mongod -f /home/mongo/config/shard1_1.conf
As shown above, this starts one member of Replica Set 1 on mongodb01; the replica set is named shard1 and the mongod listens on port 11731.
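To confirm that the mongod forked successfully and is listening, you can check the process list and the tail of its log file (paths as configured above):
[mongo@mongodb01 config]$ ps -ef | grep shard1_1.conf | grep -v grep
[mongo@mongodb01 config]$ tail -n 5 /home/mongo/data/logs/shard1_1.log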
On mongodb02, run the following:
[mongo@mongodb02 ~]$ cd config/
[mongo@mongodb02 config]$ cat shard1_2.conf
systemLog:
 destination: file
 ##Log
 path: /home/mongo/data/logs/shard1_2.log
 logAppend: true
storage:
 journal:
  enabled: true
 dbPath: /home/mongo/data/shard1_2
 directoryPerDB: true
 engine: wiredTiger
 wiredTiger:
  engineConfig:
   cacheSizeGB: 1
   directoryForIndexes: true
  collectionConfig:
   blockCompressor: snappy
processManagement:
 fork: true
net:
 bindIp: 172.16.202.202
 port: 11731
replication:
 oplogSizeMB: 500
 replSetName: shard1
sharding:
 clusterRole: shardsvr
#security:
 #authorization: enabled
 #keyFile: /home/mongo/key/security

[mongo@mongodb02 config]$ /home/mongo/mongodb/bin/mongod -f /home/mongo/config/shard1_2.conf
As shown above, this starts one member of Replica Set 1 on mongodb02; the replica set is named shard1 and the mongod listens on port 11731.
On mongodb03, run the following:
[mongo@mongodb03 ~]$ cd config/
[mongo@mongodb03 config]$ cat shard1_3.conf
systemLog:
 destination: file
 ##Log
 path: /home/mongo/data/logs/shard1_3.log
 logAppend: true
storage:
 journal:
  enabled: true
 dbPath: /home/mongo/data/shard1_3
 directoryPerDB: true
 engine: wiredTiger
 wiredTiger:
  engineConfig:
   cacheSizeGB: 1
   directoryForIndexes: true
  collectionConfig:
   blockCompressor: snappy
processManagement:
 fork: true
net:
 bindIp: 172.16.202.203
 port: 11731
replication:
 oplogSizeMB: 500
 replSetName: shard1
sharding:
 clusterRole: shardsvr
#security:
 #authorization: enabled
 #keyFile: /home/mongo/key/security

[mongo@mongodb03 config]$ /home/mongo/mongodb/bin/mongod -f /home/mongo/config/shard1_3.conf
As shown above, this starts one member of Replica Set 1 on mongodb03; the replica set is named shard1 and the mongod listens on port 11731.
Connect to the mongod listening on port 11731 of mongodb01 and initialize Replica Set 1, as shown below:
[mongo@mongodb01 ~]$ /home/mongo/mongodb/bin/mongo 172.16.202.201:11731
MongoDB shell version: 3.0.6
connecting to: 172.16.202.201:11731/test
> config={_id:'shard1',members:[{_id:0,host:'172.16.202.201:11731',priority:2},{_id:1,host:'172.16.202.202:11731'},{_id:2,host:'172.16.202.203:11731',arbiterOnly:true}]}
{
        "_id" : "shard1",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "172.16.202.201:11731",
                        "priority" : 2
                },
                {
                        "_id" : 1,
                        "host" : "172.16.202.202:11731"
                },
                {
                        "_id" : 2,
                        "host" : "172.16.202.203:11731",
                        "arbiterOnly" : true
                }
        ]
}
> rs.initiate(config)
{ "ok" : 1 }
The code above initializes Replica Set 1 (replica set shard1) by running rs.initiate(config).
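The election may take a few seconds; once the shell prompt changes to shard1:PRIMARY>, you can double-check which member became primary with db.isMaster(), for example:
shard1:PRIMARY> db.isMaster().ismaster
true
shard1:PRIMARY> db.isMaster().primary
172.16.202.201:11731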
2.3.2. Configure Replica Set 2 for shard2
On mongodb01, run the following:
[mongo@mongodb01 ~]$ cd config/
[mongo@mongodb01 config]$ cat shard2_1.conf
systemLog:
 destination: file
 ##Log
 path: /home/mongo/data/logs/shard2_1.log
 logAppend: true
storage:
 journal:
  enabled: true
 dbPath: /home/mongo/data/shard2_1
 directoryPerDB: true
 engine: wiredTiger
 wiredTiger:
  engineConfig:
   cacheSizeGB: 1
   directoryForIndexes: true
  collectionConfig:
   blockCompressor: snappy
processManagement:
 fork: true
net:
 bindIp: 172.16.202.201
 port: 11732
replication:
 oplogSizeMB: 500
 replSetName: shard2
sharding:
 clusterRole: shardsvr
#security:
 #authorization: enabled
 #keyFile: /home/mongo/key/security

[mongo@mongodb01 config]$ /home/mongo/mongodb/bin/mongod -f /home/mongo/config/shard2_1.conf
As shown above, this starts one member of Replica Set 2 on mongodb01; the replica set is named shard2 and the mongod listens on port 11732.
On mongodb02, run the following:
[mongo@mongodb02 ~]$ cd config/
[mongo@mongodb02 config]$ cat shard2_2.conf
systemLog:
 destination: file
 ##Log
 path: /home/mongo/data/logs/shard2_2.log
 logAppend: true
storage:
 journal:
  enabled: true
 dbPath: /home/mongo/data/shard2_2
 directoryPerDB: true
 engine: wiredTiger
 wiredTiger:
  engineConfig:
   cacheSizeGB: 1
   directoryForIndexes: true
  collectionConfig:
   blockCompressor: snappy
processManagement:
 fork: true
net:
 bindIp: 172.16.202.202
 port: 11732
replication:
 oplogSizeMB: 500
 replSetName: shard2
sharding:
 clusterRole: shardsvr
#security:
 #authorization: enabled
 #keyFile: /home/mongo/key/security

[mongo@mongodb02 config]$ /home/mongo/mongodb/bin/mongod -f /home/mongo/config/shard2_2.conf
As shown above, this starts one member of Replica Set 2 on mongodb02; the replica set is named shard2 and the mongod listens on port 11732.
On mongodb03, run the following:
[mongo@mongodb03 ~]$ cd config/
[mongo@mongodb03 config]$ cat shard2_3.conf
systemLog:
 destination: file
 ##Log
 path: /home/mongo/data/logs/shard2_3.log
 logAppend: true
storage:
 journal:
  enabled: true
 dbPath: /home/mongo/data/shard2_3
 directoryPerDB: true
 engine: wiredTiger
 wiredTiger:
  engineConfig:
   cacheSizeGB: 1
   directoryForIndexes: true
  collectionConfig:
   blockCompressor: snappy
processManagement:
 fork: true
net:
 bindIp: 172.16.202.203
 port: 11732
replication:
 oplogSizeMB: 500
 replSetName: shard2
sharding:
 clusterRole: shardsvr
#security:
 #authorization: enabled
 #keyFile: /home/mongo/key/security

[mongo@mongodb03 config]$ /home/mongo/mongodb/bin/mongod -f /home/mongo/config/shard2_3.conf
As shown above, this starts one member of Replica Set 2 on mongodb03; the replica set is named shard2 and the mongod listens on port 11732.
Connect to the mongod listening on port 11732 of mongodb02 and initialize Replica Set 2, as shown below:
[mongo@mongodb02 config]$ /home/mongo/mongodb/bin/mongo 172.16.202.202:11732
MongoDB shell version: 3.0.6
connecting to: 172.16.202.202:11732/test
> config={_id:'shard2',members:[{_id:0,host:'172.16.202.201:11732',arbiterOnly:true},{_id:1,host:'172.16.202.202:11732',priority:2},{_id:2,host:'172.16.202.203:11732'}]}
{
        "_id" : "shard2",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "172.16.202.201:11732",
                        "arbiterOnly" : true
                },
                {
                        "_id" : 1,
                        "host" : "172.16.202.202:11732",
                        "priority" : 2
                },
                {
                        "_id" : 2,
                        "host" : "172.16.202.203:11732"
                }
        ]
}
> rs.initiate(config)
{ "ok" : 1 }
The code above initializes Replica Set 2 (replica set shard2) by running rs.initiate(config).
2.3.3. Configure Replica Set 3 for shard3
On mongodb01, run the following:
[mongo@mongodb01 ~]$ cd config/
[mongo@mongodb01 config]$ cat shard3_1.conf
systemLog:
 destination: file
 ##Log
 path: /home/mongo/data/logs/shard3_1.log
 logAppend: true
storage:
 journal:
  enabled: true
 dbPath: /home/mongo/data/shard3_1
 directoryPerDB: true
 engine: wiredTiger
 wiredTiger:
  engineConfig:
   cacheSizeGB: 1
   directoryForIndexes: true
  collectionConfig:
   blockCompressor: snappy
processManagement:
 fork: true
net:
 bindIp: 172.16.202.201
 port: 11733
replication:
 oplogSizeMB: 500
 replSetName: shard3
sharding:
 clusterRole: shardsvr
#security:
 #authorization: enabled
 #keyFile: /home/mongo/key/security

[mongo@mongodb01 config]$ /home/mongo/mongodb/bin/mongod -f /home/mongo/config/shard3_1.conf
As shown above, this starts one member of Replica Set 3 on mongodb01; the replica set is named shard3 and the mongod listens on port 11733.
On mongodb02, run the following:
[mongo@mongodb02 ~]$ cd config/
[mongo@mongodb02 config]$ cat shard3_2.conf
systemLog:
 destination: file
 ##Log
 path: /home/mongo/data/logs/shard3_2.log
 logAppend: true
storage:
 journal:
  enabled: true
 dbPath: /home/mongo/data/shard3_2
 directoryPerDB: true
 engine: wiredTiger
 wiredTiger:
  engineConfig:
   cacheSizeGB: 1
   directoryForIndexes: true
  collectionConfig:
   blockCompressor: snappy
processManagement:
 fork: true
net:
 bindIp: 172.16.202.202
 port: 11733
replication:
 oplogSizeMB: 500
 replSetName: shard3
sharding:
 clusterRole: shardsvr
#security:
 #authorization: enabled
 #keyFile: /home/mongo/key/security

[mongo@mongodb02 config]$ /home/mongo/mongodb/bin/mongod -f /home/mongo/config/shard3_2.conf
As shown above, this starts one member of Replica Set 3 on mongodb02; the replica set is named shard3 and the mongod listens on port 11733.
On mongodb03, run the following:
[mongo@mongodb03 ~]$ cd config/
[mongo@mongodb03 config]$ cat shard3_3.conf
systemLog:
 destination: file
 ##Log
 path: /home/mongo/data/logs/shard3_3.log
 logAppend: true
storage:
 journal:
  enabled: true
 dbPath: /home/mongo/data/shard3_3
 directoryPerDB: true
 engine: wiredTiger
 wiredTiger:
  engineConfig:
   cacheSizeGB: 1
   directoryForIndexes: true
  collectionConfig:
   blockCompressor: snappy
processManagement:
 fork: true
net:
 bindIp: 172.16.202.203
 port: 11733
replication:
 oplogSizeMB: 500
 replSetName: shard3
sharding:
 clusterRole: shardsvr
#security:
 #authorization: enabled
 #keyFile: /home/mongo/key/security

[mongo@mongodb03 config]$ /home/mongo/mongodb/bin/mongod -f /home/mongo/config/shard3_3.conf
As shown above, this starts one member of Replica Set 3 on mongodb03; the replica set is named shard3 and the mongod listens on port 11733.
Connect to the mongod listening on port 11733 of mongodb03 and initialize Replica Set 3, as shown below:
[mongo@mongodb03 config]$ /home/mongo/mongodb/bin/mongo 172.16.202.203:11733
MongoDB shell version: 3.0.6
connecting to: 172.16.202.203:11733/test
> config={_id:'shard3',members:[{_id:0,host:'172.16.202.201:11733'},{_id:1,host:'172.16.202.202:11733',arbiterOnly:true},{_id:2,host:'172.16.202.203:11733',priority:2}]}
{
        "_id" : "shard3",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "172.16.202.201:11733"
                },
                {
                        "_id" : 1,
                        "host" : "172.16.202.202:11733",
                        "arbiterOnly" : true
                },
                {
                        "_id" : 2,
                        "host" : "172.16.202.203:11733",
                        "priority" : 2
                }
        ]
}
> rs.initiate(config)
{ "ok" : 1 }
The code above initializes Replica Set 3 (replica set shard3) by running rs.initiate(config).
2.3.4. Check the Replica Set Status
While still connected to the shard1 primary, run rs.status() to check the health and role of each member:
shard1:PRIMARY> rs.status()
{
        "set" : "shard1",
        "date" : ISODate("2015-11-25T10:53:06.091Z"),
        "myState" : 1,
        "members" : [
                {
                        "_id" : 0,
                        "name" : "172.16.202.201:11731",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",    # primary
                        "uptime" : 3009,
                        "optime" : Timestamp(1448448493, 1),
                        "optimeDate" : ISODate("2015-11-25T10:48:13Z"),
                        "electionTime" : Timestamp(1448448497, 1),
                        "electionDate" : ISODate("2015-11-25T10:48:17Z"),
                        "configVersion" : 1,
                        "self" : true
                },
                {
                        "_id" : 1,
                        "name" : "172.16.202.202:11731",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",    # secondary
                        "uptime" : 292,
                        "optime" : Timestamp(1448448493, 1),
                        "optimeDate" : ISODate("2015-11-25T10:48:13Z"),
                        "lastHeartbeat" : ISODate("2015-11-25T10:53:05.389Z"),
                        "lastHeartbeatRecv" : ISODate("2015-11-25T10:53:05.391Z"),
                        "pingMs" : 0,
                        "lastHeartbeatMessage" : "could not find member to sync from",
                        "configVersion" : 1
                },
                {
                        "_id" : 2,
                        "name" : "172.16.202.203:11731",
                        "health" : 1,
                        "state" : 7,
                        "stateStr" : "ARBITER",    # arbiter
                        "uptime" : 292,
                        "lastHeartbeat" : ISODate("2015-11-25T10:53:05.391Z"),
                        "lastHeartbeatRecv" : ISODate("2015-11-25T10:53:05.390Z"),
                        "pingMs" : 0,
                        "configVersion" : 1
                }
        ],
        "ok" : 1
}
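The shard2 and shard3 replica sets can be checked the same way; for example, a quick non-interactive dump of their status from their respective primaries:
[mongo@mongodb02 config]$ /home/mongo/mongodb/bin/mongo 172.16.202.202:11732 --eval "printjson(rs.status())"
[mongo@mongodb03 config]$ /home/mongo/mongodb/bin/mongo 172.16.202.203:11733 --eval "printjson(rs.status())"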
2.4. Configure the Three Config Servers
Run the following on all three servers:
[mongo@mongodb01 config]$ cat config.conf
systemLog:
 destination: file
 ##Log
 path: /home/mongo/data/logs/config.log
 logAppend: true
storage:
 journal:
  enabled: true
 dbPath: /home/mongo/data/config
 directoryPerDB: true
processManagement:
 fork: true
net:
 #bindIp: 172.16.202.201 # uncomment to bind a specific IP on each host
 port: 30000
sharding:
 clusterRole: configsvr

[mongo@mongodb01 config]$ /home/mongo/mongodb/bin/mongod -f /home/mongo/config/config.conf
As shown above, the config server process is started on each of the three servers and listens on port 30000.
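Before moving on, it is worth checking that all three config servers answer; for example, a simple ping loop run from any host (each line of output should be 1):
[mongo@mongodb01 config]$ for h in 172.16.202.201 172.16.202.202 172.16.202.203; do /home/mongo/mongodb/bin/mongo --quiet $h:30000 --eval "db.runCommand({ping:1}).ok"; done
1
1
1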
2.5. Configure the Three Router Processes
Run the following on all three servers:
[mongo@mongodb01 config]$ cat mongos.conf
systemLog:
 destination: file
 ##Log
 path: /home/mongo/data/logs/mongo.log
 logAppend: true
processManagement:
 fork: true
net:
 #bindIp: 172.16.202.201
 port: 60000
sharding:
 configDB: 172.16.202.201:30000,172.16.202.202:30000,172.16.202.203:30000

[mongo@mongodb01 config]$ /home/mongo/mongodb/bin/mongos -f /home/mongo/config/mongos.conf
As shown above, the mongos router is started on each of the three servers; it listens on port 60000 and points at the IPs and ports of the three config servers.
2.6. Configure the Sharded Cluster
Connect to the mongos on port 60000 of any one of the servers, switch to the admin database, and configure the sharded cluster as shown below:
[mongo@mongodb01 logs]$ /home/mongo/mongodb/bin/mongo 172.16.202.201:60000
MongoDB shell version: 3.0.6
connecting to: 172.16.202.201:60000/test
mongos> use admin
switched to db admin
mongos>db.runCommand({addshard:"shard1/172.16.202.201:11731,172.16.202.202:11731,172.16.202.203:11731",name:"shard1"});
{ "shardAdded" :"shard1", "ok" : 1 }
mongos>db.runCommand({addshard:"shard2/172.16.202.201:11732,172.16.202.202:11732,172.16.202.203:11732",name:"shard2"});
{ "shardAdded" :"shard2", "ok" : 1 }
mongos>db.runCommand({addshard:"shard3/172.16.202.201:11733,172.16.202.202:11733,172.16.202.203:11733",name:"shard3"});
{ "shardAdded" :"shard3", "ok" : 1 }
Each addshard command above registers one replica set, written as <replica set name>/<comma-separated member list>, as a shard: the three members of Replica Set 1 become Shard Server 1 (shard1), Replica Set 2 becomes Shard Server 2 (shard2), and Replica Set 3 becomes Shard Server 3 (shard3).
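To verify that all three shards are registered, you can run the listshards command against mongos (still in the admin database); its output should list shard1, shard2, and shard3 with their replica set members, matching the sh.status() output in the next section:
mongos> db.runCommand({listshards: 1})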
Next, enable sharding, using a hashed shard key, as shown below:
[mongo@mongodb01 logs]$/home/mongo/mongodb/bin/mongo 172.16.202.201:60000
MongoDB shell version: 3.0.6
connecting to: 172.16.202.201:60000/test
mongos> use admin
switched to db admin
mongos> db.runCommand({enablesharding:"logs"})
{"ok" : 1 }
mongos>db.runCommand({shardcollection:"logs.users",key:{id:"hashed"}})
{ "collectionsharded" :"logs.users", "ok" : 1 }
As shown above, db.runCommand({enablesharding:"logs"}) enables sharding on the logs database, and db.runCommand({shardcollection:"logs.users",key:{id:"hashed"}}) shards the users collection on a hashed id key.
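To see the hashed shard key spreading data across the shards, you can insert some throwaway test documents and look at the distribution (the document shape here is arbitrary, purely for illustration):
mongos> use logs
switched to db logs
mongos> for (var i = 1; i <= 10000; i++) { db.users.insert({id: i, name: "user" + i}) }
WriteResult({ "nInserted" : 1 })
mongos> db.users.getShardDistribution()
The getShardDistribution() helper prints, for each shard, the number of documents and data size it holds; with a hashed key the counts should be roughly even.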
2.7. View the Sharding Status
mongos> sh.status()
--- Sharding Status ---
 sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("56559b076bc525804dde4141")
}
 shards:
        {  "_id" : "shard1",  "host" : "shard1/172.16.202.201:11731,172.16.202.202:11731" }
        {  "_id" : "shard2",  "host" : "shard2/172.16.202.202:11732,172.16.202.203:11732" }
        {  "_id" : "shard3",  "host" : "shard3/172.16.202.201:11733,172.16.202.203:11733" }
 balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                2 : Success
                1 : Failed with error 'could not acquire collection lock for logs.users to migrate chunk [{ : MinKey }, { : MaxKey }) :: caused by :: Lock for migrating chunk [{ : MinKey }, { : MaxKey }) in logs.users is taken.', from shard1 to shard2
                2 : Failed with error 'migration already in progress', from shard1 to shard2
 databases:
        {  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
        {  "_id" : "logs",  "partitioned" : true,  "primary" : "shard1" }
                logs.users
                        shard key: { "id" : "hashed" }
                        chunks:
                                shard1  2
                                shard2  2
                                shard3  2
                        { "id" : { "$minKey" : 1 } } -->> { "id" : NumberLong("-6148914691236517204") } on : shard1 Timestamp(3, 2)
                        { "id" : NumberLong("-6148914691236517204") } -->> { "id" : NumberLong("-3074457345618258602") } on : shard1 Timestamp(3, 3)
                        { "id" : NumberLong("-3074457345618258602") } -->> { "id" : NumberLong(0) } on : shard3 Timestamp(3, 4)
                        { "id" : NumberLong(0) } -->> { "id" : NumberLong("3074457345618258602") } on : shard3 Timestamp(3, 5)
                        { "id" : NumberLong("3074457345618258602") } -->> { "id" : NumberLong("6148914691236517204") } on : shard2 Timestamp(3, 6)
                        { "id" : NumberLong("6148914691236517204") } -->> { "id" : { "$maxKey" : 1 } } on : shard2 Timestamp(3, 7)
Reposted from: https://blog.51cto.com/jxzhfei/1722243