How to deploy a MongoDB sharded cluster with Docker Compose


Sharding mechanism

Sharding concepts

Sharding is the process of splitting a database and spreading its data across multiple machines. By distributing the data over several machines, you can store more data and handle a heavier load without needing a single very powerful server. The basic idea is to split a collection into small chunks, distribute those chunks across a number of shards so that each shard is responsible for only part of the total data, and then let a balancer keep the shards in balance (by migrating data). All operations go through a routing process called mongos, which knows the mapping between data and shards (via the config servers). Most use cases are about running out of disk space; writes may span shards, while queries should avoid crossing shards whenever possible.

Main use cases for MongoDB sharding:

  • The data volume is too large and a single machine's disk is not enough;
  • A single mongod cannot meet the write-performance requirements, and sharding is needed to spread the write load across shards;
  • You want to keep a large amount of data in memory for performance, using the resources of each shard server through sharding.

Advantages of MongoDB sharding:

Sharding reduces the number of requests each shard has to handle and increases the cluster's storage capacity and throughput. For example, when a document is inserted, the application only needs to access the shard that stores it. It also reduces the amount of data kept on each shard, which improves data availability and the query performance of large databases. Sharding is the technique to reach for when the storage or performance of a single MongoDB server becomes a bottleneck, or when a large application needs to make full use of memory across machines.

Sharded cluster architecture

Component overview:

  • **Config Server**: the config servers store the configuration metadata of the whole sharded cluster, including chunk information.
  • **Shard**: the shard servers store the actual data chunks; each shard holds a portion of the cluster's data. For example, in a cluster with 3 shards and a hash-based sharding rule, the data is split across the 3 shards according to that rule. If any one shard goes down, the data it holds becomes unavailable, so in production each shard is normally a 3-node replica set to avoid a single point of failure.
  • **mongos**: the front-end router and the entry point of the whole cluster. Client applications connect through mongos, which makes the cluster look like a single database, so clients can use it transparently.

What the sharded cluster as a whole provides:

  • Request routing: the router nodes dispatch each request to the corresponding shard and chunk.
  • Data distribution: a built-in balancer keeps the data evenly distributed, which is the precondition for both data and requests being spread evenly.
  • Chunk splitting: a single chunk holds at most 64 MB (128 MB by default since MongoDB 6.0) or roughly 100,000 documents; once the threshold is reached, the chunk is split in two. The chunk size can be adjusted, as shown in the sketch after this list.
  • Chunk migration: to keep data evenly distributed across the shard servers, chunks are migrated between nodes, typically once the difference reaches about 8 chunks.
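As a quick illustration (not part of the original article), the chunk size is stored in the config database and can be inspected or changed from a mongosh session connected to mongos; the value is in MB:

[code]// Sketch: view and change the chunk size (run against mongos; value in MB)
use config
db.settings.findOne({ _id: "chunksize" })
db.settings.updateOne(
  { _id: "chunksize" },
  { $set: { value: 64 } },
  { upsert: true }
)[/code]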

Sharded cluster deployment

Deployment plan

shard: 3 replica sets
config server: 3 replica sets
mongos: 3 instances

Host preparation

shard

IP              | role   | port  | shard name
----------------|--------|-------|-----------
192.168.142.157 | shard1 | 27181 | shard1
192.168.142.157 | shard2 | 27182 | shard1
192.168.142.157 | shard3 | 27183 | shard1
192.168.142.155 | shard1 | 27181 | shard2
192.168.142.155 | shard2 | 27182 | shard2
192.168.142.155 | shard3 | 27183 | shard2
192.168.142.156 | shard1 | 27181 | shard3
192.168.142.156 | shard2 | 27182 | shard3
192.168.142.156 | shard3 | 27183 | shard3

config server

IP              | role           | port  | config name
----------------|----------------|-------|------------
192.168.142.157 | config server1 | 27281 | config1
192.168.142.157 | config server2 | 27282 | config1
192.168.142.157 | config server3 | 27283 | config1
192.168.142.155 | config server1 | 27281 | config2
192.168.142.155 | config server2 | 27282 | config2
192.168.142.155 | config server3 | 27283 | config2
192.168.142.156 | config server1 | 27281 | config3
192.168.142.156 | config server2 | 27282 | config3
192.168.142.156 | config server3 | 27283 | config3

mongos

IP              | role   | port
----------------|--------|------
192.168.142.155 | mongos | 27381
192.168.142.155 | mongos | 27382
192.168.142.155 | mongos | 27383

Starting the deployment

Create the directories for the sharded cluster

[code]mkdir /docker/mongo-zone/{configsvr,shard,mongos} -p[/code]

Enter the [code]/docker/mongo-zone/[/code] directory.

Prepare the configsvr replica set directories

[code]mkdir configsvr/{configsvr1,configsvr2,configsvr3}/{data,logs} -p[/code]

Prepare the shard replica set directories

[code]mkdir shard/{shard1,shard2,shard3}/{data,logs} -p[/code]

Prepare the mongos instance directories

[code]mkdir mongos/{mongos1,mongos2,mongos3}/{data,logs} -p[/code]

Generate the key file

[code]openssl rand -base64 756 > mongo.key[/code]

Distribute it to the other hosts

[code]scp mongo.key slave@192.168.142.156:/home/slave
scp mongo.key slave02@192.168.142.155:/home/slave02[/code]

Then, on each destination host, move the key into the deployment directory (/docker/mongo-zone, which the compose files below mount from) and set its owner:

[code]mv /home/slave/mongo.key .
mv /home/slave02/mongo.key .[/code]

[code]chown root:root mongo.key[/code]

Build the shard replica sets

[code]cd /docker/mongo-zone/shard/shard1[/code]

docker-compose.yml

[code]services:
  mongo-shard1:
    image: mongo:7.0
    container_name: mongo-shard1
    restart: always
    volumes:
      - /docker/mongo-zone/shard/shard1/data:/data/db
      - /docker/mongo-zone/shard/shard1/logs:/var/log/mongodb
      - /docker/mongo-zone/mongo.key:/etc/mongo.key
    ports:
      - "27181:27181"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
      MONGO_INITDB_REPLICA_SET_NAME: shard1
      MONGO_INITDB_DATABASE: admin
    command:
      - /bin/sh
      - -c
      - |
        chmod 400 /etc/mongo.key
        chown 999:999 /etc/mongo.key
        mongod --shardsvr --directoryperdb --replSet shard1 --bind_ip_all --auth --keyFile /etc/mongo.key --wiredTigerCacheSizeGB 1 --oplogSize 5000 --port 27181

  mongo-shard2:
    image: mongo:7.0
    container_name: mongo-shard2
    restart: always
    volumes:
      - /docker/mongo-zone/shard/shard2/data:/data/db
      - /docker/mongo-zone/shard/shard2/logs:/var/log/mongodb
      - /docker/mongo-zone/mongo.key:/etc/mongo.key
    ports:
      - "27182:27182"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
      MONGO_INITDB_REPLICA_SET_NAME: shard1
      MONGO_INITDB_DATABASE: admin
    command:
      - /bin/sh
      - -c
      - |
        chmod 400 /etc/mongo.key
        chown 999:999 /etc/mongo.key
        mongod --shardsvr --directoryperdb --replSet shard1 --bind_ip_all --auth --keyFile /etc/mongo.key --wiredTigerCacheSizeGB 1 --oplogSize 5000 --port 27182

  mongo-shard3:
    image: mongo:7.0
    container_name: mongo-shard3
    restart: always
    volumes:
      - /docker/mongo-zone/shard/shard3/data:/data/db
      - /docker/mongo-zone/shard/shard3/logs:/var/log/mongodb
      - /docker/mongo-zone/mongo.key:/etc/mongo.key
    ports:
      - "27183:27183"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
      MONGO_INITDB_REPLICA_SET_NAME: shard1
      MONGO_INITDB_DATABASE: admin
    command:
      - /bin/sh
      - -c
      - |
        chmod 400 /etc/mongo.key
        chown 999:999 /etc/mongo.key
        mongod --shardsvr --directoryperdb --replSet shard1 --bind_ip_all --auth --keyFile /etc/mongo.key --wiredTigerCacheSizeGB 1 --oplogSize 5000 --port 27183[/code]

The other two hosts are set up the same way; refer to the tables above. Only the replica set name in docker-compose.yml needs to change (to shard2 or shard3), wherever it appears:

[code]MONGO_INITDB_REPLICA_SET_NAME
--replSet[/code]

Initialize the replica set

[code]docker exec -it mongo-shard1 mongosh --port 27181[/code] [code]use admin[/code] [code]rs.initiate()[/code]
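As an aside (a sketch, not one of the article's steps): rs.initiate() can also be given an explicit member list so that each member registers under the host IP rather than the container hostname, which avoids the naming problem that shows up later when adding shards:

[code]// Sketch: initialize shard1 with explicit member addresses (host 192.168.142.157)
rs.initiate({
  _id: "shard1",
  members: [
    { _id: 0, host: "192.168.142.157:27181" },
    { _id: 1, host: "192.168.142.157:27182" },
    { _id: 2, host: "192.168.142.157:27183" }
  ]
})[/code]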

Add the root user

[code]db.createUser({user:"root",pwd:"123456",roles:[{role:"root",db:"admin"}]})[/code]

Authenticate as the root user

[code]db.auth("root","123456")[/code]

Add the other members

[code]rs.add({host:"192.168.142.155:27182",priority:2}) rs.add({host:"192.168.142.155:27183",priority:3})[/code]

Check the replica set status

[code]rs.status()[/code] [code]{ set: 'shard1', date: ISODate('2024-10-15T03:25:48.706Z'), myState: 1, term: Long('2'), syncSourceHost: '', syncSourceId: -1, heartbeatIntervalMillis: Long('2000'), majorityVoteCount: 2, writeMajorityCount: 2, votingMembersCount: 3, writableVotingMembersCount: 3, optimes: { lastCommittedOpTime: { ts: Timestamp({ t: 1728962743, i: 1 }), t: Long('2') }, lastCommittedWallTime: ISODate('2024-10-15T03:25:43.400Z'), readConcernMajorityOpTime: { ts: Timestamp({ t: 1728962743, i: 1 }), t: Long('2') }, appliedOpTime: { ts: Timestamp({ t: 1728962743, i: 1 }), t: Long('2') }, durableOpTime: { ts: Timestamp({ t: 1728962743, i: 1 }), t: Long('2') }, lastAppliedWallTime: ISODate('2024-10-15T03:25:43.400Z'), lastDurableWallTime: ISODate('2024-10-15T03:25:43.400Z') }, lastStableRecoveryTimestamp: Timestamp({ t: 1728962730, i: 1 }), electionCandidateMetrics: { lastElectionReason: 'priorityTakeover', lastElectionDate: ISODate('2024-10-15T03:21:50.316Z'), electionTerm: Long('2'), lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1728962500, i: 1 }), t: Long('1') }, lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1728962500, i: 1 }), t: Long('1') }, numVotesNeeded: 2, priorityAtElection: 2, electionTimeoutMillis: Long('10000'), priorPrimaryMemberId: 0, numCatchUpOps: Long('0'), newTermStartDate: ISODate('2024-10-15T03:21:50.320Z'), wMajorityWriteAvailabilityDate: ISODate('2024-10-15T03:21:50.327Z') }, members: [ { _id: 0, name: '4590140ce686:27181', health: 1, state: 2, stateStr: 'SECONDARY', uptime: 250, optime: { ts: Timestamp({ t: 1728962743, i: 1 }), t: Long('2') }, optimeDurable: { ts: Timestamp({ t: 1728962743, i: 1 }), t: Long('2') }, optimeDate: ISODate('2024-10-15T03:25:43.000Z'), optimeDurableDate: ISODate('2024-10-15T03:25:43.000Z'), lastAppliedWallTime: ISODate('2024-10-15T03:25:43.400Z'), lastDurableWallTime: ISODate('2024-10-15T03:25:43.400Z'), lastHeartbeat: ISODate('2024-10-15T03:25:47.403Z'), lastHeartbeatRecv: ISODate('2024-10-15T03:25:47.403Z'), pingMs: Long('0'), lastHeartbeatMessage: '', syncSourceHost: '192.168.142.157:27182', syncSourceId: 1, infoMessage: '', configVersion: 5, configTerm: 2 }, { _id: 1, name: '192.168.142.157:27182', health: 1, state: 1, stateStr: 'PRIMARY', uptime: 435, optime: { ts: Timestamp({ t: 1728962743, i: 1 }), t: Long('2') }, optimeDate: ISODate('2024-10-15T03:25:43.000Z'), lastAppliedWallTime: ISODate('2024-10-15T03:25:43.400Z'), lastDurableWallTime: ISODate('2024-10-15T03:25:43.400Z'), syncSourceHost: '', syncSourceId: -1, infoMessage: '', electionTime: Timestamp({ t: 1728962510, i: 1 }), electionDate: ISODate('2024-10-15T03:21:50.000Z'), configVersion: 5, configTerm: 2, self: true, lastHeartbeatMessage: '' }, { _id: 2, name: '192.168.142.157:27183', health: 1, state: 2, stateStr: 'SECONDARY', uptime: 7, optime: { ts: Timestamp({ t: 1728962743, i: 1 }), t: Long('2') }, optimeDurable: { ts: Timestamp({ t: 1728962743, i: 1 }), t: Long('2') }, optimeDate: ISODate('2024-10-15T03:25:43.000Z'), optimeDurableDate: ISODate('2024-10-15T03:25:43.000Z'), lastAppliedWallTime: ISODate('2024-10-15T03:25:43.400Z'), lastDurableWallTime: ISODate('2024-10-15T03:25:43.400Z'), lastHeartbeat: ISODate('2024-10-15T03:25:47.405Z'), lastHeartbeatRecv: ISODate('2024-10-15T03:25:47.906Z'), pingMs: Long('0'), lastHeartbeatMessage: '', syncSourceHost: '192.168.142.157:27182', syncSourceId: 1, infoMessage: '', configVersion: 5, configTerm: 2 } ], ok: 1 }[/code]
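If the full rs.status() output is too verbose, a compact summary of member states can be pulled out with a one-liner like this (a convenience sketch, not from the original):

[code]// Sketch: summarize replica set member states
rs.status().members.map(m => ({ name: m.name, state: m.stateStr, health: m.health }))[/code]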

Build the config server cluster

The procedure is almost the same as above, so only the docker-compose.yml file is given below.

[code]services:
  mongo-config1:
    image: mongo:7.0
    container_name: mongo-config1
    restart: always
    volumes:
      - /docker/mongo-zone/configsvr/configsvr1/data:/data/db
      - /docker/mongo-zone/configsvr/configsvr1/logs:/var/log/mongodb
      - /docker/mongo-zone/mongo.key:/etc/mongo.key
    ports:
      - "27281:27281"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
      MONGO_INITDB_REPLICA_SET_NAME: config1
      MONGO_INITDB_DATABASE: admin
    command:
      - /bin/sh
      - -c
      - |
        chmod 400 /etc/mongo.key
        chown 999:999 /etc/mongo.key
        mongod --configsvr --directoryperdb --replSet config1 --bind_ip_all --auth --keyFile /etc/mongo.key --wiredTigerCacheSizeGB 1 --oplogSize 5000 --port 27281

  mongo-config2:
    image: mongo:7.0
    container_name: mongo-config2
    restart: always
    volumes:
      - /docker/mongo-zone/configsvr/configsvr2/data:/data/db
      - /docker/mongo-zone/configsvr/configsvr2/logs:/var/log/mongodb
      - /docker/mongo-zone/mongo.key:/etc/mongo.key
    ports:
      - "27282:27282"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
      MONGO_INITDB_REPLICA_SET_NAME: config1
      MONGO_INITDB_DATABASE: admin
    command:
      - /bin/sh
      - -c
      - |
        chmod 400 /etc/mongo.key
        chown 999:999 /etc/mongo.key
        mongod --configsvr --directoryperdb --replSet config1 --bind_ip_all --auth --keyFile /etc/mongo.key --wiredTigerCacheSizeGB 1 --oplogSize 5000 --port 27282

  mongo-config3:
    image: mongo:7.0
    container_name: mongo-config3
    restart: always
    volumes:
      - /docker/mongo-zone/configsvr/configsvr3/data:/data/db
      - /docker/mongo-zone/configsvr/configsvr3/logs:/var/log/mongodb
      - /docker/mongo-zone/mongo.key:/etc/mongo.key
    ports:
      - "27283:27283"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
      MONGO_INITDB_REPLICA_SET_NAME: config1
      MONGO_INITDB_DATABASE: admin
    command:
      - /bin/sh
      - -c
      - |
        chmod 400 /etc/mongo.key
        chown 999:999 /etc/mongo.key
        mongod --configsvr --directoryperdb --replSet config1 --bind_ip_all --auth --keyFile /etc/mongo.key --wiredTigerCacheSizeGB 1 --oplogSize 5000 --port 27283[/code]
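The article does not repeat the initialization commands for the config servers. A minimal sketch, assuming the same pattern as the shards (connect with mongosh to mongo-config1 on port 27281 of host 192.168.142.157, then run the command below); note that configsvr: true is required for a config server replica set:

[code]// Minimal sketch (assumption: shard-style init on host 192.168.142.157): initialize config1
rs.initiate({
  _id: "config1",
  configsvr: true,
  members: [
    { _id: 0, host: "192.168.142.157:27281" },
    { _id: 1, host: "192.168.142.157:27282" },
    { _id: 2, host: "192.168.142.157:27283" }
  ]
})[/code]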

Build the mongos cluster

The procedure is almost the same as above, so only the docker-compose.yml file is given below.

[code]services:
  mongo-mongos1:
    image: mongo:7.0
    container_name: mongo-mongos1
    restart: always
    volumes:
      - /docker/mongo-zone/mongos/mongos1/data:/data/db
      - /docker/mongo-zone/mongos/mongos1/logs:/var/log/mongodb
      - /docker/mongo-zone/mongo.key:/etc/mongo.key
    ports:
      - "27381:27381"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
      MONGO_INITDB_DATABASE: admin
    command:
      - /bin/sh
      - -c
      - |
        chmod 400 /etc/mongo.key
        chown 999:999 /etc/mongo.key
        mongos --configdb config1/192.168.142.157:27281,192.168.142.157:27282,192.168.142.157:27283 --bind_ip_all --keyFile /etc/mongo.key --port 27381

  mongo-mongos2:
    image: mongo:7.0
    container_name: mongo-mongos2
    restart: always
    volumes:
      - /docker/mongo-zone/mongos/mongos2/data:/data/db
      - /docker/mongo-zone/mongos/mongos2/logs:/var/log/mongodb
      - /docker/mongo-zone/mongo.key:/etc/mongo.key
    ports:
      - "27382:27382"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
      MONGO_INITDB_DATABASE: admin
    command:
      - /bin/sh
      - -c
      - |
        chmod 400 /etc/mongo.key
        chown 999:999 /etc/mongo.key
        mongos --configdb config2/192.168.142.155:27281,192.168.142.155:27282,192.168.142.155:27283 --bind_ip_all --keyFile /etc/mongo.key --port 27382

  mongo-mongos3:
    image: mongo:7.0
    container_name: mongo-mongos3
    restart: always
    volumes:
      - /docker/mongo-zone/mongos/mongos3/data:/data/db
      - /docker/mongo-zone/mongos/mongos3/logs:/var/log/mongodb
      - /docker/mongo-zone/mongo.key:/etc/mongo.key
    ports:
      - "27383:27383"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
      MONGO_INITDB_DATABASE: admin
    command:
      - /bin/sh
      - -c
      - |
        chmod 400 /etc/mongo.key
        chown 999:999 /etc/mongo.key
        mongos --configdb config3/192.168.142.156:27281,192.168.142.156:27282,192.168.142.156:27283 --bind_ip_all --keyFile /etc/mongo.key --port 27383[/code]

mongos does not need a separately generated key; just copy the config server's key file over. Make sure you use the config server's key file, otherwise you will not be able to log in.

[code]docker exec -it mongo-mongos1 mongosh --port 27381 -u root -p 123456 --authenticationDatabase admin[/code] [code]use admin[/code]

If the user does not exist yet, create one the same way as before.

[code]db.auth("root","123456")[/code]

Add the shards

[code]sh.addShard("shard1/192.168.142.157:27181,192.168.142.157:27182,192.168.142.157:27183") sh.addShard("shard3/192.168.142.156:27181,192.168.142.156:27182,192.168.142.156:27183") sh.addShard("shard2/192.168.142.155:27181,192.168.142.155:27182,192.168.142.155:27183")[/code]

At this point you may get an error along the lines of [code]host 192.168.142.157:27181 does not belong to replica set shard1[/code], even though it clearly is a member of shard1.

[code][direct: mongos] admin> sh.addShard("shard1/192.168.142.157:27181,192.168.142.157:27182,192.168.142.157:27183") MongoServerError[OperationFailed]: in seed list shard1/192.168.142.157:27181,192.168.142.157:27182,192.168.142.157:27183, host 192.168.142.157:27181 does not belong to replica set shard1; found { compression: [ "snappy", "zstd", "zlib" ], topologyVersion: { processId: ObjectId('670e225373d36364f75d8336'), counter: 7 }, hosts: [ "b170b4e78bc6:27181", "192.168.142.157:27182", "192.168.142.157:27183" ], setName: "shard1", setVersion: 5, isWritablePrimary: true, secondary: false, primary: "192.168.142.157:27183", me: "192.168.142.157:27183", electionId: ObjectId('7fffffff0000000000000003'), lastWrite: { opTime: { ts: Timestamp(1728984093, 1), t: 3 }, lastWriteDate: new Date(1728984093000), majorityOpTime: { ts: Timestamp(1728984093, 1), t: 3 }, majorityWriteDate: new Date(1728984093000) }, isImplicitDefaultMajorityWC: true, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1728984102377), logicalSessionTimeoutMinutes: 30, connectionId: 57, minWireVersion: 0, maxWireVersion: 21, readOnly: false, ok: 1.0, $clusterTime: { clusterTime: Timestamp(1728984093, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $configTime: Timestamp(0, 1), $topologyTime: Timestamp(0, 1), operationTime: Timestamp(1728984093, 1) }[/code]

It turns out the problem is in the hosts list reported by the replica set: because rs.initiate() was run without an explicit configuration, the first member registered itself under its container hostname (b170b4e78bc6:27181 in the error above) rather than the host IP.

So you can either use that container hostname in the seed list, or rename the member. Renaming is simple enough: remove the member and add it back with the proper address; I will not go into that here.
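For completeness, a rough sketch of the rename option (assuming member 0 is the one registered under the container hostname; run it on the shard1 primary):

[code]// Sketch: repoint member 0 at the host IP instead of the container hostname
cfg = rs.conf()
cfg.members[0].host = "192.168.142.157:27181"
rs.reconfig(cfg)[/code]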

Add them again, this time using the member names actually registered in each replica set:

[code]sh.addShard("shard1/b170b4e78bc6:27181,192.168.142.157:27182,192.168.142.157:27183") sh.addShard("shard3/cbfa7ed4415f:27181,192.168.142.156:27182,192.168.142.156:27183") sh.addShard("shard2/444e6ad7d88c:27181,192.168.142.155:27182,192.168.142.155:27183")[/code]

Check the shard status

[code]sh.status()[/code] [code]shardingVersion { _id: 1, clusterId: ObjectId('670e2ed1c3ccdfa3427b6b97') } --- shards [ { _id: 'shard1', host: 'shard1/192.168.142.157:27182,192.168.142.157:27183,b170b4e78bc6:27181', state: 1, topologyTime: Timestamp({ t: 1728984938, i: 3 }) }, { _id: 'shard2', host: 'shard2/192.168.142.155:27182,192.168.142.155:27183,444e6ad7d88c:27181', state: 1, topologyTime: Timestamp({ t: 1728985069, i: 1 }) }, { _id: 'shard3', host: 'shard3/192.168.142.156:27182,192.168.142.156:27183,cbfa7ed4415f:27181', state: 1, topologyTime: Timestamp({ t: 1728985021, i: 3 }) } ] --- active mongoses [ { '7.0.14': 3 } ] --- autosplit { 'Currently enabled': 'yes' } --- balancer { 'Currently enabled': 'yes', 'Currently running': 'no', 'Failed balancer rounds in last 5 attempts': 0, 'Migration Results for the last 24 hours': 'No recent migrations' } --- databases [ { database: { _id: 'config', primary: 'config', partitioned: true }, collections: { 'config.system.sessions': { shardKey: { _id: 1 }, unique: false, balancing: true, chunkMetadata: [ { shard: 'shard1', nChunks: 1 } ], chunks: [ { min: { _id: MinKey() }, max: { _id: MaxKey() }, 'on shard': 'shard1', 'last modified': Timestamp({ t: 1, i: 0 }) } ], tags: [] } } } ][/code]

Pay particular attention to this part:

[code]shards [ { _id: 'shard1', host: 'shard1/192.168.142.157:27182,192.168.142.157:27183,b170b4e78bc6:27181', state: 1, topologyTime: Timestamp({ t: 1728984938, i: 3 }) }, { _id: 'shard2', host: 'shard2/192.168.142.155:27182,192.168.142.155:27183,444e6ad7d88c:27181', state: 1, topologyTime: Timestamp({ t: 1728985069, i: 1 }) }, { _id: 'shard3', host: 'shard3/192.168.142.156:27182,192.168.142.156:27183,cbfa7ed4415f:27181', state: 1, topologyTime: Timestamp({ t: 1728985021, i: 3 }) } ][/code]
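A shorter way to list just the registered shards is the listShards admin command (a sketch; run on mongos):

[code]// Sketch: list the shards registered in the cluster
db.adminCommand({ listShards: 1 })[/code]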

Once all the members show up, the sharded cluster has been assembled successfully.

Verification

Database sharding configuration

Note: all of the following operations are executed on mongos.

[code]use test[/code]

Enable sharding for the database

[code]sh.enableSharding("test")[/code]

Returned result

[code]{
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1728985516, i: 9 }),
    signature: {
      hash: Binary.createFromBase64('QWe6Dj8TwrM1aVVHmnOtihKsFm0=', 0),
      keyId: Long('7425924310763569175')
    }
  },
  operationTime: Timestamp({ t: 1728985516, i: 3 })
}[/code]

Shard the test collection of the test database on _id with a hashed shard key:

[code]sh.shardCollection("test.test", { "_id": "hashed" })[/code]

Returned result:

[code]{
  collectionsharded: 'test.test',
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1728985594, i: 48 }),
    signature: {
      hash: Binary.createFromBase64('SqkMn9xNXjnsNfNd4WTFiHajLPc=', 0),
      keyId: Long('7425924310763569175')
    }
  },
  operationTime: Timestamp({ t: 1728985594, i: 48 })
}[/code]

Enable balancing for the collection

[code]sh.enableBalancing("test.test")[/code] [code]{ acknowledged: true, insertedId: null, matchedCount: 1, modifiedCount: 0, upsertedCount: 0 }[/code]

Start the balancer

[code]sh.startBalancer()[/code] [code]{ ok: 1, '$clusterTime': { clusterTime: Timestamp({ t: 1728985656, i: 4 }), signature: { hash: Binary.createFromBase64('jTVkQGDtAHtLTjhZkBc3CQx+tzM=', 0), keyId: Long('7425924310763569175') } }, operationTime: Timestamp({ t: 1728985656, i: 4 }) }[/code]

Create a user
(directly in the test database)

[code]db.createUser({user:"shardtest",pwd:"shardtest",roles:[{role:'dbOwner',db:'test'}]})[/code]

Insert test data

[code]for (i = 1; i <= 300; i=i+1){db.test.insertOne({'name': "test"})}[/code]

View detailed sharding information

[code]sh.status()[/code]

Result

[code]shardingVersion{ _id: 1, clusterId: ObjectId('670e2ed1c3ccdfa3427b6b97') }---shards[  {    _id: 'shard1',    host: 'shard1/192.168.142.157:27182,192.168.142.157:27183,b170b4e78bc6:27181',    state: 1,    topologyTime: Timestamp({ t: 1728984938, i: 3 })  },  {    _id: 'shard2',    host: 'shard2/192.168.142.155:27182,192.168.142.155:27183,444e6ad7d88c:27181',    state: 1,    topologyTime: Timestamp({ t: 1728985069, i: 1 })  },  {    _id: 'shard3',    host: 'shard3/192.168.142.156:27182,192.168.142.156:27183,cbfa7ed4415f:27181',    state: 1,    topologyTime: Timestamp({ t: 1728985021, i: 3 })  }]---active mongoses[  {    _id: '3158a5543d69:27381',    advisoryHostFQDNs: [],    created: ISODate('2024-10-15T09:03:06.663Z'),    mongoVersion: '7.0.14',    ping: ISODate('2024-10-15T09:51:18.345Z'),    up: Long('2891'),    waiting: true  },  {    _id: 'c5a08ca76189:27381',    advisoryHostFQDNs: [],    created: ISODate('2024-10-15T09:03:06.647Z'),    mongoVersion: '7.0.14',    ping: ISODate('2024-10-15T09:51:18.119Z'),    up: Long('2891'),    waiting: true  },  {    _id: '5bb8b2925f52:27381',    advisoryHostFQDNs: [],    created: ISODate('2024-10-15T09:03:06.445Z'),    mongoVersion: '7.0.14',    ping: ISODate('2024-10-15T09:51:18.075Z'),    up: Long('2891'),    waiting: true  }]---autosplit{ 'Currently enabled': 'yes' }---balancer{  'Currently enabled': 'yes',  'Currently running': 'no',  'Failed balancer rounds in last 5 attempts': 0,  'Migration Results for the last 24 hours': 'No recent migrations'}---databases[  {    database: { _id: 'config', primary: 'config', partitioned: true },    collections: {      'config.system.sessions': {        shardKey: { _id: 1 },        unique: false,        balancing: true,        chunkMetadata: [ { shard: 'shard1', nChunks: 1 } ],        chunks: [          { min: { _id: MinKey() }, max: { _id: MaxKey() }, 'on shard': 'shard1', 'last modified': Timestamp({ t: 1, i: 0 }) }        ],        tags: []      }    }  },  {    database: {      _id: 'test',      primary: 'shard2',      partitioned: false,      version: {        uuid: UUID('3b193276-e88e-42e1-b053-bcb61068a865'),        timestamp: Timestamp({ t: 1728985516, i: 1 }),        lastMod: 1      }    },    collections: {      'test.test': {        shardKey: { _id: 'hashed' },        unique: false,        balancing: true,        chunkMetadata: [          { shard: 'shard1', nChunks: 2 },          { shard: 'shard2', nChunks: 2 },          { shard: 'shard3', nChunks: 2 }        ],        chunks: [          { min: { _id: MinKey() }, max: { _id: Long('-6148914691236517204') }, 'on shard': 'shard2', 'last modified': Timestamp({ t: 1, i: 0 }) },          { min: { _id: Long('-6148914691236517204') }, max: { _id: Long('-3074457345618258602') }, 'on shard': 'shard2', 'last modified': Timestamp({ t: 1, i: 1 }) },          { min: { _id: Long('-3074457345618258602') }, max: { _id: Long('0') }, 'on shard': 'shard1', 'last modified': Timestamp({ t: 1, i: 2 }) },          { min: { _id: Long('0') }, max: { _id: Long('3074457345618258602') }, 'on shard': 'shard1', 'last modified': Timestamp({ t: 1, i: 3 }) },          { min: { _id: Long('3074457345618258602') }, max: { _id: Long('6148914691236517204') }, 'on shard': 'shard3', 'last modified': Timestamp({ t: 1, i: 4 }) },          { min: { _id: Long('6148914691236517204') }, max: { _id: MaxKey() }, 'on shard': 'shard3', 'last modified': Timestamp({ t: 1, i: 5 }) }        ],        tags: []      }    }  }][/code]

Focus on this part:

[code]chunks: [ { min: { _id: MinKey() }, max: { _id: Long('-6148914691236517204') }, 'on shard': 'shard2', 'last modified': Timestamp({ t: 1, i: 0 }) }, { min: { _id: Long('-6148914691236517204') }, max: { _id: Long('-3074457345618258602') }, 'on shard': 'shard2', 'last modified': Timestamp({ t: 1, i: 1 }) }, { min: { _id: Long('-3074457345618258602') }, max: { _id: Long('0') }, 'on shard': 'shard1', 'last modified': Timestamp({ t: 1, i: 2 }) }, { min: { _id: Long('0') }, max: { _id: Long('3074457345618258602') }, 'on shard': 'shard1', 'last modified': Timestamp({ t: 1, i: 3 }) }, { min: { _id: Long('3074457345618258602') }, max: { _id: Long('6148914691236517204') }, 'on shard': 'shard3', 'last modified': Timestamp({ t: 1, i: 4 }) }, { min: { _id: Long('6148914691236517204') }, max: { _id: MaxKey() }, 'on shard': 'shard3', 'last modified': Timestamp({ t: 1, i: 5 }) } ],[/code]

We can clearly see chunks on [code]shard1 shard2 shard3[/code].

View how this collection's data is distributed across the shards

[code]db.test.getShardDistribution()[/code] [code]Shard shard2 at shard2/192.168.142.155:27182,192.168.142.155:27183,444e6ad7d88c:27181 { data: '3KiB', docs: 108, chunks: 2, 'estimated data per chunk': '1KiB', 'estimated docs per chunk': 54 } --- Shard shard1 at shard1/192.168.142.157:27182,192.168.142.157:27183,b170b4e78bc6:27181 { data: '3KiB', docs: 89, chunks: 2, 'estimated data per chunk': '1KiB', 'estimated docs per chunk': 44 } --- Shard shard3 at shard3/192.168.142.156:27182,192.168.142.156:27183,cbfa7ed4415f:27181 { data: '3KiB', docs: 103, chunks: 2, 'estimated data per chunk': '1KiB', 'estimated docs per chunk': 51 } --- Totals { data: '10KiB', docs: 300, chunks: 6, 'Shard shard2': [ '36 % data', '36 % docs in cluster', '37B avg obj size on shard' ], 'Shard shard1': [ '29.66 % data', '29.66 % docs in cluster', '37B avg obj size on shard' ], 'Shard shard3': [ '34.33 % data', '34.33 % docs in cluster', '37B avg obj size on shard' ] }[/code]

We can see that the data is split fairly evenly across the three shards.

View the sharding status

[code]db.printShardingStatus()[/code]

Disable balancing for the collection

[code]sh.disableBalancing("test.test")[/code]

Result

[code]{  acknowledged: true,  insertedId: null,  matchedCount: 1,  modifiedCount: 1,  upsertedCount: 0}[/code]
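To double-check that balancing is really off for this collection, you can inspect the noBalance flag in the config database, or ask the balancer directly (a sketch; run on mongos):

[code]// Sketch: verify balancing is disabled for test.test
db.getSiblingDB("config").collections.findOne({ _id: "test.test" }, { noBalance: 1 })
sh.balancerCollectionStatus("test.test")[/code]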

This concludes the article on deploying a MongoDB sharded cluster with Docker Compose. For more on Docker Compose and MongoDB sharded clusters, see the earlier articles on 脚本之家 (jb51.net).


Source: https://www.jb51.net/server/3288693gf.htm