Comparing two ways to migrate MongoDB data

I tried two ways of migrating the data: one is rsync, which pulls the raw data files directly; the other is mongodump/mongorestore.

1. rsync
Steps:
1.2: On the source server (192.168.1.2), export the data directory as an rsync module:
[mongodb]
path = /data1/mongodb/data
hosts allow = 192.168.1.0/24
read only = no
write only = no
1.3: On the target server, pull the data and fix ownership:
rsync -avz root@192.168.1.2::mongodb/dbname /data/mongodb-linux-x86_64-1.8.1/data/
chown -R mongodb:mongodb /data/mongodb-linux-x86_64-1.8.1/data/

Time taken: 50 minutes
Data on the target server: 50 GB
Pros: short migration time
Cons: rsync has to be configured, and the data takes up more space (the files are copied over exactly as they are, fragmentation included)
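For completeness, a minimal sketch of what the full source-side setup might look like; the global parameters and the daemon start are assumptions, only the [mongodb] module block comes from the steps above:

# cat /etc/rsyncd.conf
uid = root
gid = root
use chroot = no
[mongodb]
path = /data1/mongodb/data
hosts allow = 192.168.1.0/24
read only = no
write only = no

# rsync --daemon --config=/etc/rsyncd.conf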

2. mongodump/mongorestore
Steps:
mongodump:
/data/PRG/mongodb/bin/mongodump --host 192.168.1.2:27017 -d dbname -uusername -ppasswd -o /data/mongodb-linux-x86_64-1.8.1/data/ --directoryperdb
mongorestore:
/data/mongodb-linux-x86_64-1.8.1/bin/mongorestore --dbpath /data/mongodb-linux-x86_64-1.8.1/data/ --directoryperdb /data/dbname/
chown -R mongodb:mongodb /data/mongodb-linux-x86_64-1.8.1/data/

Time taken: 35 minutes (mongodump) + 90 minutes (mongorestore)
Data on the target server: 20 GB (far less space is needed; the dump-and-restore effectively performs a defragmentation on the way over)
Pros: the data arriving on the new server has been compacted, so it needs much less space
Cons: it takes longer

mongod has to be stopped for the migration in either case, and each approach has its pros and cons. If the data written during the migration window can be ignored, the second approach is the better choice (fragmentation has already caused serious problems in quite a few cases).
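Put together, a migration using the second approach might look like the sketch below; stopping writes and restarting mongod afterwards are assumptions, the commands themselves are the ones used above:

# 1. stop writes on the source (or stop the application), then dump
/data/PRG/mongodb/bin/mongodump --host 192.168.1.2:27017 -d dbname -uusername -ppasswd -o /data/mongodb-linux-x86_64-1.8.1/data/ --directoryperdb
# 2. restore into the target dbpath (the target mongod must be stopped when restoring with --dbpath)
/data/mongodb-linux-x86_64-1.8.1/bin/mongorestore --dbpath /data/mongodb-linux-x86_64-1.8.1/data/ --directoryperdb /data/dbname/
# 3. fix ownership and start the target mongod again
chown -R mongodb:mongodb /data/mongodb-linux-x86_64-1.8.1/data/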

mongodb sharding cluster

MongoDB's auto-sharding feature means that, through mongos, MongoDB automatically builds a horizontally scalable database cluster and spreads the database's data across the shard nodes.

By combining sharding with replica sets you can build a distributed, highly available cluster that scales out horizontally and automatically.

Building a MongoDB sharding cluster requires three roles:

Shard Server: a mongod instance. Using replica sets ensures every data node has a copy of the data, automatic failover, and automatic recovery. It stores the actual data chunks; in production one shard-server role is usually provided by a replica set of several machines to guard against a single point of failure.

Config Server: a mongod instance. Three config servers are used to guarantee metadata integrity (two-phase commit). They store the metadata of the whole cluster, including the chunk information.

Route Server: a mongos instance, which can be combined with LVS for load balancing and high connection performance. It is the front-end router that clients connect to; it makes the whole cluster look like a single database, so the front-end application can use it transparently.
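As a quick orientation before the detailed setup below, each role maps to a different startup mode (the placeholders in angle brackets stand in for the concrete values used later):

shard server:   mongod --shardsvr --replSet <setname> --dbpath <dir> --port <port> ...
config server:  mongod --configsvr --dbpath <dir> --port 20000 ...
route server:   mongos --configdb <cfg1>,<cfg2>,<cfg3> --port 27017 ...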

The environment is as follows:

192.168.198.131
    shard1:10001
    shard2:10002
    shard3:10003
    config1:20000

192.168.198.129
    shard1:10001
    shard2:10002
    shard3:10003
    config2:20000

192.168.198.132
    shard1:10001
    shard2:10002
    shard3:10003
    config3:20000

192.168.198.133
    mongos:27017

Install mongod on each of the three shard/config servers as follows:

# wget http://fastdl.mongodb.org/linux/mongodb-linux-x86_64-2.0.3.tgz
# tar zxvf mongodb-linux-x86_64-2.0.3.tgz -C ../software/
# ln -s mongodb-linux-x86_64-2.0.3 /usr/local/mongodb
# useradd mongodb
# mkdir -p /data/mongodb/shard1
# mkdir -p /data/mongodb/shard2
# mkdir -p /data/mongodb/shard3
# mkdir -p /data/mongodb/config1
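Since a mongodb user is created above, the data directories can also be handed over to it; this chown is an assumption, the original steps run mongod as root:

# chown -R mongodb:mongodb /data/mongodb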

Configure the replica set for shard1

192.168.198.131
# cd /usr/local/mongodb/bin
# ./mongod --shardsvr --replSet shard1 --port 10001 --dbpath /data/mongodb/shard1 --oplogSize 100 --logpath /data/mongodb/shard1/shard1.log --logappend --fork

192.168.198.129
# ./mongod --shardsvr --replSet shard1 --port 10001 --dbpath /data/mongodb/shard1 --oplogSize 100 --logpath /data/mongodb/shard1/shard1.log --logappend --fork

192.168.198.132
# ./mongod --shardsvr --replSet shard1 --port 10001 --dbpath /data/mongodb/shard1 --oplogSize 100 --logpath /data/mongodb/shard1/shard1.log --logappend --fork

Connect to 192.168.198.131

# ./mongo --port 10001

> config={_id:"shard1",members:[
... {_id:0,host:"192.168.198.131:10001"},
... {_id:1,host:"192.168.198.129:10001"},
... {_id:2,host:"192.168.198.132:10001"}]
... }
> rs.initiate(config)
{
    "info" : "Config now saved locally. Should come online in about a minute.",
    "ok" : 1
}

PRIMARY> rs.status()
{
    "set" : "shard1",
    "date" : ISODate("2012-03-02T02:37:55Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.198.131:10001",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "optime" : {
                "t" : 1330655827000,
                "i" : 1
            },
            "optimeDate" : ISODate("2012-03-02T02:37:07Z"),
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "192.168.198.129:10001",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 36,
            "optime" : {
                "t" : 1330655827000,
                "i" : 1
            },
            "optimeDate" : ISODate("2012-03-02T02:37:07Z"),
            "lastHeartbeat" : ISODate("2012-03-02T02:37:53Z"),
            "pingMs" : 0
        },
        {
            "_id" : 2,
            "name" : "192.168.198.132:10001",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 36,
            "optime" : {
                "t" : 1330655827000,
                "i" : 1
            },
            "optimeDate" : ISODate("2012-03-02T02:37:07Z"),
            "lastHeartbeat" : ISODate("2012-03-02T02:37:53Z"),
            "pingMs" : 466553
        }
    ],
    "ok" : 1
}

Configure the replica set for shard2

192.168.198.129
# ./mongod --shardsvr --replSet shard2 --port 10002 --dbpath /data/mongodb/shard2 --oplogSize 100 --logpath /data/mongodb/shard2/shard2.log --logappend --fork

192.168.198.131
# ./mongod --shardsvr --replSet shard2 --port 10002 --dbpath /data/mongodb/shard2 --oplogSize 100 --logpath /data/mongodb/shard2/shard2.log --logappend --fork

192.168.198.132
# ./mongod --shardsvr --replSet shard2 --port 10002 --dbpath /data/mongodb/shard2 --oplogSize 100 --logpath /data/mongodb/shard2/shard2.log --logappend --fork

Connect to 192.168.198.129

# ./mongo --port 10002

> config={_id:"shard2",members:[
... {_id:0,host:"192.168.198.129:10002"},
... {_id:1,host:"192.168.198.131:10002"},
... {_id:2,host:"192.168.198.132:10002"}]
... }
{
    "_id" : "shard2",
    "members" : [
        {
            "_id" : 0,
            "host" : "192.168.198.129:10002"
        },
        {
            "_id" : 1,
            "host" : "192.168.198.131:10002"
        },
        {
            "_id" : 2,
            "host" : "192.168.198.132:10002"
        }
    ]
}
> rs.initiate(config)
{
    "info" : "Config now saved locally. Should come online in about a minute.",
    "ok" : 1
}

> rs.status()
{
    "set" : "shard2",
    "date" : ISODate("2012-03-02T02:53:17Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.198.129:10002",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "optime" : {
                "t" : 1330656717000,
                "i" : 1
            },
            "optimeDate" : ISODate("2012-03-02T02:51:57Z"),
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "192.168.198.131:10002",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 73,
            "optime" : {
                "t" : 1330656717000,
                "i" : 1
            },
            "optimeDate" : ISODate("2012-03-02T02:51:57Z"),
            "lastHeartbeat" : ISODate("2012-03-02T02:53:17Z"),
            "pingMs" : 1
        },
        {
            "_id" : 2,
            "name" : "192.168.198.132:10002",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 73,
            "optime" : {
                "t" : 1330656717000,
                "i" : 1
            },
            "optimeDate" : ISODate("2012-03-02T02:51:57Z"),
            "lastHeartbeat" : ISODate("2012-03-02T02:53:17Z"),
            "pingMs" : 209906
        }
    ],
    "ok" : 1
}

Configure the replica set for shard3

192.168.198.132
# ./mongod --shardsvr --replSet shard3 --port 10003 --dbpath /data/mongodb/shard3 --oplogSize 100 --logpath /data/mongodb/shard3/shard3.log --logappend --fork

192.168.198.129
# ./mongod --shardsvr --replSet shard3 --port 10003 --dbpath /data/mongodb/shard3 --oplogSize 100 --logpath /data/mongodb/shard3/shard3.log --logappend --fork

192.168.198.131
# ./mongod --shardsvr --replSet shard3 --port 10003 --dbpath /data/mongodb/shard3 --oplogSize 100 --logpath /data/mongodb/shard3/shard3.log --logappend --fork

Connect to 192.168.198.132

# ./mongo --port 10003

> config={_id:"shard3",members:[
... {_id:0,host:"192.168.198.132:10003"},
... {_id:1,host:"192.168.198.131:10003"},
... {_id:2,host:"192.168.198.129:10003"}]
... }
{
    "_id" : "shard3",
    "members" : [
        {
            "_id" : 0,
            "host" : "192.168.198.132:10003"
        },
        {
            "_id" : 1,
            "host" : "192.168.198.131:10003"
        },
        {
            "_id" : 2,
            "host" : "192.168.198.129:10003"
        }
    ]
}
> rs.initiate(config)
{
    "info" : "Config now saved locally. Should come online in about a minute.",
    "ok" : 1
}

> rs.status()
{
    "set" : "shard3",
    "date" : ISODate("2012-03-02T03:04:52Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.198.132:10003",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "optime" : {
                "t" : 1330657451000,
                "i" : 1
            },
            "optimeDate" : ISODate("2012-03-02T03:04:11Z"),
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "192.168.198.131:10003",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 39,
            "optime" : {
                "t" : 1330657451000,
                "i" : 1
            },
            "optimeDate" : ISODate("2012-03-02T03:04:11Z"),
            "lastHeartbeat" : ISODate("2012-03-02T03:04:52Z"),
            "pingMs" : 0
        },
        {
            "_id" : 2,
            "name" : "192.168.198.129:10003",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 39,
            "optime" : {
                "t" : 1330657451000,
                "i" : 1
            },
            "optimeDate" : ISODate("2012-03-02T03:04:11Z"),
            "lastHeartbeat" : ISODate("2012-03-02T03:04:52Z"),
            "pingMs" : 0
        }
    ],
    "ok" : 1
}

Configure the config servers

192.168.198.131
# ./mongod --configsvr --dbpath /data/mongodb/config1 --port 20000 --logpath /data/mongodb/config1/config1.log --logappend --fork

192.168.198.129
# ./mongod --configsvr --dbpath /data/mongodb/config1 --port 20000 --logpath /data/mongodb/config1/config1.log --logappend --fork

192.168.198.132
# ./mongod --configsvr --dbpath /data/mongodb/config1 --port 20000 --logpath /data/mongodb/config1/config1.log --logappend --fork

Configure mongos (--chunkSize 1 sets a 1 MB chunk size, far below the 64 MB default, so the test data below splits into many chunks quickly)

# ./mongos --configdb 192.168.198.131:20000,192.168.198.129:20000,192.168.198.132:20000 --port 27017 --chunkSize 1 --logpath /data/mongodb/mongos.log --logappend --fork

Configure the shard cluster

# ./mongo --port 27017

mongos> use admin
switched to db admin

Add the shards:

mongos> db.runCommand({addshard:"shard1/192.168.198.131:10001,192.168.198.129:10001,192.168.198.132:10001"});
{ "shardAdded" : "shard1", "ok" : 1 }
mongos> db.runCommand({addshard:"shard2/192.168.198.131:10002,192.168.198.129:10002,192.168.198.132:10002"});
{ "shardAdded" : "shard2", "ok" : 1 }
mongos> db.runCommand({addshard:"shard3/192.168.198.131:10003,192.168.198.129:10003,192.168.198.132:10003"});
{ "shardAdded" : "shard3", "ok" : 1 }

List the shards:

mongos> db.runCommand({listshards:1})
{
    "shards" : [
        {
            "_id" : "shard1",
            "host" : "shard1/192.168.198.129:10001,192.168.198.131:10001,192.168.198.132:10001"
        },
        {
            "_id" : "shard2",
            "host" : "shard2/192.168.198.129:10002,192.168.198.131:10002,192.168.198.132:10002"
        },
        {
            "_id" : "shard3",
            "host" : "shard3/192.168.198.129:10003,192.168.198.131:10003,192.168.198.132:10003"
        }
    ],
    "ok" : 1
}

Enable sharding for the database:

mongos> db.runCommand({enablesharding:"test"});
{ "ok" : 1 }

The command above lets the test database span shards; without it, the database stays on a single shard. Once sharding is enabled for a database, its collections can be placed on different shards, but each individual collection still lives on a single shard. To spread a collection itself across shards, a further operation on the collection is needed.

Shard a collection:

mongos> db.runCommand({shardcollection:"test.data",key:{_id:1}})
{ "collectionsharded" : "test.data", "ok" : 1 }

A sharded collection can have only one unique index, and it must be on the shard key; other unique indexes are not allowed.
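For example, if the shard key itself must be unique, that constraint can be declared when the collection is sharded; test.users here is a hypothetical collection, and the unique option is part of the shardcollection command:

mongos> db.runCommand({shardcollection:"test.users", key:{_id:1}, unique:true})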

Check the sharding status:

mongos> printShardingStatus()
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
      { "_id" : "shard1", "host" : "shard1/192.168.198.129:10001,192.168.198.131:10001,192.168.198.132:10001" }
      { "_id" : "shard2", "host" : "shard2/192.168.198.129:10002,192.168.198.131:10002,192.168.198.132:10002" }
      { "_id" : "shard3", "host" : "shard3/192.168.198.129:10003,192.168.198.131:10003,192.168.198.132:10003" }
  databases:
      { "_id" : "admin", "partitioned" : false, "primary" : "config" }
      { "_id" : "test", "partitioned" : true, "primary" : "shard1" }
          test.data chunks:
              shard1  1
              { "_id" : { $minKey : 1 } } -->> { "_id" : { $maxKey : 1 } } on : shard1 { "t" : 1000, "i" : 0 }

mongos> use test
switched to db test
mongos> db.data.stats()
{
    "sharded" : true,
    "flags" : 1,
    "ns" : "test.data",
    "count" : 0,
    "numExtents" : 1,
    "size" : 0,
    "storageSize" : 8192,
    "totalIndexSize" : 8176,
    "indexSizes" : {
        "_id_" : 8176
    },
    "avgObjSize" : 0,
    "nindexes" : 1,
    "nchunks" : 1,
    "shards" : {
        "shard1" : {
            "ns" : "test.data",
            "count" : 0,
            "size" : 0,
            "storageSize" : 8192,
            "numExtents" : 1,
            "nindexes" : 1,
            "lastExtentSize" : 8192,
            "paddingFactor" : 1,
            "flags" : 1,
            "totalIndexSize" : 8176,
            "indexSizes" : {
                "_id_" : 8176
            },
            "ok" : 1
        }
    },
    "ok" : 1
}

Test: insert a large amount of data

mongos> for (var i=1;i<=500000;i++) db.data.save({_id:i,value:"www.strongd.net"})

mongos> printShardingStatus()
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
      { "_id" : "shard1", "host" : "shard1/192.168.198.129:10001,192.168.198.131:10001,192.168.198.132:10001" }
      { "_id" : "shard2", "host" : "shard2/192.168.198.129:10002,192.168.198.131:10002,192.168.198.132:10002" }
      { "_id" : "shard3", "host" : "shard3/192.168.198.129:10003,192.168.198.131:10003,192.168.198.132:10003" }
  databases:
      { "_id" : "admin", "partitioned" : false, "primary" : "config" }
      { "_id" : "test", "partitioned" : true, "primary" : "shard1" }
          test.data chunks:
              shard1  6
              shard2  5
              shard3  11
              too many chunks to print, use verbose if you want to force print

mongos> db.data.stats()
{
    "sharded" : true,
    "flags" : 1,
    "ns" : "test.data",
    "count" : 500000,
    "numExtents" : 19,
    "size" : 22000084,
    "storageSize" : 43614208,
    "totalIndexSize" : 14062720,
    "indexSizes" : {
        "_id_" : 14062720
    },
    "avgObjSize" : 44.000168,
    "nindexes" : 1,
    "nchunks" : 22,
    "shards" : {
        "shard1" : {
            "ns" : "test.data",
            "count" : 112982,
            "size" : 4971232,
            "avgObjSize" : 44.00021242321786,
            "storageSize" : 11182080,
            "numExtents" : 6,
            "nindexes" : 1,
            "lastExtentSize" : 8388608,
            "paddingFactor" : 1,
            "flags" : 1,
            "totalIndexSize" : 3172288,
            "indexSizes" : {
                "_id_" : 3172288
            },
            "ok" : 1
        },
        "shard2" : {
            "ns" : "test.data",
            "count" : 124978,
            "size" : 5499056,
            "avgObjSize" : 44.00019203379795,
            "storageSize" : 11182080,
            "numExtents" : 6,
            "nindexes" : 1,
            "lastExtentSize" : 8388608,
            "paddingFactor" : 1,
            "flags" : 1,
            "totalIndexSize" : 3499328,
            "indexSizes" : {
                "_id_" : 3499328
            },
            "ok" : 1
        },
        "shard3" : {
            "ns" : "test.data",
            "count" : 262040,
            "size" : 11529796,
            "avgObjSize" : 44.000137383605555,
            "storageSize" : 21250048,
            "numExtents" : 7,
            "nindexes" : 1,
            "lastExtentSize" : 10067968,
            "paddingFactor" : 1,
            "flags" : 1,
            "totalIndexSize" : 7391104,
            "indexSizes" : {
                "_id_" : 7391104
            },
            "ok" : 1
        }
    },
    "ok" : 1
}

The mongodb NUMA problem

The mongod log shows the following warning:

WARNING: You are running on a NUMA machine.
We suggest launching mongod like this to avoid performance problems:
    numactl --interleave=all mongod [other options]

Solutions:

1. Prefix the original start command with numactl --interleave=all, e.g.

# numactl --interleave=all ${MONGODB_HOME}/bin/mongod --config conf/mongodb.conf

2. Adjust the kernel parameter:

echo 0 > /proc/sys/vm/zone_reclaim_mode
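To make that kernel setting survive a reboot, it can also be added to /etc/sysctl.conf (a small sketch, not part of the original steps):

echo "vm.zone_reclaim_mode = 0" >> /etc/sysctl.conf
sysctl -p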

http://www.mongodb.org/display/DOCS/NUMA

The notes below are reposted from the web.

I. NUMA and SMP

NUMA and SMP are two CPU-related hardware architectures. In an SMP architecture all CPUs contend for a single bus to access all of the memory; the advantage is shared resources, the drawback is heavy bus contention. As the number of CPUs in PC servers grew (not just the number of cores), the downside of bus contention became more and more obvious, so Intel introduced the NUMA architecture with the Nehalem CPUs, and AMD shipped Opteron CPUs built on the same idea.

NUMA's defining feature is the introduction of nodes and distances. The two most precious hardware resources, CPU and memory, are partitioned into resource groups (nodes) in an almost strict way, with each node holding roughly the same amount of CPU and memory. The number of nodes depends on the number of physical CPUs (most current PC servers have two physical CPUs with four cores each); the distance describes the cost of reaching resources on another node and feeds the scheduling and optimization algorithms.
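The nodes and the distance matrix on a concrete machine can be inspected with numactl; the output below is illustrative, from a hypothetical two-socket box:

# numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3
node 0 size: 16373 MB
node 0 free: 2048 MB
node 1 cpus: 4 5 6 7
node 1 size: 16384 MB
node 1 free: 8192 MB
node distances:
node   0   1
  0:  10  21
  1:  21  10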

II. NUMA-related policies

1. Every process (or thread) inherits its NUMA policy from its parent and is assigned a preferred node. If the NUMA policy allows it, the process can use resources on other nodes.

2. The CPU allocation policies are cpunodebind and physcpubind. cpunodebind restricts a process to certain nodes, while physcpubind pins it more precisely to specific cores.

3. The memory allocation policies are localalloc, preferred, membind and interleave. localalloc makes the process allocate memory from its current node; preferred loosely designates a recommended node, and the process may try other nodes if the recommended one runs short. membind limits allocation to a specified set of nodes, and interleave spreads allocations across the specified nodes in round-robin fashion (see the examples after this list).
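A few examples of how these policies are expressed on the numactl command line; the node numbers and the placeholder command are illustrative:

# numactl --cpunodebind=0 --localalloc <command>    # run on node 0, allocate from the local node
# numactl --physcpubind=0,1,2,3 <command>           # pin to specific cores
# numactl --preferred=1 <command>                   # prefer node 1, fall back to other nodes
# numactl --membind=0,1 <command>                   # allocate only from nodes 0 and 1
# numactl --interleave=all <command>                # round-robin allocations across all nodes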

III. NUMA and swap

As you may have noticed, NUMA's memory allocation policies are not fair across processes (or threads). On current Red Hat Linux, localalloc is the default NUMA memory policy, and this setting makes it easy for a resource-hungry program to exhaust a single node's memory. When a node's memory is exhausted and Linux happens to place a memory-hungry process (or thread) on that node, swapping inevitably follows, even though plenty of page cache could still be freed and there may even be plenty of free memory elsewhere.

IV. Fixing the swap problem

Although NUMA itself is fairly involved, fixing the swap problem is actually simple: just change the NUMA policy with numactl --interleave before starting MySQL (the same applies to mongod here).

It is worth noting that numactl can do more than adjust the NUMA policy: it can also show how each node's resources are currently being used, which makes it a command well worth studying.