From the zpool man page:
A pool can have any number of virtual devices at the top of the configuration (known as "root vdevs"). Data is dynamically distributed across all top-level devices to balance data among devices. As new virtual devices are added, ZFS automatically places data on the newly available devices.
So, to add some capacity to our pool, we can add one more raidz vdev to it. Let's look at this process. First, let's create six 100 MB file-backed "disks" and build a test pool from the first three of them:
freebsd# foreach i ( 1 2 3 4 5 6)
foreach? dd if=/dev/zero of=/tmp/disk$i bs=100M count=1
foreach? end
freebsd# zpool create testpool raidz /tmp/disk1 /tmp/disk2 /tmp/disk3
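A side note: the dd-created files above occupy the full 600 MB in /tmp. If that matters, sparse files work just as well for this kind of test; a quick alternative sketch (not used in the rest of this post):

freebsd# foreach i ( 1 2 3 4 5 6 )
foreach? truncate -s 100M /tmp/disk$i
foreach? end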
Now, let's see what we have:
freebsd# zpool status testpool
  pool: testpool
 state: ONLINE
 scrub: none requested
config:

        NAME            STATE     READ WRITE CKSUM
        testpool        ONLINE       0     0     0
          raidz1        ONLINE       0     0     0
            /tmp/disk1  ONLINE       0     0     0
            /tmp/disk2  ONLINE       0     0     0
            /tmp/disk3  ONLINE       0     0     0

errors: No known data errors
freebsd# zpool list testpool
NAME      SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
testpool  286M   156K   286M   0%  ONLINE  -
freebsd# zfs list testpool
NAME       USED  AVAIL  REFER  MOUNTPOINT
testpool  95.9K   158M  28.0K  /testpool
Now let's expand this pool:
freebsd# zpool add testpool raidz /tmp/disk4 /tmp/disk5 /tmp/disk6
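On a real system it is worth previewing such a change first: zpool add accepts a -n flag that only prints the configuration that would result, without modifying the pool:

freebsd# zpool add -n testpool raidz /tmp/disk4 /tmp/disk5 /tmp/disk6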
So, we get the following configuration:
freebsd# zpool status testpool
  pool: testpool
 state: ONLINE
 scrub: none requested
config:

        NAME            STATE     READ WRITE CKSUM
        testpool        ONLINE       0     0     0
          raidz1        ONLINE       0     0     0
            /tmp/disk1  ONLINE       0     0     0
            /tmp/disk2  ONLINE       0     0     0
            /tmp/disk3  ONLINE       0     0     0
          raidz1        ONLINE       0     0     0
            /tmp/disk4  ONLINE       0     0     0
            /tmp/disk5  ONLINE       0     0     0
            /tmp/disk6  ONLINE       0     0     0

errors: No known data errors
freebsd# zpool list testpool
NAME      SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
testpool  572M   210K   572M   0%  ONLINE  -
freebsd# zfs list testpool
NAME       USED  AVAIL  REFER  MOUNTPOINT
testpool   114K   349M  28.0K  /testpool
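This is also a good moment to see the behaviour described in the man page quote above. If we write some data into the expanded pool and then look at per-vdev statistics, the new blocks are spread across both raidz vdevs (ZFS favours the top-level vdevs with the most free space). A rough sketch, with an arbitrary file name and the iostat output omitted here:

freebsd# dd if=/dev/zero of=/testpool/somedata bs=1M count=50
freebsd# zpool iostat -v testpool

The -v flag makes zpool iostat break down capacity and I/O per vdev and per disk.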
Let's compare this to a single raidz built from all 6 disks (the previous test pool is destroyed first with zpool destroy testpool):
freebsd# zpool create testpool raidz /tmp/disk1 /tmp/disk2 /tmp/disk3 /tmp/disk4 /tmp/disk5 /tmp/disk6
freebsd# zpool status testpool
  pool: testpool
 state: ONLINE
 scrub: none requested
config:

        NAME            STATE     READ WRITE CKSUM
        testpool        ONLINE       0     0     0
          raidz1        ONLINE       0     0     0
            /tmp/disk1  ONLINE       0     0     0
            /tmp/disk2  ONLINE       0     0     0
            /tmp/disk3  ONLINE       0     0     0
            /tmp/disk4  ONLINE       0     0     0
            /tmp/disk5  ONLINE       0     0     0
            /tmp/disk6  ONLINE       0     0     0

errors: No known data errors
freebsd# zpool list testpool
NAME      SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
testpool  572M   147K   572M   0%  ONLINE  -
freebsd# zfs list testpool
NAME       USED  AVAIL  REFER  MOUNTPOINT
testpool   112K   443M  34.9K  /testpool
So, we lost a bit more than 21% of space, but expanded our pool without any downtime. At first the loss of more than 20% surprised me, but 20% was to be expected: in the second case five disks carry data, in the first only four, and (5 - 4) / 5 = 20%.
Let's see where the extra percent went. In the second case 443 MB is usable, which means about 11.4 MB of overhead per data disk (100 /*disk size*/ - 443 /*usable space*/ / 5 /*data disks*/). In the first case we have 349 MB, provided by two vdevs with 2 data disks each, so the overhead per data disk is 12.75 MB (100 - 349/2/2). It seems that in this configuration we carry a bit more metadata.
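The arithmetic is easy to double-check in the shell with bc, using the numbers from the zfs list outputs above:

freebsd# echo "scale=1; (443 - 349) * 100 / 443" | bc
21.2
freebsd# echo "scale=2; 100 - 443 / 5" | bc
11.40
freebsd# echo "scale=2; 100 - 349 / 2 / 2" | bc
12.75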