Oracle Solaris 11.4 ZFS Device Removal Example
One of the new features in the recent Solaris 11.4 release (which really rocks) is ZFS Device Removal. Below I am going to demonstrate one example of how you can use ZFS Device Removal. The example shows how to migrate a pool from raidz1 to a mirrored pool.

First, let's create a test directory by running the below.

mkdir test && cd test

Next, let's create the test files to use in this exercise.
for i in {1..7}; do mkfile 175m file$i; done

Next, let's create a test pool.
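A side note before we continue: mkfile is Solaris-specific. If you are following along on a non-Solaris system, an equivalent sparse-file setup might look like the sketch below (my suggested substitute, not part of the original walkthrough).

```shell
# Create seven 175 MB sparse backing files, a portable alternative to
# Solaris mkfile: dd with count=0 seeks to the target offset instead of
# writing zeros, so the files are created instantly.
mkdir -p test && cd test
for i in 1 2 3 4 5 6 7; do
  dd if=/dev/zero of="file$i" bs=1 count=0 seek=$((175 * 1024 * 1024)) 2>/dev/null
done
ls -l file1   # reported size: 183500800 bytes (175 * 1024 * 1024)
```

With the files in place, continue with the pool creation below.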
zpool create testPool raidz1 /root/test/file1 /root/test/file2 /root/test/file3

Let's look at the newly created raidz1 pool.
zpool status testPool
  pool: testPool
 state: ONLINE
  scan: none requested
config:

        NAME                  STATE     READ WRITE CKSUM
        testPool              ONLINE       0     0     0
          raidz1-0            ONLINE       0     0     0
            /root/test/file1  ONLINE       0     0     0
            /root/test/file2  ONLINE       0     0     0
            /root/test/file3  ONLINE       0     0     0

errors: No known data errors

The goal of the next exercise is to convert testPool from raidz1 to a mirrored configuration. To accomplish that, we are going to add two new mirrors to the existing pool.
zpool add testPool mirror /root/test/file4 /root/test/file5 mirror /root/test/file6 /root/test/file7
vdev verification failed: use -f to override the following errors:
mismatched replication level: pool uses raidz and new vdev is mirror
Unable to build pool from specified devices: invalid vdev configuration

Running the above gives you a warning not to mix RAID types, which is typically not good practice in a normal environment. Since mixing the two layouts is a prerequisite for this migration/removal, let's force-add the new mirrors by passing -f.
zpool add -f testPool mirror /root/test/file4 /root/test/file5 mirror /root/test/file6 /root/test/file7

Let's take a look at the pool.
zpool status testPool
  pool: testPool
 state: ONLINE
  scan: none requested
config:

        NAME                  STATE     READ WRITE CKSUM
        testPool              ONLINE       0     0     0
          raidz1-0            ONLINE       0     0     0
            /root/test/file1  ONLINE       0     0     0
            /root/test/file2  ONLINE       0     0     0
            /root/test/file3  ONLINE       0     0     0
          mirror-1            ONLINE       0     0     0
            /root/test/file4  ONLINE       0     0     0
            /root/test/file5  ONLINE       0     0     0
          mirror-2            ONLINE       0     0     0
            /root/test/file6  ONLINE       0     0     0
            /root/test/file7  ONLINE       0     0     0

errors: No known data errors

As you can see from the zpool status output above, the pool now contains a mix of raidz and mirrors. We are now ready for the prime-time test, so let's remove the raidz RAID set. You do that by simply running the below.
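As an aside before the removal: the remaining mirrors must have enough free capacity to absorb the data that lives on the raidz vdev. A rough back-of-the-envelope check for this particular layout (my arithmetic, ignoring ZFS metadata overhead; not taken from the zpool output):

```shell
# Rough usable-capacity comparison for this test layout.
FILE_MB=175
RAIDZ1_USABLE=$(( (3 - 1) * FILE_MB ))   # raidz1 of 3 devices: n-1 data devices
MIRROR_USABLE=$(( 2 * FILE_MB ))         # two 2-way mirrors: one device's worth each
echo "raidz1 usable:  ${RAIDZ1_USABLE} MB"
echo "mirrors usable: ${MIRROR_USABLE} MB"
```

Since the mirrored side offers at least as much usable space as the raidz side, the removal has room to evacuate the data. With that confirmed, proceed with the removal below.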
zpool remove testPool raidz1-0

Now, let's take a look at the zpool status. As you can see below, we are left with only the mirrored configuration.
zpool status testPool
  pool: testPool
 state: ONLINE
  scan: resilvered 17.5K in 1s with 0 errors on Tue Aug 28 12:33:14 2018
config:

        NAME                  STATE     READ WRITE CKSUM
        testPool              ONLINE       0     0     0
          mirror-1            ONLINE       0     0     0
            /root/test/file4  ONLINE       0     0     0
            /root/test/file5  ONLINE       0     0     0
          mirror-2            ONLINE       0     0     0
            /root/test/file6  ONLINE       0     0     0
            /root/test/file7  ONLINE       0     0     0

errors: No known data errors

That's all it takes to trigger the ZFS device removal option.

Cleaning up: just run the below to destroy the pool and remove the test files.
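One optional verification before cleaning up: you can confirm the raidz vdev is really gone by grepping the status output. The sketch below runs against a captured copy of the output above (a hypothetical helper I added for illustration); on a live system you would pipe `zpool status testPool` in instead.

```shell
# Check a zpool status listing for any remaining raidz vdev.
# $status holds the captured device listing from the walkthrough above.
status='testPool            ONLINE
  mirror-1          ONLINE
    /root/test/file4 ONLINE
    /root/test/file5 ONLINE
  mirror-2          ONLINE
    /root/test/file6 ONLINE
    /root/test/file7 ONLINE'
if printf '%s\n' "$status" | grep -q 'raidz'; then
  echo "raidz vdev still present (removal may still be evacuating data)"
else
  echo "removal complete: only mirrors remain"
fi
```

Then proceed with the cleanup commands below.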
zpool destroy testPool
rm file[1-7]

You might also like: Articles related to Oracle Solaris 11.4/Solaris 12. Like what you're reading? Please provide feedback; any feedback is appreciated.
Hey Eli, great blog about Solaris ZFS device removal. It works great with real devices too.
Thanks, Cindy
Great article! I heard that even when a pool holds data, ZFS redistributes that data to the remaining devices during the removal process, provided there is enough free space on them.