Summary Reference: Managing ZFS

This article summarizes operations but does not replace a good reference such as Oracle’s or FreeBSD’s. For building your own ZFS file server, refer to our post here.

CREATING A STORAGE POOL

The command for creating a storage pool with a mirrored set of vdevs is as follows:

$ zpool create <poolname> mirror <vdev1> <vdev2>
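
As a minimal example, assuming two unused disks with hypothetical FreeBSD-style names /dev/ada1 and /dev/ada2, and a pool named tank:

$ zpool create tank mirror /dev/ada1 /dev/ada2   # create a pool from a two-disk mirror
$ zpool status tank                              # verify the layout and health of the new pool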

CREATING A DATASET

Once a pool is created, we must create and configure a dataset:

$ zfs create <poolname>/<datasetname>
$ zfs set mountpoint=<mountingpoint> <poolname>/<datasetname>
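
For instance, assuming a hypothetical pool tank and a dataset data to be mounted at /srv/data:

$ zfs create tank/data
$ zfs set mountpoint=/srv/data tank/data
$ zfs list tank/data                             # confirm the dataset and its mount point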

Other properties besides mountpoint are set the same way, for instance share.nfs=on to enable NFS sharing, or quota=10G to set a 10 GB quota. To read a property, use get followed by the property name.
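
Continuing the hypothetical tank/data example, a quota could be set and read back like this:

$ zfs set quota=10G tank/data                    # limit the dataset to 10 GB
$ zfs get quota tank/data                        # read a single property
$ zfs get all tank/data                          # list every property of the dataset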

ADDING CAPACITY

The capacity of a pool can be increased dynamically by adding vdevs at the top level. For instance, the following adds a new mirrored pair to an existing pool:

$ zpool add <poolname> mirror <vdev3> <vdev4>
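
Continuing the hypothetical tank example with two more spare disks:

$ zpool add tank mirror /dev/ada3 /dev/ada4      # add a second mirrored pair
$ zpool list tank                                # the extra capacity is available immediately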

A log or cache can be added to an existing pool as follows (note that a log device can be mirrored, but cache devices cannot be and are simply striped):

$ zpool add <poolname> log mirror <vdev4> <vdev5>
$ zpool add <poolname> cache <vdev6> <vdev7>

You can perform a dry run using the -n option of the zpool add command. Also note that in the above commands, mirror simply stands for a higher-level vdev in the vdev tree, which happens to be of mirror type.
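
For example, previewing the addition of a mirrored log device to the hypothetical tank pool without actually committing it:

$ zpool add -n tank log mirror /dev/ada5 /dev/ada6   # -n prints the resulting layout but changes nothing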

After a vdev is added to a redundant set (see attach and replace below), the system automatically “resilvers” the new disk so that it exactly mirrors the existing one(s). Let this operation complete before adding the next disk.
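
Resilver progress can be followed with zpool status, e.g. on the hypothetical tank pool:

$ zpool status tank                              # shows resilver progress and an estimate of the time remaining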

A new vdev can be attached to an existing vdev (for example, turning a single disk into a mirror) using the attach command:

$ zpool attach <poolname> <existing vdev> <vdev to be attached>
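
For instance, turning a hypothetical single-disk pool tank based on /dev/ada1 into a two-way mirror:

$ zpool attach tank /dev/ada1 /dev/ada2          # /dev/ada2 is resilvered to mirror /dev/ada1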

When adding or replacing disks or spares, make sure that the new disk has a capacity equal to or greater than the smallest disk in the pool. Beware that a nominally identical 4 TB disk from one vendor can have a slightly different capacity than one from another vendor. To replace a disk with a new one, use:

$ zpool replace <poolname> <vdevtoreplace> <newvdev>
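
As a hypothetical example, replacing disk /dev/ada2 of the tank pool with a new disk /dev/ada7:

$ zpool replace tank /dev/ada2 /dev/ada7         # starts a resilver onto the new disk
$ zpool status tank                              # wait for the resilver to finish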

SPLITTING AND MOVING POOLS

To remove vdevs, use remove and refer to the redundant vdev by name, such as mirror-1, or, if not redundant, by the individual vdev name:

$ zpool remove <poolname> mirror-1
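
As a hypothetical example, removing a non-redundant cache device from the tank pool:

$ zpool remove tank /dev/ada6                    # cache and log devices can be removed at any time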

Redundant vdevs can also be split apart into two separate pools that are replicas of each other. By default, the last disk of each top-level vdev is split off; to control which disks are split off, list them at the end of the command below:

$ zpool split <poolname> <newpoolname>
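
For instance, splitting the hypothetical tank pool so that its /dev/ada2 and /dev/ada4 halves form a new pool tank2:

$ zpool split tank tank2 /dev/ada2 /dev/ada4     # the listed disks go into the new pool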

The split-off vdevs are detached and given a new pool GUID, so that the new pool can be imported on the same system under a new name, or imported on another system. An entire pool can also be exported, for import on another host:

$ zpool export <poolname>

Import a specific pool with:

$ zpool import <newpoolname>

If this results in a mount-point conflict, import with the -R switch followed by a new mount point (alternate root). The -R switch can also be used at the time of the split. Note that splitting and importing on another system can be used to perform backups. After completion, the temporary pool can be destroyed and its vdevs re-attached in the original system.
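
A hypothetical round trip, using an alternate root on the receiving host to avoid mount-point conflicts:

$ zpool export tank                              # on the original host
$ zpool import                                   # on the new host: list pools available for import
$ zpool import -R /mnt tank                      # import with all mount points relocated under /mnt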

Removing a device from a pool and subsequently placing it back is handled automatically, without the need for any intervention. For example, a stand-by spare vdev is brought into service and released automatically.
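
For example, a hot spare could be added to the hypothetical tank pool like this:

$ zpool add tank spare /dev/ada8                 # kept on stand-by until a disk in the pool faults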

OTHER PROCEDURES

Refer to our other articles for certain other useful operations:

In-Service Larger Disk Upgrade

REFERENCES

All about ZFS (FreeBSD)
Oracle Solaris ZFS Data Management (Oracle; PDF)
Solaris 11.3 ZFS (Oracle)
System Administration (OpenZFS)
Hardware (OpenZFS)

Disclaimer: We are not responsible for any data loss as the result of reading our article. We stress that you must confirm the accuracy of the information in this article yourself and test your server thoroughly before placing any critical data on it.
updated: 20170129; 20170309; 20170704
photo: none
