ZFS Installation and Configuration on the SFA Server


Host: pagg04. Version 2 of this page.

Copy installation scripts:

[root@pagg04 ~]# git clone <TBD_URL> zfs-install

For now, I cloned from a local directory:
[root@pagg04 ~]# git clone zfs-install-save zfs-install

[root@pagg04 ~]# cd zfs-install
[root@pagg04 zfs-install]# ls -l
total 16
drwxr-xr-x 2 root root 4096 Apr 29 17:40 files
drwxr-xr-x 2 root root 4096 Apr 29 17:40 install
drwxr-xr-x 2 root root 4096 Apr 29 17:48 logs
drwxr-xr-x 2 root root 4096 Apr 29 17:40 scripts

Install RPMs

Group install "Development tools":

# ./install/install.devtools.sh
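
The wrapper presumably amounts to a single yum call (a sketch, not the script's verbatim contents):

# yum -y groupinstall "Development tools"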

Install extra tools (screen, emacs, lsscsi):

# ./install/install.extras.sh
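
Likewise a thin wrapper around yum (a sketch):

# yum -y install screen emacs lsscsi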

Install yum repositories: zfs-release
Install dkms from epel
Install kernel-devel and zfs from zfs-release

# ./install/install.zfsrepo.sh

This installs the repositories AND zfs itself.
The repositories need to be installed only once (epel-release-6-8.noarch.rpm).
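
Roughly, the script amounts to the following (a sketch; the zfs-release rpm URL is an assumption based on the zfsonlinux.org EPEL instructions of that era, not taken from the script):

# rpm -Uvh epel-release-6-8.noarch.rpm
# rpm -Uvh http://archive.zfsonlinux.org/epel/zfs-release.el6.noarch.rpm
# yum -y install dkms                # dkms comes from epel
# yum -y install kernel-devel zfs    # zfs and its libraries come from zfs-release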

You can check for updates to spl and zfs later:

# yum list zfs spl

0 packages excluded due to repository protections
Installed Packages
spl.x86_64                                                           0.6.5.3-1.el6                                                           @zfs
zfs.x86_64                                                           0.6.5.3-1.el6                                                           @zfs
Available Packages
spl.x86_64                                                           0.6.5.4-1.el6                                                           zfs 
zfs.x86_64                                                           0.6.5.4-1.el6                                                           zfs 
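
To pull in the newer versions (standard yum usage):

# yum -y update spl zfs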

The update pulls in several dependent packages:

=====================================================================================================================================================================
 Package                                    Arch                                Version                                       Repository                        Size
=====================================================================================================================================================================
Updating:
 spl                                        x86_64                              0.6.5.4-1.el6                                 zfs                               26 k
 zfs                                        x86_64                              0.6.5.4-1.el6                                 zfs                              324 k
Updating for dependencies:
 libnvpair1                                 x86_64                              0.6.5.4-1.el6                                 zfs                               28 k
 libuutil1                                  x86_64                              0.6.5.4-1.el6                                 zfs                               33 k
 libzfs2                                    x86_64                              0.6.5.4-1.el6                                 zfs                              113 k
 libzfs2-devel                              x86_64                              0.6.5.4-1.el6                                 zfs                              291 k
 libzpool2                                  x86_64                              0.6.5.4-1.el6                                 zfs                              402 k
 spl-dkms                                   noarch                              0.6.5.4-1.el6                                 zfs                              449 k
 zfs-dkms                                   noarch                              0.6.5.4-1.el6                                 zfs                              1.9 M
 zfs-test                                   x86_64                              0.6.5.4-1.el6                                 zfs                               46 k

Transaction Summary
=====================================================================================================================================================================
Upgrade      10 Package(s)

Set the zfs version in scripts/build-zfs.sh, then build spl and zfs:

# ./scripts/build-zfs.sh

The script calls configure and make. It produces a lot of output and takes some time (10-15 minutes, IIRC). Each build ends with:
DKMS: build completed.
...
DKMS: install completed.
...
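
For orientation, the script presumably boils down to dkms calls like these, which in turn run configure and make (a sketch; the version variable and its value are assumptions):

# VER=0.6.5.6                                  # zfs/spl version set at the top of the script
# dkms build spl/$VER && dkms install spl/$VER
# dkms build zfs/$VER && dkms install zfs/$VER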

I triggered a rebuild for the current kernel (you may specify a kernel version explicitly after -k):

[root@pagg04 ~]# dkms install spl/0.6.5.6 -k "$(uname -r)"
[root@pagg04 ~]# dkms install zfs/0.6.5.6 -k "$(uname -r)"
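
Check that both modules are registered as installed and that zfs loads (standard dkms/modprobe usage):

# dkms status
# modprobe zfs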

Configure aliases for drives in the enclosure(s) to be used by ZFS

# cd files

Find the PCI addresses of the RAID cards and update files/vdev_id.sas.conf accordingly:


[root@pagg04 files]# lspci | fgrep SCSI
06:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05)
83:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05)
[root@pagg04 files]# 

[root@pagg04 files]# cat vdev_id.sas.conf 
# Two LSI raid cards serving two enclosures
#
multipath     no
topology      sas_direct
phys_per_port 4
#       PCI_ID  HBA PORT  CHANNEL NAME
channel 06:00.0 0         e2s
channel 83:00.0 0         e3s

This maps drives connected to the card in PCI slot 06:00.0, port 0, to names of the form "Enclosure 2 Slot NN" (e2s12 and so forth); drives on the second card map to e3sNN.

Alternatively, you can create vdev_id.conf with explicit per-drive aliases by running sas2ircu and parsing its output with scripts/make_vdev.awk (described in an older version of this installation guide). I saved a copy of that vdev_id.conf as vdev_id.alias.conf.
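
For reference, an alias-style vdev_id.conf contains one line per drive (a sketch with placeholder WWNs, not the actual file):

#     NAME   DEVICE PATH
alias e2s0   /dev/disk/by-id/wwn-0x5000c500xxxxxxxx
alias e2s1   /dev/disk/by-id/wwn-0x5000c500yyyyyyyy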

# cp -p vdev_id.sas.conf /etc/zfs/vdev_id.conf

Run udevadm trigger to create the disk aliases in /dev/disk/by-vdev:

# udevadm trigger --verbose  --type=devices --subsystem-match=block > ../logs/udevadmin.trigger.out

Note: a plain "udevadm trigger" hangs the terminal on pagg04. I do not observe this elsewhere.
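
If the links do not show up immediately, udevadm settle waits for the udev event queue to drain:

# udevadm settle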

udevadm created aliases in /dev/disk/by-vdev for all drives connected to the RAID cards:

[root@pagg04 zfs-install]# ls -fl /dev/disk/by-vdev/e2s{0..99} 2> /dev/null 
lrwxrwxrwx 1 root root 9 Apr 28 19:33 /dev/disk/by-vdev/e2s0 -> ../../sdb
lrwxrwxrwx 1 root root 9 Apr 28 19:33 /dev/disk/by-vdev/e2s1 -> ../../sdc
lrwxrwxrwx 1 root root 9 Apr 28 19:33 /dev/disk/by-vdev/e2s2 -> ../../sdd
lrwxrwxrwx 1 root root 9 Apr 28 19:33 /dev/disk/by-vdev/e2s3 -> ../../sde
lrwxrwxrwx 1 root root 9 Apr 28 19:33 /dev/disk/by-vdev/e2s4 -> ../../sdf
lrwxrwxrwx 1 root root 9 Apr 28 19:33 /dev/disk/by-vdev/e2s5 -> ../../sdg
lrwxrwxrwx 1 root root 9 Apr 28 19:33 /dev/disk/by-vdev/e2s6 -> ../../sdh
lrwxrwxrwx 1 root root 9 Apr 28 19:33 /dev/disk/by-vdev/e2s7 -> ../../sdi
lrwxrwxrwx 1 root root 9 Apr 28 19:33 /dev/disk/by-vdev/e2s8 -> ../../sdj
lrwxrwxrwx 1 root root 9 Apr 28 19:33 /dev/disk/by-vdev/e2s9 -> ../../sdk
lrwxrwxrwx 1 root root 9 Apr 28 19:33 /dev/disk/by-vdev/e2s10 -> ../../sdl
lrwxrwxrwx 1 root root 9 Apr 28 19:33 /dev/disk/by-vdev/e2s11 -> ../../sdm
lrwxrwxrwx 1 root root 9 Apr 28 19:33 /dev/disk/by-vdev/e2s12 -> ../../sdn
lrwxrwxrwx 1 root root 9 Apr 28 19:33 /dev/disk/by-vdev/e2s13 -> ../../sdo
lrwxrwxrwx 1 root root 9 Apr 28 19:33 /dev/disk/by-vdev/e2s21 -> ../../sdp
lrwxrwxrwx 1 root root 9 Apr 28 19:33 /dev/disk/by-vdev/e2s22 -> ../../sdq
lrwxrwxrwx 1 root root 9 Apr 28 19:33 /dev/disk/by-vdev/e2s23 -> ../../sdr

[root@pagg04 zfs-install]# ls -fl /dev/disk/by-vdev/e3s{0..99} 2> /dev/null 
lrwxrwxrwx 1 root root  9 Apr 28 19:33 /dev/disk/by-vdev/e3s2 -> ../../sds
lrwxrwxrwx 1 root root  9 Apr 28 19:33 /dev/disk/by-vdev/e3s3 -> ../../sdt
lrwxrwxrwx 1 root root  9 Apr 28 19:33 /dev/disk/by-vdev/e3s4 -> ../../sdu
lrwxrwxrwx 1 root root  9 Apr 28 19:33 /dev/disk/by-vdev/e3s10 -> ../../sdv
lrwxrwxrwx 1 root root  9 Apr 28 19:33 /dev/disk/by-vdev/e3s11 -> ../../sdw
lrwxrwxrwx 1 root root  9 Apr 28 19:33 /dev/disk/by-vdev/e3s12 -> ../../sdx
lrwxrwxrwx 1 root root  9 Apr 28 19:33 /dev/disk/by-vdev/e3s13 -> ../../sdy
lrwxrwxrwx 1 root root  9 Apr 28 19:33 /dev/disk/by-vdev/e3s14 -> ../../sdz
lrwxrwxrwx 1 root root 10 Apr 28 19:33 /dev/disk/by-vdev/e3s15 -> ../../sdaa
lrwxrwxrwx 1 root root 10 Apr 28 19:33 /dev/disk/by-vdev/e3s16 -> ../../sdab
lrwxrwxrwx 1 root root 10 Apr 28 19:33 /dev/disk/by-vdev/e3s17 -> ../../sdac
lrwxrwxrwx 1 root root 10 Apr 28 19:33 /dev/disk/by-vdev/e3s18 -> ../../sdad
lrwxrwxrwx 1 root root 10 Apr 28 19:33 /dev/disk/by-vdev/e3s19 -> ../../sdae
lrwxrwxrwx 1 root root 10 Apr 28 19:33 /dev/disk/by-vdev/e3s20 -> ../../sdaf
lrwxrwxrwx 1 root root 10 Apr 28 19:33 /dev/disk/by-vdev/e3s21 -> ../../sdag
lrwxrwxrwx 1 root root 10 Apr 28 19:33 /dev/disk/by-vdev/e3s22 -> ../../sdah
lrwxrwxrwx 1 root root 10 Apr 28 19:33 /dev/disk/by-vdev/e3s23 -> ../../sdai

It may be worth reloading the spl and zfs modules, or rebooting the node.

Create ZFS pools and ZFS filesystems

Create ZFS pools (zpools) and ZFS filesystems for the HDDs and SSDs. The disk/RAID configuration is set in scripts/make-zpool.sh.
The present configuration is raidz2 (12+2) for HDD and raidz2 (4+2) for SSD. Drives within each vdev alternate between the two RAID cards/enclosures.

# ./scripts/make-zpool.sh 
# ./scripts/make-zfs.sh
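
For orientation, the two scripts boil down to zpool create and zfs create calls like the following (a sketch reconstructed from the zpool list -v and zfs list output below; ashift and other options are assumptions, not the scripts' verbatim contents):

# zpool create -o ashift=12 -m /volumes/zp zp \
    raidz2 e2s0 e3s10 e2s1 e3s11 e2s2 e3s12 e2s3 e3s13 \
           e2s4 e3s14 e2s5 e3s15 e2s6 e3s16 \
    raidz2 e2s7 e3s17 e2s8 e3s18 e2s9 e3s19 e2s10 e3s20 \
           e2s11 e3s21 e2s12 e3s22 e2s13 e3s23
# zpool create -o ashift=12 -m /volumes/zssd zssd \
    raidz2 e2s21 e3s2 e2s22 e3s3 e2s23 e3s4
# zfs create zp/sfa
# zfs create -o mountpoint=/volumes/aggread  zp/sfa/aggread
# zfs create -o mountpoint=/volumes/aggwrite zp/sfa/aggwrite
# zfs create zssd/sfa
# zfs create -o mountpoint=/volumes/aggpack  zssd/sfa/aggpack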

We created zpools:

[root@pagg04 zfs-install]# zpool list
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zp    22.9T  2.14M  22.9T         -     0%     0%  1.00x  ONLINE  -
zssd  2.17T   346K  2.17T         -     0%     0%  1.00x  ONLINE  -

[root@pagg04 zfs-install]# zpool list -v
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zp  22.9T  2.14M  22.9T         -     0%     0%  1.00x  ONLINE  -
  raidz2  11.4T   924K  11.4T         -     0%     0%
    e2s0      -      -      -         -      -      -
    e3s10      -      -      -         -      -      -
    e2s1      -      -      -         -      -      -
    e3s11      -      -      -         -      -      -
    e2s2      -      -      -         -      -      -
    e3s12      -      -      -         -      -      -
    e2s3      -      -      -         -      -      -
    e3s13      -      -      -         -      -      -
    e2s4      -      -      -         -      -      -
    e3s14      -      -      -         -      -      -
    e2s5      -      -      -         -      -      -
    e3s15      -      -      -         -      -      -
    e2s6      -      -      -         -      -      -
    e3s16      -      -      -         -      -      -
  raidz2  11.4T  1.24M  11.4T         -     0%     0%
    e2s7      -      -      -         -      -      -
    e3s17      -      -      -         -      -      -
    e2s8      -      -      -         -      -      -
    e3s18      -      -      -         -      -      -
    e2s9      -      -      -         -      -      -
    e3s19      -      -      -         -      -      -
    e2s10      -      -      -         -      -      -
    e3s20      -      -      -         -      -      -
    e2s11      -      -      -         -      -      -
    e3s21      -      -      -         -      -      -
    e2s12      -      -      -         -      -      -
    e3s22      -      -      -         -      -      -
    e2s13      -      -      -         -      -      -
    e3s23      -      -      -         -      -      -
zssd  2.17T   346K  2.17T         -     0%     0%  1.00x  ONLINE  -
  raidz2  2.17T   346K  2.17T         -     0%     0%
    e2s21      -      -      -         -      -      -
    e3s2      -      -      -         -      -      -
    e2s22      -      -      -         -      -      -
    e3s3      -      -      -         -      -      -
    e2s23      -      -      -         -      -      -
    e3s4      -      -      -         -      -      -

ZFS filesystems:

[root@pagg04 zfs-install]# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
zp                1.53M  18.2T   236K  /volumes/zp
zp/sfa             709K  18.2T   236K  /volumes/zp/sfa
zp/sfa/aggread     236K  18.2T   236K  /volumes/aggread
zp/sfa/aggwrite    236K  18.2T   236K  /volumes/aggwrite
zssd               192K  1.40T  32.0K  /volumes/zssd
zssd/sfa          63.9K  1.40T  32.0K  /volumes/zssd/sfa
zssd/sfa/aggpack  32.0K  1.40T  32.0K  /volumes/aggpack
[root@pagg04 zfs-install]#

ZFS filesystems are mounted automatically when they are created.

We use ZFS-managed mounts for the ZFS filesystems. If you wish to use legacy mount points instead, add entries to /etc/fstab:

zp/sfa/aggwrite    /volumes/aggwrite     zfs     noatime,xattr   0 0
zp/sfa/aggread        /volumes/aggread     zfs     noatime,xattr   0 0
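
Note that mounting via fstab only works for filesystems whose mountpoint property is set to legacy first:

# zfs set mountpoint=legacy zp/sfa/aggwrite
# zfs set mountpoint=legacy zp/sfa/aggread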

Try watching pool I/O (the -y flag omits the initial report of cumulative statistics since boot):

[root@pagg04 zfs-install]# zpool iostat -y 5
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zp          2.14M  22.9T      0      0      0      0
zssd         346K  2.17T      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
zp          2.14M  22.9T      0      0      0      0
zssd         346K  2.17T      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
^C

Misc

Configuration files are saved in:

    zfs-install/files