
ZFS installation - v0

pagg04:

The files I used for the ZFS installation on pagg04 are in /home/aik/zfs-install;
please make a copy of the scripts.

[root@pagg04 zfs-install]# ls -ltr
total 20
drwxr-xr-x 2 root root 4096 Sep 18 16:30 files
drwxr-xr-x 4 root root 4096 Oct 26 10:32 rpms
drwxr-xr-x 2 root root 4096 Oct 26 10:42 install
drwxr-xr-x 6 root root 4096 Nov 22 13:32 scripts

You can do (1) and (2) in any order.

1) install rpms

Install the "Development tools" group:
# ./install/install.devtools.sh

Install tools that are good to have (expect, screen, emacs, lsscsi):
# ./install/install.extras.sh
("expect" turned out not to be needed.)

Install yum repositories:
   epel         - not installed here; I did not need it
   zfs-release
Install dkms from epel.
Install kernel-devel and zfs:
# ./install/install.zfsrepo.sh
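
A rough sketch of what install.zfsrepo.sh amounts to, assuming the zfs-release repository rpm comes from the local rpms/ directory (package and path names here are illustrative; the script itself is authoritative):

# yum -y localinstall rpms/zfs-release*.noarch.rpm
# yum -y install dkms                                # from epel, if that repository is used
# yum -y install kernel-devel-$(uname -r) zfs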

This installs the repositories and zfs.
The repositories need to be installed only once.
You can check for spl and zfs updates later:
# yum list zfs spl

0 packages excluded due to repository protections
Installed Packages
spl.x86_64                                                           0.6.5.3-1.el6                                                           @zfs
zfs.x86_64                                                           0.6.5.3-1.el6                                                           @zfs
Available Packages
spl.x86_64                                                           0.6.5.4-1.el6                                                           zfs 
zfs.x86_64                                                           0.6.5.4-1.el6                                                           zfs 
[root@pagg04 scripts]# 
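
If a newer version shows up under "Available Packages" (as above), it can be pulled in later with, e.g.:

# yum update spl zfs

after which spl and zfs may need to be rebuilt, as in the build step below.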

Build spl and zfs (configure, make). This may take some time (10-15 minutes, IIRC).
# ./build-zfs.sh
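
For orientation, the build is the usual autotools sequence over the spl and zfs source trees, roughly like this (directory names and options are illustrative; build-zfs.sh is the authoritative version):

# cd spl-0.6.5.3 && ./configure && make && make install
# cd ../zfs-0.6.5.3 && ./configure --with-spl=../spl-0.6.5.3 && make && make install
# depmod -a && modprobe zfs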

2) configure aliases for drives in enclosure(s) to be used in zfs

# cd scripts

Read through the README file.

List the SAS2 configuration for controllers 0 and 1 and save it to files:

# sas2ircu 0 DISPLAY > sas.0.out.2015-11-17
# sas2ircu 1 DISPLAY > sas.1.out.2015-11-17
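
If you are not sure which controller indices exist, sas2ircu can enumerate the adapters first:

# sas2ircu LIST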

Create the zfs vdev configuration by parsing the sas2ircu output with awk:
# cat sas.[01].out.2015-11-17 | awk -f make_vdev.awk > vdev_id.conf.2015-11-17
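
To give an idea of what the parsing does, here is a simplified sketch of the kind of awk that make_vdev.awk contains: it picks the 'Enclosure #', 'Slot #', 'SAS Address' and 'Drive Type' fields out of each sas2ircu device block and prints an alias line. The PCI part of the by-path name is hardcoded (and differs per controller), as noted in the generated file; the real make_vdev.awk in scripts/ is the authoritative version.

    # simplified sketch of make_vdev.awk, not the real script
    /Enclosure #/  { enc  = $NF }
    /Slot #/       { slot = sprintf("%02d", $NF) }
    /SAS Address/  { addr = tolower($NF); gsub(/-/, "", addr) }
    /Drive Type/   { printf "alias    e%ss%s    /dev/disk/by-path/pci-0000:02:00.0-sas-0x%s-lun-0  # %s\n", enc, slot, addr, $NF }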

I edited vdev_id.conf to
a) comment out the two system disks
b) add a few comment lines to mark the system drives, SSDs, HDDs and enclosures. See the comments in the generated configuration file, the README, and the example below.

Copy the configuration to /etc/zfs:
# cp -p  vdev_id.conf.2015-11-17  /etc/zfs/vdev_id.conf

The script make.vdev_id.conf.sh creates the configuration file by running sas2ircu and parsing its output with make_vdev.awk.
The comment at the end of each line records the drive type, ID, etc.; it is not used by zfs and is only there to preserve the original configuration.
# ./make.vdev_id.conf.sh

The script creates convenient aliases for the drive names, e.g.:
   alias e2s08   /dev/disk/by-path/pci-0000:02:00.0-sas-0x5000cca0711c6995-lun-0 
which names disk drive 8 in enclosure 2, and so forth.

Run udevadm to create the disk aliases in /dev/disk/by-vdev:
# udevadm trigger

[root@pagg04 scripts]# ls -l /dev/disk/by-vdev | fgrep -v part
total 0
lrwxrwxrwx 1 root root  9 Nov 22 13:32 e2s02 -> ../../sdc
lrwxrwxrwx 1 root root  9 Nov 22 13:32 e2s03 -> ../../sdd
lrwxrwxrwx 1 root root  9 Nov 22 13:32 e2s04 -> ../../sde
lrwxrwxrwx 1 root root  9 Nov 20 17:02 e2s10 -> ../../sdk
...

3) create zfs pools and zfs filesystems.

Create zfs pools (zpools) for the HDDs and SSDs:
# ./make-zpool.sh 
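
The pool layout shown by zpool list -v below corresponds to zpool create commands along these lines; this is only a sketch using the vdev_id.conf aliases, and any extra options (ashift, etc.) are whatever make-zpool.sh actually sets:

# zpool create -m /volumes/zp zp \
      raidz2 e2s10 e3s00 e2s11 e3s01 e2s12 e3s02 e2s13 e3s03 e2s14 e3s04 e2s15 e3s05 e2s16 e3s06 \
      raidz2 e2s17 e3s07 e2s18 e3s08 e2s19 e3s09 e2s20 e3s10 e2s21 e3s11 e2s22 e3s12 e2s23 e3s13
# zpool create -m /volumes/zssd zssd \
      raidz2 e2s02 e3s21 e2s03 e3s22 e2s04 e3s23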

Create zfs filesystems:
# ./make-zfs.sh

The zfs filesystems are mounted automatically when they are created.
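
For orientation, the dataset layout in zfs list below corresponds to zfs create commands roughly like these (any property settings such as atime, xattr or compression are left out here; make-zfs.sh has the real settings). The tests datasets were created the same way:

# zfs create zp/sfa
# zfs create -o mountpoint=/volumes/aggread   zp/sfa/aggread
# zfs create -o mountpoint=/volumes/aggwrite  zp/sfa/aggwrite
# zfs create zssd/sfa
# zfs create -o mountpoint=/volumes/aggpack   zssd/sfa/aggpack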

Files:
    A saved copy of vdev_id.conf is kept in zfs-install/files.

We use zfs mounts for the zfs filesystems. If you wish to use legacy mount points instead, add entries like these to /etc/fstab:
# sfa legacy mount points - using zfs mount instead
#zp/sfa/aggwrite    /volumes/aggwrite     zfs     noatime,xattr   0 0
#zp/sfa/aggread        /volumes/aggread     zfs     noatime,xattr   0 0
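
Note that a dataset can only be mounted from fstab like this after its mountpoint property has been switched to legacy, e.g.:

# zfs set mountpoint=legacy zp/sfa/aggwrite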

We created zpools:

[root@pagg04 zfs-install]# zpool list
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zp    22.9T  4.06T  18.8T         -    14%    17%  1.00x  ONLINE  -
zssd  2.17T  60.1G  2.11T         -    29%     2%  1.00x  ONLINE  -

[root@pagg04 zfs-install]# zpool list -v
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zp  22.9T  4.06T  18.8T         -    14%    17%  1.00x  ONLINE  -
  raidz2  11.4T  2.03T  9.41T         -    15%    17%
    e2s10      -      -      -         -      -      -
    e3s00      -      -      -         -      -      -
    e2s11      -      -      -         -      -      -
    e3s01      -      -      -         -      -      -
    e2s12      -      -      -         -      -      -
    e3s02      -      -      -         -      -      -
    e2s13      -      -      -         -      -      -
    e3s03      -      -      -         -      -      -
    e2s14      -      -      -         -      -      -
    e3s04      -      -      -         -      -      -
    e2s15      -      -      -         -      -      -
    e3s05      -      -      -         -      -      -
    e2s16      -      -      -         -      -      -
    e3s06      -      -      -         -      -      -
  raidz2  11.4T  2.03T  9.41T         -    14%    17%
    e2s17      -      -      -         -      -      -
    e3s07      -      -      -         -      -      -
    e2s18      -      -      -         -      -      -
    e3s08      -      -      -         -      -      -
    e2s19      -      -      -         -      -      -
    e3s09      -      -      -         -      -      -
    e2s20      -      -      -         -      -      -
    e3s10      -      -      -         -      -      -
    e2s21      -      -      -         -      -      -
    e3s11      -      -      -         -      -      -
    e2s22      -      -      -         -      -      -
    e3s12      -      -      -         -      -      -
    e2s23      -      -      -         -      -      -
    e3s13      -      -      -         -      -      -
zssd  2.17T  60.1G  2.11T         -    29%     2%  1.00x  ONLINE  -
  raidz2  2.17T  60.1G  2.11T         -    29%     2%
    e2s02      -      -      -         -      -      -
    e3s21      -      -      -         -      -      -
    e2s03      -      -      -         -      -      -
    e3s22      -      -      -         -      -      -
    e2s04      -      -      -         -      -      -
    e3s23      -      -      -         -      -      -
[root@pagg04 zfs-install]# 

zfs filesystems (ignore the 'tests' filesystems below; I used them for benchmarking):

[root@pagg04 zfs-install]# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
zp                3.33T  14.8T   236K  /volumes/zp
zp/sfa            1.91T  14.8T   236K  /volumes/zp/sfa
zp/sfa/aggread    1.30T  14.8T  1.30T  /volumes/aggread
zp/sfa/aggwrite    626G  14.8T   626G  /volumes/aggwrite
zp/tests          1.42T  14.8T   236K  /volumes/zp/tests
zp/tests/ior       818G  14.8T   818G  /volumes/zp/tests/ior
zp/tests/iozone    641G  14.8T   641G  /volumes/zp/tests/iozone
zssd              40.0G  1.36T   192K  /volumes/zssd
zssd/sfa          13.1G  1.36T   192K  /volumes/zssd/sfa
zssd/sfa/aggpack  13.1G  1.36T  13.1G  /volumes/aggpack
zssd/tests        26.5G  1.36T  15.2G  /volumes/zssd/tests
zssd/tests/ior    11.3G  1.36T  11.3G  /volumes/zssd/tests/ior
[root@pagg04 zfs-install]# 

[root@pagg04 zfs-install]# cat /etc/zfs/vdev_id.conf
# This file is automatically generated at Tue Nov 17 15:54:50 CST 2015
#
# vdev_id.conf
#
# The alias for the drive is in the form eXsYY,
#   where X is 'Enclosure #' and YY is the 'Slot #' reported by sas2ircu.
#
# The pci part in the disk name by path is hardcoded in the script as /dev/disk/by-path/pci-0000:02:00.0-sas-0x<SAS-ADDR>-lun-0
#   where <SAS-ADDR> is sas address.
#
# Disk properties listed in the comments at the end of the line 'Drive Type', 'Manufacturer', 'Model Number', 'Serial No'
#   were taken during vdev_id.conf file generation and may not be valid anymore after drives were moved around.
#

# System drives: 
#alias    e2s00    /dev/disk/by-path/pci-0000:02:00.0-sas-0x50030480017d1c0c-lun-0  #      SATA_HDD           ATA        HGST HTS721010A9        
#alias    e2s01    /dev/disk/by-path/pci-0000:02:00.0-sas-0x50030480017d1c0d-lun-0  #      SATA_HDD           ATA        HGST HTS721010A9        

#  3 SSD drives in enclosure 2: 
alias    e2s02    /dev/disk/by-path/pci-0000:02:00.0-sas-0x50030480017d1c0e-lun-0  #      SATA_SSD           ATA        INTEL SSDSC2BA40     
alias    e2s03    /dev/disk/by-path/pci-0000:02:00.0-sas-0x50030480017d1c0f-lun-0  #      SATA_SSD           ATA        INTEL SSDSC2BA40      
alias    e2s04    /dev/disk/by-path/pci-0000:02:00.0-sas-0x50030480017d1c10-lun-0  #      SATA_SSD           ATA        INTEL SSDSC2BA40      

# 14 HDD drives in enclosure 2:
alias    e2s10    /dev/disk/by-path/pci-0000:02:00.0-sas-0x5000cca0711c3c25-lun-0  #       SAS_HDD       HITACHI        HUC109090CSS600                 W8GH*
alias    e2s11    /dev/disk/by-path/pci-0000:02:00.0-sas-0x5000cca0711c63e9-lun-0  #       SAS_HDD       HITACHI        HUC109090CSS600                 W8GH*
alias    e2s12    /dev/disk/by-path/pci-0000:02:00.0-sas-0x5000cca0711c4e91-lun-0  #       SAS_HDD       HITACHI        HUC109090CSS600                 W8GH*
alias    e2s13    /dev/disk/by-path/pci-0000:02:00.0-sas-0x5000cca0711c1615-lun-0  #       SAS_HDD       HITACHI        HUC109090CSS600                 W8GH*
alias    e2s14    /dev/disk/by-path/pci-0000:02:00.0-sas-0x5000cca0711c3c1d-lun-0  #       SAS_HDD       HITACHI        HUC109090CSS600                 W8GH*
alias    e2s15    /dev/disk/by-path/pci-0000:02:00.0-sas-0x5000cca0711b697d-lun-0  #       SAS_HDD       HITACHI        HUC109090CSS600                 W8GH*
alias    e2s16    /dev/disk/by-path/pci-0000:02:00.0-sas-0x5000cca0711c4619-lun-0  #       SAS_HDD       HITACHI        HUC109090CSS600                 W8GH*
alias    e2s17    /dev/disk/by-path/pci-0000:02:00.0-sas-0x5000cca0711ba8b1-lun-0  #       SAS_HDD       HITACHI        HUC109090CSS600                 W8GH*
alias    e2s18    /dev/disk/by-path/pci-0000:02:00.0-sas-0x5000cca0711c4529-lun-0  #       SAS_HDD       HITACHI        HUC109090CSS600                 W8GH*
alias    e2s19    /dev/disk/by-path/pci-0000:02:00.0-sas-0x5000cca071199341-lun-0  #       SAS_HDD       HITACHI        HUC109090CSS600                 W8GG*
alias    e2s20    /dev/disk/by-path/pci-0000:02:00.0-sas-0x5000cca0711c4879-lun-0  #       SAS_HDD       HITACHI        HUC109090CSS600                 W8GH*
alias    e2s21    /dev/disk/by-path/pci-0000:02:00.0-sas-0x5000cca0711bd001-lun-0  #       SAS_HDD       HITACHI        HUC109090CSS600                 W8GH*
alias    e2s22    /dev/disk/by-path/pci-0000:02:00.0-sas-0x5000cca0711c445d-lun-0  #       SAS_HDD       HITACHI        HUC109090CSS600                 W8GH*
alias    e2s23    /dev/disk/by-path/pci-0000:02:00.0-sas-0x5000cca0711c2511-lun-0  #       SAS_HDD       HITACHI        HUC109090CSS600                 W8GH*

# 14 HDD drives in enclosure 3:
alias    e3s00    /dev/disk/by-path/pci-0000:03:00.0-sas-0x5000cca0711c6e41-lun-0  #       SAS_HDD       HITACHI        HUC109090CSS600                 W8GH*
alias    e3s01    /dev/disk/by-path/pci-0000:03:00.0-sas-0x5000cca0711c6185-lun-0  #       SAS_HDD       HITACHI        HUC109090CSS600                 W8GH*
alias    e3s02    /dev/disk/by-path/pci-0000:03:00.0-sas-0x5000cca0711c471d-lun-0  #       SAS_HDD       HITACHI        HUC109090CSS600                 W8GH*
alias    e3s03    /dev/disk/by-path/pci-0000:03:00.0-sas-0x5000cca0711c164d-lun-0  #       SAS_HDD       HITACHI        HUC109090CSS600                 W8GH*
alias    e3s04    /dev/disk/by-path/pci-0000:03:00.0-sas-0x5000cca0711ca1e1-lun-0  #       SAS_HDD       HITACHI        HUC109090CSS600                 W8GH*
alias    e3s05    /dev/disk/by-path/pci-0000:03:00.0-sas-0x5000cca0711c5ed1-lun-0  #       SAS_HDD       HITACHI        HUC109090CSS600                 W8GH*
alias    e3s06    /dev/disk/by-path/pci-0000:03:00.0-sas-0x5000cca0711bbc81-lun-0  #       SAS_HDD       HITACHI        HUC109090CSS600                 W8GH*
alias    e3s07    /dev/disk/by-path/pci-0000:03:00.0-sas-0x5000cca0711c0e3d-lun-0  #       SAS_HDD       HITACHI        HUC109090CSS600                 W8GH*
alias    e3s08    /dev/disk/by-path/pci-0000:03:00.0-sas-0x5000cca0711a2499-lun-0  #       SAS_HDD       HITACHI        HUC109090CSS600                 W8GG*
alias    e3s09    /dev/disk/by-path/pci-0000:03:00.0-sas-0x5000cca0711c0cd9-lun-0  #       SAS_HDD       HITACHI        HUC109090CSS600                 W8GH*
alias    e3s10    /dev/disk/by-path/pci-0000:03:00.0-sas-0x5000cca0711c4521-lun-0  #       SAS_HDD       HITACHI        HUC109090CSS600                 W8GH*
alias    e3s11    /dev/disk/by-path/pci-0000:03:00.0-sas-0x5000cca0711ca1ed-lun-0  #       SAS_HDD       HITACHI        HUC109090CSS600                 W8GH*
alias    e3s12    /dev/disk/by-path/pci-0000:03:00.0-sas-0x5000cca0711c6995-lun-0  #       SAS_HDD       HITACHI        HUC109090CSS600                 W8GH*
alias    e3s13    /dev/disk/by-path/pci-0000:03:00.0-sas-0x5000cca0711c3525-lun-0  #       SAS_HDD       HITACHI        HUC109090CSS600                 W8GH*

#  3 SSD drives in enclosure 3: 
alias    e3s21    /dev/disk/by-path/pci-0000:03:00.0-sas-0x5003048001edbba1-lun-0  #      SATA_SSD           ATA        INTEL SSDSC2BA40      BTHV51940*
alias    e3s22    /dev/disk/by-path/pci-0000:03:00.0-sas-0x5003048001edbba2-lun-0  #      SATA_SSD           ATA        INTEL SSDSC2BA40      BTHV51940*
alias    e3s23    /dev/disk/by-path/pci-0000:03:00.0-sas-0x5003048001edbba3-lun-0  #      SATA_SSD           ATA        INTEL SSDSC2BA40      BTHV51940*
[root@pagg04 zfs-install]# 

Commands:

# zpool list 
# zpool list -v
# zfs list
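
Pool health and per-device status can be checked with:

# zpool status -v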

# zpool iostat -y 5
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zp          4.06T  18.8T      0      0      0      0
zssd        60.1G  2.11T      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
zp          4.06T  18.8T      0     26      0   108K
zssd        60.1G  2.11T      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
^C