solaris 11 exercise zfs (1)

Basic operations.

Your machine has 3 disks of 300MB.
If your machine has no available disks, you
can create 3 files of 300MB in the /dev/dsk
directory and use those.
Perform the following 3 commands only if your
machine has no available disks.
# mkfile 300m /dev/dsk/disk1
# mkfile 300m /dev/dsk/disk2
# mkfile 300m /dev/dsk/disk3
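If you fell back to files, ZFS can use them as vdevs as long as you give the full path. A minimal sketch of the pool-creation command for that case (assuming the three files created above, and assuming file-backed spares are acceptable in your lab setup):
# zpool create pool1 mirror /dev/dsk/disk1 /dev/dsk/disk2 spare /dev/dsk/disk3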

You will perform the following basic operations:
1. Create a mirrored zpool of 2 disks and 1 spare.
2. Create 2 zfs filesystems in the pool.
3. Set quota and reservations.
4. Rename the pool by exporting and importing.

First locate the 3 disks with the format utility.
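A quick, non-interactive way to list the available disks is to pipe an empty line into format; note that the cXtYdZ names used below are from the lab machine and will differ on yours:
# echo | format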

1. Create a mirrored zpool of 2 disks with 1 spare.
# zpool create pool1 mirror c4t1d0 c4t3d0 spare c4t4d0
# zpool status pool1
  pool: pool1
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pool1       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c4t1d0  ONLINE       0     0     0
            c4t3d0  ONLINE       0     0     0
        spares
          c4t4d0    AVAIL

errors: No known data errors
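You can also check the pool's size and free space; because the two data disks are mirrored, the pool is roughly the size of a single 300MB disk:
# zpool list pool1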

2. Create 2 zfs filesystems in the pool.
# zfs create pool1/fs1
# zfs create pool1/fs2
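Both filesystems inherit their mountpoints from the pool, so they should show up under /pool1:
# zfs list -r pool1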

3. Set quota and reservations
# zfs set quota=100m pool1/fs1
# zfs set reservation=100m pool1/fs1
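Confirm the two properties on the dataset:
# zfs get quota,reservation pool1/fs1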

Check the available space in the pool.

# df -h | grep pool1
pool1        254M    33K   154M     1%    /pool1
pool1/fs1    100M    31K   100M     1%    /pool1/fs1
pool1/fs2    254M    31K   154M     1%    /pool1/fs2

4. Rename the pool.
# zpool export pool1
# zpool import pool1 newpool
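The datasets keep their default mountpoints relative to the new pool name, so they now mount under /newpool. Verify with:
# zpool status newpool
# zfs list -r newpool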

Shadow Migration.
user3 and user4 work together.

user3:
Create a zfs filesystem, put some files in it and
share it with NFS.

# zfs create software/source
# cp -r /var/adm/* /software/source/
# share -F nfs -o ro /software/source
# dfshares
RESOURCE                        SERVER       ACCESS  TRANSPORT
solaris11-3:/software/source    solaris11-3  -       -
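Before switching to user4, you can check from the other machine that the share is reachable; dfshares also accepts a server name or address:
# dfshares 192.168.4.153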

user4:
Check whether shadow-migration is installed.
# pkg list shadow-migration
NAME (PUBLISHER)                       VERSION                 IFO
system/file-system/shadow-migration   0.5.11-0.175.0.0.0.2.1  i--

# svcadm enable shadowd
# svcs shadowd
STATE          STIME    FMRI
online         21:42:19 svc:/system/filesystem/shadowd:default

Create a zfs filesystem with the shadow property pointing at the NFS share.

# zfs create -o shadow=nfs://192.168.4.153/software/source \
software/destination

# ls /software/destination
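The listing should show the files copied from /software/source; they are migrated in the background as they are accessed. To follow the migration and check the property, you can use shadowstat (delivered by the shadow-migration package installed above) and zfs get:
# shadowstat
# zfs get shadow software/destination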

Replication.

Set up simple replication.
user3 and user4 work together.
# zfs create software/replisource
# zfs snapshot software/replisource@snap1
# zfs send software/replisource@snap1 | ssh root@192.168.4.153 \
zfs receive software/replidest
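On the receiving machine the replicated dataset and its snapshot should now be visible:
# zfs list -r -t all software/replidest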

Set up automatic replication.
user3 and user4 work together.

1. Set up passwordless root login to the destination.
user4:
# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
26:d7:aa:4e:50:6d:26:ac:78:e8:56:d7:27:d9:de:29 root@solaris11-4

2. Copy the new public key to the destination server (note that this overwrites any existing authorized_keys file there).
# cd $HOME/.ssh
# scp id_rsa.pub root@192.168.4.153:/root/.ssh/authorized_keys
Password: ******
id_rsa.pub 100% |******************************************************************| 398 00:00
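Test the key before continuing; the command should print the remote hostname (solaris11-3) without asking for a password:
# ssh root@192.168.4.153 hostname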

3. Create a simple script to automate the replication.
- source filesystem is software/replisource
- destination filesystem is software/replidest
- atime should be set to off on the destination

user4:
=============================================
#!/usr/bin/bash
# create the baseline transfer
zfs snapshot software/replisource@snap0
zfs send software/replisource@snap0 | ssh root@192.168.4.153 zfs recv -F software/replidest
# switch off atime on the destination
ssh root@192.168.4.153 zfs set atime=off software/replidest

# loop forever, sending an incremental update every 5 seconds
while true
do

# manage destination snapshots
# (the destroy fails harmlessly on the first pass, when @snap1 does not exist yet)
echo manage destination snaps
ssh root@192.168.4.153 zfs destroy software/replidest@snap1
ssh root@192.168.4.153 zfs rename software/replidest@snap0 software/replidest@snap1
echo done

# manage local snapshots
echo manage source snaps
zfs destroy software/replisource@snap1
zfs rename software/replisource@snap0 software/replisource@snap1
zfs snapshot software/replisource@snap0
echo done

# incremental replication from snap1 to the new snap0
echo increment
zfs send -i software/replisource@snap1 software/replisource@snap0 | ssh root@192.168.4.153 zfs receive -F software/replidest
echo done
sleep 5
done
=============================================

Run the script.
# chmod +x repli.sh
# ./repli.sh

Now add files to the source filesystem and read the contents of the
destination to check the replication.
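For example (the file name is arbitrary), copy something into the source on the sending side, wait at least one loop iteration, and list the destination over ssh:
# cp /etc/hosts /software/replisource/
# ssh root@192.168.4.153 ls /software/replidest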
