enx{mac} instead of eth0 (raspberry pi)

edit:

/lib/udev/rules.d/73-usb-net-by-mac.rules

change:

ACTION=="add", SUBSYSTEM=="net", SUBSYSTEMS=="usb", NAME=="", \
ATTR{address}=="?[014589cd]:*", \
TEST!="/etc/udev/rules.d/80-net-setup-link.rules", \
IMPORT{builtin}="net_id", NAME="$env{ID_NET_NAME_MAC}"
into (only the NAME at the end changes):

ACTION=="add", SUBSYSTEM=="net", SUBSYSTEMS=="usb", NAME=="", \
ATTR{address}=="?[014589cd]:*", \
TEST!="/etc/udev/rules.d/80-net-setup-link.rules", \
IMPORT{builtin}="net_id", NAME="eth0"
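To apply the change, the simplest route is a reboot; a quick sanity check afterwards (a sketch, assuming a Debian-based Raspberry Pi OS and a USB ethernet adapter):

sudo udevadm control --reload-rules
sudo reboot
# after the reboot the USB adapter should come up as eth0 again
ip -br link show eth0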


commvault cvcsa

CVCSA



netapp snapmirror vault and svmdr in 10 steps


netapp svmdr nfs example

In this example you will set up SVM DR with identity-preserve. Clusters cl1 and cl2 have already been peered.
You will exclude a volume from replication and perform a DR action with resync in both directions.

1. Create the two SVMs drsource and drdest.
cl1::> vserver create -vserver drsource -subtype default
-rootvolume rv -aggregate n1_aggr1
-rootvolume-security-style unix

cl2::> vserver create -vserver drdest -subtype dp-destination

2. Create a volume and export the volume with nfs.
cl1::> vol create -vserver drsource -volume drsource_vol
-aggregate n1_aggr1 -size 100m -junction-path /drsource_vol

cl1::> export-policy create -vserver drsource -policyname drsource

cl1::> export-policy rule create -vserver drsource -policyname drsource
-clientmatch 0.0.0.0/0 -rorule any -rwrule any -superuser any

cl1::> vol modify -vserver drsource -volume drsource_vol -policy drsource
cl1::> vol modify -vserver drsource -volume rv
-policy drsource

3. Create a LIF and access the volume from a linux client.
cl1::> net int create -vserver drsource -lif lif1 -role data
-data-protocol nfs -home-node cl1-01 -home-port e0d -address 192.168.4.226
-netmask 255.255.255.0

cl1::> nfs on -vserver drsource

From the linux client:
root@pi158:~# mount 192.168.4.226:/ /mnt/226
root@pi158:~# cd /mnt/226/drsource_vol
root@pi158:/mnt/226/drsource_vol# touch a b d

4. Peer the vservers and setup a snapmirror relationship and initialize.
cl1::> vserver peer create -vserver drsource -peer-vserver drdest
-applications snapmirror -peer-cluster cl2

cl2::> vserver peer accept -vserver drdest -peer-vserver drsource

cl2::> snapmirror policy create -vserver drdest -policy drpol -type async-mirror

cl2::> snapmirror create -source-path drsource: -destination-path drdest: -schedule 5min -type DP -policy drpol
-identity-preserve true

cl2::> snapmirror initialize -destination-path drdest:
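A quick way to follow the baseline transfer (standard snapmirror show fields, not part of the original steps):

cl2::> snapmirror show -destination-path drdest: -fields state,status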

5. Perform a DR action by bringing the source lif offline and breaking the relationship.
On the linux client run a loop that lists the content of the volume.
root@pi158:/mnt# while true; do ls 226/drsource_vol/; sleep 3; done

#stop the source vserver on cluster 1
cl1::> vserver stop -vserver drsource

#your loop will freeze and no longer display files

#break the snapmirror relationship on cluster 2
cl2::> snapmirror break -destination-path drdest:

#online the LIF on the destination SVM.
cl2::> net int modify -vserver drdest -lif lif1 -status-admin up

#Your loop should continue listing files
#If it does not (Stale NFS handle), unmount the volume and remount.
root@pi158:/mnt# umount -lf /mnt/226
root@pi158:/mnt# mount 192.168.4.226:/ /mnt/226

6. Resync.
cl1::> snapmirror create -source-path drdest:
-destination-path drsource: -type dp -identity-preserve true

cl1::> snapmirror resync -destination-path drsource:

7. Stop the current active SVM, resync the original snapmirror and start the source SVM
cl2::> vserver stop -vserver drdest

cl1::> snapmirror break -destination-path drsource:
cl2::> snapmirror resync -destination-path drdest:
cl1::> vserver start drsource
cl1::> net int modify -vserver drsource -lif lif1 -status-admin up

8. Create an unprotected and protected volume.
cl1::> vol create -vserver drsource -volume not_protected
-aggregate n1_aggr1 -size 100m -state online
-vserver-dr-protection unprotected

cl1::> vol create -vserver drsource -volume protected
-aggregate n1_aggr1 -size 100m -state online
-vserver-dr-protection protected

9. Run a snapmirror update and check that the unprotected volume is not replicated.
cl2::> snapmirror update -destination-path drdest:

cl2::> vol show -vserver drdest -fields volume
vserver volume
——- ————
drdest drsource_vol
drdest protected
drdest rv
3 entries were displayed.


netapp snapvault cdot example

In this example you will set up an NFS SVM and snapvault the nfs export volume to a destination volume in a second SVM.
You will set up an XDP snapmirror relationship.
Then you will create files in the source volume and restore them from the destination SVM volume.

1. Create the two SVMs.
cl1::> vserver create vault_src -subtype default
-rootvolume rv -aggregate n1_aggr1 -rootvolume-security-style unix

cl2::> vserver create -vserver vault_dst -subtype default
-rootvolume rv -aggregate aggr1 -rootvolume-security-style unix

2. Peer the SVMs.
cl1::> vserver peer create -vserver vault_src -peer-vserver
vault_dst -peer-cluster cl2 -applications snapmirror

cl2::> vserver peer accept -vserver vault_dst -peer-vserver vault_src

3. Create the volumes.
cl1::> vol create -vserver vault_src -volume vault_src_vol
-aggregate n1_aggr1 -size 500m -junction-path /src

cl2::> vol create -vserver vault_dst -volume vault_dst_vol
-aggregate aggr1 -size 500m -type dp

4. Setup the snapmirror policy.
cl2::> snapmirror policy create -vserver vault_dst
-policy vaultpol

cl2::> snapmirror policy add-rule -vserver vault_dst
-policy vaultpol -snapmirror-label min -keep 5

5. Setup the snapshot policy on the source volume.
cl1::> snapshot policy create -policy vaultsnap -count1 2
-schedule1 5min -snapmirror-label1 min -vserver vault_src
-enabled true

cl1::> vol modify -vserver vault_src -volume vault_src_vol
-snapshot-policy vaultsnap
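Once the 5min schedule has fired at least once, you can confirm that snapshots carry the right label (a quick check, not in the original steps):

cl1::> snap show -vserver vault_src -volume vault_src_vol -fields snapmirror-label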

6. Create an export-policy and rule.
cl1::> export-policy create -vserver vault_src -policyname vault_nfs
cl1::> export-policy rule create -vserver vault_src -policyname vault_nfs
-clientmatch 0.0.0.0/0 -rorule any -rwrule any -superuser any

7. Change the source volume’s export-policy and create nfs.
cl1::> vol modify -vserver vault_src -volume vault_src_vol
-policy vault_nfs

cl1::> vol modify -vserver vault_src -volume rv -policy vault_nfs

cl1::> nfs on -vserver vault_src

8. Create the snapmirror relationship, attach the policy and initialize.
cl2::> snapmirror create -source-path vault_src:vault_src_vol
-destination-path vault_dst:vault_dst_vol -type XDP -policy vaultpol

cl2::> snapmirror initialize -destination-path vault_dst:vault_dst_vol

9. Create a LIF for the nfs source SVM.
cl1::> net int create -vserver vault_src -lif lif1 -role data
-data-protocol nfs -home-node cl1-01 -home-port e0d
-address 192.168.0.224 -netmask 255.255.255.0 -status-admin up

10. Connect to the share.
On linux
root@pi158:~# mkdir /mnt/224
root@pi158:~# mount 192.168.0.224:/ /mnt/224
root@pi158:~# cd /mnt/224/src
root@pi158:/mnt/224/src# touch a b c

11. Wait for some time and check the snapshots.
cl2::> snap show -vserver vault_dst -volume vault_dst_vol
5min.2017-05-23_1315 136KB 0% 36%
5min.2017-05-23_1320 152KB 0% 38%
5min.2017-05-23_1325 0B 0% 0%

12. Remove the files you created
root@pi158:/mnt/224/src# rm *

13. Restore the whole volume.
cl1::> snapmirror restore -destination-path vault_src:vault_src_vol
-source-path vault_dst:vault_dst_vol -source-snapshot 5min.2017-05-23_1325

14. On linux your files should be back.
root@pi158:/mnt/224/src# ls
a b c

15. On linux remove file b and file c
root@pi158:/mnt/224/src# rm b c

16. Restore file b to file b and file c to file c_restored.
cl1::> snapmirror restore -destination-path vault_src:vault_src_vol
-source-path vault_dst:vault_dst_vol -source-snapshot 5min.2017-05-23_1325
-file-list /b,@/b,/c,@/c_restored
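Back on the client, the expected result of the file-level restore looks roughly like this (the exact listing depends on what was in the snapshot):

root@pi158:/mnt/224/src# ls
a  b  c_restored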


netapp snapmirror dr with lun example

Snapmirror DR in a SAN environment.

In this example you will create two SVMs. The src_svm has a src_volume,

the dest_svm will have a dest_volume of the type DP.

The two SVMs will be peered. The cluster cl1 and cl2 are already peered.

We will create a lun in the src_svm and the volume will be snapmirrored to the dest_volume.

The lun will be mapped to a windows machine from both SVMs. DR will be tested.

1. create src_svm

cl1::> vserver create -vserver src_svm -rootvolume rv -aggregate n1_aggr1 -rootvolume-security-style unix

2. create src_volume

cl1::> vol create -vserver src_svm -volume src_volume -aggregate n1_aggr2  -size 2g -state online -policy default -junction-path /src_volume

3. create lun

cl1::> lun create -vserver src_svm  -path /vol/src_volume/lun -size 1g  -ostype windows

4. create iscsi service in the svm

cl1::> iscsi create -vserver src_svm -target-alias src_svm

5. create a lif in the src_svm

cl1::> net int create -vserver src_svm -lif lif1 -role data -data-protocol iscsi -home-node cl1-01 -home-port e0d -address 192.168.0.222 -netmask 255.255.255.0 -status-admin up

6. Login from windows to src_svm 192.168.0.222 using iscsi-initiator.
7. get the windows iqn number

cl1::> iscsi initiator show

(output will be something like)

iqn.1991-05.com.microsoft:win-dhn5u460s93

8. create an igroup in src_svm

cl1::> igroup create -vserver src_svm -igroup win -protocol mixed -ostype windows -initiator iqn.1991-05.com.microsoft:win-dhn5u460s93

cl1::> lun map -vserver src_svm -path /vol/src_volume/lun -igroup win

On windows, rescan disks.

On windows, create a new simple volume in the lun and put some data in it.

9. create destination svm

cl2::> vserver create -vserver dest_svm -subtype default -rootvolume rv -aggregate aggr1 -rootvolume-security-style unix

10. create destination volume

cl2::> vol create -vserver dest_svm -volume dest_volume -aggregate aggr1 -size 2g -state online -type DP

11. peer the vservers
cl2::> vserver peer create -vserver dest_svm -peer-vserver src_svm -applications snapmirror -peer-cluster cl1

12. accept the peering relationship
cl1::> vserver peer accept -vserver src_svm -peer-vserver dest_svm

13. setup snapmirror
cl2::> snapmirror create -source-path src_svm:src_volume -destination-path dest_svm:dest_volume -type DP

14. initialize snapmirror
cl2::> snapmirror initialize -destination-path dest_svm:dest_volume

15. check for the snapmirrored lun
cl2::> lun show -vserver dest_svm

16. create a lif on the dest svm
cl2::> net int create -vserver dest_svm -lif lif1 -role data -data-protocol iscsi -home-node cl2_1 -home-port e0c -address 192.168.0.223 -netmask 255.255.255.0

17. create the iscsi service in the svm
cl2::> vserver iscsi create -vserver dest_svm
18. create an igroup in dest_svm
cl2::> igroup create -vserver dest_svm -igroup win -protocol mixed -ostype windows -initiator iqn.1991-05.com.microsoft:win-dhn5u460s93

19. map the lun
cl2::> lun map -vserver dest_svm -path /vol/dest_volume/lun
-igroup win

20. offline the source lun
cl1::> lun offline -vserver src_svm -path /vol/src_volume/lun

21. break the relationship
cl2::> snapmirror break -destination-path dest_svm:dest_volume

Login from windows to dest_svm using iscsi-initiator.
Access the lun on cl2. If the disk is still read-only, offline it and online it again.
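The example stops after the failover. To end a DR test and resume mirroring in the original direction, a minimal sketch (this discards any changes written to the DR copy):

cl1::> lun online -vserver src_svm -path /vol/src_volume/lun
cl2::> snapmirror resync -destination-path dest_svm:dest_volume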


Netapp iscsi DR example

 

In this example we will create two SVMs. The source_svm has a source_volume,
the dest_svm will have a dest_volume of the type DP.
The two SVMs will be peered. The cluster cl1 and cl2 are already peered.

We will create a lun in the source_svm and the volume will be snapmirrored
to the dest_volume.

The lun will be mapped to a windows machine from both SVMs. DR will be tested.

# create source_svm
cl1::> vserver create -vserver source_svm -rootvolume rv -aggregate n1_aggr1 \
-rootvolume-security-style unix

# create source_volume
cl1::> vol create -vserver source_svm -volume source_volume -aggregate n1_aggr2 \
-size 2g -state online -policy default -junction-path /source_volume

# create lun
cl1::> lun create -vserver source_svm -path /vol/source_volume/lun -size 1g \
-ostype windows

# create iscsi service in the svm
cl1::> iscsi create -vserver source_svm -target-alias source_svm

# create a lif in the source_svm
cl1::> net int create -vserver source_svm -lif lif1 -role data -data-protocol iscsi \
-home-node cl1-01 -home-port e0d -address 192.168.0.222 -netmask 255.255.255.0 \
-status-admin up

# Login from windows to source_svm using iscsi-initiator.

# get the windows iqn number
cl1::> iscsi initiator show
(output will be something like)
iqn.1991-05.com.microsoft:win-dhn5u460s93

# create an igroup in source_svm
cl1::> igroup create -vserver source_svm -igroup win -protocol mixed -ostype windows \
-initiator iqn.1991-05.com.microsoft:win-dhn5u460s93

cl1::> lun map -vserver source_svm -path /vol/source_volume/lun -igroup win

On windows, rescan disks
On windows, create a new simple volume in the lun and put some data in it

# create destination svm
cl2::> vserver create -vserver dest_svm -subtype default -rootvolume rv -aggregate aggr1 \
-rootvolume-security-style unix

# create destination volume
cl2::> vol create -vserver dest_svm -volume dest_volume -aggregate aggr1 -size 2g \
-state online -type DP

# peer the vservers
cl2::> vserver peer create -vserver dest_svm -peer-vserver source_svm -applications snapmirror \
-peer-cluster cl1

# accept the peering relationship
cl1::> vserver peer accept -vserver source_svm -peer-vserver dest_svm
# setup snapmirror
cl2::> snapmirror create -source-path source_svm:source_volume -destination-path dest_svm:dest_volume \
-type DP

# initialize snapmirror
cl2::> snapmirror initialize -destination-path dest_svm:dest_volume

# check for the snapmirrored lun
cl2::> lun show -vserver dest_svm

# create a lif on the dest svm
cl2::> net int create -vserver dest_svm -lif lif1 -role data -data-protocol iscsi -home-node cl2_1 \
-home-port e0c -address 192.168.0.223 -netmask 255.255.255.0

# create the iscsi service in the svm
cl2::> vserver iscsi create -vserver dest_svm
# create an igroup in dest_svm
cl2::> igroup create -vserver dest_svm -igroup win -protocol mixed -ostype windows \
-initiator iqn.1991-05.com.microsoft:win-dhn5u460s93

# map the lun
cl2::> lun map -vserver dest_svm -path /vol/dest_volume/lun -igroup win

# offline the source lun
cl1::> lun offline -vserver source_svm -path /vol/source_volume/lun

# break the relationship
cl2::> snapmirror break -destination-path dest_svm:dest_volume

# Login from windows to dest_svm 192.168.0.223 using iscsi-initiator.
# Access the lun on cl2. If the disk is still read-only, offline it and online it again.

 

 

 


netapp cdot lunexample failover dr

Cluster names: cl1 and cl2

Vserver names: source and dest

Volume names: lvol and dest

IP addresses: 192.168.4.222 and 192.168.4.223

cl1::> vserver create -vserver source -subtype default -rootvolume rv -aggregate n1_aggr1 -rootvolume-security-style unix

cl1::> vol create -vserver source -volume lvol -aggregate n1_aggr2 -size 2g -state online -policy default -unix-permissions ---rwxr-xr-x -junction-path /lvol

cl1::> lun create -vserver source -path /vol/lvol/l1 -size 1g -ostype windows -space-reserve enabled -space-allocation disabled

cl1::> iscsi create -vserver source -target-alias source

cl1::> net int create -vserver source -lif l1 -role data -data-protocol iscsi -home-node cl1-01 -home-port e0d -address 192.168.4.222 -netmask 255.255.255.0 -status-admin up

Login from windows to target….

cl1::> iscsi initiator show

(output is something like…)

source  l1          3 iqn.1991-05.com.microsoft:win-dhn5u460s93

cl1::> igroup create -vserver source -igroup win -protocol mixed -ostype windows -initiator iqn.1991-05.com.microsoft:win-dhn5u460s93

cl1::> lun map -vserver source -path /vol/lvol/l1 -igroup win

On windows, rescan disks

On windows, create a new simple volume and put some data in it 

cl2::> vserver create -vserver dest -subtype default -rootvolume rv -aggregate aggr1 -rootvolume-security-style unix

cl2::> vol create -vserver dest -volume dest -aggregate aggr1 -size 2g -state online -type DP

cl2::> vserver peer create -vserver dest -peer-vserver source -applications snapmirror -peer-cluster cl1

cl1::vserver> vserver peer accept -vserver source -peer-vserver dest

cl2::> snapmirror create -source-path source:lvol -destination-path dest:dest -throttle unlimited -identity-preserve false -type DP

cl2::> snapmirror initialize -destination-path dest:dest

cl2::> lun show -vserver dest

cl2::> net int create -vserver dest -lif l1 -role data -data-protocol iscsi -home-node cl2_1 -home-port e0c -address 192.168.4.223 -netmask 255.255.255.0

cl2::> vserver iscsi create -vserver dest

cl2::> igroup create -vserver dest -igroup win -protocol mixed -ostype windows -initiator iqn.1991-05.com.microsoft:win-dhn5u460s93

cl2::> lun map -vserver dest -path /vol/dest/l1 -igroup win

On windows, login to cl2 target svm 192.168.4.223

cl1::> lun offline -vserver source -path /vol/lvol/l1

cl2::> snapmirror break -destination-path dest:dest

 

 


netapp performance stuff

Some performance commands: top client/file, network test-path, max-xfer, qos-policy

cl1::> statistics top client show       
cl1 : 5/15/2017 19:43:56
*Estimated
Total
IOPS Protocol   Node Vserver Client
———- ——– —— ——- ————-
34      nfs cl1-01   v_nfs 192.168.4.240
15      nfs cl1-01   v_nfs 192.168.4.158
1      nfs cl1-02   v_nfs 192.168.4.159


cl1::> statistics top file show
cl1 : 5/15/2017 19:41:38
*Estimated
Total
IOPS   Node Vserver    Volume File
———- —— ——- ——— ———————-
22 cl1-01   v_nfs v_nfsdata (unknown : inode 8004)
11 cl1-01   v_nfs v_nfsdata (unknown : inode 24128)
(output snipped)


cl1::*> network test-path -source-node cl1-01 -destination-cluster cl2 -destination-node cl2_1
Initiating path test. It can take up to 15 seconds for results to be displayed.
Test Duration: 2.25 secs
Send Throughput: 3.72 MB/sec
Receive Throughput: 3.72 MB/sec
MB Sent: 8.38
MB Received: 8.38
Avg Latency:  1402.03 ms
Min Latency:  1187.13 ms
Max Latency:  1935.26 ms

cl1::*> vserver nfs show -vserver v_nfs -fields tcp-max-xfer-size
vserver tcp-max-xfer-size
——- —————–
v_nfs   65536
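To raise the transfer size (advanced privilege; ONTAP 9 accepts values up to 1048576 bytes, but treat the exact maximum as version dependent):

cl1::*> vserver nfs modify -vserver v_nfs -tcp-max-xfer-size 1048576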

cl1::*> statistics show-periodic -object volume -vserver v_nfs \
-instance v_nfsdata -counter  \
write_ops|read_ops|total_ops|read_latency|write_latency|avg_latency
cl1: volume.v_nfsdata: 5/15/2017 20:00:03
avg      read               total    write    write
latency  latency  read_ops  ops      latency  ops
-------- -------- -------- -------- -------- --------
0us      0us        0        0      0us        0
126us    0us        0        2      0us        0
97us     0us        0       49      0us        0
132us    0us        0       12      0us        0
155us    0us        0        0      0us        0
184us    0us        0      194    146us       49
139us    0us        0        2      0us        0
0us      0us        0        0      0us        0
1493us   0us        0        2      0us        0

cl1::*> qos policy-group create -policy-group light -vserver v_nfs -max-throughput 30MB/s
cl1::*> qos policy-group create -policy-group heavy -vserver v_nfs -max-throughput 300MB/s
cl1::*> vol modify -vserver v_nfs -volume v_nfsdata -qos-policy-group light      
cl1::*> vol modify -vserver v_nfs -volume v_nfsheavy -qos-policy-group heavy  
cl1::*> qos statistics performance show 


netapp example lun failover snapmirror ontap 9.x

This is an example LUN failover: two peered clusters (peering not discussed), two peered svms, and two volumes in a snapmirror relationship. The point is to change the LUN serial number on the destination so that the initiator does not have to rescan its disks.

#create two vservers, one on each cluster and peer them
cl1::> vserver create -vserver SRC -subtype default -rootvolume rv -aggregate n1_aggr1 \
-rootvolume-security-style unix

cl2::> vserver create DST -subtype default -rootvolume rv -aggregate aggr1 \
-rootvolume-security-style unix

cl1::> vserver peer create -vserver SRC -peer-vserver DST -applications snapmirror \
-peer-cluster cl2
cl2::> vserver peer accept -vserver DST -peer-vserver SRC

#create a 5g volume in both vservers. One RW and one DP
cl1::> vol create -vserver SRC -volume SRCV -aggregate n1_aggr1 -size 5g -state online \
-junction-path /SRCV
cl2::> vol create -vserver DST -volume DSTV -aggregate aggr1 -size 5g -type dp

#setup a snapmirror relationship and initialize
cl2::> snapmirror create -source-cluster cl1 -source-vserver SRC -source-volume SRCV \
-destination-path DST:DSTV
cl2::> snapmirror initialize -destination-path DST:DSTV

#create a lif for each vserver
cl1::> net int create -vserver SRC -lif l1 -role data -data-protocol iscsi -home-node cl1-01 \
-home-port e0d -address 192.168.4.220 -netmask 255.255.255.0
cl2::> net int create -vserver DST -lif l1 -role data -data-protocol iscsi -home-node cl2_1 \
-home-port e0d -address 192.168.4.221 -netmask 255.255.255.0

#create the iscsi target for both vservers
cl1::> iscsi create -vserver SRC
cl2::> iscsi create -vserver DST

#from windows connect with both targets 220 and 221

#list the initiators on both vservers
cl1::> iscsi initiator show
Tpgroup Initiator
Vserver Name TSIH Name ISID Igroup Name
——- ——– —- ————–
SRC l1 6 iqn.1991-05.com.microsoft:win-dhn5u460s93

cl2::> iscsi initiator show
Tpgroup Initiator
Vserver Name TSIH Name ISID Igroup Name
——- ——- —————– —————–
DST l1 2 iqn.1991-05.com.microsoft:win-dhn5u460s93

#create an igroup for windows on both vservers
cl1::> igroup create -vserver SRC -igroup WING -protocol iscsi
-ostype windows

cl2::> igroup create -vserver DST -igroup WING -protocol iscsi
-ostype windows

#populate the igroups
cl1::> igroup add -vserver SRC -igroup WING -initiator iqn.1991-05.com.microsoft:win-dhn5u460s93
cl2::> igroup add -vserver DST -igroup WING -initiator iqn.1991-05.com.microsoft:win-dhn5u460s93

#create a lun on the source volume
cl1::> lun create -vserver SRC -path /vol/SRCV/LUN1 -size 1g -ostype windows

#run a snapmirror update
cl2::> snapmirror update -destination-path DST:DSTV

#map the lun on the SRC volume to the igroup
cl1::> lun map -vserver SRC -path /vol/SRCV/LUN1 -igroup WING

#on windows, go to computer management and rescan disks
#you should see a new disk
#online it initialize it and create a filesystem in it
#open disk and put a folder in it.

#break the snapmirror relationship
cl2::> snapmirror break -destination-path DST:DSTV

#get the lun serial-hex number
cl1::> lun show -vserver SRC -path /vol/SRCV/LUN1 -fields serial-hex
vserver path serial-hex
------- -------------- ------------------------
SRC /vol/SRCV/LUN1 7737555a473f4a73737a337a

#offline the lun on the destination volume (the break command onlined the lun)
cl2::> lun offline -path /vol/DSTV/LUN1

#modify the serial-hex of the destination lun
cl2::> lun modify -path /vol/DSTV/LUN1 -serial-hex 7737555a473f4a73737a337a
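At this point both LUNs should report the same serial; a quick check with the same fields used above:

cl1::> lun show -vserver SRC -path /vol/SRCV/LUN1 -fields serial-hex
cl2::> lun show -vserver DST -path /vol/DSTV/LUN1 -fields serial-hex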

#on windows, create a second folder in the disk

#resync the relationship (it was broken above, so a plain update would fail)
cl2::> snapmirror resync -destination-path DST:DSTV

#FAILOVER
#unmap the lun
cl1::> lun unmap -vserver SRC -path /vol/SRCV/LUN1 -igroup WING

#break the relationship
cl2::> snapmirror break -destination-path DST:DSTV

#map the lun to the igroup
cl2::> lun map -vserver DST -path /vol/DSTV/LUN1 -igroup WING

#online the lun
cl2::> lun online -path /vol/DSTV/LUN1

#resync the snapmirror in the other direction
cl1::> snapmirror resync -destination-path SRC:SRCV -source-path DST:DSTV

#DONE


netapp trimming log files via cronjob

Most (if not all) management on ONTAP can be done from the CLI or SystemManager.
If you want to run some scripts of your own, there is a way to do that.

If you are not familiar with UNIX/Linux, don’t go there.

You need access to the systemshell.
cl1::> security login unlock -username diag
cl1::> security login password -username diag
(enter the password twice)

Go to diag mode and access the systemshell
cl1::> set d

Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y

cl1::*> systemshell -node cl1-01
(system node systemshell)
diag@127.0.0.1's password: ******

Warning: The system shell provides access to low-level
diagnostic tools that can cause irreparable damage to
the system if not used properly. Use this environment
only when directed to do so by support personnel.

cl1-01%

Note: (As stated above, this is not something you should do if not
really necessary)

Now that you are in the systemshell you can manage the UNIX environment
to a certain extent.

First, you become root.
cl1-01% sudo bash
bash-3.2#

Example case:
I am bothered by all these logfiles taking up space in vol0 of my simulator.
I create a script that gets rid of the log retention and make it executable.

bash-3.2# echo "rm -f /mroot/etc/log/*000*" > /mroot/etc/get_rid
bash-3.2# echo "rm -f /mroot/etc/log/mlog/*000*" >> /mroot/etc/get_rid
bash-3.2# chmod +x /mroot/etc/get_rid

This script will remove all archived logfiles.
Now I decide to do that every day at midnight.

bash-3.2# crontab -e
(now you are in vi)
i
0 0 * * * /mroot/etc/get_rid
~
~

:
:wq!
(now you are no longer in vi and you have created a crontab entry for root)

This crontab entry will remove all files that contain *000* in the name from
/mroot/etc/log and from /mroot/etc/log/mlog. And it will do that at 0 minutes
at 0 hours every day of the month every month of the year every day of the week.

To check whether your entry in cron is ok.
bash-3.2# crontab -l
0 0 * * *  /mroot/etc/get_rid

bash-3.2# exit
cl1-01% exit
logout

cl1::*>


SnapCenter unregister VSC

On the VSC server edit the file \Program Files\Netapp\Virtual Storage Console\etc\scbr\scbr.properties

Remove everything in the file, save it, and then restart the NetApp SnapManager for Virtual Infrastructure service.


windows RDP allow login

Allow RDP sessions on AD for users.

- Computer (right click -> properties)
- Remote Settings
- Allow connections from computers running any version of RDP
-> Select Users
-> Add User

Start: secpol.msc
-> Local Policies
-> User Rights Assignment
-> Allow log on through Remote Desktop Services (double click)
-> Add User or Group
Reboot machine


netapp smb stuff

- Add a share property (possible values listed below)
vserver cifs share properties add -vserver cifs -share-name data1 -share-properties <property>
showsnapshot
attributecache
continuously-available
branchcache
access-based-enumeration
namespace-caching
encrypt-data

- Enable referrals
vserver cifs options modify -vserver cifs -is-referral-enabled true

- Limit maximum connections
vserver cifs share modify -vserver cifs -share-name data1 -max-connections-per-share 2

- View sessions
vserver cifs session show
net connections active show -service cifs-*

- Enable signing and encryption
vserver cifs security modify -vserver cifs -is-signing-required true
vserver cifs share properties add -vserver cifs -share-name data1 -share-properties encrypt-data

- Enable .snapshot
volume modify -volume data1 -snapdir-access true

(SMB version requirements for CIFS
Clustered Data ONTAP supports ODX with SMB 3.0 and later.
SMB 3.0 must be enabled on the CIFS server before ODX can be enabled:
Enabling ODX also enables SMB 3.0, if it is not already enabled.
Disabling SMB 3.0 also disables ODX.)
(Volume requirements
Source volumes must be a minimum of 1.25 GB.
Deduplication must be enabled on volumes used with copy offload.
Compression must not be enabled on volumes used with copy offload.)

- Enable offload
vserver cifs options modify -vserver cifs -copy-offload-enabled true

- Access control
vserver cifs share access-control show -vserver cifs -share data1
vserver cifs share access-control modify -vserver cifs -share data1 -user-or-group everyone -permission Change
vserver cifs share access-control create -vserver cifs -share data1 -user-or-group peter -user-group-type unix-user -permission read

- Ldap
vserver service name-service ldap client create -client-config cifs_client -ad-domain netapp.local

- Enable ABE
vserver cifs share properties add -vserver cifs -share-name data1 -share-properties access-based-enumeration

- Homedirs
vserver cifs home-directory search
vserver cifs home-directory search-path add -vserver cifs -path /home
vserver cifs share create -vserver cifs -share-name %w -path %d/%w -share-properties homedirectory,oplocks,browsable


netapp flexgroup snapmirror example

In the following example we set up:

1. a flexgroup on cluster cl1 svm flex1
2. a destination volume on cluster cl2single svm flex-2
3. a snapmirror relation between source and destination volume
4. a reversed relationship

Prereq: Clusters and SVMs are peered for snapmirror.

1.
The flexgroup volume created on cl1 will use 3 aggregates with a total of 16
constituents.

cl1::*> flexgroup deploy -vserver flex1 -size 20g -type RW -space-guarantee none
cl1::*> vol show -vserver flex1
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
——— ———— ———— ———- —- ———- ———- —–
flex1     fg           -            online     RW         20GB    17.63GB   11%
flex1     fg__0001     n1_aggr2     online     RW       1.25GB     1.10GB   11%
flex1     fg__0002     n1_aggr2     online     RW       1.25GB     1.10GB   11%
flex1     fg__0003     n1_aggr2     online     RW       1.25GB     1.10GB   11%
flex1     fg__0004     n1_aggr2     online     RW       1.25GB     1.10GB   11%
flex1     fg__0005     n1_aggr3     online     RW       1.25GB     1.10GB   11%
flex1     fg__0006     n1_aggr3     online     RW       1.25GB     1.10GB   11%
flex1     fg__0007     n1_aggr3     online     RW       1.25GB     1.10GB   11%
flex1     fg__0008     n1_aggr3     online     RW       1.25GB     1.10GB   11%
flex1     fg__0009     n2_aggr1     online     RW       1.25GB     1.10GB   11%
flex1     fg__0010     n2_aggr1     online     RW       1.25GB     1.10GB   11%
flex1     fg__0011     n2_aggr1     online     RW       1.25GB     1.10GB   11%
flex1     fg__0012     n2_aggr1     online     RW       1.25GB     1.10GB   12%
flex1     fg__0013     n2_aggr2     online     RW       1.25GB     1.10GB   12%
flex1     fg__0014     n2_aggr2     online     RW       1.25GB     1.10GB   12%
flex1     fg__0015     n2_aggr2     online     RW       1.25GB     1.10GB   12%
flex1     fg__0016     n2_aggr2     online     RW       1.25GB     1.10GB   11%

2.
The destination volume on the second cluster will use 8 constituents per aggregate.
The volume is given two aggregates. The volume is of the type DP because it will
be used as a destination volume in a snapmirror relationship.

cl2single::*> vol create -vserver flex-2 -volume fg_dp -aggr-list n1_aggr1,n1_aggr2 -aggr-list-multiplier 8 -type DP -size 20g
cl2single::*> vol show -vserver flex-2
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
——— ———— ———— ———- —- ———- ———- —–
flex-2    fg_dp        -            online     DP         20GB    18.63GB    6%
flex-2    fg_dp__0001  n1_aggr1     online     DP       1.25GB     1.17GB    6%
flex-2    fg_dp__0002  n1_aggr2     online     DP       1.25GB     1.17GB    6%
flex-2    fg_dp__0003  n1_aggr1     online     DP       1.25GB     1.17GB    6%
flex-2    fg_dp__0004  n1_aggr2     online     DP       1.25GB     1.17GB    6%
flex-2    fg_dp__0005  n1_aggr1     online     DP       1.25GB     1.17GB    6%
flex-2    fg_dp__0006  n1_aggr2     online     DP       1.25GB     1.17GB    6%
flex-2    fg_dp__0007  n1_aggr1     online     DP       1.25GB     1.17GB    6%
flex-2    fg_dp__0008  n1_aggr2     online     DP       1.25GB     1.17GB    6%
flex-2    fg_dp__0009  n1_aggr1     online     DP       1.25GB     1.17GB    6%
flex-2    fg_dp__0010  n1_aggr2     online     DP       1.25GB     1.17GB    6%
flex-2    fg_dp__0011  n1_aggr1     online     DP       1.25GB     1.17GB    6%
flex-2    fg_dp__0012  n1_aggr2     online     DP       1.25GB     1.16GB    7%
flex-2    fg_dp__0013  n1_aggr1     online     DP       1.25GB     1.16GB    7%
flex-2    fg_dp__0014  n1_aggr2     online     DP       1.25GB     1.16GB    7%
flex-2    fg_dp__0015  n1_aggr1     online     DP       1.25GB     1.16GB    7%
flex-2    fg_dp__0016  n1_aggr2     online     DP       1.25GB     1.17GB    6%
3.
The relationship between FlexGroup volumes can only be of the type XDP.

cl2single::*> snapmirror create -source-path flex1:fg -destination-path flex-2:fg_dp -throttle unlimited -type XDP
cl2single::*> snapmirror initialize flex-2:fg_dp
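To follow the transfer of all constituents (standard snapmirror show fields):

cl2single::*> snapmirror show -destination-path flex-2:fg_dp -fields state,status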

4.
First we break and delete the relationship.

cl2single::> snapmirror break -destination-path flex-2:fg_dp
cl2single::> snapmirror delete -destination-path flex-2:fg_dp

 

Then we release the destination-path, create a new relationship in the opposite direction and resync.

cl1::*> snapmirror release -relationship-info-only true -destination-path cl2single://flex-2/fg_dp
cl1::*> snapmirror create -source-path cl2single://flex-2/fg_dp -destination-path flex1:fg
cl1::*> snapmirror resync flex1:fg
cl1::*> snapmirror show -expand

Source Destination Mirror Relationship Total Last
Path Type Path State Status Progress Healthy Updated
———– —- ———— ——- ————– ——— ——- ——–
flex-2:fg_dp XDP flex1:fg Snapmirrored Idle – true -
flex-2:fg_dp__0001 XDP flex1:fg__0001 Snapmirrored Idle – true -
flex-2:fg_dp__0002 XDP flex1:fg__0002 Snapmirrored Idle – true -
flex-2:fg_dp__0003 XDP flex1:fg__0003 Snapmirrored Idle – true -
flex-2:fg_dp__0004 XDP flex1:fg__0004 Snapmirrored Idle – true -
flex-2:fg_dp__0005 XDP flex1:fg__0005 Snapmirrored Idle – true -
flex-2:fg_dp__0006 XDP flex1:fg__0006 Snapmirrored Idle – true -
flex-2:fg_dp__0007 XDP flex1:fg__0007 Snapmirrored Idle – true -
flex-2:fg_dp__0008 XDP flex1:fg__0008 Snapmirrored Idle – true -
flex-2:fg_dp__0009 XDP flex1:fg__0009 Snapmirrored Idle – true -
flex-2:fg_dp__0010 XDP flex1:fg__0010 Snapmirrored Idle – true -
flex-2:fg_dp__0011 XDP flex1:fg__0011 Snapmirrored Idle – true -
flex-2:fg_dp__0012 XDP flex1:fg__0012 Snapmirrored Idle – true -
flex-2:fg_dp__0013 XDP flex1:fg__0013 Snapmirrored Idle – true -
flex-2:fg_dp__0014 XDP flex1:fg__0014 Snapmirrored Idle – true -
flex-2:fg_dp__0015 XDP flex1:fg__0015 Snapmirrored Idle – true -
flex-2:fg_dp__0016 XDP flex1:fg__0016 Snapmirrored Idle – true -
17 entries were displayed.


ontap 9 list all api calls

To list the api calls you can use to manage/monitor
ontap 9, you can either use a clustershell command (1)
or run a query using the NetApp SDK (2).
In the second case you will have to enable http
on the cluster nodes:

cl1::*> system services web modify -http-enabled true

(1)
cl1::*> security login role show-ontapi
ONTAPI Command
————————————
aggr-add storage aggregate add-disks
aggr-create storage aggregate create
aggr-destroy storage aggregate delete
aggr-get-filer-info storage aggregate show
aggr-get-iter storage aggregate show
aggr-mirror storage aggregate mirror
aggr-offline storage aggregate offline
(snipped)

In this example I use the perl script
that comes with the samples when
you download netapp-manageability-sdk-5.4P1.zip

(2)
[linux] ./apitest.pl cl1 admin adminpassword system-api-list

OUTPUT:

action-test-key-optionality-defaultaction

active-directory-account-get-iter

aggr-add

(snipped)


netapp iscsi svmdr

In this post:

We create a source svm and a destination svm of the type
dp-destination (svmdr)

The source svm maps a lun to a linux (centos) machine.
The source svm is snapmirrored to the destination svm.

The relationship is broken, the source svm is stopped and
the destination svm is started.

The client accesses the lun that is presented from the
destination svm.

Prerequisite: the two clusters are peered.

1. create source svm
cl1::> vserver create -vserver vm_iscsi -subtype \
default -rootvolume rv -aggregate n1_aggr1 \
-rootvolume-security-style unix

2. create source volume
cl1::> vol create -vserver vm_iscsi -volume \
lvol -aggregate n1_aggr1 -size 1g

3. install iscsi-utils on linux and get the initiator-id
[root]# yum -y install iscsi-initiator-utils
[root]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:37f0c74df813

4. create igroup
cl1::> igroup create -vserver vm_iscsi -igroup ling \
-protocol iscsi -ostype linux -initiator \
iqn.1994-05.com.redhat:37f0c74df813

5. create iscsi in the svm
cl1::> iscsi create -vserver vm_iscsi

6. create lun
cl1::> lun create -vserver vm_iscsi -path /vol/lvol/l1 \
-size 800m -ostype linux -space-reserve disabled

Created a LUN of size 800m (838860800)

7. create two iscsi lifs (1 per node)
cl1::> net int create -vserver vm_iscsi -lif l212 \
-role data -data-protocol iscsi -home-node cl1-01 \
-home-port e0d -address 192.168.4.212 -netmask \
255.255.255.0 -status-admin up

cl1::> net int create -vserver vm_iscsi -lif l213 \
-role data -data-protocol iscsi -home-node cl1-02 \
-home-port e0d -address 192.168.4.213 -netmask \
255.255.255.0 -status-admin up

8. map the lun
cl1::> lun map -vserver vm_iscsi -path /vol/lvol/l1 -igroup ling

9. access the lun from linux
(first login to the svm target)
[root]# iscsiadm -m discovery -t sendtargets \
-p 192.168.4.212

[root]# iscsiadm -m node --login

(restart iscsi and check for a new disk)
[root@lin70 iscsi]# service iscsi restart
Redirecting to /bin/systemctl restart iscsi.service
[root@lin70 iscsi]# fdisk -l |grep sd
Disk /dev/sdc: 838 MB, 838860800 bytes, 1638400 sectors

10. partition the disk, create fs, mountpoint and mount
[root]# fdisk /dev/sdc
(create a primary partition that spans the whole disk)

[root]# mkfs /dev/sdc1
[root]# mkdir /iscsidisk
[root]# mount /dev/sdc1 /iscsidisk

This concludes the setup on the source svm and linux.

11. create dp-destination svm
cl2::> vserver create -vserver vm_iscsi_dr -subtype \
dp-destination

12. create a peer relation.
(cluster peering is already set up)
cl2::> vserver peer create -vserver vm_iscsi_dr \
-peer-vserver vm_iscsi -applications snapmirror \
-peer-cluster cl1

cl1::> vserver peer accept -vserver vm_iscsi \
-peer-vserver vm_iscsi_dr

13. cl2::> snapmirror create -source-path vm_iscsi: \
-destination-path vm_iscsi_dr:

cl2::> snapmirror initialize -destination-path vm_iscsi_dr:

cl2::> vol show -vserver vm_iscsi_dr
vm_iscsi_dr lvol s1_aggr1 online DP 1GB 959.5MB 6%
vm_iscsi_dr rv s1_aggr1 online RW 20MB 18.82MB 5%

14. create a lif on the dp-destination svm
cl2::> net int create -vserver vm_iscsi_dr -lif l1 \
-role data -data-protocol iscsi -home-node cl2 -home-port \
e0c -address 192.168.4.215 -netmask 255.255.255.0 \
-status-admin up

15. create iscsi on the dp-destination svm
cl2::> iscsi create -vserver vm_iscsi_dr

17. map the lun to the igroup on the dp-destination svm
cl2::> lun map -vserver vm_iscsi_dr -path /vol/lvol/l1 -igroup ling

18. stop the source svm
cl1::> vserver stop -vserver vm_iscsi

19. break the snapmirror relation and
start the dp-destination svm
cl2::> snapmirror break -destination-path vm_iscsi_dr:
cl2::> vserver start -vserver vm_iscsi_dr

20. login to the vm_iscsi_dr target
[root]# iscsiadm -m discovery -t sendtargets -p 192.168.4.215
[root]# iscsiadm -m node --login

21. find out the mapped disk
[root]# fdisk -l|grep sd
/dev/sdd1 2048 1638399 818176 83 Linux

22. mount the disk
[root]# umount -f /dev/sdc1
[root]# mount /dev/sdd1 /iscsidisk

done.


netapp cdot flashpool stuff

statistics show-periodic -object disk:raid_group -instance /n1_aggr1/plex0/rg0 -counter disk_busy|user_read_latency -interval 1 -iterations 60

system node run -node cl911_1 wafl awa start n1_aggr1
system node run -node cl911_1 wafl awa print
system node run -node cl911_1 wafl awa stop

with flashpools

system node run -node cl911_1 stats show -p hybrid_aggr


docker service swarm web example

Docker service swarm web 4 nodes 6 containers PDF

docker.swarm.web.ok.new


linux resize vmdk – device and physical volume

In this example a vmware vmdk is used by a CentOS 7 VM.
The original size of the vmdk is 1G, and is resized to 3G.

After this resizing in ESXi, run the following in the CentOS VM.
The name of the disk in CentOS is /dev/sdc.

Get the current situation:
fdisk -l |grep sdc
Disk /dev/sdc: 1073 MB, 1073741824 bytes, 2097152 sectors

Rescan the device:
echo 1 > /sys/block/sdc/device/rescan

Get the new situation:
fdisk -l |grep sdc
Disk /dev/sdc: 3221 MB, 3221225472 bytes, 6291456 sectors

View the physical volume:
pvdisplay /dev/sdc |grep -i pv\ size

Resize the physical volume:
pvresize /dev/sdc
1 physical volume(s) resized / 0 physical volume(s) not resized

View the physical volume:
pvdisplay /dev/sdc |grep -i pv\ size
PV Size 3.00 GiB / not usable 3.00 MiB
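After the pvresize the extra space shows up as free extents in the volume group; you can confirm with the standard LVM tools:

vgs
pvs /dev/sdc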


Refurbishing Mac

Naturally, Macs have quite some weight.
When out of production, they can be re-used to hold down
your raspberry pi.


raspberry pi register nfs-server with rpcbind

raspberry_nfs_server_start

Problem with raspberry pi (debian) and nfs-kernel-server.
Systemd apparently starts the nfs-kernel-server before rpcbind
has completed starting.

Solution that works for me:

The relevant nfs services are enabled and started.
1. Make sure that nfs-kernel-server gets started at boot time.
# systemctl enable nfs-kernel-server.service

After a reboot, nfs is not registered.

# rpcinfo -p
program vers proto port service
100000 4 tcp 111 portmapper
100000 3 tcp 111 portmapper
100000 2 tcp 111 portmapper
100000 4 udp 111 portmapper
100000 3 udp 111 portmapper
100000 2 udp 111 portmapper

2. Create a service that is called after the portmapper is started.

2a. Create a script that restarts the nfs-kernel-server.

# cd /usr/local/bin/
# echo '#!/bin/bash' > nfs_kernel_start.sh
# echo 'service nfs-kernel-server restart' >> nfs_kernel_start.sh
# chmod +x /usr/local/bin/nfs_kernel_start.sh

2b. Create the service

Create the following file and content:

/etc/systemd/system/nfs_kernel_start.service

[Unit]
After=rpcbind.service
[Service]
ExecStart=/usr/local/bin/nfs_kernel_start.sh
[Install]
WantedBy=multi-user.target

Set the correct permissions.

# chmod 644 /etc/systemd/system/nfs_kernel_start.service

3. Enable the service and start it.

# systemctl enable nfs_kernel_start.service
# systemctl start nfs_kernel_start.service
# rpcinfo -p
100000 4 tcp 111 portmapper
100000 3 tcp 111 portmapper
100000 2 tcp 111 portmapper
100000 4 udp 111 portmapper
100000 3 udp 111 portmapper
100000 2 udp 111 portmapper
100003 2 tcp 2049 nfs
100003 3 tcp 2049 nfs
100003 4 tcp 2049 nfs
100227 2 tcp 2049
100227 3 tcp 2049
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs
100003 4 udp 2049 nfs
(snipped)


netapp root volume damaged

If a clusternode reports ROOT VOLUME NOT WORKING PROPERLY: RECOVERY REQUIRED, clear the recovery flag at the boot loader (LOADER) prompt:

unsetenv bootarg.init.boot_recovery


linux performance introduction

Linux Performance Intro


docker swarm example

docker.swarm_example


linux network namespaces


linux namespaces cpu-shares

linux_cpu_shares


docker networking example

docker.network

1. List the docker network.

# docker network ls

NETWORK ID NAME DRIVER SCOPE

82f0e82e8a52 bridge bridge local

2f29f6c3dc07 host host local

0ee77fd0eaf6 none null local

2. List the bridge’s address.

# ifconfig docker0 | grep "inet "

inet 172.17.0.1 netmask 255.255.0.0 broadcast 0.0.0.0

3. Create two containers.

# docker run -d -it --name=container1 net/centos

# docker run -d -it --name=container2 net/centos

4. List the address of container1.

# docker exec container1 ifconfig eth0 | grep "inet "

inet 172.17.0.2 netmask 255.255.0.0 broadcast 0.0.0.0

5. Create a new network.

# docker network create -d bridge --subnet 172.25.0.0/16 isolated_nw

6. Connect container2 to the new network.

# docker network connect isolated_nw container2

7. List the addresses of container2.

# docker exec container2 ifconfig | grep "inet "

inet 172.17.0.3 netmask 255.255.0.0 broadcast 0.0.0.0

inet 172.25.0.2 netmask 255.255.0.0 broadcast 0.0.0.0

inet 127.0.0.1 netmask 255.0.0.0

8. Create a third container and connect it to the new network.

# docker run --network=isolated_nw --ip=172.25.3.3 -itd --name=container3 net/centos

9. Check connectivity from container2 to container3.

 

# docker exec container2 ping -w 1 172.25.3.3

PING 172.25.3.3 (172.25.3.3) 56(84) bytes of data.

64 bytes from 172.25.3.3: icmp_seq=1 ttl=64 time=0.088 ms

64 bytes from 172.25.3.3: icmp_seq=2 ttl=64 time=0.072 ms

10. Check connectivity from container3 to container1.

# docker exec container3 ping -w 1 172.17.0.2

PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.

--- 172.17.0.2 ping statistics ---

2 packets transmitted, 0 received, 100% packet loss, time 999ms

11. Clean up.

# docker rm -f container1 container2 container3

container1

container2

container3

# docker network rm isolated_nw

isolated_nw

 


docker remove all containers and images

docker rm -f $(docker ps -a -q)
docker rmi $(docker images -q)


selinux examples (pdf)

selinux_examples


docker shared storage

In the following example we create a volume in a container and
share the volume with other containers.

1. Share a docker volume
- create the volume in a container that you do not run
docker create -v /webdata --name webd1 centos

- start a second container and use the volume from the 1st container
docker run -it -d --volumes-from webd1 --name webd2 centos

- attach to the container
docker attach webd2

- put a file in /webdata
touch /webdata/afile

- exit the container with <CTRL>PQ
[root@6c6e823f6eb4 /]# <CTRL>PQ

- start a third container and use the volume from the 1st container
docker run -it -d --volumes-from webd1 --name webd3 centos

-attach to the third container
docker attach webd3

- list the contents of /webdata
ls /webdata
afile

- exit the container with <CTRL>PQ
[root@4531a52b8e2a /]# <CTRL>PQ

- remove all three containers
docker rm -f `docker ps -a|grep webd*|awk '{print $1}'`

- remove the orphaned volume
docker volume ls -qf dangling=true | xargs -r docker volume rm

In the following example we create a directory on the host
and share the directory between containers.

2. share a host directory
- create a directory on the host
mkdir -p /docker/shared

- create a container and use the directory
docker run -d -it --name webd5 -v /docker/shared:/shared centos

-attach to the container
docker attach webd5

- create file in /shared
touch /shared/afile

- exit the container and remove it.
[root@a4f1061d8d9f /]# <CTRL>PQ
docker rm -f webd5

- list the contents of /docker/shared
ls /docker/shared
afile

- start a new container with the shared directory
docker run -d -it --name webd6 -v /docker/shared:/shared centos

- attach to the container
docker attach webd6
- create a file in /shared
touch /shared/bfile

- list the contents of /shared
ls /shared
afile bfile

- exit the container and check /docker/shared
[root@f4ba93a953c3 /]# exit
ls /docker/shared
afile bfile


linux yum groups

Create a file in the groups format used by yum
Tell createrepo to include that group file in your repository.
Step 1

You can either open a text editor and create the groups xml file manually or you can run the yum-groups-manager command from yum-utils.

Run the groups-create command like this:

yum-groups-manager -n "My Group" --id=mygroup --save=mygroups.xml --mandatory rpm1 rpm2

And you’ll end up with a file like this:

<comps>
  <group>
    <id>mygroup</id>
    <name>My Group</name>
    <default>False</default>
    <uservisible>True</uservisible>
    <display_order>1024</display_order>
    <description>My group</description>
    <packagelist>
      <packagereq type="mandatory">glibc</packagereq>
      <packagereq type="mandatory">rpm</packagereq>
      <packagereq type="mandatory">yum</packagereq>
    </packagelist>
  </group>
</comps>

Step 2

To include this in a repository, just tell createrepo to use it when making or remaking your repository.

createrepo -g /path/to/mygroups.xml /srv/my/repo


linux strace examples

strace is a systemcall tracer. It tells you what kernel functions are
called as a result of your program. It monitors systemcalls and signals.

(attach to a running process)
strace -p <pid>

(trace a command)
strace <command>

(trace a single systemcall)
strace -e open <command>

(trace multiple systemcalls)
strace -e trace=open,read ls /home

(save the output)
strace -o output.txt <command>

(count the number of calls)
strace -c ls /home

(trace all action on a particular file)
strace -P /etc/cups -p 2261

common calls:
access
close (close file handle)
fchmod (change file permissions)
fchown (change file ownership)
fstat (retrieve details)
lseek (move through file)
open (open file for reading/writing)
read (read a piece of data)
statfs (retrieve file system related details)

===
strace -e trace=network
bind – link the process to a network port
listen – allow to receive incoming connections
socket – open a local or network socket
setsockopt – define options for an active socket

strace -e trace=memory
mmap
munmap

===
example:
check which files are opened when a user connects with ssh.

#ps -ef |grep /usr/sbin/sshd
root 7448 1 0 09:36 ? 00:00:00 /usr/sbin/sshd

#strace -e open -f -p 7448 -o sshout
-e trace the open systemcall
-f follow children
-p attach to pid
-o save output in file
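Putting it together (a sketch; pids and file names depend on your system): with the trace running, open an ssh session to the machine from another terminal, stop the trace with CTRL-C, and look at the capture.

#grep open sshout | head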


docker – remove all containers and images

#!/bin/bash
# Delete all containers
docker rm $(docker ps -a -q)
# Delete all images
docker rmi $(docker images -q)


linux grow rootfs

Grow xfs rootfs
The root volumegroup in this
example is centos, the disk in the
volumegroup is sda. We add a new
disk, extend the group, extend the
logical volume and grow the filesystem.

add new disk.
(in vmware: edit vm and add new harddisk)

# fdisk -l | grep sd
Disk /dev/sda: 10.7 GB, 10737418240 bytes, 20971520 sectors
/dev/sda1 * 2048 1026047 512000 83 Linux
/dev/sda2 1026048 20971519 9972736 8e Linux LVM
Disk /dev/sdb: 5368 MB, 5368709120 bytes, 10485760 sectors

create new physical volume
# pvcreate /dev/sdb

extend root volume group
# vgextend centos /dev/sdb

extend logical volume
# lvextend /dev/centos/root -L 10g

grow the filesystem
# xfs_growfs /
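To confirm the result (names as used above):

# lvs centos/root
# df -h /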


linux PS1 settings

Information you can display with PS1

link: cyberciti

\a : an ASCII bell character (07)
\d : the date in “Weekday Month Date” format (e.g., “Tue May 26”)
\D{format} : the format is passed to strftime(3) and the result is inserted into the prompt string; an empty format results in a locale-specific time representation. The braces are required
\e : an ASCII escape character (033)
\h : the hostname up to the first ‘.’
\H : the hostname
\j : the number of jobs currently managed by the shell
\l : the basename of the shell’s terminal device name
\n : newline
\r : carriage return
\s : the name of the shell, the basename of $0 (the portion following the final slash)
\t : the current time in 24-hour HH:MM:SS format
\T : the current time in 12-hour HH:MM:SS format
\@ : the current time in 12-hour am/pm format
\A : the current time in 24-hour HH:MM format
\u : the username of the current user
\v : the version of bash (e.g., 2.00)
\V : the release of bash, version + patch level (e.g., 2.00.0)
\w : the current working directory, with $HOME abbreviated with a tilde
\W : the basename of the current working directory, with $HOME abbreviated with a tilde
\! : the history number of this command
\# : the command number of this command
\$ : if the effective UID is 0, a #, otherwise a $
\nnn : the character corresponding to the octal number nnn
\\ : a backslash
\[ : begin a sequence of non-printing characters, which could be used to embed a terminal control sequence into the prompt
\] : end a sequence of non-printing characters
Let us try to set the prompt so that it can display today's date and hostname:
PS1="\d \h $ "
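Another common variant (just an example, not from the list above): user, host and working directory, ending in # for root and $ for everyone else:
PS1="\u@\h:\w\$ "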


linux parameter expansion

from: wiki

Simple usage
$PARAMETER
${PARAMETER}
Indirection
${!PARAMETER}
Case modification
${PARAMETER^}
${PARAMETER^^}
${PARAMETER,}
${PARAMETER,,}
${PARAMETER~}
${PARAMETER~~}
Variable name expansion
${!PREFIX*}
${!PREFIX@}
Substring removal (also for filename manipulation!)
${PARAMETER#PATTERN}
${PARAMETER##PATTERN}
${PARAMETER%PATTERN}
${PARAMETER%%PATTERN}
Search and replace
${PARAMETER/PATTERN/STRING}
${PARAMETER//PATTERN/STRING}
${PARAMETER/PATTERN}
${PARAMETER//PATTERN}
String length
${#PARAMETER}
Substring expansion
${PARAMETER:OFFSET}
${PARAMETER:OFFSET:LENGTH}
Use a default value
${PARAMETER:-WORD}
${PARAMETER-WORD}
Assign a default value
${PARAMETER:=WORD}
${PARAMETER=WORD}
Use an alternate value
${PARAMETER:+WORD}
${PARAMETER+WORD}
Display error if null or unset
${PARAMETER:?WORD}
${PARAMETER?WORD}
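A few of these in action (a minimal bash sketch with a throwaway variable):

FILE="/tmp/archive.tar.gz"
echo "${#FILE}"              # string length -> 19
echo "${FILE##*/}"           # remove longest */ prefix -> archive.tar.gz
echo "${FILE%.gz}"           # remove .gz suffix -> /tmp/archive.tar
echo "${FILE/tmp/var}"       # replace first tmp -> /var/archive.tar.gz
echo "${UNSET_VAR:-default}" # use a default value -> default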


Linux create RPM and local repository

Example of RPM and YUM on CentOS 6.
1. Install the RPM build environment:
lin60 # yum install -y rpm-build rpmdevtools

2. Create the directory for the RPM environment
and the directory in which your program will be installed.
lin60 # mkdir -p /opt/software/rpm/{BUILD,RPMS,SOURCES,SPECS,SRPMS,tmp}
lin60 # mkdir /opt/myprog

3. Create the .rpmmacros file in your logindir. This is a minimal file. There’s a lot more you can configure in that.
lin60 # echo "%packager root" > ~/.rpmmacros
lin60 # echo "%_topdir /opt/software/rpm" >> ~/.rpmmacros
lin60 # echo "%_tmppath /opt/software/rpm/tmp" >> ~/.rpmmacros

4. Create your software in the SOURCES directory.
lin60 # mkdir /opt/software/rpm/SOURCES/myprog-1/
lin60 # cd /opt/software/rpm/SOURCES/myprog-1/
lin60 # echo "echo myprog" > myprog.sh
lin60 # chmod +x myprog.sh

5. Create a tarfile of your package.
lin60 # cd /opt/software/rpm/SOURCES/
lin60 # tar czf myprog-1.0.tar.gz myprog-1/

6. To define how your software will be created into
an RPM package, you need a SPEC file. It should look like this:
lin60 # cat /opt/software/rpm/SPECS/myprog.spec
Name: myprog
Version: 1
Release: 0
Summary: myprog rpm
Source0: myprog-1.0.tar.gz
License: GPL
Group: devel
BuildArch: noarch
BuildRoot: %{_tmppath}/%{name}-buildroot
%description
The myprog script will echo myprog.
%prep
%setup -q
%build
%install
install -m 0755 -d $RPM_BUILD_ROOT/opt/myprog
install -m 0755 myprog.sh $RPM_BUILD_ROOT/opt/myprog/myprog.sh
%clean
rm -rf $RPM_BUILD_ROOT
%post
echo
echo myprog installed
%files
%dir /opt/myprog
/opt/myprog/myprog.sh

Note:
Requires: myprogdep
Will create a dependency on myprogdep

7. Build your RPM package.
lin60 # cd /opt/software/rpm
lin60 # rpmbuild -ba SPECS/myprog.spec

8.Install your rpm from RPMS/noarch
lin60 # cd /opt/software/rpm/RPMS/noarch
lin60 # rpm -ivh myprog-1-0.noarch.rpm

To install the RPM using yum, create a local repository:

yum install createrepo

create a file called local.repo in /etc/yum.repos.d

cat /etc/yum.repos.d/local.repo

[local]
name=Local Repository
baseurl=file:///opt/software/localrepo/
gpgcheck=0
enabled=1

create the repo directory
mkdir /opt/software/localrepo/

copy the rpm to the repository
cp /opt/software/rpm/RPMS/noarch/myprog-1-0.noarch.rpm /opt/software/localrepo/

createrepo /opt/software/localrepo/

yum install myprog



SVM rootvolume protection

Restoring the root volume of an SVM
If the root volume of a Storage Virtual Machine (SVM) becomes unavailable, clients cannot mount the root of the namespace. In such cases, you must restore the root volume by promoting another volume to facilitate data access to the clients.

For SVMs with FlexVol volumes, you can promote one of the following volumes as the root volume:

Load-sharing mirror copy
Data-protection mirror copy
A new FlexVol volume

Note: If you want to restore the root volume of an SVM with Infinite Volume, you must contact technical support.
Starting from clustered Data ONTAP 8.2, SVM root volume is created with 1 GB size to prevent any failures when mounting any volume in the SVM root volume due to lack of space or inodes. Therefore, if you are promoting a new FlexVol volume, it should be at least 1 GB in size.

Situation:
cluster91::> vol show -vserver tempsvm
Vserver Volume Aggregate State Type Size Available Used%
——— ———— ———— ———- —- ———- ———- —–
tempsvm datavol n2_aggr1 online RW 100MB 94.79MB 5%
tempsvm rv n2_aggr1 online RW 20MB 18.80MB 5%
2 entries were displayed.

1. Promoting a loadshare: (create svm, create loadshare, promote loadshare)
cluster91::> vol create -vserver tempsvm -volume rvls -aggregate n2_aggr1 -size 20m -type dp
cluster91::> snapmirror create -source-path tempsvm:rv -destination-path tempsvm:rvls -type ls
cluster91::> snapmirror initialize-ls-set -source-path tempsvm:rv
cluster91::> snapmirror promote -destination-path tempsvm:rvls
Note: this will delete the original rootvolume. You can rename the new rootvolume.
cluster91::> vol rename -vserver tempsvm -volume rvls rv

2. Create a snapmirror and make a dp destination the new rootvolume.
cluster91::> vol create -vserver tempsvm -volume rvdp -aggregate n2_aggr1 -size 20m -type dp
cluster91::> snapmirror create -source-path tempsvm:rv -destination-path tempsvm:rvdp
cluster91::> snapmirror initialize -destination-path tempsvm:rvdp
cluster91::*> snapmirror break -destination-path tempsvm:rvdp
cluster91::*> volume make-vsroot -vserver tempsvm -volume rvdp
cluster91::*> snapmirror delete -destination-path tempsvm:rvdp
cluster91::*> snapmirror release -destination-path tempsvm:rvdp
cluster91::*> vol offline -vserver tempsvm -volume rv
cluster91::*> vol delete -force true -vserver tempsvm -volume rv
cluster91::*> vol rename -vserver tempsvm -volume rvdp -newname rv
cluster91::*> mount -vserver tempsvm -volume datavol -junction-path /datavol
Note: Client mounts will be stale.
Note: Make sure you connect the correct policies to the volume if needed.

3. Create new volume and use make-vsroot.
cluster91::*> vol create -vserver tempsvm -volume rvnew -aggregate n2_aggr1 -size 20m
cluster91::*> volume make-vsroot -vserver tempsvm -volume rvnew
cluster91::*> vol offline -vserver tempsvm -volume rv
cluster91::*> vol destroy -force true -vserver tempsvm -volume rv
cluster91::*> vol rename -vserver tempsvm -volume rvnew -newname rv
cluster91::*> mount -vserver tempsvm -volume datavol -junction-path /datavol
Note: Client mounts will be stale.
Note: Make sure you connect the correct policies to the volume if needed.


Posted in Uncategorized | Leave a comment

sed

Replace the first run of uppercase letters and digits (the first "column") in each line of a file; the trailing number selects which occurrence is replaced:

sed 's/[A-Z|0-9]*/REPLACED/1' filename
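
For example (a minimal illustration):

echo "HOST01 up 10 days" | sed 's/[A-Z|0-9]*/REPLACED/1'
REPLACED up 10 days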

Posted in Uncategorized | Leave a comment

linux create local repo

In this example
- the server 192.168.0.1 contains all packages in /export/rhel72/Packages
- the /export directory is exported

log in to vm_client
1. create a mountpoint:
# mkdir /software
create a repodir
# mkdir /localrepo

2. mount the /export/ directory from the server to /software
# mount 192.168.0.1:/export /software

3. copy the rhel72 packages from the server to /localrepo
# cd /software/rhel72/Packages
# tar cvf - . | ( cd /localrepo ; tar xvf - )

this will take some time

4. install the following packages from /software
# cd /software/rhel72/Packages
# rpm -ivh python-delta*
# rpm -ivh deltarpm*
# rpm -ivh createrepo*

5. create the repository
# createrepo /localrepo

this will take some more time

6. create the repo file
# echo "[local]" > /etc/yum.repos.d/local.repo
# echo "name=localrepo" >> /etc/yum.repos.d/local.repo
# echo "baseurl=file:///localrepo/" >> /etc/yum.repos.d/local.repo
# echo "enabled=1" >> /etc/yum.repos.d/local.repo
# echo "gpgcheck=0" >> /etc/yum.repos.d/local.repo

7. because there is no internet connection, remove the
repo file redhat.repo
# rm /etc/yum.repos.d/redhat.repo

8. run the following command
# yum clean all

9. test your repository
# yum install ksh

Posted in Uncategorized | Leave a comment

selinux add httpd port

Ports and SELINUX example.

We want our webserver to listen to a non default port.

1. Configure httpd to listen to a non default port – say 8999.
After a default install of httpd port 80 is the port that httpd listens to.
Change the port in /etc/httpd/conf/httpd.conf

[root@lin70 /]# sed -i -e 's/Listen 80/Listen 8999/' /etc/httpd/conf/httpd.conf

2. In another shell follow the audit.log file.

[root@lin70 /]# tail -f /var/log/audit/audit.log

3. Restart httpd and view the error message in the audit.log file

[root@lin70 /]# systemctl restart httpd

(audit.log)
avc: denied { name_bind } for pid=17010 comm=”httpd” src=8999

4. What are the ports that httpd is allowed to listen to?

[root@lin70 /]# semanage port -l |grep ^http_port_t
http_port_t tcp 80, 81, 443, 488, 8008, 8009, 8443, 9000

5. Add port 8999 to the list of ports for httpd and check.

[root@lin70 /]# semanage port -a -t http_port_t -p tcp 8999

6. Restart httpd.

[root@lin70 /]# systemctl restart httpd
[root@lin70 /]# semanage port -l |grep ^http_port_t
http_port_t tcp 8999, 80, 81, 443, 488, 8008, 8009, 8443, 9000
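
7. Optionally verify that httpd answers on the new port (a quick check; ss and curl are assumed to be installed):

[root@lin70 /]# ss -tlnp | grep 8999
[root@lin70 /]# curl -I http://localhost:8999/

The first command should show httpd listening on port 8999, and curl should return an HTTP response header instead of a connection error.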

done.

Posted in Uncategorized | Leave a comment

selinux stopaudit

cat stopaudit

# collect the AVC denials for the given process/domain and generate a policy module ($1.te)
grep $1 /var/log/audit/audit.log | audit2allow -M $1
# turn the generated allow rules into dontaudit rules, so the denials are silenced instead of allowed
sed -i -e s/allow/dontaudit/ ${1}.te
# recompile, repackage and load the modified module
checkmodule -M -m -o $1.mod $1.te
semodule_package -o $1.pp -m $1.mod
semodule -i $1.pp

./stopaudit httpd

Posted in Uncategorized | Leave a comment

selinux allow script

cat allow.sh

grep $1 /var/log/audit/audit.log | audit2allow -M $1
checkmodule -M -m -o $1.mod $1.te
semodule_package -o $1.pp -m $1.mod
semodule -i $1.pp

./allow.sh httpd

Posted in Uncategorized | Leave a comment

Centos systemd

systemctl

# systemctl

Start/stop or enable/disable services

Activates a service immediately:

# systemctl start foo.service

Deactivates a service immediately:

# systemctl stop foo.service

Restarts a service:

# systemctl restart foo.service

Shows status of a service including whether it is running or not:

# systemctl status foo.service

Enables a service to be started on bootup:

# systemctl enable foo.service

Disables a service to not start during bootup:

# systemctl disable foo.service

Check whether a service is already enabled or not:

# systemctl is-enabled foo.service; echo $?

0 indicates that it is enabled. 1 indicates that it is disabled

How do I change the runlevel?

systemd has the concept of targets which is a more flexible replacement for runlevels in sysvinit.

Run level 3 is emulated by multi-user.target. Run level 5 is emulated by graphical.target. runlevel3.target is a symbolic link to multi-user.target and runlevel5.target is a symbolic link to graphical.target.

You can switch to ‘runlevel 3′ by running

# systemctl isolate multi-user.target (or) systemctl isolate runlevel3.target

You can switch to ‘runlevel 5′ by running

# systemctl isolate graphical.target (or) systemctl isolate runlevel5.target

How do I change the default runlevel?

systemd uses symlinks to point to the default runlevel. You have to delete the existing symlink first before creating a new one

# rm /etc/systemd/system/default.target

Switch to runlevel 3 by default

# ln -sf /lib/systemd/system/multi-user.target /etc/systemd/system/default.target

Switch to runlevel 5 by default

# ln -sf /lib/systemd/system/graphical.target /etc/systemd/system/default.target

systemd does not use /etc/inittab file.

List the current run level

The runlevel command still works with systemd, so you can continue using it. However, runlevels are a legacy concept in systemd: they are emulated via 'targets', and multiple targets can be active at the same time. So the equivalent in systemd terms is

# systemctl list-units --type=target

Powering off the machine

You can use

# poweroff

Some more possibilities are: halt -p, init 0, shutdown -P now

Note that halt used to work the same as poweroff in previous Fedora releases, but systemd distinguishes between the two, so halt without parameters now does exactly what it says – it merely stops the system without turning it off.

 

Service vs. systemd

# service NetworkManager stop

(or)

# systemctl stop NetworkManager.service

Chkconfig vs. systemd

# chkconfig NetworkManager off

(or)

# systemctl disable NetworkManager.service

Readahead

systemd has a built-in readahead implementation that is not enabled on upgrades. It should improve bootup speed, but your mileage may vary depending on your hardware. To enable it:

# systemctl enable systemd-readahead-collect.service
# systemctl enable systemd-readahead-replay.service

Posted in Uncategorized | Leave a comment

SELinux change DocumentRoot for Apache

Subject (processes) and Objects (files) have a security context.
(Process contexts are called domains, file contexts are called labels)

Context type
Apache uses a DocumentRoot that has “httpd_sys_content_t” as type.

ls -Zd /var/www/html
drwxr-xr-x. root root system_u:object_r:httpd_sys_content_t:s0 /var/www/html

Apache’s httpd runs with type httpd_t.

ps -efZ |grep httpd
system_u:system_r:httpd_t:s0 root 27356 1 0 11:30 ? 00:00:00 /usr/sbin/httpd -DFOREGROUND
(snipped)

With SELinux enabled, httpd is allowed to access /var/www/html.

When you create a new DocumentRoot, access to this directory will be denied because the new directory will not have the same context as the original DocumentRoot.

mkdir /newdocumentroot
ls -Zd /newdocumentroot
drwxr-xr-x. root root unconfined_u:object_r:default_t:s0 /newdocumentroot

Now, when you access the webserver, access will be denied.
Make sure httpd.conf has the following entries:

DocumentRoot “/newdocumentroot”
<Directory “/newdocumentroot”>
AllowOverride None
# Allow open access:
Require all granted
</Directory>

restart httpd:
service httpd restart

follow audit.log:

tail -f /var/log/audit/audit.log
type=AVC msg=audit(1480764099.856:1548): avc: denied { read } for pid=16337 comm=”httpd” name=”index.html” dev=”dm-1″ ino=28350272 scontext=system_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:default_t:s0 tclass=file

What is the content type that the new documentroot should have for httpd to be allowed to access it?

ls -Z /var/www/html

httpd_sys_content_t

to allow access, change the type to httpd_sys_content_t

chcon -Rt httpd_sys_content_t /newdocumentroot/
or run
chcon -Rv --type=httpd_sys_content_t /newdocumentroot
This recursively changes the context type of all files and directories in /newdocumentroot.
Files created in the directory afterwards inherit the directory's type.

ls -lZ /newdocumentroot
-rw-r–r–. root root unconfined_u:object_r:httpd_sys_content_t:s0 index.html

to restore the original context of /newdocumentroot (thus blocking access again):
restorecon -R /newdocumentroot
If you want your /newdocumentroot to still have the correct context after a restorecon, you should
make the context permanent with the following command.
semanage fcontext -a -t httpd_sys_content_t “/newdocumentroot(/.*)?”

Note: when you move an existing file to /newdocumentroot, it will not have the correct context.
You can run the following command to change it into httpd_sys_content_t

restorecon -R /newdocumentroot

Posted in Uncategorized | Leave a comment

netapp cpu domains

https://kb.netapp.com/support/s/article/faq-cpu-utilization-in-data-ontap-scheduling-and-monitoring?language=en_US

Posted in Uncategorized | Leave a comment

cdot RBAC example

(Thanks to Sander, Kees, Maarten and other students in some class in the Netherlands)

We want a user to be able to perform a particular task in an SVM.
We can use RBAC to configure this.

Example: We want to limit a user to qtree management in an SVM.

1. Create the SVM.
cluster91::*> vserver create -vserver nfs_2 -subtype default -rootvolume rv -aggregate n1_aggr1 -rootvolume-security-style unix

2. Create an interface to access the SVM.
cluster91::*> net int create -vserver nfs_2 -lif lif1 -role data -data-protocol nfs -home-node cluster91-01 -home-port e0d -address 192.168.4.205 -netmask 255.255.255.0 -status-admin up

3. Allow ssh access.
cluster91::*> net int modify -vserver nfs_2 -lif lif1 -firewall-policy mgmt

4. Create a role that gives access to all qtree actions.
cluster91::*> security login role create -role qtree_only -cmddirname “volume qtree” -access all -vserver nfs_2

5. Create a user
cluster91::*> security login create -user-or-group-name q_creator -application ssh -authentication-method password -role qtree_only -vserver nfs_2

6. Finally create the volume.
cluster91::*> vol create -vserver nfs_2 -volume q_vol -aggregate n1_aggr1 -size 100m

Now we log in and try some commands. We can only do qtree stuff.

[root@puck .ssh]# ssh q_creator@192.168.4.205
Password:*******
nfs_2::> vol show

Error: “show” is not a recognized command

nfs_2::> qtree create -volume q_vol -qtree q2

nfs_2::> qtree delete -volume q_vol -qtree q2

Warning: Are you sure you want to delete qtree “q2″ in volume “q_vol” Vserver
“nfs_2″? {y|n}: y
[Job 48] Job is queued: Delete qtree q2 in volume q_vol Vserver nfs_2.
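
From the cluster shell you can review what the role allows (a quick check):

cluster91::*> security login role show -vserver nfs_2 -role qtree_only

This should list the "volume qtree" command directory with access level "all".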

Posted in Uncategorized | Leave a comment

ontap9 svmdr

Set up , manage and delete an svmdr pair.

Present:
Two clusters – cluster1 and cluster2
snapmirror license on both clusters
peering of the two clusters

cluster1 will be the source cluster2 will be the destination.
vserver src (source) and dest (destination).

1. create vserver src on cluster1.

cluster1::> vserver create -vserver src -subtype default -rootvolume rv -aggregate n1_aggr1 -rootvolume-security-style unix

2. create a datavolume for src.

cluster1::> vol create -vserver src -volume data -aggregate n1_aggr1 -size 500m -state online -junction-path /data

3. create destination vserver on cluster2.

cluster2::> vserver create -vserver dest -subtype dp-destination

4. peer the two vservers.

cluster2::> vserver peer create -vserver dest -peer-vserver src -applications snapmirror -peer-cluster cluster1

cluster1::> vserver peer accept -vserver src -peer-vserver dest

5. create a snapmirror relationship.

cluster2::> snapmirror create -source-vserver src -destination-vserver dest -type DP -schedule hourly -identity-preserve true

cluster2::> snapmirror initialize -destination-path dest:

6. check the vservers

cluster1::> vserver show

src data default running running rv n1_aggr1

cluster2::> vserver show

dest data dp-destination stopped running - -

Perform a failover:

1. stop the source vserver.

cluster1::> vserver stop -vserver src

2. run a snapmirror update.

cluster2::> snapmirror update -destination-path dest:

3. quiesce the relationship and break it.

cluster2::> snapmirror quiesce -destination-path dest:
cluster2::> snapmirror break -destination-path dest:

4. start the vserver

cluster2::> vserver start -vserver dest

5. create a new relationship

cluster1::> snapmirror create -source-path dest: -destination-path src: -type DP -throttle unlimited -identity-preserve true

6. resync the relationship.

cluster1::> snapmirror resync -destination-path src:

7. stop the destination vserver

cluster2::> vserver stop -vserver dest

8. update the source , quiesce and break.

cluster1::> snapmirror update src:
cluster1::> snapmirror quiesce -destination-path src:
cluster1::> snapmirror break -destination-path src:

9. start the source vserver.

cluster1::> vserver start -vserver src

10. resync the original relationship.

cluster2::> snapmirror resync -destination-path dest:

11. delete the temporary relationship.

cluster1::> snapmirror delete -destination-path src:

12. release the destination.

cluster2::> snapmirror release -destination-path src:

Get rid of everything.

cluster2::> snapmirror delete -destination-path dest:
cluster1::> snapmirror release dest:
cluster2::> vol offline -vserver dest -volume data
cluster2::> snapmirror break -destination-path dest:
cluster2::> vol destroy -vserver dest -volume data
cluster2::> vol offline -vserver dest -volume rv
cluster2::> vol destroy -vserver dest -volume rv
cluster2::> vserver peer delete -vserver dest -peer-vserver src
cluster1::> vol offline data -vserver src
cluster1::> vol destroy -vserver src -volume data
cluster1::> vol offline -vserver src rv
cluster1::> vol destroy -vserver src rv
cluster1::> vserver delete src
cluster2::> vserver delete -vserver dest

Posted in Uncategorized | Leave a comment

cdot snapmirror exercise with commands

1. Create a new SVM (SVMcl1) on cluster1 and add a LIF to the SVM: 192.168.0.200

vserver create -vserver SVMcl1 -rootvolume root -aggregate aggr1_n1 -rootvolume-security-style unix
net int create -vserver SVMcl1 -lif lif1 -address 192.168.0.200 -netmask 255.255.255.0 -role data \
-home-node cluster1-02 -data-protocol nfs -home-port e0c

2. Add a 500MB volume (cl1data) to the SVM with junction-path: /data

vol create -vserver SVMcl1 -volume cl1data -aggregate aggr1_n2 -size 500m -junction-path /data

3. Add an export-policy with a rule for the CentOS VM: (192.168.0.10)
Connect the policy to the SVMcl1 root-volume and data-volume
Enable NFS for SVMcl1

export-policy create -vserver SVMcl1 -policyname svmcl1pol
export-policy rule create -vserver SVMcl1 -policyname svmcl1pol -clientmatch 192.168.0.10 \
-rorule any -rwrule any -superuser any
vol modify -vserver SVMcl1 -volume * -policy svmcl1pol
nfs on -vserver SVMcl1

4. Create a new directory on CentOS to be used as an NFS mountpoint: /svmdata
Mount 192.168.0.200:/data /svmdata

mkdir /svmdata
mount 192.168.0.200:/data /svmdata

5. On CentOS: copy all files from /var/log to /svmdata.

cp /var/log/* /svmdata

6. Create a new SVM (SVMcl2) on cluster2 and add a Lif to the SVM: 192.168.0.201

vserver create -vserver SVMcl2 -rootvolume root -aggregate aggr1_n1 \
-rootvolume-security-style unix
net int create -vserver SVMcl2 -lif lif1 -address 192.168.0.201 -netmask 255.255.255.0 -role data \
-home-node cluster2-01 -data-protocol nfs -home-port e0c

7. Add a 500MB volume to the SVM. The VolumeType should be DP: mirdata
You cannot mount the volume because its type is DP.
Add an export-policy with a rule for the CentOS VM: (192.168.0.10)

vol create -vserver SVMcl2 -volume cl2data -aggregate aggr1_n1 -size 500m -type DP
export-policy create -vserver SVMcl2 -policyname svmcl2pol
export-policy rule create -vserver SVMcl2 -policyname svmcl2pol -clientmatch 192.168.0.10 \
-rorule any -rwrule any -superuser any
vol modify -vserver SVMcl2 -volume * -policy svmcl2pol
nfs on -vserver SVMcl2

8. Set up a peering relationship between SVMcl1 and SVMcl2.
In a previous exercise you already created a peering relationship between
the two clusters.

vserver peer create -peer-cluster cluster2 -vserver SVMcl1 -peer-vserver SVMcl2 -applications snapmirror

9. Create a snapmirror relationship between SVMcl1:data and SVMcl2:mirdata. Type: DP.

snapmirror create -source-path SVMcl1:cl1data -destination-path SVMcl2:cl2data -type DP

10. Initialize the relationship.

snapmirror initialize -destination-path SVMcl2:cl2data

11. Make sure the snapmirror update runs every 5 minutes.

snapmirror modify -destination-path SVMcl2:cl2data -schedule 5min

12. Mount SVMcl2:mirdata to junction-path /mirdata

vol mount -vserver SVMcl2 -volume cl2data -junction-path /mirdata

13. On CentOS: create a mountpoint (/svmmirdata) and mount the mirdata volume to /svmmirdata

mkdir /svmmirdata
mount 192.168.0.201:/mirdata /svmmirdata

You should see the files you copied in task 5.

ls /svmmirdata
(output skipped)

14. On CentOS: copy /etc/hosts to /mirdata.
This should fail.

cp /etc/hosts /svmmirdata
cp: cannot create regular file `/svmmirdata/hosts’: Read-only file system

15. Break the snapmirror relationship.

snapmirror break -destination-path SVMcl2:cl2data

16. On CentOS: copy /etc/hosts to /mirdata.
This should succeed.

cp /etc/hosts /svmmirdata

17. reverse the snapmirror relationship.

snapmirror resync -destination-path SVMcl1:cl1data -source-path SVMcl2:cl2data

Warning: All data newer than Snapshot copy
snapmirror.af989dfc-7aa7-11e6-acd4-123478563412_2147484697.2016-09-14_2
03500 on volume SVMcl1:cl1data will be deleted.
Do you want to continue? {y|n}: y

Posted in Uncategorized | Leave a comment

cdot snapmirror exercise without commands

1. Create a new SVM (SVMcl1) on cluster1 and add a LIF to the SVM: 192.168.0.200

2. Add a 500MB volume (cl1data) to the SVM with junction-path: /data

3. Add an export-policy with a rule for the CentOS VM: (192.168.0.10)
Connect the policy to the SVMcl1 root-volume and data-volume
Enable NFS for SVMcl1

4. Create a new directory on CentOS to be used as an NFS mountpoint: /svmdata
Mount 192.168.0.200:/data /svmdata

5. On CentOS: copy all files from /var/log to /svmdata.

6. Create a new SVM (SVMcl2) on cluster2 and add a Lif to the SVM: 192.168.0.201

7. Add a 500MB volume to the SVM. The VolumeType should be DP: mirdata
You cannot mount the volume because its type is DP.
Add an export-policy with a rule for the CentOS VM: (192.168.0.10)

8. Set up a peering relationship between SVMcl1 and SVMcl2.
In a previous exercise you already created a peering relationship between
the two clusters.

9. Create a snapmirror relationship between SVMcl1:data and SVMcl2:mirdata. Type: DP.

10. Initialize the relationship.

11. Make sure the snapmirror update runs every 5 minutes.

12. Mount SVMcl2:mirdata to junction-path /mirdata

13. On CentOS: create a mountpoint (/svmmirdata) and mount the mirdata volume to /svmmirdata
You should see the files you copied in task 5.

14. On CentOS: copy /etc/hosts to /mirdata.
This should fail.

15. Break the snapmirror relationship.

16. On CentOS: copy /etc/hosts to /mirdata.
This should succeed.

17. reverse the snapmirror relationship.

Posted in Uncategorized | Leave a comment

redhat clustering

Redhat Clustering

Posted in Uncategorized | Leave a comment

cdot cifs create error

This error can occur when setting up a CIFS SVM.
Prerequisite: the time difference between the DNS/AD server and the cluster is no more than 5 minutes.

cl1::*> cifs create -vserver cifs_sales -cifs-server CIFS_SALES -domain netapp.local -ou CN=Computers -default-site "" -status-admin up

In order to create an Active Directory machine account for the CIFS server, you must supply the name and password of a Windows account with sufficient
privileges to add computers to the “CN=Computers” container within the “NETAPP.LOCAL” domain.

Enter the user name: administrator

Enter the password:

Error: Machine account creation procedure failed
[ 0 ms] Trying to create machine account ‘CIFS_SALES’ in domain
‘NETAPP.LOCAL’ for virtual server ‘cifs_sales’
[ 2] Unable to connect to any of the provided DNS servers
[ 2] Cannot find any domain controllers using DNS queries for
‘NETAPP.LOCAL’;verify the domain name and the node’s DNS
configuration
[ 3] Unable to connect to any of the provided DNS servers
[ 3] No servers found in DNS lookup to 192.168.4.247 for
_ldap._tcp.NETAPP.LOCAL.
[ 3] No servers available for MS_LDAP_AD, vserver: 9, domain:
NETAPP.LOCAL.
[ 3] Cannot find any domain controllers;verify the domain name
and your DNS configuration
**[ 3] FAILURE: Failed to find a domain controller

Error: command failed: Failed to create the Active Directory machine account “CIFS_SALES”. Reason: SecD Error: Cannot find an appropriate domain
controller.

Problem may be solved by adding a preferred-dc to the SVM:

cl1::*> domain preferred-dc add -vserver cifs_sales -domain netapp.local -preferred-dc 192.168.4.247

Posted in Uncategorized | Leave a comment

cdot 8.3 domaintunnel

(Elkin)

## clustermode domain-tunnel ##

# To have the administrator of the AD domain login to the administration vserver and manage the cluster, create a domain-tunnel.
# The name of the admin-vserver:

security login create -user-or-group-name netapp\administrator -application ssh -authmethod domain -role admin

# create a vserver with vserver setup

vservername: domain_tunnel_01
Vserver Type: data
Allowed Protocols: cifs
domainname: netapp.local
nameserver: 192.168.4.244
cifs servername: domain_tun_01
active directory domain: netapp.local
ad adminname: administrator
ad adminpassword: *******

vserver create -vserver -subtype default -rootvolume -aggregate -rootvolume-security-style mixed

# Create a network interface for the vserver for ouside communication
network interface create -vserver -lif -role data -data-protocol cifs -home-node -home-port -address -netmask -status-admin up

# Configure DNS configuration
dns create -domains -vserver -name-servers -state enabled

# Create a CIFS share
cifs create -vserver -cifs-server -domain -status-admin up

# Create the Domain-Tunnel
security login domain-tunnel create

# Testing the configuration
On windows in cmd-tool: plink netapp\administrator@

Posted in Uncategorized | Leave a comment

cdot simulator new root aggregate

Situation Simulator.

Cluster: cl1
Node1 : cl1-01
Node2 : cl1-02

cl1::*> set d
cl1::*> cluster show
Node Health Eligibility Epsilon
——————– ——- ———— ————
cl1-01 true true true
cl1-02 true true false
2 entries were displayed.

Node1 (cl1-01) is ok and has epsilon.
Node2 (cl1-02) will get new root aggregate.

1. create a new rootaggr:
cl1::> set d
cl1::*> aggr create newroot -diskcount 3 -force-small-aggregate true \
-nodes cl1-02

cl1::*> reboot -node cl1-02 -inhibit-takeover true

Make the “newroot” aggr the new rootaggregate.
2. Enter the BootMenu, choose option 5 (maintenance).
*> aggr options newroot ha_policy cfo
*> aggr options newroot root
*> halt
Press enter to start the vm again.

Resize the new rootvolume AUTOROOT (the actual size now is 25MB)
3. Login.
cl1::> node run -node cl1-02
cl1-02> vol size AUTOROOT +500m
cl1-02> exit
cl1-02::> halt -node cl1-02

4. Reset the recovery flag at the loader prompt and boot the node.
VLOADER*> unsetenv bootarg.init.boot_recovery
VLOADER*> boot_ontap

5. Add the AUTOROOT volume to VLDB
cl1::debug vreport*> volume add-other-volume -node cl1-02
cl1::debug vreport*> volume add-other-volume -node cl1-01

6. Remove the old rootvolume and rootaggregate
cl1::*> node run -node cl1-02
cl1-02> vol offline vol0
cl1-02> vol destroy vol0
cl1-02> exit
cl1-02*> aggr delete -aggregate aggr0_n2

Posted in Uncategorized | Leave a comment

CDOT config-backups

How to upload a CDOT config backup from the filer to an FTP server.

Requirements.
Running FTP server in network.
IP of ftpserver.
Username of ftpuser.
Password of ftpuser.

Login to Clustermanagement lif.

1. ssh admin@192.168.4.100

Go to priv or diag mode.

2. set d

Change backup settings.

3. configuration backup settings modify -destination ftp://192.168.4.160/ \
-username ftpuser -numbackups1 2 -numbackups2 2 -numbackups3 2

Note: 2 8hourly backups 2 daily backups 2 weekly backups

Set the password for uploading.

4. configuration backup settings set-password
(enter the password twice)
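
Verify the settings (a quick check; the full command directory is "system configuration backup"):

5. system configuration backup settings show
(shows the destination URL, the username and the numbackups values)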

Done.

Posted in Uncategorized | Leave a comment

cdot simulator single node setup

Single Node Cluster (sort of)

Posted in Uncategorized | Leave a comment

cdot 8.3 networking diagram

cdot networking 8.3

Posted in Uncategorized | Leave a comment

cdot metrocluster switchback

metrocluster heal -phase aggregates
metrocluster heal -phase root-aggregates
metrocluster switchback

Posted in Uncategorized | Leave a comment

vaai

vaai for netapp

Posted in Uncategorized | Leave a comment

cdot 8.3.1 licenses

CLUSTERED SIMULATE ONTAP LICENSES
+++++++++++++++++++++++++++++++++

These are the licenses that you use with the clustered Data ONTAP version
of Simulate ONTAP to enable Data ONTAP features.

There are four groups of licenses in this file:

– cluster base license
– feature licenses for the ESX build
– feature licenses for the non-ESX build
– feature licenses for the second node of a 2-node cluster

Cluster Base License (Serial Number 1-80-000008)
================================================

You use the cluster base license when setting up the first simulator in a cluster.

Cluster Base license = SMKQROWJNQYQSDAAAAAAAAAAAAAA

Clustered Data ONTAP Feature Licenses
=====================================

You use the feature licenses to enable unique Data ONTAP features on your simulator.

Licenses for the ESX build (Serial Number 4082368511)
—————————————————–

Use these licenses with the VMware ESX build.

Feature License Code Description
——————- —————————- ——————————————–

CIFS CAYHXPKBFDUFZGABGAAAAAAAAAAA CIFS protocol
FCP APTLYPKBFDUFZGABGAAAAAAAAAAA Fibre Channel Protocol
FlexClone WSKTAQKBFDUFZGABGAAAAAAAAAAA FlexClone
Insight_Balance CGVTEQKBFDUFZGABGAAAAAAAAAAA OnCommand Insight and Balance products
iSCSI OUVWXPKBFDUFZGABGAAAAAAAAAAA iSCSI protocol
NFS QFATWPKBFDUFZGABGAAAAAAAAAAA NFS protocol
SnapLock UHGXBQKBFDUFZGABGAAAAAAAAAAA SnapLock Compliance
SnapLock_Enterprise QLXEEQKBFDUFZGABGAAAAAAAAAAA SnapLock Enterprise
SnapManager GCEMCQKBFDUFZGABGAAAAAAAAAAA SnapManager and SnapDrive products
SnapMirror KYMEAQKBFDUFZGABGAAAAAAAAAAA SnapMirror, including synchronous SnapMirror
SnapProtect SWBBDQKBFDUFZGABGAAAAAAAAAAA SnapProtect Applications
SnapRestore YDPPZPKBFDUFZGABGAAAAAAAAAAA SnapRestore
SnapVault INIIBQKBFDUFZGABGAAAAAAAAAAA SnapVault primary and secondary

Licenses for the non-ESX build (Serial Number 4082368507)
———————————————————

Use these licenses with the VMware Workstation, VMware Player, and VMware Fusion build.

Feature License Code Description
——————- —————————- ——————————————–

CIFS YVUCRRRRYVHXCFABGAAAAAAAAAAA CIFS protocol
FCP WKQGSRRRYVHXCFABGAAAAAAAAAAA Fibre Channel Protocol
FlexClone SOHOURRRYVHXCFABGAAAAAAAAAAA FlexClone
Insight_Balance YBSOYRRRYVHXCFABGAAAAAAAAAAA OnCommand Insight and Balance products
iSCSI KQSRRRRRYVHXCFABGAAAAAAAAAAA iSCSI protocol
NFS MBXNQRRRYVHXCFABGAAAAAAAAAAA NFS protocol
SnapLock QDDSVRRRYVHXCFABGAAAAAAAAAAA SnapLock Compliance
SnapLock_Enterprise MHUZXRRRYVHXCFABGAAAAAAAAAAA SnapLock Enterprise
SnapManager CYAHWRRRYVHXCFABGAAAAAAAAAAA SnapManager and SnapDrive products
SnapMirror GUJZTRRRYVHXCFABGAAAAAAAAAAA SnapMirror, including synchronous SnapMirror
SnapProtect OSYVWRRRYVHXCFABGAAAAAAAAAAA SnapProtect Applications
SnapRestore UZLKTRRRYVHXCFABGAAAAAAAAAAA SnapRestore
SnapVault EJFDVRRRYVHXCFABGAAAAAAAAAAA SnapVault primary and secondary

Licenses for the second node in a cluster (Serial Number 4034389062)
——————————————————————–

Use these licenses with the second simulator in a cluster (either the ESX or non-ESX build).

Feature License Code Description
——————- —————————- ——————————————–

CIFS MHEYKUNFXMSMUCEZFAAAAAAAAAAA CIFS protocol
FCP KWZBMUNFXMSMUCEZFAAAAAAAAAAA Fibre Channel Protocol
FlexClone GARJOUNFXMSMUCEZFAAAAAAAAAAA FlexClone
Insight_Balance MNBKSUNFXMSMUCEZFAAAAAAAAAAA OnCommand Insight and Balance products
iSCSI YBCNLUNFXMSMUCEZFAAAAAAAAAAA iSCSI protocol
NFS ANGJKUNFXMSMUCEZFAAAAAAAAAAA NFS protocol
SnapLock EPMNPUNFXMSMUCEZFAAAAAAAAAAA SnapLock Compliance
SnapLock_Enterprise ATDVRUNFXMSMUCEZFAAAAAAAAAAA SnapLock Enterprise
SnapManager QJKCQUNFXMSMUCEZFAAAAAAAAAAA SnapManager and SnapDrive products
SnapMirror UFTUNUNFXMSMUCEZFAAAAAAAAAAA SnapMirror, including synchronous SnapMirror
SnapProtect CEIRQUNFXMSMUCEZFAAAAAAAAAAA SnapProtect Applications
SnapRestore ILVFNUNFXMSMUCEZFAAAAAAAAAAA SnapRestore
SnapVault SUOYOUNFXMSMUCEZFAAAAAAAAAAA SnapVault primary and secondary

Posted in Uncategorized | Leave a comment

Cdot Advanced Drive Partitioning how to disable

In order to disable auto-partitioning (ADP), set the following bootargs at the LOADER prompt before initial setup:
1. Run the following command to disable HDD auto-partitioning:
setenv root-uses-shared-disks? false

2. Run the following command to disable SSD storage pool partitioning:
setenv allow-ssd-partitions? false

3. Run the following command to disable All Flash AFF auto-partitioning:
setenv root-uses-shared-ssds? false

Posted in Uncategorized | Leave a comment

Restoring a vm from a Netapp Snapmirror (DP/XDP) destination

2015 peter van der weerd

Restoring a vm from a Netapp Snapmirror (DP/XDP) destination.

1. List the available snapshots on mirrordestination.

cl1::> snap show -vserver nfs1 -volume linvolmir
—Blocks—
Vserver Volume Snapshot State Size Total% Used%
——– ——- ——————————- ——– ——– —— —–
nfs1 linvolmir
snapmirror.b640aac9-d77a-11e3-9cae-123478563412_2147484711.2015-10-13_075517 valid 2.26MB 0% 0%
VeeamSourceSnapshot_linuxvm.2015-10-13_1330 valid 0B 0% 0%
2 entries were displayed.

2. Create a flexclone.

cl1::> vol clone create -vserver nfs1 -flexclone clonedmir -junction-path /clonedmir -parent-volume linvolmir -parent-snapshot VeeamSourceSnapshot_linuxvm.2015-10-13_1330
(volume clone create)
[Job 392] Job succeeded: Successful

3. Connect the correct export-policy to the new volume.

cl1::> vol modify -vserver nfs1 -volume clonedmir -policy data

The rest is done on the ESXi server.

4. Mount the datastore to ESXi.

~ # esxcfg-nas -a clonedmir -o 192.168.4.103 -s /clonedmir
Connecting to NAS volume: clonedmir
clonedmir created and connected.

5. Register the VM and note the ID of the VM.

~ # vim-cmd solo/registervm /vmfs/volumes/clonedmir/linux/linux.vmx
174

6. Power on the VM.

~ # vim-cmd vmsvc/power.on 174
Powering on VM:

7. Your prompt will not return until you answer the question about moving or copying.
Open a new session to ESXi, and list the question.

~ # vim-cmd vmsvc/message 174
Virtual machine message _vmx1:
msg.uuid.altered:This virtual machine might have been moved or copied.
In order to configure certain management and networking features, VMware ESX needs to know if this virtual machine was moved or copied.

If you don’t know, answer “I copied it”.
0. Cancel (Cancel)
1. I moved it (I moved it)
2. I copied it (I copied it) [default]
Answer the question.
The VMID is “174″, the MessageID is “_vmx1″ and the answer to the question is “1″

~ # vim-cmd vmsvc/message 174 _vmx1 1

Now the VM is started fully.

Just the commands

cl1::> snap show -vserver nfs1 -volume linvolmir
cl1::> vol clone create -vserver nfs1 -flexclone clonedmir -junction-path /clonedmir -parent-volume linvolmir -parent-snapshot VeeamSourceSnapshot_linuxvm.2015-10-13_1330
cl1::> vol modify -vserver nfs1 -volume clonedmir -policy data
~ # esxcfg-nas -a clonedmir -o 192.168.4.103 -s /clonedmir
~ # vim-cmd solo/registervm /vmfs/volumes/clonedmir/linux/linux.vmx
~ # vim-cmd vmsvc/power.on 174
~ # vim-cmd vmsvc/message 174
~ # vim-cmd vmsvc/message 174 _vmx1 1

Posted in Uncategorized | Leave a comment

cdot snmp

original link

Enabling SNMP, API access on NetApp cluster mode SVMs
In order to get complete monitoring of, and be able delegate access to, Storage Virtual Machines on NetApp Cluster mode, it is necessary to add the SVMs as separate devices, and enable both SNMP and API access on the SVM itself.

The steps required to do so are:

add an SNMP community for the SVM
ensure SNMP is allowed by the firewall configuration of the SVM's interface: determine the interface used by the SVM and its firewall policy, and amend the policy if needed
enable API access by allowing API access through the SVM firewall, and creating an API user.
In the following example, we will enable access on the images server.

To enable SNMP
First, we can check the current SNMP configuration:

scenariolab::> system snmp community show
scenariolab
ro Logically
Add SNMP community for the SVM (server) images:

scenariolab::> system snmp community add -type ro -community-name Logical -vserver images
Confirm SNMP configuration:

scenariolab::> system snmp community show
images
ro Logical

scenariolab
ro Logically
You can determine the firewall policy used by the interface for a vserver with the following command:

network interface show -fields firewall-policy
vserver lif firewall-policy
——- —- —————
foo lif2 data
images lif1 data
You can then determine if the policy for the server in question (images, using the data policy in our case) allows snmp:

scenariolab::> firewall policy show -service snmp

(system services firewall policy show)

Policy Service Action IP-List

—————- ———- —— ——————–

cluster snmp allow 0.0.0.0/0

data snmp deny 0.0.0.0/0

intercluster snmp deny 0.0.0.0/0

mgmt snmp allow 0.0.0.0/0
As the data policy does not allow SNMP, we could either amend the firewall policy, or create a new one. In this case, we will create a new firewall policy:

system services firewall policy create -policy data1 -service snmp -action allow -ip-list 0.0.0.0/0

scenariolab::> firewall policy show -service snmp

(system services firewall policy show)

Policy Service Action IP-List

—————- ———- —— ——————–

cluster snmp allow 0.0.0.0/0

data snmp deny 0.0.0.0/0

data1 snmp allow 0.0.0.0/0

intercluster snmp deny 0.0.0.0/0

mgmt snmp allow 0.0.0.0/0
We can now assign new policy to the interface used by the vserver images (lif1):

network interface modify -vserver images -lif lif1 -firewall-policy data1
SNMP is now enabled

API
To enable API access the SVM, we must allow HTTP/HTTPS access through the firewall policy used by the SVM’s interfaces.

These commands add HTTP and HTTPS access to the new firewall policy we created above, that is already applied to the interface for the vserver images.

system service firewall policy create -policy data1 -service http -action allow -ip-list 0.0.0.0/0
system service firewall policy create -policy data1 -service https -action allow -ip-list 0.0.0.0/0
Now we just need to create an API user in the context of this vserver:

security login create -username logicmonitor -application ontapi -authmethod password -vserver images -role vsadmin
You can now add the SVM as a host to LogicMonitor. You should define the snmp.community, netapp.user, and netapp.pass properties for the host to allow access.

Posted in Uncategorized | Leave a comment

cdot read logfiles using a browser

cl1::*> security login create -username log -application http -authmethod password

Please enter a password for user ‘log’:
Please enter it again:

cl1::*> vserver service web modify -vserver * -name spi -enabled true

Warning: The service ‘spi’ depends on: ontapi. Enabling ‘spi’ will enable all of its prerequisites.
Do you want to continue? {y|n}: y
3 entries were modified.

cl1::*> vserver services web access create -name spi -role admin -vserver cl1
cl1::*> vserver service web access create -name compat -role admin -vserver cl1

https://*cluster_mgmt_ip*/spi/*nodename*/etc/log

Posted in Uncategorized | Leave a comment

cdot setup events

#setup smtpserver and sender, smtpserver should be reachable.

event config modify -mailserver smtp.online.nl -mailfrom petervanderweerd@gmail.com

#check your settings

event config show
Mail From: petervanderweerd@gmail.com
Mail Server: smtp.online.nl

#create destination for critical messages. the recipient will receive

#everything sent to the destination

event destination create -name critical_messages -mail p.w@planet.nl

#add message severity to destination. all messages with the

#particular severity will be sent to the destination

event route add-destinations {-severity <=CRITICAL} -destinations critical_messages
#check your settings

event route show -severity <=CRITICAL

#prevent flooding example, the message will be sent once per hour max.

event route modify -messagename bootfs.varfs.issue -timethreshold 3600

#test
set d

event generate -messagename cpeer.unavailable -values "hello"

Posted in Uncategorized | Leave a comment

cdot autosupport test

cluster1::> system node autosupport invoke -type test -node node1

Posted in Uncategorized | Leave a comment

Flexpod UCS

UCS

Posted in Uncategorized | Leave a comment

cdot convert snapmirror to snapvault

Steps
Break the data protection mirror relationship by using the snapmirror break command.

The relationship is broken and the disaster protection volume becomes a read-write volume.

Delete the existing data protection mirror relationship, if one exists, by using the snapmirror delete command.

Remove the relationship information from the source SVM by using the snapmirror release command.
(This also deletes the Data ONTAP created Snapshot copies from the source volume.)

Create a SnapVault relationship between the primary volume and the read-write volume by using the snapmirror create command with the -type XDP parameter.

Convert the destination volume from a read-write volume to a SnapVault volume and establish the SnapVault relationship by using the snapmirror resync command.
(Warning: all data newer than the snapmirror.xxxxxx snapshot copy will be lost and also: the snapvault destination should
not be the source of another snapvault relationship)
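
A minimal command sketch of these steps, assuming source svm1:vol1 and destination svm2:vol1_dst (the names, and the XDPDefault policy, are placeholders; adjust them to your environment):

cluster2::> snapmirror break -destination-path svm2:vol1_dst
cluster2::> snapmirror delete -destination-path svm2:vol1_dst
cluster1::> snapmirror release -destination-path svm2:vol1_dst
cluster2::> snapmirror create -source-path svm1:vol1 -destination-path svm2:vol1_dst -type XDP -policy XDPDefault
cluster2::> snapmirror resync -destination-path svm2:vol1_dst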

Posted in Uncategorized | Leave a comment

cdot mhost troubleshoot

1. go to the systemshell
set diag
systemshell -node cl1-01

2. unmount mroot
cd /etc
./netapp_mroot_unmount
logout

3. run cluster show a couple of times and see that health is false
cluster show

4. run cluster ring show to see that M-host is offline
cluster ring show
Node UnitName Epoch DB Epoch DB Trnxs Master Online
——— ——– ——– ——– ——– ——— ———
cl1-01 mgmt 6 6 699 cl1-01 master
cl1-01 vldb 7 7 84 cl1-01 master
cl1-01 vifmgr 9 9 20 cl1-01 master
cl1-01 bcomd 7 7 22 cl1-01 master
cl1-02 mgmt 0 6 692 – offline
cl1-02 vldb 7 7 84 cl1-01 secondary
cl1-02 vifmgr 9 9 20 cl1-01 secondary
cl1-02 bcomd 7 7 22 cl1-01 secondary

5. try to create a volume and see that the status of the aggregate
cannot be determined if you pick the aggregate from the broken M-host.

6. now vldb will also be offline.

7. remount mroot by starting mgwd from the systemshell
set diag
systemshell -node cl1-01
/sbin/mgwd -z &

8. when you run cluster ring show it should show vldb offline
cl1::*> cluster ring show
Node UnitName Epoch DB Epoch DB Trnxs Master Online
——— ——– ——– ——– ——– ——— ———
cl1-01 mgmt 6 6 738 cl1-01 master
cl1-01 vldb 7 7 87 cl1-01 master
cl1-01 vifmgr 9 9 24 cl1-01 master
cl1-01 bcomd 7 7 22 cl1-01 master
cl1-02 mgmt 6 6 738 cl1-01 secondary
cl1-02 vldb 0 7 84 – offline
cl1-02 vifmgr 0 9 20 – offline
cl1-02 bcomd 7 7 22 cl1-01 secondary

Watch vifmgr has gone bad as well.

9. start vldb by running spmctl -s -h vldb
or run /sbin/vldb
in this case, do the same for vifmgr.

(this will open the databases again and the cluster will be healthy)

Posted in Uncategorized | Leave a comment

cdot lif troubleshoot

1. create a lif

net int create -vserver nfs1 -lif tester -role data -data-protocol cifs,nfs,fcache -home-node cl1-01 -home-port e0c -address 1.1.1.1 -netmask 255.0.0.0 -status-admin up

2. go to diag mode
set diag

3. view the owner of the new lif and delete the owner of the new lif
net int ids show -owner nfs1
net int ids delete -owner nfs1 -name tester
net int ids show -owner nfs1

4. run net int show and see that the lif is not there.
net int show

5. try to create the same lif again.
net int create -vserver nfs1 -lif tester -role data -data-protocol cifs,nfs,fcache -home-node cl1-01 -home-port e0c -address 1.1.1.1 -netmask 255.0.0.0 -status-admin up

(this will fail because the lif is still there, but has no owner)

6. debug the vifmgr table
debug smdb table vifmgr_virtual_interface show -role data -fields lif-name,lif-id
(this will show you the node, the lif-id and the lif-name)

7. using the lif-id from the previous output, delete the lif entry.
debug smdb table vifmgr_virtual_interface delete -node cl1-01 -lif-id 1030

8. see that the lif is gone.
debug smdb table vifmgr_virtual_interface show -role data -fields lif-name,lif-id

9. create the lif.
net int create -vserver nfs1 -lif tester -role data -data-protocol cifs,nfs,fcache -home-node cl1-01 -home-port e0c -address 1.1.1.1 -netmask 255.0.0.0 -status-admin up

Posted in Uncategorized | Leave a comment

CDOT 8.3 release notes

http://mysupport.netapp.com/documentation/docweb/index.html?productID=61898

Posted in Uncategorized | Leave a comment

openstack compute node on centos7

yum install -y https://rdo.fedorapeople.org/rdo-release.rpm

Posted in Uncategorized | Leave a comment

openstack storagebackends

storage backends

Posted in Uncategorized | Leave a comment

openstack lvm and cinder

If you do not specify a volume group, cinder will create its own volume group
called cinder-volumes and use loopback devices for physical volumes.
If you do create and specify a volume group, you should specify the volume group
to be the default LVM volume group for cinder.

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
in cinder.conf
[Default]
volume_group=small
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver

#
# Options defined in cinder.volume.drivers.lvm
#

# Name for the VG that will contain exported volumes (string
# value)
volume_group=small

[lvm]
volume_group=small
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

fdisk -l | grep sd
Disk /dev/sda: 17.2 GB, 17179869184 bytes, 33554432 sectors
/dev/sda1 * 2048 1026047 512000 83 Linux
/dev/sda2 1026048 33554431 16264192 8e Linux LVM
Disk /dev/sdb: 1073 MB, 1073741824 bytes, 2097152 sectors

pvcreate /dev/sdb
vgcreate small /dev/sdb

openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_group small
openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.lvm.LVMISCSIDriver

cinder create --display-name myvolume 1
list the id of the volume
cinder list
list the id’s of the instances
nova list

attach the volume to an instance
nova volume-attach ba3ecb1c-53d4-4bea-b3db-d4099e204484 7d9034a8-7dce-447d-951b-5d4f7b947897 auto

On the instance you can use /dev/vdb as a local drive.
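
Inside the instance you could then, for example, create a filesystem on the new disk and mount it (a minimal sketch run in the guest):

mkfs.ext4 /dev/vdb
mkdir /data
mount /dev/vdb /data
df -h /data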

Posted in Uncategorized | Leave a comment

CDOT 8.3 statistics catalog example

statistics catalog instance show -object lif
statistics catalog instance show -object volume

statistics catalog counter show -object lif
statistics catalog counter show -object volume

statistics start -object lif -counter recv_data
statistics stop
statistics show -object lif

Posted in Uncategorized | Leave a comment

CDOT 8.3 initiator ip in session

To view the ip-addresses in an iSCSI session:

cdot83::qos> iscsi session show -vserver iscsi -t

Vserver: iscsi
Target Portal Group: o9oi
Target Session ID: 2
Connection ID: 1
Connection State: Full_Feature_Phase
Connection Has session: true
Logical interface: o9oi
Target Portal Group Tag: 1027
Local IP Address: 192.168.4.206
Local TCP Port: 3260
Authentication Type: none
Data Digest Enabled: false
Header Digest Enabled: false
TCP/IP Recv Size: 131400
Initiator Max Recv Data Length: 65536
Remote IP address: 192.168.4.245
Remote TCP Port: 55063
Target Max Recv Data Length: 65536

Posted in Uncategorized | Leave a comment

CDOT 8.3 selective lunmapping

Selective LUN mapping is a new iSCSI feature in cDOT 8.3. With selective LUN
mapping only the two nodes of the HA pair that hosts the LUN map the LUN as
'reporting nodes'. This reduces the number of paths visible to clients in
large environments.

To list the reporting nodes.

This example is from the 8.3 simulator. There is no shared storage so
SFO is not possible. Hence only one node can report.

cdot83::qos> lun mapping show -vserver iscsi -fields reporting-nodes
vserver path igroup reporting-nodes
——- ———————————————— —— —————
iscsi /vol/lun_17042015_182136_vol/lun_17042015_182136 i cdot83-02

When a volume moves from one HA pair to another, you should add the new nodes with
lun mapping add-reporting-nodes and remove the old ones with lun mapping remove-reporting-nodes.
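
A hedged sketch of what that looks like around a volume move to another HA pair (the vserver, path and igroup come from the example above; new_aggr is a placeholder for the destination aggregate, so check the parameters on your release):

cdot83::> lun mapping add-reporting-nodes -vserver iscsi -path /vol/lun_17042015_182136_vol/lun_17042015_182136 -igroup i -destination-aggregate new_aggr
cdot83::> lun mapping remove-reporting-nodes -vserver iscsi -path /vol/lun_17042015_182136_vol/lun_17042015_182136 -igroup i -remote-nodes true
cdot83::> lun mapping show -vserver iscsi -fields reporting-nodes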

Posted in Uncategorized | Leave a comment

openstack on centos 7

quickstart

1. change selinux to permissive
setenforce 0

2. run
systemctl stop NetworkManager
systemctl disable NetworkManager
systemctl enable network
sudo yum update -y
sudo yum install -y https://rdo.fedorapeople.org/rdo-release.rpm
sudo yum install -y openstack-packstack
packstack --allinone

Posted in Uncategorized | Leave a comment

cdot 8.3 licenses

CLUSTERED SIMULATE ONTAP LICENSES
+++++++++++++++++++++++++++++++++

These are the licenses that you use with the clustered Data ONTAP version
of Simulate ONTAP to enable Data ONTAP features.

There are four groups of licenses in this file:

– cluster base license
– feature licenses for the ESX build
– feature licenses for the non-ESX build
– feature licenses for the second node of a 2-node cluster

Cluster Base License (Serial Number 1-80-000008)
================================================

You use the cluster base license when setting up the first simulator in a cluster.

Cluster Base license = SMKQROWJNQYQSDAAAAAAAAAAAAAA

Clustered Data ONTAP Feature Licenses
=====================================

You use the feature licenses to enable unique Data ONTAP features on your simulator.

Licenses for the ESX build (Serial Number 4082368511)
—————————————————–

Use these licenses with the VMware ESX build.

Feature License Code Description
——————- —————————- ——————————————–

CIFS CAYHXPKBFDUFZGABGAAAAAAAAAAA CIFS protocol
FCP APTLYPKBFDUFZGABGAAAAAAAAAAA Fibre Channel Protocol
FlexClone WSKTAQKBFDUFZGABGAAAAAAAAAAA FlexClone
Insight_Balance CGVTEQKBFDUFZGABGAAAAAAAAAAA OnCommand Insight and Balance products
iSCSI OUVWXPKBFDUFZGABGAAAAAAAAAAA iSCSI protocol
NFS QFATWPKBFDUFZGABGAAAAAAAAAAA NFS protocol
SnapLock UHGXBQKBFDUFZGABGAAAAAAAAAAA SnapLock Compliance
SnapLock_Enterprise QLXEEQKBFDUFZGABGAAAAAAAAAAA SnapLock Enterprise
SnapManager GCEMCQKBFDUFZGABGAAAAAAAAAAA SnapManager and SnapDrive products
SnapMirror KYMEAQKBFDUFZGABGAAAAAAAAAAA SnapMirror, including synchronous SnapMirror
SnapProtect SWBBDQKBFDUFZGABGAAAAAAAAAAA SnapProtect Applications
SnapRestore YDPPZPKBFDUFZGABGAAAAAAAAAAA SnapRestore
SnapVault INIIBQKBFDUFZGABGAAAAAAAAAAA SnapVault primary and secondary

Licenses for the non-ESX build (Serial Number 4082368507)
———————————————————

Use these licenses with the VMware Workstation, VMware Player, and VMware Fusion build.

Feature License Code Description
——————- —————————- ——————————————–

CIFS YVUCRRRRYVHXCFABGAAAAAAAAAAA CIFS protocol
FCP WKQGSRRRYVHXCFABGAAAAAAAAAAA Fibre Channel Protocol
FlexClone SOHOURRRYVHXCFABGAAAAAAAAAAA FlexClone
Insight_Balance YBSOYRRRYVHXCFABGAAAAAAAAAAA OnCommand Insight and Balance products
iSCSI KQSRRRRRYVHXCFABGAAAAAAAAAAA iSCSI protocol
NFS MBXNQRRRYVHXCFABGAAAAAAAAAAA NFS protocol
SnapLock QDDSVRRRYVHXCFABGAAAAAAAAAAA SnapLock Compliance
SnapLock_Enterprise MHUZXRRRYVHXCFABGAAAAAAAAAAA SnapLock Enterprise
SnapManager CYAHWRRRYVHXCFABGAAAAAAAAAAA SnapManager and SnapDrive products
SnapMirror GUJZTRRRYVHXCFABGAAAAAAAAAAA SnapMirror, including synchronous SnapMirror
SnapProtect OSYVWRRRYVHXCFABGAAAAAAAAAAA SnapProtect Applications
SnapRestore UZLKTRRRYVHXCFABGAAAAAAAAAAA SnapRestore
SnapVault EJFDVRRRYVHXCFABGAAAAAAAAAAA SnapVault primary and secondary

Licenses for the second node in a cluster (Serial Number 4034389062)
——————————————————————–

Use these licenses with the second simulator in a cluster (either the ESX or non-ESX build).

Feature License Code Description
——————- —————————- ——————————————–

CIFS MHEYKUNFXMSMUCEZFAAAAAAAAAAA CIFS protocol
FCP KWZBMUNFXMSMUCEZFAAAAAAAAAAA Fibre Channel Protocol
FlexClone GARJOUNFXMSMUCEZFAAAAAAAAAAA FlexClone
Insight_Balance MNBKSUNFXMSMUCEZFAAAAAAAAAAA OnCommand Insight and Balance products
iSCSI YBCNLUNFXMSMUCEZFAAAAAAAAAAA iSCSI protocol
NFS ANGJKUNFXMSMUCEZFAAAAAAAAAAA NFS protocol
SnapLock EPMNPUNFXMSMUCEZFAAAAAAAAAAA SnapLock Compliance
SnapLock_Enterprise ATDVRUNFXMSMUCEZFAAAAAAAAAAA SnapLock Enterprise
SnapManager QJKCQUNFXMSMUCEZFAAAAAAAAAAA SnapManager and SnapDrive products
SnapMirror UFTUNUNFXMSMUCEZFAAAAAAAAAAA SnapMirror, including synchronous SnapMirror
SnapProtect CEIRQUNFXMSMUCEZFAAAAAAAAAAA SnapProtect Applications
SnapRestore ILVFNUNFXMSMUCEZFAAAAAAAAAAA SnapRestore
SnapVault SUOYOUNFXMSMUCEZFAAAAAAAAAAA SnapVault primary and secondary

Posted in Uncategorized | Leave a comment

openstack restart all

To restart all openstack services:

# for svc in api cert compute conductor network scheduler; do
service openstack-nova-$svc restart
done

Redirecting to /bin/systemctl restart openstack-nova-api.service
Redirecting to /bin/systemctl restart openstack-nova-cert.service
Redirecting to /bin/systemctl restart openstack-nova-compute.service
Redirecting to /bin/systemctl restart openstack-nova-conductor.service
Redirecting to /bin/systemctl restart openstack-nova-network.service
Failed to issue method call: Unit openstack-nova-network.service failed to load: No such file or directory.
Redirecting to /bin/systemctl restart openstack-nova-scheduler.service

(note: network fails)

do it manually…
/usr/bin/systemctl stop neutron-openvswitch-agent
/usr/bin/systemctl start neutron-openvswitch-agent
/usr/bin/systemctl status neutron-openvswitch-agent

service httpd stop
service httpd start

Posted in Uncategorized | Leave a comment

ZFS shadow migration exercise

Setting up shadow migration.
Solaris 10 has ip-address 192.168.4.159
ZFS appliance has ip-address 192.168.4.220

1. On Solaris10.

Create a directory called /mnt/data and put some files in it.

# mkdir /mnt/data
# cd /mnt/data
# cp /var/log/* .

Share /mnt/data with NFS.
# echo "share /mnt/data" >> /etc/dfs/dfstab
# svcadm enable nfs/server
# dfshares
RESOURCE SERVER ACCESS TRANSPORT
solaris10:/mnt/data solaris10 – -

2. On the storage appliance.

"Shares" > "+Filesystems"
In the "Create Filesystem" screen, specify the name: "mig".
Set the Data migration source to "NFS".
Specify the source: "192.168.4.159:/mnt/data".
Apply

3. On Solaris 10.
Check whether /export/mig is shared.
# dfshares 192.168.4.220
RESOURCE SERVER ACCESS TRANSPORT
192.168.4.220:/export/mig 192.168.4.220 – -

Mount the share.
# cd /net/192.168.4.220/export/mig
# ls
Xorg.0.log snmpd.log syslog.1 syslog.5
Xorg.0.log.old sysidconfig.log syslog.2 syslog.6
authlog syslog syslog.3 syslog.7
postrun.log syslog.0 syslog.4

Done.

Create a share in the CLI:

zfs1:> shares
zfs1:shares> select default
zfs1:shares default> filesystem mig3
zfs1:shares default/mig3 (uncommitted)>set shadow=nfs://192.168.4.159/mnt/data
shadow = nfs://192.168.4.159/mnt/data (uncommitted)
zfs1:shares default/mig3 (uncommitted)> commit

Done.

Posted in Uncategorized | Leave a comment

ZFS replication_exercise

Controller ZFS1 192.168.4.220 project zfs1_proj
is replicated to Controller ZFS2 192.168.4.230.

Controller ZFS1 zfs1_proj has share zfs1_proj_fs1 that is mounted
by solaris 10 to mountpoint /mnt/fs1

On ZFS1
* create project zfs1_proj and filesystem zfs1_proj_fs1 *
* make sure the filesystem is created in zfs1_proj! *
(this is not described)

1. Create replication target.
"Configuration" > "Services" > "Remote Replication" > "+target"
In the Add Replication Target enter:
Name: zfs2_target
Hostname: 192.168.4.230
Root Password: *******

2. Setup replication to ZFS2.
"Shares" > "Projects", edit "zfs_proj" > "Replication"
"+Actions"
In the Add Replication Action window enter:
Target: zfs2_target
Pool: p0
Scheduled

On Solaris 10

Mount the share on /mnt/fs1 and create a file.
# mount 192.168.4.220:/export/zfs1_proj_fs1 /mnt/fs1
# cd /mnt/fs1; touch a

On ZFS1
"Projects", edit "zfs_proj" > "Replication"
Click on “Sync now” … watch the status bar.

On ZFS2
"Shares" > "Projects" > "Replica"
edit "zfs_proj"
Try to add a filesystem and note the errormessage.
Ok

On ZFS2
"Shares" > "Projects" > "Replica"
edit "zfs_proj"
"Replication"
Note the four icons: enable/disable, clone, sever, reverse.

Click on the "sever" icon.
In the Sever Replication window enter:
zfs2_proj

Now the replication is stopped and the share on ZFS2 is accessible
by clients.

On Solaris
# umount -f /mnt/fs1
# mount 192.168.4.230:/export/zfs1_proj_fs1 /mnt/fs1
# ls /mnt/fs1
a

Do the same exercise a second time, but now click on the icon “reverse”.
What happens?

Posted in Uncategorized | Leave a comment

ZFS solaris 10 iscsi initiator exercise

On Solaris

1. Determine whether the required software is installed.

-bash-3.2# pkginfo |grep iscsi
system SUNWiscsir Sun iSCSI Device Driver (root)
system SUNWiscsitgtr Sun iSCSI Target (Root)
system SUNWiscsitgtu Sun iSCSI Target (Usr)
system SUNWiscsiu Sun iSCSI Management Utilities (usr)

2. Enable iscsi_initiator

-bash-3.2# svcadm enable iscsi/initiator

-bash-3.2# svcs -a|grep iscsi
disabled Mar_17 svc:/system/iscsitgt:default
online Mar_17 svc:/network/iscsi/initiator:default

3. Determine the Solaris 10 iqn.
-bash-3.2# iscsiadm list initiator-node
Initiator node name: iqn.1986-03.com.sun:01:7ac1dcf7ffff.5300d745
Initiator node alias: solaris10
Login Parameters (Default/Configured):
Header Digest: NONE/-
Data Digest: NONE/-
Authentication Type: NONE
RADIUS Server: NONE
RADIUS access: unknown
Tunable Parameters (Default/Configured):
Session Login Response Time: 60/-
Maximum Connection Retry Time: 180/-
Login Retry Time Interval: 60/-
Configured Sessions: 0

4. Enable static and/or send targets
-bash-3.2# iscsiadm list discovery
Discovery:
Static: disabled
Send Targets: disabled
iSNS: disabled

iscsiadm modify discovery --static enable
iscsiadm modify discovery --sendtargets enable

—-

On zfs

1. In the BUI create target
"configuration" > "san" > "+target"
In the create iscsi target window enter:
Alias: "zfs_t1"
Network Interfaces: "e1000g0"
Ok

Drag the target to below the Target Group "default".
A new targetgroup "targets-0" is created.
Click apply

Create Initiator
"configuration" > "san" > "+initiators"
In the Identify iSCSI Initiator window enter
Initiator IQN: "iqn.1986-03.com.sun:01:7ac1dcf7ffff.5300d745"
Alias: "solaris"
Ok

Drag the initiator to below the Initiator Group "default"
A new initiatorgroup "initiators-0" is created.
Click apply

2. Create a lun
“shares” “luns” “+luns”
In the Create Lun window enter:
Name: solarislun
Size: 2G
Target group: targets-0
Initiator group: initiators-0
Apply

On Solaris

-bash-3.2# iscsiadm add discovery-address 192.168.4.230:3260
-bash-3.2# iscsiadm modify discovery --sendtargets enable
-bash-3.2# devfsadm

Now the disk should show up with format.
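
To double-check the session and put the new LUN to use, something like the following should work on Solaris 10 (the disk name c2t0d0 is only an example and will differ on your system):

-bash-3.2# iscsiadm list target -S
-bash-3.2# echo | format
-bash-3.2# zpool create lunpool c2t0d0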

Posted in Uncategorized | Leave a comment

solaris 10 initiator

# svcadm enable iscsi/initiator

# iscsiadm add discovery-address 192.168.248.213:3260

# iscsiadm modify discovery --sendtargets enable

# devfsadm -i iscsi

Posted in Uncategorized | Leave a comment

7000 iscsi

On 7000 cli:
configuration san iscsi targets create
set alias=a1
set interfaces=e1000g0
commit

On linux:
cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:3add18fcc55c

On 7000 cli:
configuration san iscsi initiators create
set alias=lin1
set initiator=iqn.1994-05.com.redhat:3add18fcc55c
commit

On 7000 cli:
shares
select default
lun lin1
set volsize=1g
commit

On linux:
iscsiadm -m discovery -t sendtargets -p 192.168.4.221
/etc/init.d/iscsi restart
fdisk -l
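
A possible follow-up on linux, assuming the new LUN shows up as /dev/sdb (check the fdisk -l output for the real name):

mkfs.ext4 /dev/sdb
mkdir -p /mnt/zfslun
mount /dev/sdb /mnt/zfslun
df -h /mnt/zfslun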

Posted in Uncategorized | Leave a comment

7000 cli network interfaces

log in to cli using ssh.

configuration net datalinks device
set label=replication
set links=e1000g2
commit

configuration net interfaces ip
set label=replication
set links=e1000g2
set v4addrs=192.168.4.223/24
commit

Posted in oracle | Leave a comment

linux btrfs

http://www.funtoo.org/BTRFS_Fun


BTRFS Fun
Important
BTRFS is still experimental even with latest Linux kernels (3.4-rc at date of writing) so be prepared to lose some data sooner or later or hit a severe issue/regressions/”itchy” bugs. Subliminal message: Do not put critical data on BTRFS partitions.

Introduction
BTRFS is an advanced filesystem mostly contributed by Sun/Oracle whose origins take place in 2007. A good summary is given in [1]. BTRFS aims to provide a modern answer for making storage more flexible and efficient. According to its main contributor, Chris Mason, the goal was “to let Linux scale for the storage that will be available. Scaling is not just about addressing the storage but also means being able to administer and to manage it with a clean interface that lets people see what’s being used and makes it more reliable.” (Ref. http://en.wikipedia.org/wiki/Btrfs).

Btrfs, often compared to ZFS, is offering some interesting features like:

Using very few fixed location metadata, thus allowing an existing ext2/ext3 filesystem to be “upgraded” in-place to BTRFS.
Operations are transactional
Online volume defragmentation (online filesystem check is on the radar but is not yet implemented).
Built-in storage pool capabilities (no need for LVM)
Built-in RAID capabilities (both for the data and filesystem metadata). RAID-5/6 is planned for 3.5 kernels
Capabilities to grow/shrink the volume
Subvolumes and snapshots (extremely powerful, you can “rollback” to a previous filesystem state as if nothing had happened).
Copy-On-Write
Usage of B-Trees to store the internal filesystem structures (B-Trees are known to have a logarithmic growth in depth, thus making them more efficient when scanning)
Requirements
A recent Linux kernel (BTRFS metadata format evolves from time to time and mounting using a recent Linux kernel can make the BTRFS volume unreadable with an older kernel revision, e.g. Linux 2.6.31 vs Linux 2.6.30). You must also use sys-fs/btrfs-progs (0.19 or better use -9999 which points to the git repository).

Playing with BTRFS storage pool capabilities
Whereas it would be possible to use btrfs just as you are used to under a non-LVM system, it shines when using its built-in storage pool capabilities. Tired of playing with LVM ? :-) Good news: you do not need it anymore with btrfs.

Setting up a storage pool
BTRFS terminology is a bit confusing. If you already have used another ‘advanced’ filesystem like ZFS or some mechanism like LVM, it’s good to know that there are many correlations. In the BTRFS world, the word volume corresponds to a storage pool (ZFS) or a volume group (LVM). Ref. http://www.rkeene.org/projects/info/wiki.cgi/165

The test bench uses disk images through loopback devices. Of course, in a real world case, you will use local drives or units through a SAN. To start with, 5 devices of 1 GiB are allocated:

# dd if=/dev/zero of=/tmp/btrfs-vol0.img bs=1G count=1
# dd if=/dev/zero of=/tmp/btrfs-vol1.img bs=1G count=1
# dd if=/dev/zero of=/tmp/btrfs-vol2.img bs=1G count=1
# dd if=/dev/zero of=/tmp/btrfs-vol3.img bs=1G count=1
# dd if=/dev/zero of=/tmp/btrfs-vol4.img bs=1G count=1
Then attached:

# losetup /dev/loop0 /tmp/btrfs-vol0.img
# losetup /dev/loop1 /tmp/btrfs-vol1.img
# losetup /dev/loop2 /tmp/btrfs-vol2.img
# losetup /dev/loop3 /tmp/btrfs-vol3.img
# losetup /dev/loop4 /tmp/btrfs-vol4.img
Creating the initial volume (pool)
BTRFS uses different strategies to store data and for the filesystem metadata (ref. https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices).

By default the behavior is:

metadata is replicated on all of the devices. If a single device is used the metadata is duplicated inside this single device (useful in case of corruption or bad sector, there is a higher chance that one of the two copies is clean). To tell btrfs to maintain a single copy of the metadata, just use single. Remember: dead metadata = dead volume with no chance of recovery.
data is spread amongst all of the devices (this means no redundancy; any data block left on a defective device will be inaccessible)
To create a BTRFS volume made of multiple devices with default options, use:

# mkfs.btrfs /dev/loop0 /dev/loop1 /dev/loop2
To create a BTRFS volume made of a single device with a single copy of the metadata (dangerous!), use:

# mkfs.btrfs -m single /dev/loop0
To create a BTRFS volume made of multiple devices with metadata spread amongst all of the devices, use:

# mkfs.btrfs -m raid0 /dev/loop0 /dev/loop1 /dev/loop2
To create a BTRFS volume made of multiple devices, with metadata spread amongst all of the devices and data mirrored on all of the devices (you probably don’t want this in a real setup), use:

# mkfs.btrfs -m raid0 -d raid1 /dev/loop0 /dev/loop1 /dev/loop2
To create a fully redundant BTRFS volume (data and metadata mirrored amongst all of the devices), use:

# mkfs.btrfs -d raid1 /dev/loop0 /dev/loop1 /dev/loop2
Note
Technically you can use anything as a physical volume: you can have a volume composed of 2 local hard drives, 3 USB keys, 1 loopback device pointing to a file on a NFS share and 3 logical devices accessed through your SAN (you would be an idiot, but you can, nevertheless). Having different physical volume sizes would lead to issues, but it works :-) .
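
As a complement to the examples above (not in the original article): with four or more devices a striped-and-mirrored layout can be requested for both data and metadata, assuming your btrfs-progs supports raid10:

# mkfs.btrfs -m raid10 -d raid10 /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3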
Checking the initial volume
To verify the devices of which a BTRFS volume is composed, just use btrfs-show device (old style) or btrfs filesystem show device (new style). You need to specify one of the devices (the metadata has been designed to keep track of which devices are linked to each other). If the initial volume was set up like this:

# mkfs.btrfs /dev/loop0 /dev/loop1 /dev/loop2

WARNING! – Btrfs Btrfs v0.19 IS EXPERIMENTAL
WARNING! – see http://btrfs.wiki.kernel.org before using

adding device /dev/loop1 id 2
adding device /dev/loop2 id 3
fs created label (null) on /dev/loop0
nodesize 4096 leafsize 4096 sectorsize 4096 size 3.00GB
Btrfs Btrfs v0.19
It can be checked with one of these commands (They are equivalent):

# btrfs filesystem show /dev/loop0
# btrfs filesystem show /dev/loop1
# btrfs filesystem show /dev/loop2
The result is the same for all commands:

Label: none uuid: 0a774d9c-b250-420e-9484-b8f982818c09
Total devices 3 FS bytes used 28.00KB
devid 3 size 1.00GB used 263.94MB path /dev/loop2
devid 1 size 1.00GB used 275.94MB path /dev/loop0
devid 2 size 1.00GB used 110.38MB path /dev/loop1
To show all of the volumes that are present:

# btrfs filesystem show
Label: none uuid: 0a774d9c-b250-420e-9484-b8f982818c09
Total devices 3 FS bytes used 28.00KB
devid 3 size 1.00GB used 263.94MB path /dev/loop2
devid 1 size 1.00GB used 275.94MB path /dev/loop0
devid 2 size 1.00GB used 110.38MB path /dev/loop1

Label: none uuid: 1701af39-8ea3-4463-8a77-ec75c59e716a
Total devices 1 FS bytes used 944.40GB
devid 1 size 1.42TB used 1.04TB path /dev/sda2

Label: none uuid: 01178c43-7392-425e-8acf-3ed16ab48813
Total devices 1 FS bytes used 180.14GB
devid 1 size 406.02GB used 338.54GB path /dev/sda4
Warning
The BTRFS wiki mentions that btrfs device scan should be performed; the consequence of not running it may be that the volume is not seen.
Mounting the initial volume
BTRFS volumes can be mounted like any other filesystem. The cherry on top of the sundae is that the design of the BTRFS metadata makes it possible to use any of the volume devices. The following commands are equivalent:

# mount /dev/loop0 /mnt
# mount /dev/loop1 /mnt
# mount /dev/loop2 /mnt
For every physical device used for mounting the BTRFS volume df -h reports the same (in all cases 3 GiB of “free” space is reported):

# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/loop1 3.0G 56K 1.8G 1% /mnt
The following command prints very useful information (like how the BTRFS volume has been created):

# btrfs filesystem df /mnt
Data, RAID0: total=409.50MB, used=0.00
Data: total=8.00MB, used=0.00
System, RAID1: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, RAID1: total=204.75MB, used=28.00KB
Metadata: total=8.00MB, used=0.00
By the way, as you can see, for the btrfs command the mount point should be specified, not one of the physical devices.

Shrinking the volume
A common practice in system administration is to leave some head space, instead of using the whole capacity of a storage pool (just in case). With btrfs one can easily shrink volumes. Let’s shrink the volume a bit (about 25%):

# btrfs filesystem resize -500m /mnt
# df -h
/dev/loop1 2.6G 56K 1.8G 1% /mnt
And yes, it is an on-line resize, there is no need to umount/shrink/mount. So no downtimes! :-) However, a BTRFS volume requires a minimal size… if the shrink is too aggressive the volume won’t be resized:

# btrfs filesystem resize -1g /mnt
Resize '/mnt' of '-1g'
ERROR: unable to resize '/mnt'
Growing the volume
This is the opposite operation, you can make a BTRFS volume grow to reach a particular size (e.g. 150 more megabytes):

# btrfs filesystem resize +150m /mnt
Resize '/mnt' of '+150m'
# df -h
/dev/loop1 2.7G 56K 1.8G 1% /mnt
You can also take an “all you can eat” approach via the max option, meaning all of the possible space will be used for the volume:

# btrfs filesystem resize max /mnt
Resize '/mnt' of 'max'
# df -h
/dev/loop1 3.0G 56K 1.8G 1% /mnt
Adding a new device to the BTRFS volume
To add a new device to the volume:

# btrfs device add /dev/loop4 /mnt
oxygen ~ # btrfs filesystem show /dev/loop4
Label: none uuid: 0a774d9c-b250-420e-9484-b8f982818c09
Total devices 4 FS bytes used 28.00KB
devid 3 size 1.00GB used 263.94MB path /dev/loop2
devid 4 size 1.00GB used 0.00 path /dev/loop4
devid 1 size 1.00GB used 275.94MB path /dev/loop0
devid 2 size 1.00GB used 110.38MB path /dev/loop1
Again, no need to umount the volume first as adding a device is an on-line operation (the device has no space used yet, hence the '0.00'). The operation is not finished as we must tell btrfs to prepare the new device (i.e. rebalance/mirror the metadata and the data between all devices):

# btrfs filesystem balance /mnt
# btrfs filesystem show /dev/loop4
Label: none uuid: 0a774d9c-b250-420e-9484-b8f982818c09
Total devices 4 FS bytes used 28.00KB
devid 3 size 1.00GB used 110.38MB path /dev/loop2
devid 4 size 1.00GB used 366.38MB path /dev/loop4
devid 1 size 1.00GB used 378.38MB path /dev/loop0
devid 2 size 1.00GB used 110.38MB path /dev/loop1
Note
Depending on the sizes and what is in the volume a balancing operation could take several minutes or hours.
Removing a device from the BTRFS volume
# btrfs device delete /dev/loop2 /mnt
# btrfs filesystem show /dev/loop0
Label: none uuid: 0a774d9c-b250-420e-9484-b8f982818c09
Total devices 4 FS bytes used 28.00KB
devid 4 size 1.00GB used 264.00MB path /dev/loop4
devid 1 size 1.00GB used 268.00MB path /dev/loop0
devid 2 size 1.00GB used 0.00 path /dev/loop1
*** Some devices missing
# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/loop1 3.0G 56K 1.5G 1% /mnt
Here again, removing a device is totally dynamic and can be done as an on-line operation! Note that when a device is removed, its content is transparently redistributed among the other devices.

Obvious points:

** DO NOT UNPLUG THE DEVICE BEFORE THE END OF THE OPERATION, DATA LOSS WILL RESULT**
If you have used raid0 for either metadata or data at BTRFS volume creation you will end up with an unusable volume if one of the devices fails before being properly removed from the volume, as some stripes will be lost.
Once you add a new device to the BTRFS volume as a replacement for a removed one, you can cleanup the references to the missing device:

# btrfs device delete missing
Using a BTRFS volume in degraded mode
Warning
It is not possible to use a volume in degraded mode if raid0 has been used for data/metadata and the device had not been properly removed with btrfs device delete (some stripes will be missing). The situation is even worse if RAID0 is used for the metadata: trying to mount a BTRFS volume in read/write mode while not all the devices are accessible will simply kill the remaining metadata, hence making the BTRFS volume totally unusable… you have been warned! :-)
If you use raid1 or raid10 for data AND metadata and you have a usable submirror accessible (consisting of 1 drive in case of RAID1 or the two drives of the same RAID0 array in case of RAID10), you can mount the array in degraded mode in case some devices are missing (e.g. dead SAN link or dead drive):

# mount -o degraded /dev/loop0 /mnt
If you use RAID0 for the metadata (and have one of your drives inaccessible), or RAID10 but not enough drives are on-line to even make a degraded mode possible, btrfs will refuse to mount the volume:

# mount /dev/loop0 /mnt
mount: wrong fs type, bad option, bad superblock on /dev/loop0,
missing codepage or helper program, or other error
In some cases useful info is found in syslog – try
dmesg | tail or so
The situation is no better if you have used RAID1 for the metadata and RAID0 for the data, you can mount the drive in degraded mode but you will encounter problems while accessing your files:

# cp /mnt/test.dat /tmp
cp: reading '/mnt/test.dat': Input/output error
cp: failed to extend '/tmp/test.dat': Input/output error
Playing with subvolumes and snapshots
A story of boxes….
When you think about subvolumes in BTRFS, think about boxes. Each one of those can contain items and other smaller boxes (“sub-boxes”) which in turn can also contain items and boxes (sub-sub-boxes) and so on. Each box and item has a number and a name, except for the top level box, which has only a number (zero). Now imagine that all of the boxes are semi-opaque: you can see what they contain if you are outside the box but you can’t see outside when you are inside the box. Thus, depending on the box you are in you can view either all of the items and sub-boxes (top level box) or only a part of them (any other box but the top level one). To give you a better idea of this somewhat abstract explanation let’s illustrate a bit:

(0) –+-> Item A (1)
|
+-> Item B (2)
|
+-> Sub-box 1 (3) –+-> Item C (4)
| |
| +-> Sub-sub-box 1.1 (5) –+-> Item D (6)
| | |
| | +-> Item E (7)
| | |
| | +-> Sub-Sub-sub-box 1.1.1 (8) —> Item F (9)
| +-> Item F (10)
|
+-> Sub-box 2 (11) –> Item G (12)
What you see in the hierarchy depends on where you are (note that the top level box numbered 0 doesn’t have a name, you will see why later). So:

If you are in the node named top box (numbered 0) you see everything, i.e. things numbered 1 to 12
If you are in "Sub-sub-box 1.1" (numbered 5), you see only things 6 to 9
If you are in "Sub-box 2" (numbered 11), you only see what is numbered 12
Did you notice? We have two items named ‘F’ (respectively numbered 9 and 10). This is not a typographic error, this is just to illustrate the fact that every item lives its own peaceful existence in its own box. Although they have the same name, 9 and 10 are two distinct and unrelated objects (of course it is impossible to have two objects named ‘F’ in the same box, even though they would be numbered differently).

… applied to BTRFS! (or, “What is a volume/subvolume?”)
BTRFS subvolumes work in the exact same manner, with some nuances:

First, imagine a frame that surrounds the whole hierarchy (represented in dots below). This is your BTRFS volume. A bit abstract at first glance, but BTRFS volumes have no tangible existence, they are just an aggregation of devices tagged as being clustered together (that fellowship is created when you invoke mkfs.btrfs or btrfs device add).
Second, the first level of hierarchy contains only a single box numbered zero which can never be destroyed (because everything it contains would also be destroyed).
If in our analogy of a nested boxes structure we used the word “box”, in the real BTRFS world we use the word “subvolume” (box => subvolume). Like in our boxes analogy, all subvolumes hold a unique number greater than zero and a name, with the exception of the root subvolume located at the very first level of the hierarchy which is always numbered zero and has no name (BTRFS tools destroy subvolumes by their name not their number so no name = no possible destruction. This is a totally intentional architectural choice, not a flaw).

Here is a typical hierarchy:

…..BTRFS Volume………………………………………………………………………………………………………………..
.
. Root subvolume (0) –+-> Subvolume SV1 (258) —> Directory D1 –+-> File F1
. | |
. | +-> File F2
. |
. +-> Directory D1 –+-> File F1
. | |
. | +-> File F2
. | |
. | +-> File F3
. | |
. | +-> Directory D11 —> File F4
. +-> File F1
. |
. +-> Subvolume SV2 (259) –+-> Subvolume SV21 (260)
. |
. +-> Subvolume SV22 (261) –+-> Directory D2 —> File F4
. |
. +-> Directory D3 –+-> Subvolume SV221 (262) —> File F5
. | |
. | +-> File F6
. | |
. | +-> File F7
. |
. +-> File F8
.
…………………………………………………………………………………………………………………….
Maybe you have a question: “Okay, what is the difference between a directory and a subvolume? Both can contain something!”. To further confuse you, here is what users get if they reproduce the first level hierarchy on a real machine:

# ls -l
total 0
drwx------ 1 root root 0 May 23 12:48 SV1
drwxr-xr-x 1 root root 0 May 23 12:48 D1
-rw-r--r-- 1 root root 0 May 23 12:48 F1
drwx------ 1 root root 0 May 23 12:48 SV2
Although subvolumes SV1 and SV2 have been created with special BTRFS commands they appear just as if they were ordinary directories! A subtle nuance exists, however: think again at our boxes analogy we did before and map the following concepts in the following manner:

a subvolume : the semi-opaque box
a directory : a sort of item (that can contain something even another subvolume)
a file : another sort of item
So, in the internal filesystem metadata SV1 and SV2 are stored in a different manner than D1 (although this is transparently handled for users). You can, however see SV1 and SV2 for what they are (subvolumes) by running the following command (subvolume numbered (0) has been mounted on /mnt):

# btrfs subvolume list /mnt
ID 258 top level 5 path SV1
ID 259 top level 5 path SV2
What would we get if we create SV21 and SV22 inside of SV2? Let’s try! Before going further you should be aware that a subvolume is created by invoking the magic command btrfs subvolume create:

# cd /mnt/SV2
# btrfs subvolume create SV21
Create subvolume './SV21'
# btrfs subvolume create SV22
Create subvolume './SV22'
# btrfs subvolume list /mnt
ID 258 top level 5 path SV1
ID 259 top level 5 path SV2
ID 260 top level 5 path SV2/SV21
ID 261 top level 5 path SV2/SV22
Again, invoking ls in /mnt/SV2 will report the subvolumes as being directories:

# ls -l
total 0
drwx------ 1 root root 0 May 23 13:15 SV21
drwx------ 1 root root 0 May 23 13:15 SV22
Changing the point of view on the subvolumes hierarchy
At some point in our boxes analogy we have talked about what we see and what we don’t see depending on our location in the hierarchy. Here lies a big important point: whereas most of the BTRFS users mount the root subvolume (subvolume id = 0, we will retain the root subvolume terminology) in their VFS hierarchy thus making visible the whole hierarchy contained in the BTRFS volume, it is absolutely possible to mount only a subset of it. How that could be possible? Simple: Just specify the subvolume number when you invoke mount. For example, to mount the hierarchy in the VFS starting at subvolume SV22 (261) do the following:

# mount -o subvolid=261 /dev/loop0 /mnt
Here lies an important notion not disclosed in the previous paragraph: although both directories and subvolumes can act as containers, only subvolumes can be mounted in a VFS hierarchy. It is a fundamental aspect to remember: you cannot mount a sub-part of a subvolume in the VFS; you can only mount the subvolume itself. Considering the hierarchy schema in the previous section, if you want to access the directory D3 you have three possibilities:

Mount the non-named subvolume (numbered 0) and access D3 through /mnt/SV2/SV22/D3 if the non-named subvolume is mounted in /mnt
Mount the subvolume SV2 (numbered 259) and access D3 through /mnt/SV22/D3 if the subvolume SV2 is mounted in /mnt
Mount the subvolume SV22 (numbered 261) and access D3 through /mnt/D3 if the subvolume SV22 is mounted in /mnt
This is accomplished by the following commands, respectively:

# mount -o subvolid=0 /dev/loop0 /mnt
# mount -o subvolid=259 /dev/loop0 /mnt
# mount -o subvolid=261 /dev/loop0 /mnt
Note
When a subvolume is mounted in the VFS, everything located “above” the subvolume is hidden. Concretely, if you mount the subvolume numbered 261 in /mnt, you only see what is under SV22, you won’t see what is located above SV22 like SV21, SV2, D1, SV1, etc.
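
As a complement (and depending on the kernel version), a subvolume can also be referenced by its path relative to the root subvolume with the subvol mount option; a sketch using the hierarchy above:

# mount -o subvol=SV2/SV22 /dev/loop0 /mnt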
The default subvolume
$100 questions: 1. “If I don’t put ‘subvolid’ in the command line, how does the kernel know which one of the subvolumes it has to mount? 2. Does omitting the ‘subvolid’ mean automatically ‘mount the subvolume numbered 0’?”. Answers: 1. BTRFS magic! ;-) 2. No, not necessarily, you can choose something other than the non-named subvolume.

When you create a brand new BTRFS filesystem, the system not only creates the initial root subvolume (numbered 0) but also tags it as being the default subvolume. When you ask the operating system to mount a subvolume contained in a BTRFS volume without specifying a subvolume number, it determines which of the existing subvolumes has been tagged as “default subvolume” and mounts it. If none of the existing subvolumes has the tag “default subvolume” (e.g. because the default subvolume has been deleted), the mount command gives up with a rather cryptic message:

# mount /dev/loop0 /mnt
mount: No such file or directory
It is also possible to change at any time which subvolume contained in a BTRFS volume is considered the default volume. This is accomplished with btrfs subvolume set-default. The following tags the subvolume 261 as being the default:

# btrfs subvolume set-default 261 /mnt
After that operation, doing the following is exactly the same:

# mount /dev/loop0 /mnt
# mount -o subvolid=261 /dev/loop0 /mnt
Note
The chosen new default subvolume must be visible in the VFS when you invoke btrfs subvolume set-default.
Deleting subvolumes
Question: “As subvolumes appear like directories, can I delete a subvolume by doing an rm -rf on it?”. Answer: Yes, you can, but that way is not the most elegant, especially when it contains several gigabytes of data scattered on thousands of files, directories and maybe other subvolumes located in the one you want to remove. It isn’t elegant because rm -rf could take several minutes (or even hours!) to complete whereas something else can do the same job in the fraction of a second.

“Huh?” Yes, perfectly possible, and here is the cool goodie for the readers who arrived at this point: when you want to remove a subvolume, use btrfs subvolume delete instead of rm -rf. That btrfs command will remove the subvolume (or snapshot) in a fraction of a second, even if it contains several gigabytes of data!

Warning
You can never remove the root subvolume of a BTRFS volume as btrfs delete expects a subvolume name (again: this is not a flaw in the design of BTRFS; removing the subvolume numbered 0 would destroy the entirety of a BTRFS volume…too dangerous).
If the subvolume you delete was tagged as the default subvolume you will have to designate another default subvolume or explicitly tell the system which one of the subvolumes has to be mounted.
An example: considering our initial example given above and supposing you have mounted non-named subvolume numbered 0 in /mnt, you can remove SV22 by doing:

# btrfs subvolume delete /mnt/SV2/SV22
Obviously the BTRFS volume will look like this after the operation:

…..BTRFS Volume………………………………………………………………………………………………………………..
.
. (0) –+-> Subvolume SV1 (258) —> Directory D1 –+-> File F1
. | |
. | +-> File F2
. |
. +-> Directory D1 –+-> File F1
. | |
. | +-> File F2
. | |
. | +-> File F3
. | |
. | +-> Directory D11 —> File F4
. +-> File F1
. |
. +-> Subvolume SV2 (259) –+-> Subvolume SV21 (260)
…………………………………………………………………………………………………………………….
Snapshot and subvolumes
If you have a good comprehension of what a subvolume is, understanding what a snapshot is won’t be a problem: a snapshot is a subvolume with some initial contents. “Some initial contents” here means an exact copy.

When you think about snapshots, think about copy-on-write: the data blocks are not duplicated between a mounted subvolume and its snapshot unless you start to make changes to the files (a snapshot can occupy nearly zero extra space on the disk). As time goes on, more and more data blocks will be changed, thus making snapshots “occupy” more and more space on the disk. It is therefore recommended to keep only a minimal set of them and remove unnecessary ones to avoid wasting space on the volume.

The following illustrates how to take a snapshot of the VFS root:

# btrfs subvolume snapshot / /snap-2011-05-23
Create a snapshot of '/' in '//snap-2011-05-23'
Once created, the snapshot will persist in /snap-2011-05-23 as long as you don’t delete it. Note that the snapshot contents will remain exactly as they were at the time it was taken (as long as you don’t make changes… BTRFS snapshots are writable!). A drawback of having snapshots: if you delete some files in the original filesystem, the snapshot still contains them and the disk blocks can’t be claimed as free space. Remember to remove unwanted snapshots and keep a bare minimal set of them.

Listing and deleting snapshots
As there is no distinction between a snapshot and a subvolume, snapshots are managed with the exact same commands, especially when the time has come to delete some of them. An interesting feature in BTRFS is that snapshots are writable. You can take a snapshot and make changes in the files/directories it contains. A word of caution: there are no undo capabilities! What has been changed has been changed forever… If you need to do several tests just take several snapshots or, better yet, snapshot your snapshot then do whatever you need in this copy-of-the-copy :-) .
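
For example, reusing the snapshot taken in the previous section (the ID reported by the list command will differ on your system):

# btrfs subvolume list /
# btrfs subvolume delete /snap-2011-05-23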

Using snapshots for system recovery (aka Back to the Future)
Here is where BTRFS can literally be a lifeboat. Suppose you want to apply some updates via emerge -uaDN @world but you want to be sure that you can jump back into the past in case something goes seriously wrong after the system update (does libpng14 remind you of anything?!). Here is the “putting-things-together part” of the article!

The following only applies if your VFS root and system directories containing /sbin, /bin, /usr, /etc…. are located on a BTRFS volume. To make things simple, the whole structure is supposed to be located in the SAME subvolume of the same BTRFS volume.

To jump back into the past you have at least two options:

Fiddle with the default subvolume numbers
Use the kernel command line parameters in the bootloader configuration files
In all cases you must take a snapshot of your VFS root *before* updating the system:

# btrfs subvolume snapshot / /before-updating-2011-05-24
Create a snapshot of '/' in '//before-updating-2011-05-24'
Note
Hint: You can create an empty file at the root of your snapshot with the name of your choice to help you easily identify which subvolume is the currently mounted one (e.g. if the snapshot has been named before-updating-2011-05-24, you can use a slightly different name like current-is-before-updating-2011-05-24 => touch /before-updating-2011-05-24/current-is-before-updating-2011-05-24). This is extremely useful if you are dealing with several snapshots.
There is no “better” way; it’s just a question of personal preference.

Way #1: Fiddle with the default subvolume number
Hypothesis:

Your “production” VFS root partition resides in the root subvolume (subvolid=0),
Your /boot partition (where the bootloader configuration files are stored) is on another standalone partition
First search for the newly created subvolume number:

# btrfs subvolume list /
ID 256 top level 5 path before-updating-2011-05-24
256 is the ID to be retained (of course, this ID will differ in your case).

Now, change the default subvolume of the BTRFS volume to designate the subvolume (snapshot) before-updating and not the root subvolume then reboot:

# btrfs subvolume set-default 256 /
Once the system has rebooted, and if you followed the advice in the previous paragraph that suggests to create an empty file of the same name as the snapshot, you should be able to see if the mounted VFS root is the copy held by the snapshot before-updating-2011-05-24:

# ls -l /

-rw-rw-rw- 1 root root 0 May 24 20:33 current-is-before-updating-2011-05-24

The correct subvolume has been used for mounting the VFS! Excellent! This is now the time to mount your “production” VFS root (remember the root subvolume can only be accessed via its identification number i.e 0):

# mount -o subvolid=0 /dev/sda2 /mnt
# mount

/dev/sda2 on /mnt type btrfs (rw,subvolid=0)
Oh by the way, as the root subvolume is now mounted in /mnt let’s try something, just for the sake of the demonstration:

# ls /mnt

drwxr-xr-x 1 root root 0 May 24 20:33 current-is-before-updating-2011-05-24

# btrfs subvolume list /mnt
ID 256 top level 5 path before-updating-2011-05-24
No doubt possible :-) Time to rollback! For this rsync will be used in the following way:

# rsync --progress -aHAX --exclude=/proc --exclude=/dev --exclude=/sys --exclude=/mnt / /mnt
Basically we are asking rsync to:

preserve timestamps, hard and symbolic links, owner/group IDs, ACLs and any extended attributes (refer to the rsync manual page for further details on options used) and to report its progression
ignore the mount points where virtual filesystems are mounted (procfs, sysfs…)
avoid recursion by not reprocessing /mnt (you can speed up the process by excluding some extra directories if you are sure they don’t hold any important changes, or any changes at all, like /var/tmp/portage for example).
Be patient! The rsync may take several minutes or hours depending on the amount of data to process…

Once finished, you will have to set the default subvolume to be the root subvolume:

# btrfs subvolume set-default 0 /mnt
ID 256 top level 5 path before-updating-2011-05-24
Warning
DO NOT ENTER / instead of /mnt in the above command; it won’t work and you will be under the snapshot before-updating-2011-05-24 the next time the machine reboots.

The reason is that the subvolume number must be “visible” from the path given at the end of the btrfs subvolume set-default command line. Again, refer to the boxes analogy: in our context we are in a sub-box numbered 256 which is located *inside* the box numbered 0 (so it can neither see nor interfere with it). [TODO: better explain]
Now just reboot and you should be in business again! Once you have rebooted just check if you are really under the right subvolume:

# ls /

drwxr-xr-x 1 root root 0 May 24 20:33 current-is-before-updating-2011-05-24

# btrfs subvolume list /
ID 256 top level 5 path before-updating-2011-05-24
At the right place? Excellent! You can now delete the snapshot if you wish, or better: keep it as a lifeboat of “last good known system state.”

Way #2: Change the kernel command line in the bootloader configuration files
First search for the newly created subvolume number:

# btrfs subvolume list /
ID 256 top level 5 path before-updating-2011-05-24
256 is the ID to be retained (can differ in your case).

Now with your favourite text editor, edit the adequate kernel command line in your bootloader configuration (/etc/boot.conf). This file is typically organized in several sections (one per kernel present on the system plus some global settings), like the excerpt below:

set timeout=5
set default=0

# Production kernel
menuentry "Funtoo Linux production kernel (2.6.39-gentoo x86/64)" {
insmod part_msdos
insmod ext2

set root=(hd0,1)
linux /kernel-x86_64-2.6.39-gentoo root=/dev/sda2
initrd /initramfs-x86_64-2.6.39-gentoo
}

Find the correct kernel line and add one of the following statements after root=/dev/sdX:

rootflags=subvol=before-updating-2011-05-24
- Or -
rootflags=subvolid=256
Warning
If the kernel you want to use has been generated with Genkernel, you MUST use real_rootflags=subvol=… instead of rootflags=subvol=…, otherwise your rootflags will not be taken into consideration by the kernel on reboot.

Applied to the previous example you will get the following if you referred the subvolume by its name:

set timeout=5
set default=0

# Production kernel
menuentry "Funtoo Linux production kernel (2.6.39-gentoo x86/64)" {
insmod part_msdos
insmod ext2

set root=(hd0,1)
linux /kernel-x86_64-2.6.39-gentoo root=/dev/sda2 rootflags=subvol=before-updating-2011-05-24
initrd /initramfs-x86_64-2.6.39-gentoo
}

Or you will get the following if you referred the subvolume by its identification number:

set timeout=5
set default=0

# Production kernel
menuentry "Funtoo Linux production kernel (2.6.39-gentoo x86/64)" {
insmod part_msdos
insmod ext2

set root=(hd0,1)
linux /kernel-x86_64-2.6.39-gentoo root=/dev/sda2 rootflags=subvolid=256
initrd /initramfs-x86_64-2.6.39-gentoo
}

Once the modifications are done, save your changes and take the necessary extra steps to commit the configuration changes on the first sectors of the disk if needed (this mostly applies to the users of LILO; Grub and SILO do not need to be refreshed) and reboot.

Once the system has rebooted and if you followed the advice in the previous paragraph that suggests to create an empty file of the same name as the snapshot, you should be able to see if the mounted VFS root is the copy held by the snapshot before-updating-2011-05-24:

# ls -l /

-rw-rw-rw- 1 root root 0 May 24 20:33 current-is-before-updating-2011-05-24

The correct subvolume has been used for mounting the VFS! Excellent! This is now the time to mount your “production” VFS root (remember the root subvolume can only be accessed via its identification number 0):

# mount -o subvolid=0 /dev/sda2 /mnt
# mount

/dev/sda2 on /mnt type btrfs (rw,subvolid=0)
Time to rollback! For this rsync will be used in the following way:

# rsync --progress -aHAX --exclude=/proc --exclude=/dev --exclude=/sys --exclude=/mnt / /mnt
Here, please refer to what has been said in Way #1 concerning the used options in rsync. Once everything is in place again, edit your bootloader configuration to remove the rootflags/real_rootflags kernel parameter, reboot and check if you are really under the right subvolume:

# ls /

drwxr-xr-x 1 root root 0 May 24 20:33 current-is-before-updating-2011-05-24

# btrfs subvolume list /
ID 256 top level 5 path before-updating-2011-05-24
At the right place? Excellent! You can now delete the snapshot if you wish, or better: keep it as a lifeboat of “last good known system state.”

Some BTRFS practices / returns of experience / gotchas
Although BTRFS is still evolving, at the date of writing it (still) is an experimental filesystem and should not be used for production systems or for storing critical data (even if the data is non-critical, having backups on a partition formatted with a “stable” filesystem like Reiser or ext3/4 is recommended).
From time to time some changes are brought to the metadata (the BTRFS format is not definitive at the date of writing) and a BTRFS partition might not be usable with older Linux kernels (this happened with Linux 2.6.31).
More and more Linux distributions are proposing the filesystem as an alternative to ext4
Some reported gotchas: https://btrfs.wiki.kernel.org/index.php/Gotchas
Playing around with BTRFS can be a bit tricky especially when dealing with default volumes and mount points (again: the box analogy)
Using compression (e.g. LZO => mount -o compress=lzo) on the filesystem can improve the throughput performance; however, many files nowadays are already compressed at application level (music, pictures, videos…). See the example below.
Using space caching capabilities (mount -o space_cache) seems to bring some slight extra performance improvements.
There is a very interesting discussion of BTRFS design limitations with B-Trees on LKML; we strongly encourage you to read it.
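
Both mount options from the list above can be combined on one command line; a quick sketch reusing the loopback volume from the earlier sections:

# mount -o compress=lzo,space_cache /dev/loop0 /mnt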
Deploying a Funtoo instance in a subvolume other than the root subvolume
Some Funtoo core devs have used BTRFS for many months and no major glitches have been reported so far, except a non-aligned memory access trap on SPARC64 in a checksum calculation routine (recent kernels may have brought a correction), and an issue a long time ago that was more related to a kernel crash, due to a bug that corrupted some internal data, than to the filesystem code itself.

The following can simplify your life in case of recovery (not tested):

When you prepare the disk space that will hold the root of your future Funtoo instance (and so, will hold /usr /bin /sbin /etc etc…), don’t use the root subvolume but take an extra step to define a subvolume like illustrated below:

# fdisk /dev/sda2
….
# mkfs.btrfs /dev/sda2
# mount /dev/sda2 /mnt/funtoo
# btrfs subvolume create /mnt/funtoo/live-vfs-root-20110523
# chroot /mnt/funtoo/live-vfs-root-20110523 /bin/bash
Then either:

Set /live-vfs-root-20110523 as being the default subvolume (btrfs subvolume set-default…; remember to inspect the subvolume identification number)
Use rootflags / real_rootflags (always use real_rootflags for kernels generated with Genkernel) on the kernel command line in your bootloader configuration file
Technically speaking, it won’t change your life BUT at system recovery: when you want to rollback to a functional VFS root copy because something happened (buggy system package, too aggressive cleanup that removed Python, dead compiling toolchain…) you can avoid a time costly rsync but at the cost of putting a bit of overhead over your shoulders when taking a snapshot.

Here again you have two ways to recover the system:

fiddling with the default subvolume:
Mount the non-named (root) subvolume somewhere (e.g. mount -o subvolid=0 /dev/sdX /mnt)
Take a snapshot (remember to check its identification number) of your current subvolume and store it under the root subvolume you have just mounted (btrfs subvolume snapshot / /mnt/before-updating-20110524) — (Where is the “frontier”? If 0 is mounted, does its contents also appear in the taken snapshot located on the same volume?)
Update your system or do whatever other "dangerous" operation
If you need to return to the last known good system state, just set the default subvolume to the snapshot you just took (btrfs subvolume set-default <snapshot_id> /mnt)
Reboot
Once you have rebooted, just mount the root subvolume again and delete the subvolume that corresponds to the failed system update (btrfs subvolume delete /mnt/)
fiddling with the kernel command line:
Mount the non-named (root) subvolume somewhere (e.g. mount -o subvolid=0 /dev/sdX /mnt)
Take a snapshot (remember to check its identification number) of your current subvolume and store it under the root subvolume you have just mounted (btrfs subvolume snapshot / /mnt/before-updating-20110524) — (Where is the “frontier”? If 0 is mounted, does its contents also appear in the taken snapshot located on the same volume?)
Update your system or do whatever else “dangerous” operation
If you need to return to the latest good known system state, just set the rootflags/real_rootflags as demonstrated in previous paragraphs in your loader configuration file
Reboot
Once you have rebooted, just mount the root subvolume again and delete the subvolume that corresponds to the failed system update (btrfs subvolume delete /mnt/)
Space recovery / defragmenting the filesystem
Tip
From time to time it is advised to ask for re-optimizing the filesystem structures and data blocks in a subvolume. In BTRFS terminology this is called a defragmentation and it can only be performed when the subvolume is mounted in the VFS (online defragmentation):
# btrfs filesystem defrag /mnt
You can still access the subvolume, even change its contents, while a defragmentation is running.

It is also a good idea to remove the snapshots you don’t use anymore especially if huge files and/or lots of files are changed because snapshots will still hold some blocks that could be reused.

SSE 4.2 boost
If your CPU supports hardware calculation of CRC32 (e.g. Intel Nehalem series and later, and AMD Bulldozer series), you are encouraged to enable that support in your kernel since BTRFS makes aggressive use of it. Just check that you have enabled CRC32c Intel hardware acceleration in the Cryptographic API, either as a module or as a built-in feature.
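
A quick way to check both points could be the following, assuming the running kernel's config is available under /boot (the module may also show up as crc32c_intel):

# grep -m1 -o sse4_2 /proc/cpuinfo
# grep CRC32C_INTEL /boot/config-$(uname -r)
# modprobe crc32c-intel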

Recovering an apparently dead BTRFS filesystem
Dealing with filesystem metadata coherence is critical in filesystem design. Losing some data blocks (i.e. having some corrupted files) is less critical than having a screwed-up and unmountable filesystem, especially if you do backups on a regular basis (the rule with BTRFS is *do backups*, BTRFS has no mature filesystem repair tool and you *will* end up having to re-create your filesystem from scratch again sooner or later).

Mounting with recovery option (Linux 3.2 and beyond)
If you are using Linux 3.2 and later (only!), you can use the recovery option to make BTRFS look for a usable copy of the tree root (several copies of it exist on the disk). Just mount your filesystem as:

# mount -o recovery /dev/yourBTRFSvolume /mount/point
btrfs-select-super / btrfs-zero-log
Two other handy tools exist but they are not deployed by default by sys-fs/btrfs-progs (even btrfs-progs-9999) ebuilds because they only lie in the branch “next” of the btrfs-progs Git repository:

btrfs-select-super
btrfs-zero-log
Building the btrfs-progs goodies
The two tools this section is about are not built by default and Funtoo ebuilds do not build them either for the moment, so you must build them manually:

# mkdir ~/src
# cd ~/src
# git clone git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-progs.git
# cd btrfs-progs
# make && make btrfs-select-super && make btrfs-zero-log
Note
In the past, btrfs-select-super and btrfs-zero-log were in the git-next branch; this is no longer the case and those tools are now available in the master branch.
Fixing dead superblock
In case of a corrupted superblock, start by asking btrfsck to use an alternate copy of the superblock instead of the superblock #0. This is achieved via the -s option followed by the number of the alternate copy you wish to use. In the following example we ask for using the superblock copy #2 of /dev/sda7:

# ./btrfsck -s 2 /dev/sda7
When btrfsck is happy, use btrfs-select-super to restore the default superblock (copy #0) with a clean copy. In the following example we ask for restoring the superblock of /dev/sda7 with its copy #2:

# ./btrfs-select-super -s 2 /dev/sda7
Note that this will overwrite all the other supers on the disk, which means you really only get one shot at it.

If you run btrfs-select-super prior to figuring out which copy is good, you’ve lost your chance to find a good one.

Clearing the BTRFS journal
This will only help with one specific problem!

If you are unable to mount a BTRFS partition after a hard shutdown, crash or power loss, it may be due to faulty log playback in kernels prior to 3.2. The first thing to try is updating your kernel, and mounting. If this isn’t possible, an alternate solution lies in truncating the BTRFS journal, but only if you see “replay_one_*” functions in the oops callstack.

To truncate the journal of a BTRFS partition (and thereby lose any changes that only exist in the log!), just give the filesystem to process to btrfs-zero-log:

# ./btrfs-zero-log /dev/sda7
This is not a generic technique, and works by permanently throwing away a small amount of potentially good data.

Using btrfsck
Warning
Extremely experimental…
If one thing is famous in the BTRFS world it would be the so-wished fully functional btrfsck. A read-only version of the tool existed out there for years; however, at the beginning of 2012, BTRFS developers made a public and very experimental release: the secret jewel lies in the branch dangerdonteveruse of the BTRFS Git repository held by Chris Mason on kernel.org.

# git clone git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-progs.git
# cd btrfs-progs
# git checkout dangerdonteveruse
# make
So far the tool can:

Fix errors in the extents tree and in block group accounting
Wipe the CRC tree and create a brand new one (you can then mount the filesystem with CRC checking disabled)
To repair:

# btrfsck --repair /dev/yourBTRFSvolume
To wipe the CRC tree:

# btrfsck --init-csum-tree /dev/yourBTRFSvolume
Two other options exist in the source code: --super (equivalent of btrfs-select-super?) and --init-extent-tree (clears out any extent?)

Final words
We give the broad lines here, but BTRFS can be very tricky especially when several subvolumes coming from several BTRFS volumes are used. And remember: BTRFS is still experimental at the date of writing :)

Lessons learned
Very interesting but still lacks some important features present in ZFS like RAID-Z, virtual volumes, management by attributes, filesystem streaming, etc.
Extremely interesting for Gentoo/Funtoo system partitions (snapshot/rollback capabilities). However, it is not integrated into portage yet.
If possible, use a file monitoring tool like Tripwire; this is handy to see which files have been corrupted once the filesystem is recovered, or if a bug happens
It is highly advised to not use the root subvolume when deploying a new Funtoo instance or put any kind of data on it in a more general case. Rolling back a data snapshot will be much easier and much less error prone (no copy process, just a matter of ‘swapping’ the subvolumes).
Backup, backup, backup your data! ;)


Posted in Uncategorized | Leave a comment

linux ifcfg keywords

/usr/share/doc/initscripts*/sysconfig.txt

11.1 About Network Interfaces

Each physical and virtual network device on an Oracle Linux system has an associated configuration file named ifcfg-interface in the /etc/sysconfig/network-scripts directory, where interface is the name of the interface. For example:

# cd /etc/sysconfig/network-scripts
# ls ifcfg-*
ifcfg-eth0 ifcfg-eth1 ifcfg-lo
In this example, there are two configuration files for Ethernet interfaces, ifcfg-eth0 and ifcfg-eth1, and one for the loopback interface, ifcfg-lo. The system reads the configuration files at boot time to configure the network interfaces.

The following are sample entries from an ifcfg-eth0 file for a network interface that obtains its IP address using the Dynamic Host Configuration Protocol (DHCP):

DEVICE="eth0"
NM_CONTROLLED="yes"
ONBOOT=yes
USERCTL=no
TYPE=Ethernet
BOOTPROTO=dhcp
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
UUID=5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03
HWADDR=08:00:27:16:C3:33
PEERDNS=yes
PEERROUTES=yes
If the interface is configured with a static IP address, the file contains entries such as the following:

DEVICE="eth0"
NM_CONTROLLED="yes"
ONBOOT=yes
USERCTL=no
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
UUID=5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03
HWADDR=08:00:27:16:C3:33
IPADDR=192.168.1.101
NETMASK=255.255.255.0
BROADCAST=192.168.1.255
PEERDNS=yes
PEERROUTES=yes
The following configuration parameters are typically used in interface configuration files:

BOOTPROTO
How the interface obtains its IP address:

bootp
Bootstrap Protocol (BOOTP).

dhcp
Dynamic Host Configuration Protocol (DHCP).

none
Statically configured IP address.

BROADCAST
IPv4 broadcast address.

DEFROUTE
Whether this interface is the default route.

DEVICE
Name of the physical network interface device (or a PPP logical device).

HWADDR
Media access control (MAC) address of an Ethernet device.

IPADDR
IPv4 address of the interface.

IPV4_FAILURE_FATAL
Whether the device is disabled if IPv4 configuration fails.

IPV6_FAILURE_FATAL
Whether the device is disabled if IPv6 configuration fails.

IPV6ADDR
IPv6 address of the interface in CIDR notation. For example: IPV6ADDR="2001:db8:1e11:115b::1/32"

IPV6INIT
Whether to enable IPv6 for the interface.

MASTER
Specifies the name of the master bonded interface, of which this interface is a slave.

NAME
Name of the interface as displayed in the Network Connections GUI.

NETMASK
IPv4 network mask of the interface.

NETWORK
IPV4 address of the network.

NM_CONTROLLED
Whether the network interface device is controlled by the network management daemon, NetworkManager.

ONBOOT
Whether the interface is activated at boot time.

PEERDNS
Whether the /etc/resolv.conf file used for DNS resolution contains information obtained from the DHCP server.

PEERROUTES
Whether the information for the routing table entry that defines the default gateway for the interface is obtained from the DHCP server.

SLAVE
Specifies that this interface is a component of a bonded interface.

TYPE
Interface type.

USERCTL
Whether users other than root can control the state of this interface.

UUID
Universally unique identifier for the network interface device.
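
As a worked example using the parameters above, a minimal static configuration for a second interface might look like this (addresses are placeholders); after saving the file, the interface can be activated with ifup:

# cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE="eth1"
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED="no"
BOOTPROTO=none
IPADDR=192.168.2.10
NETMASK=255.255.255.0

# ifup eth1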


Posted in Uncategorized | Leave a comment

cdot vifproblems

You may have a situation where an interface is configured in the RDB (VifMgr)
but no longer present in the local user interface.

1. create a new interface
2. delete the userinterface entry
3. try to create it again (fails because it is in VifMgr)
4. view VifMgr entry
5. delete the VifMgr entry

1.
net int create -vserver student2 -lif test2 -role data -data-protocol nfs,cifs,fcache -home-node cluster1-02 -home-port e0c -address 192.168.81.201 -netmask 255.255.255.0 -status-admin up

2.
set diag; net int ids delete -owner student2 -name test2

3.
net int create -vserver student2 -lif data1 -role data -data-protocol nfs,cifs,fcache -home-node cluster1-02 -home-port e0c -address 192.168.81.215 -netmask 255.255.255.0 -status-admin up
(network interface create)
Info: Failed to create virtual interface
Error: command failed: LIF "test2" with ID 1034 (on Vserver ID 6), IP address 192.168.81.201, is configured in the VIFMgr but is not visible in the user interface. Contact support personnel for further assistance.

4.
debug smdb table vifmgr_virtual_interface show -role data
-fields lif-name,lif-id

5.
debug smdb table vifmgr_virtual_interface delete -node cluster1-02 -lif-id lif_ID
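
To confirm the stale entry is gone, list the LIFs of the vserver and re-run the debug query; both should now come back clean:

net int show -vserver student2
debug smdb table vifmgr_virtual_interface show -role data -fields lif-name,lif-id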

Posted in Uncategorized | Leave a comment

cdot aggr troubleshoot with debug vreport

If there is an inconsistency between VLDB and WAFL on a local node,
you can fix that with “debug”

Create an aggregate in the nodeshell
This aggregate will be known in WAFL but not in VLDB

cl1::*> node run -node cl1-01
Type ‘exit’ or ‘Ctrl-D’ to return to the CLI
cl1-01> options nodescope.reenabledcmds "aggr=create,destroy,status,offline,online"
cl1-01> aggr create aggrwafl 5
Creation of an aggregate with 5 disks has completed.

Create an aggregate in the clustershell and remove it from WAFL.
This aggregate will then be known in VLDB but removed from WAFL.

cl1::>aggr create rdbaggr -diskcount 5 -node cl1-01
cl1::> node run -node cl1-01
Type ‘exit’ or ‘Ctrl-D’ to return to the CLI
cl1-01> aggr offline rdbaggr
Aggregate ‘rdbaggr’ is now offline.
cl1-01> aggr destroy rdbaggr
Are you sure you want to destroy this aggregate? y
Aggregate ‘rdbaggr’ destroyed.
cl1-01> exit
logout

Run dbug vreport.

cl1::> set diag
cl1::*> debug vreport show

aggregate Differences:
Name Reason Attributes
——– ——- —————————————————
aggrwafl(8e836c6d-76a0-4202-bb40-22694424f847)
Present in WAFL Only
Node Name: cl1-01
Aggregate UUID: 8e836c6d-76a0-4202-bb40-22694424f847
Aggregate State: online
Aggregate Raid Status: raid_dp

rdbaggr Present in VLDB Only
Node Name: cl1-01
Aggregate UUID: d7cfb17e-a374-43c6-b721-debfcf549191
Aggregate State: unknown
Aggregate Raid Status:
2 entries were displayed.

Fix VLDB ( aggregate will be added to VLDB )
cl1::debug*> debug vreport fix -type aggregate -object aggrwafl(8e836c6d-76a0-4202-bb40-22694424f847)

Fix VLDB ( aggregate will be removed from VLDB )
cl1::debug*> debug vreport fix -type aggregate -object rdbaggr
cl1::debug*> debug vreport show
This table is currently empty.
Info: WAFL and VLDB volume/aggregate records are consistent.
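
As a final check (not part of the transcript above), the aggregate list from the clustershell should now match what the node reports locally:

cl1::*> storage aggregate show -node cl1-01 -fields aggregate,state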

=================================================================

Posted in netapp | Leave a comment

Mac Locale warnings after OS update

vi $HOME/.ssh/config
SendEnv LANG LC_*

Posted in Uncategorized | Leave a comment

netapp 7-mode lun usage -s and lun clone dependency

create a lun and create a busy lun situation.
By default, snapshot_clone_dependency is switched off.
If you switch it on before the actions below, you can delete a snapshot
even if more recent snapshots still hold pointers to the LUN blocks.
vol options volume_name snapshot_clone_dependency [on|off]
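
For the volume used in the example below that would be, for example (run before the clone is created):

filer1> vol options lunvol_n1 snapshot_clone_dependency on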

filer1> snap create lunvol_n1 snapbeforelun

filer1> options licensed_feature.iscsi.enable on
filer1> iscsi start
filer1> lun setup
This setup will take you through the steps needed to create LUNs
and to make them accessible by initiators. You can type ^C (Control-C)
at any time to abort the setup and no unconfirmed changes will be made
to the system.
—output skipped—
Do you want to create another LUN? [n]:

filer1> lun show -m
LUN path Mapped to LUN ID Protocol
———————————————————————–
/vol/lunvol_n1/lun1 win 0 iSCSI
On the windows system, connect to the filer target with the iSCSI initiator and
use Computer -> Manage -> Storage to rescan disks, initialize the new disk, and
put an NTFS filesystem on it.

filer1> snap create lunvol_n1 snapafterlun

filer1> lun clone create /vol/lunvol_n1/lunclone -b /vol/lunvol_n1/lun1 snapafterlun

filer1> lun map /vol/lunvol_n1/lunclone win
On Windows, rescan disks, mark the partition as active and assign a drive letter.
Now you can restore any files from the new (cloned) lun to the already existing lun.

filer1> snap create lunvol_n1 snaprandom

filer1> snap delete lunvol_n1 snapafterlun
Snapshot snapafterlun is busy because of snapmirror, sync mirror, volume clone, snap restore, dump, CIFS share, volume copy, ndmp, WORM volume, SIS Clone, LUN clone

filer1> lun snap usage -s lunvol_n1 snapafterlun
You need to delete the following LUNs before deleting the snapshot
/vol/lunvol_n1/lunclone
You need to delete the following snapshots before deleting the snapshot
Snapshot – hourly.0
Snapshot – snaprandom
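
When the clone is no longer needed, one way to release the backing snapshot is to
remove the clone first and then the more recent snapshots reported above; a sketch
with the names from this example (the lun must be unmapped and offline before it
can be destroyed):

filer1> lun offline /vol/lunvol_n1/lunclone
filer1> lun unmap /vol/lunvol_n1/lunclone win
filer1> lun destroy /vol/lunvol_n1/lunclone
filer1> snap delete lunvol_n1 snaprandom
filer1> snap delete lunvol_n1 hourly.0
filer1> snap delete lunvol_n1 snapafterlun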
 

Posted in Uncategorized | Leave a comment

linux mount cifs share

mkdir /mnt/share

mount -t cifs //windowsserver/pathname /mnt/share -o username=administrator,password=********
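
To keep the password out of the command line and the shell history, a credentials
file can be used instead; a minimal sketch, assuming /root/.smbcred is created for
this purpose:

cat > /root/.smbcred <<EOF
username=administrator
password=********
EOF
chmod 600 /root/.smbcred

mount -t cifs //windowsserver/pathname /mnt/share -o credentials=/root/.smbcred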

 

Posted in linux | Leave a comment

netapp 7-mode snapvault restore files from backuplun

1. filer1 is primary filer and has aggr aggr1.

2. filer2 is secondary filer and has aggr aggr1.

3. windows is connected with iSCSI to filer1 and filer2
the igroup on both filers is called wingroup.

4. create a sourcevolume,qtree,lun on filer1 and map the lun.
- vol create volsource aggr1 500m
- qtree create /vol/volsource/q1
- lun create -s 100m -t windows /vol/volsource/q1/l1
- lun map /vol/volsource/q1/l1 wingroup
(the lun should now be visible on windows after a
rescan of the disks)

5. create a destinationvolume on filer2.
- vol create voldest aggr1 500m

6. setup a snapvault snapshot schedule on filer1.
- snapvault snap sched volsource snap1 2@0-23

7. setup a snapvault snapshot schedule on filer2.
- snapvault snap sched -x voldest snap1 40@0-23

8. Start the snapvault relation from filer2.
- snapvault start -S filer1:/vol/volsource/q1 filer2:/vol/voldest/q1

9. View the snapshot on filer2.
- snap list voldest
filer2(4079432752)_voldest-base.0 (busy,snapvault)
(this is the snapshot to clone the lun from)

10. Clone the lun on filer2.
- lun clone create /vol/voldest/backlun -b /vol/voldest/q1/l1 filer2(4079432752)_voldest-base.0

11. Online the lun and map it to wingroup.
- lun online /vol/voldest/backlun
- lun map /vol/voldest/backlun wingroup

12. On windows, rescan disks and find the drive to restore files
from.
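
13. Optional cleanup on filer2 once the restore is done (a sketch, using the names
from the steps above; the clone must be unmapped and offline before it can be destroyed).
- lun unmap /vol/voldest/backlun wingroup
- lun offline /vol/voldest/backlun
- lun destroy /vol/voldest/backlun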

Posted in Uncategorized | Leave a comment

solaris 11 zfs ARC

1. The Adaptive Replacement Cache
An ordered list of recently used resource entries; most recently
used at the top and least recently used at the bottom. Entries used
for a second time are placed at the top of the list and the bottom
entries will be evicted first. Its predecessor is the classic LRU (Least
Recently Used) cache, where the least recently used entry is evicted first.
ARC is an improvement because it keeps two lists, T1 and T2:
T1 for recent cache entries
T2 for frequent entries, referenced at least twice

2. Solaris 11 allows ZFS to lock pages in memory so that they cannot be vacated.
This may cause a problem for other programs in need of memory, or
when another filesystem is used alongside ZFS.

3. ZFS claims all memory except for 1 GB, or at least 3/4 of it, whichever is larger.

4. How much memory do we have?
prtconf | grep Mem
Memory size: 3072 Megabytes

5. See ARC statistics:
mdb -k
>::arc
(output skipped)
c_max = 2291 MB

6. How much memory is in use by ZFS?
mdb -k
>::memstat
Page Summary     Pages     MB   %Tot
Kernel          136490    533    17%
ZFS File Data    94773    370    12%


7. Is the tunable parameter to limit ARC for ZFS set?
0 means it is not set; a non-zero value is the configured limit in bytes.
mdb -k
> zfs_arc_max/E
zfs_arc_max:
zfs_arc_max: 0

8. What is the target max size for ARC?
kstat -p zfs::arcstats:c_max
zfs:0:arcstats:c_max 2402985984

(2402985984 bytes is 2291 MB)

9. How much does ZFS use at the moment?
kstat -p zfs::arcstats:size 5 (show results every 5 seconds)
zfs:0:arcstats:size 539604016
zfs:0:arcstats:size 539604016

(now start reading files and see the number increase)

for i in `seq 1 100`; do cat /var/adm/messages >>/test/data;
cat /test/data > /test/data$i; done

kstat -p zfs::arcstats:size 5 (show results every 5 seconds)
zfs:0:arcstats:size 539604016
zfs:0:arcstats:size 1036895816
zfs:0:arcstats:size 1036996288
zfs:0:arcstats:size 1037086728

10. How much memory is in use by ZFS now?
mdb -k
> ::memstat
Page Summary     Pages     MB   %Tot
Kernel          140147    547    18%
ZFS File Data   236377    923    30%

11. To set the tunable parameter, modify /etc/system and reboot.
vi /etc/system and add:
set zfs:zfs_arc_max=1073741824
(after the reboot the cap will be 1 GB instead of 2.2 GB)
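
After the reboot the new cap can be verified with the same tools used above;
with the 1 GB setting you would expect roughly:

kstat -p zfs::arcstats:c_max
zfs:0:arcstats:c_max 1073741824

echo ::arc | mdb -k | grep c_max
c_max = 1024 MB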

—–

Brendan Gregg's archits script.
# cat archits.sh
#!/usr/bin/sh

interval=${1:-5}        # 5 secs by default

kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses $interval | awk '
BEGIN {
        printf "%12s %12s %9s\n", "HITS", "MISSES", "HITRATE"
}
/hits/ {
        hits = $2 - hitslast
        hitslast = $2
}
/misses/ {
        misses = $2 - misslast
        misslast = $2
        rate = 0
        total = hits + misses
        if (total)
                rate = (hits * 100) / total
        printf "%12d %12d %8.2f%%\n", hits, misses, rate
}
'
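
To run it, make the script executable and pass an interval in seconds (it defaults to 5):

chmod +x archits.sh
./archits.sh 1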
—————————

Posted in Uncategorized | Leave a comment

Performance seconds

cpu regs   300 ns
SSL        25 us – 250 us
Disk       5 ms – 20 ms
Optical    100 ms

nanosecond to second is as second to 31.710 years
microsecond to second is as second to 11.574 days
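
The analogies are easy to reproduce (365-day years assumed):

awk 'BEGIN { printf "1e9 seconds = %.3f years\n", 1e9/(365*24*3600) }'
1e9 seconds = 31.710 years
awk 'BEGIN { printf "1e6 seconds = %.3f days\n", 1e6/(24*3600) }'
1e6 seconds = 11.574 days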


Posted in Uncategorized | Leave a comment

Solaris remove empty lines in vi

:v/./d
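
The :g form does the same thing; the second variant (vim) also removes whitespace-only lines:

:g/^$/d
:g/^\s*$/d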

Posted in Uncategorized | Leave a comment

netapp sim upgrade

By Ron Havenaar/Dirk Oogjen

This is how it works with a new simulator:

1. Download the NetApp CDOT 8.2.1 simulator from support.netapp.com
2. Unzip it to a directory of choice.
3. From that directory, import the DataONTAP.vmx file in VMware.
That can be a recent version of VMware workstation, Fusion or ESX, whatever you have.
4. The name of the imported vsim is now: vsim_netapp-cm. For this first node, you might first want to rename it to something like vsim_netapp-cm1.
5. The default setting for memory usage in the vm is 1.6 Gbyte. However, that is not enough for the upgrade.
You need to set this to at least 4 Gbyte. Later, when done with the upgrade, you can turn that back to 1.6 Gbyte. I tried, but less than 4 Gbyte won’t work…
6. Power On the simulator, but make sure you interrupt the boot process immediately!
You need the VLOADER> prompt.
In case the machine has been booted before (it is not a new machine), or you accidentally let it boot all the way through, you cannot continue with it.
You MUST start with a new machine, and you MUST have the VLOADER> prompt at first boot if you want to change the serial numbers.
You CANNOT change the serial numbers on a machine that has been booted through before. (well… there is always a way.. but that takes much longer than starting from scratch again)
Just start over again if in doubt.
7. Two things need to happen now:
a. The serial numbers have to be set.
Each simulator in the set must have its own serial numbers. (more on this later)
b. You need to choose what kind of disks you want with this node.
These are the procedures for this: (next steps)
8. First the serial number:
Take for example the Serial Number 4082368507.
VLOADER> setenv SYS_SERIAL_NUM 4082368507
VLOADER> setenv bootarg.nvram.sysid 4082368507
9. Then choose the disk set to use for this node.
AGGR0 must be at least 10Gbyte for the upgrade, so be aware if you use small disks you have to add a lot of disks to AGGR0.

Pick a simdisk inventory and enter the corresponding setenv commands at the VLOADER> prompt.

The default simdisk inventory is 28x1gb 15k:
setenv bootarg.vm.sim.vdevinit "23:14:0,23:14:1"
setenv bootarg.sim.vdevinit "23:14:0,23:14:1"

This inventory enables the simulation of multiple disk tiers and/or flash pools: (28x1gb 15k+14x2gb 7.2k+14x100mb SSD)
setenv bootarg.vm.sim.vdevinit "23:14:0,23:14:1,32:14:2,34:14:3"
setenv bootarg.sim.vdevinit "23:14:0,23:14:1,32:14:2,34:14:3"

This one makes the usable capacity of the DataONTAP-sim.vmdk even larger (56x4gb 15k)
setenv bootarg.vm.sim.vdevinit "31:14:0,31:14:1,31:14:2,31:14:3"
setenv bootarg.sim.vdevinit "31:14:0,31:14:1,31:14:2,31:14:3"

Using type 36 drives you can create the largest capacity.

Explanation of the above:
31:14:0 means: type 31 (4 Gbyte), 14 disks, shelf 0
Type 23: 1 Gbyte FC
Type 30: 2 Gbyte FC
Type 31: 4 Gbyte FC
Type 35: 500 Mbyte SSD (simulated of course, for flashpool)
Type 36: 9 Gbyte SAS

Choose any combination of disks, but the maximum number of disks a simulator would allow is 56, no matter what size or type.
10. Then Boot the simulator.
VLOADER> boot
11. Wait until you see the menu announcement.
Press Ctrl-C for the menu. Do NOT press enter after ctrl-C, ONLY ctrl-c !
If you press ctrl-c too quickly, you get the VLOADER> prompt instead.
If so, type: boot and wait for the menu announcement.
12. From the menu, choose option 4.
Confirm twice that you indeed would like to wipe all disks.
The system reboots.
13. Wait for the zeroing process to finish. That could take a while… Follow the dots… ;-)
14. When done, follow the setup procedure and build your first Clustered Data Ontap node.
15. The cluster base license key in a cdot vsim is:
SMKQROWJNQYQSDAAAAAAAAAAAAAA (that is 14 x A)
Further licenses can be easily added later using putty, since with putty you can cut and paste (on the VMware console in many cases you cannot), so you can skip the other licenses for now.
16. Turn off auto support:
Cluster::> autosupport modify -support disable -node *
17. Assign all disks:
Cluster::> disk assign -all true -node <node_name>
18. Now add enough disks to Aggr0 to allow for enough space for the upgrade.
It has to be at least 10 Gbyte.
a. Cluster::> Storage aggregate show
b. Cluster::> Storage aggregate add-disks <aggr name> <# disks>
example: Cluster::> stor aggr add-disks aggr0 2
c. Cluster::> Storage aggregate show
d. Keep adding disks until you reach at least 10 Gbyte for Aggr0.
19. Log in with Putty and add all licenses for this node.
20. Check if the cluster is healthy:
Cluster::> cluster show
It should show the word ‘healthy’
21. Configure NTP time:
a. This is how to check it:
Cluster::> system services ntp server show
Cluster::> system date show (or: cluster date show)
b. This is an example of how to set it:
Cluster::> timezone Europe/Amsterdam
Cluster::> system services ntp server create -server <ntp_server_ip> -node <node_name>
Cluster::> cluster date show
22. This is how you can determine which image (OnTap version) is used on the simulator:
Cluster::> system node image show
23. Now it is time to make a VM Snapshot of the simulator, before you do the actual update.
A snapshot would allow you to revert to this state, start again with the update process, or update to a different version. Wait for the VM snapshot to finish before you continue.
24. If you don't have it yet, download the Clustered Data Ontap image from the download.netapp.com website. You need an account there to do so.
You can use the FAS6290 image, e.g. 822_q_image.tgz or 83_q_image.tgz (or whatever you need)
And yes, the simulator can be updated with this image! Neat huh!
25. We are going to use HTTP File Server (or HFS) to do the upgrade. (An alternative that does not require HFS is sketched right after this list.)
If you don’t have it yet, this is a free and very easy to use program, excellent for this job.
You can download it from http://www.rejetto.com/hfs/
26. Add the .tgz image to HFS. Drag and drop will do. Do NOT unzip the image.
27. Make sure HFS is using the right local IP address.
If you have VMware on your local machine or any other program using a different IP range, HFS might use the wrong IP address. In HFS, go to the menu, under IP Address and check if the IP address is correct.
28. In the first line in HFS you see the URL you need to use. Write down this information.
29. Now we are going to do the upgrade:
Cluster::> system node image update -node * -package <package_url> -replace-package true
Example: Cluster::> system node image update -node * -package http://192.168.178.88:8080/822_q_image.tgz -replace-package true

30. Since we only have one node, you cannot check if this is working fine.
If you decided to set up two or more nodes to start with, this is what you can do to check the progress on a different node:
Cluster::> system node image show-update-progress -node *
You have to keep doing this until the system says: exited. Exit status Success
Otherwise, wait about half an hour. It takes quite a while! Be patient….!!!
31. When done, check if everything went okay:
Cluster::> system node image show
You should see the new version as image 2, but as you can see, that is not the default yet.

32. If it is indeed image2, choose to set this image as default:
Cluster::> system node image modify -node <node_name> -image image2 -isdefault true
33. Check if that is okay now:
Cluster::> system node image show
Under ‘Is default’ it should say ‘true’
34. Reboot the node
Cluster::> reboot -node *
35. Just let it reboot completely. Do not interrupt. Wait for the login prompt.
The system would probably tell you the update process is not ready yet. No problem.
Just wait a few minutes. That will be done automatically.
36. If you want to check if it is still busy with the upgrade:
Cluster::> set -privilege advanced
Cluster::> system node upgrade-revert show
If you see the message: upgrade successful, it is ready.
37. You can check to see if everything has been upgraded correctly with:
Cluster::> system node image show
Cluster::> version
38. In VMware, you can set the memory usage back to 1.6 Gbyte now.
39. Now add the second node and follow the same procedure.
40. Alternatively, you could have set up two or more nodes with 8.2.1 first and do the upgrade afterwards, whatever you like better.
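
If you would rather not install HFS, any web server that can serve the .tgz file over plain HTTP will do. A minimal sketch, assuming Python 3 is available on the machine holding the image (adjust path, port and filename to your situation):

cd /path/to/image_dir        # directory that contains 822_q_image.tgz
python3 -m http.server 8080
# the image is then reachable as http://<your_ip>:8080/822_q_image.tgz

Then use that URL in the system node image update command above.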

Notes (not with detailed instructions):
The same procedure applies to 7-mode. The upgrade instructions are a bit different, but the procedure is the same. What we found out in 7-mode, however, is that we needed to make the /cfcard directory writable first, since it was read-only. I did not have that issue with the cdot upgrade.
If you encounter that issue, this is what you need to do before the upgrade:
1. Go to the systemshell
2. sudo chmod -R 777 /cfcard

Furthermore, if you want to run the simulator on a Vmware ESX-i box you need to change something there too:
1. Allow remote access with SSH and use putty to log in on your ESX-i server:
2. Edit the file /etc/rc.local.d/local.sh
3. Add the following text as last line:
vmkload_mod multiextent
4. Reboot the ESX box.

Posted in Uncategorized | Leave a comment