cdot 8.3 networking diagram

cdot networking 8.3

Posted in Uncategorized | Leave a comment

cdot metrocluster switchback

metrocluster heal -phase aggregates
metrocluster heal -phase root-aggregates
metrocluster switchback
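
To verify the result afterwards, a quick hedged check (these are the standard MetroCluster status commands; output will differ per setup):

metrocluster operation show
metrocluster show

The first command reports the state of the most recent heal/switchback operation, the second the overall MetroCluster state.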

Posted in Uncategorized | Leave a comment

vaai

vaai for netapp

Posted in Uncategorized | Leave a comment

cdot 8.3.1 licenses

CLUSTERED SIMULATE ONTAP LICENSES
+++++++++++++++++++++++++++++++++

These are the licenses that you use with the clustered Data ONTAP version
of Simulate ONTAP to enable Data ONTAP features.

There are four groups of licenses in this file:

- cluster base license
- feature licenses for the ESX build
- feature licenses for the non-ESX build
- feature licenses for the second node of a 2-node cluster

Cluster Base License (Serial Number 1-80-000008)
================================================

You use the cluster base license when setting up the first simulator in a cluster.

Cluster Base license = SMKQROWJNQYQSDAAAAAAAAAAAAAA
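
As a hedged example of using one of these codes, a license can be installed from the clustershell with the standard license commands (the cluster name cl1 is just a placeholder):

cl1::> system license add -license-code SMKQROWJNQYQSDAAAAAAAAAAAAAA
cl1::> system license show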

Clustered Data ONTAP Feature Licenses
=====================================

You use the feature licenses to enable unique Data ONTAP features on your simulator.

Licenses for the ESX build (Serial Number 4082368511)
-----------------------------------------------------

Use these licenses with the VMware ESX build.

Feature License Code Description
------------------- ---------------------------- --------------------------------------------

CIFS CAYHXPKBFDUFZGABGAAAAAAAAAAA CIFS protocol
FCP APTLYPKBFDUFZGABGAAAAAAAAAAA Fibre Channel Protocol
FlexClone WSKTAQKBFDUFZGABGAAAAAAAAAAA FlexClone
Insight_Balance CGVTEQKBFDUFZGABGAAAAAAAAAAA OnCommand Insight and Balance products
iSCSI OUVWXPKBFDUFZGABGAAAAAAAAAAA iSCSI protocol
NFS QFATWPKBFDUFZGABGAAAAAAAAAAA NFS protocol
SnapLock UHGXBQKBFDUFZGABGAAAAAAAAAAA SnapLock Compliance
SnapLock_Enterprise QLXEEQKBFDUFZGABGAAAAAAAAAAA SnapLock Enterprise
SnapManager GCEMCQKBFDUFZGABGAAAAAAAAAAA SnapManager and SnapDrive products
SnapMirror KYMEAQKBFDUFZGABGAAAAAAAAAAA SnapMirror, including synchronous SnapMirror
SnapProtect SWBBDQKBFDUFZGABGAAAAAAAAAAA SnapProtect Applications
SnapRestore YDPPZPKBFDUFZGABGAAAAAAAAAAA SnapRestore
SnapVault INIIBQKBFDUFZGABGAAAAAAAAAAA SnapVault primary and secondary

Licenses for the non-ESX build (Serial Number 4082368507)
---------------------------------------------------------

Use these licenses with the VMware Workstation, VMware Player, and VMware Fusion build.

Feature License Code Description
------------------- ---------------------------- --------------------------------------------

CIFS YVUCRRRRYVHXCFABGAAAAAAAAAAA CIFS protocol
FCP WKQGSRRRYVHXCFABGAAAAAAAAAAA Fibre Channel Protocol
FlexClone SOHOURRRYVHXCFABGAAAAAAAAAAA FlexClone
Insight_Balance YBSOYRRRYVHXCFABGAAAAAAAAAAA OnCommand Insight and Balance products
iSCSI KQSRRRRRYVHXCFABGAAAAAAAAAAA iSCSI protocol
NFS MBXNQRRRYVHXCFABGAAAAAAAAAAA NFS protocol
SnapLock QDDSVRRRYVHXCFABGAAAAAAAAAAA SnapLock Compliance
SnapLock_Enterprise MHUZXRRRYVHXCFABGAAAAAAAAAAA SnapLock Enterprise
SnapManager CYAHWRRRYVHXCFABGAAAAAAAAAAA SnapManager and SnapDrive products
SnapMirror GUJZTRRRYVHXCFABGAAAAAAAAAAA SnapMirror, including synchronous SnapMirror
SnapProtect OSYVWRRRYVHXCFABGAAAAAAAAAAA SnapProtect Applications
SnapRestore UZLKTRRRYVHXCFABGAAAAAAAAAAA SnapRestore
SnapVault EJFDVRRRYVHXCFABGAAAAAAAAAAA SnapVault primary and secondary

Licenses for the second node in a cluster (Serial Number 4034389062)
--------------------------------------------------------------------

Use these licenses with the second simulator in a cluster (either the ESX or non-ESX build).

Feature License Code Description
------------------- ---------------------------- --------------------------------------------

CIFS MHEYKUNFXMSMUCEZFAAAAAAAAAAA CIFS protocol
FCP KWZBMUNFXMSMUCEZFAAAAAAAAAAA Fibre Channel Protocol
FlexClone GARJOUNFXMSMUCEZFAAAAAAAAAAA FlexClone
Insight_Balance MNBKSUNFXMSMUCEZFAAAAAAAAAAA OnCommand Insight and Balance products
iSCSI YBCNLUNFXMSMUCEZFAAAAAAAAAAA iSCSI protocol
NFS ANGJKUNFXMSMUCEZFAAAAAAAAAAA NFS protocol
SnapLock EPMNPUNFXMSMUCEZFAAAAAAAAAAA SnapLock Compliance
SnapLock_Enterprise ATDVRUNFXMSMUCEZFAAAAAAAAAAA SnapLock Enterprise
SnapManager QJKCQUNFXMSMUCEZFAAAAAAAAAAA SnapManager and SnapDrive products
SnapMirror UFTUNUNFXMSMUCEZFAAAAAAAAAAA SnapMirror, including synchronous SnapMirror
SnapProtect CEIRQUNFXMSMUCEZFAAAAAAAAAAA SnapProtect Applications
SnapRestore ILVFNUNFXMSMUCEZFAAAAAAAAAAA SnapRestore
SnapVault SUOYOUNFXMSMUCEZFAAAAAAAAAAA SnapVault primary and secondary

Posted in Uncategorized | Leave a comment

Cdot Advanced Drive Partitioning how to disable

In order to disable auto-partitioning (ADP), set the following boot arguments at the LOADER prompt before initial setup:
1. Run the following command to disable HDD auto-partitioning:
setenv root-uses-shared-disks? false

2. Run the following command to disable SSD storage pool partitioning:
setenv allow-ssd-partitions? false

3. Run the following command to disable auto-partitioning on All Flash FAS (AFF) systems:
setenv root-uses-shared-ssds? false
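
To double-check the settings before booting, print them back at the LOADER prompt (a sanity-check sketch; if your LOADER version does not accept a variable argument, plain printenv lists all settings):

printenv root-uses-shared-disks?
printenv allow-ssd-partitions?
printenv root-uses-shared-ssds?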

Posted in Uncategorized | Leave a comment

Restoring a vm from a Netapp Snapmirror (DP/XDP) destination

2015 peter van der weerd

Restoring a vm from a Netapp Snapmirror (DP/XDP) destination.

1. List the available snapshots on the mirror destination.

cl1::> snap show -vserver nfs1 -volume linvolmir
                                                           ---Blocks---
Vserver  Volume     Snapshot                               State    Size   Total% Used%
-------- ---------- -------------------------------------- -------- ------ ------ -----
nfs1     linvolmir
                    snapmirror.b640aac9-d77a-11e3-9cae-123478563412_2147484711.2015-10-13_075517
                                                           valid    2.26MB     0%    0%
                    VeeamSourceSnapshot_linuxvm.2015-10-13_1330
                                                           valid    0B         0%    0%
2 entries were displayed.

2. Create a flexclone.

cl1::> vol clone create -vserver nfs1 -flexclone clonedmir -junction-path /clonedmir -parent-volume linvolmir -parent-snapshot VeeamSourceSnapshot_linuxvm.2015-10-13_1330
(volume clone create)
[Job 392] Job succeeded: Successful

3. Connect the correct export-policy to the new volume.

cl1::> vol modify -vserver nfs1 -volume clonedmir -policy data

The rest is done on the ESXi server.

4. Mount the datastore to ESXi.

~ # esxcfg-nas -a clonedmir -o 192.168.4.103 -s /clonedmir
Connecting to NAS volume: clonedmir
clonedmir created and connected.

5. Register the VM and note the ID of the VM.

~ # vim-cmd solo/registervm /vmfs/volumes/clonedmir/linux/linux.vmx
174

6. Power on the VM.

~ # vim-cmd vmsvc/power.on 174
Powering on VM:

7. Your prompt will not return until you answer the question about moving or copying.
Open a new session to ESXi, and list the question.

~ # vim-cmd vmsvc/message 174
Virtual machine message _vmx1:
msg.uuid.altered:This virtual machine might have been moved or copied.
In order to configure certain management and networking features, VMware ESX needs to know if this virtual machine was moved or copied.

If you don't know, answer "I copied it".
0. Cancel (Cancel)
1. I moved it (I moved it)
2. I copied it (I copied it) [default]
Answer the question.
The VMID is "174", the MessageID is "_vmx1" and the answer to the question is "1"

~ # vim-cmd vmsvc/message 174 _vmx1 1

Now the VM is started fully.

Just the commands

cl1::> snap show -vserver nfs1 -volume linvolmir
cl1::> vol clone create -vserver nfs1 -flexclone clonedmir -junction-path /clonedmir -parent-volume linvolmir -parent-snapshot VeeamSourceSnapshot_linuxvm.2015-10-13_1330
cl1::> vol modify -vserver nfs1 -volume clonedmir -policy data
~ # esxcfg-nas -a clonedmir -o 192.168.4.103 -s /clonedmir
~ # vim-cmd solo/registervm /vmfs/volumes/clonedmir/linux/linux.vmx
~ # vim-cmd vmsvc/power.on 174
~ # vim-cmd vmsvc/message 174
~ # vim-cmd vmsvc/message 174 _vmx1 1

Posted in Uncategorized | Leave a comment

cdot snmp

original link

Enabling SNMP, API access on NetApp cluster mode SVMs
In order to get complete monitoring of, and be able to delegate access to, Storage Virtual Machines on NetApp Cluster mode, it is necessary to add the SVMs as separate devices, and enable both SNMP and API access on the SVM itself.

The steps required to do so are:

- add an SNMP community for the SVM
- ensure SNMP is allowed by the firewall configuration of the SVM's interface: determine the interface used by the SVM and its firewall policy, and amend the policy if needed
- enable API access by allowing API access through the SVM firewall, and creating an API user.
In the following example, we will enable access on the images server.

To enable SNMP
First, we can check the current SNMP configuration:

scenariolab::> system snmp community show
scenariolab
        ro  Logically

Add SNMP community for the SVM (server) images:

scenariolab::> system snmp community add -type ro -community-name Logical -vserver images

Confirm SNMP configuration:

scenariolab::> system snmp community show
images
        ro  Logical

scenariolab
        ro  Logically

You can determine the firewall policy used by the interface for a vserver with the following command:

network interface show -fields firewall-policy
vserver  lif   firewall-policy
-------  ----  ---------------
foo      lif2  data
images   lif1  data

You can then determine if the policy for the server in question (images, using the data policy in our case) allows snmp:

scenariolab::> firewall policy show -service snmp
  (system services firewall policy show)
Policy           Service    Action IP-List
---------------- ---------- ------ --------------------
cluster          snmp       allow  0.0.0.0/0
data             snmp       deny   0.0.0.0/0
intercluster     snmp       deny   0.0.0.0/0
mgmt             snmp       allow  0.0.0.0/0

As the data policy does not allow SNMP, we could either amend the firewall policy, or create a new one. In this case, we will create a new firewall policy:

system services firewall policy create -policy data1 -service snmp -action allow -ip-list 0.0.0.0/0

scenariolab::> firewall policy show -service snmp
  (system services firewall policy show)
Policy           Service    Action IP-List
---------------- ---------- ------ --------------------
cluster          snmp       allow  0.0.0.0/0
data             snmp       deny   0.0.0.0/0
data1            snmp       allow  0.0.0.0/0
intercluster     snmp       deny   0.0.0.0/0
mgmt             snmp       allow  0.0.0.0/0

We can now assign the new policy to the interface used by the vserver images (lif1):

network interface modify -vserver images -lif lif1 -firewall-policy data1
SNMP is now enabled

API
To enable API access to the SVM, we must allow HTTP/HTTPS access through the firewall policy used by the SVM's interfaces.

These commands add HTTP and HTTPS access to the new firewall policy we created above, that is already applied to the interface for the vserver images.

system services firewall policy create -policy data1 -service http -action allow -ip-list 0.0.0.0/0
system services firewall policy create -policy data1 -service https -action allow -ip-list 0.0.0.0/0
Now we just need to create an API user in the context of this vserver:

security login create -username logicmonitor -application ontapi -authmethod password -vserver images -role vsadmin
You can now add the SVM as a host to LogicMonitor. You should define the snmp.community, netapp.user, and netapp.pass properties for the host to allow access.

Posted in Uncategorized | Leave a comment

cdot read logfiles using a browser

cl1::*> security login create -username log -application http -authmethod password

Please enter a password for user 'log':
Please enter it again:

cl1::*> vserver services web modify -vserver * -name spi -enabled true

Warning: The service 'spi' depends on: ontapi. Enabling 'spi' will enable all of its prerequisites.
Do you want to continue? {y|n}: y
3 entries were modified.

cl1::*> vserver services web access create -name spi -role admin -vserver cl1
cl1::*> vserver services web access create -name compat -role admin -vserver cl1

https://*cluster_mgmt_ip*/spi/*nodename*/etc/log

Posted in Uncategorized | Leave a comment

cdot setup events

# set up the SMTP server and sender; the SMTP server should be reachable.

event config modify -mailserver smtp.online.nl -mailfrom petervanderweerd@gmail.com

#check your settings

event config show
Mail From: petervanderweerd@gmail.com
Mail Server: smtp.online.nl

# create a destination for critical messages. the recipient will receive
# everything sent to the destination

event destination create -name critical_messages -mail p.w@planet.nl

# add message severity to destination. all messages with the
# particular severity will be sent to the destination

event route add-destinations {-severity <=CRITICAL} -destinations critical_messages
#check your settings

event route show -severity <=CRITICAL

# flood-prevention example: the message will be sent at most once per hour.

event route modify -messagename bootfs.varfs.issue -timethreshold 3600

#test
set d

event generate -messagename cpeer.unavailable -values "hello"

Posted in Uncategorized | Leave a comment

cdot autosupport test

cluster1::> system node autosupport invoke -type test -node node1

Posted in Uncategorized | Leave a comment

Flexpod UCS

UCS

Posted in Uncategorized | Leave a comment

cdot convert snapmirror to snapvault

Steps
Break the data protection mirror relationship by using the snapmirror break command.

The relationship is broken and the data protection destination volume becomes a read-write volume.

Delete the existing data protection mirror relationship, if one exists, by using the snapmirror delete command.

Remove the relationship information from the source SVM by using the snapmirror release command.
(This also deletes the Data ONTAP created Snapshot copies from the source volume.)

Create a SnapVault relationship between the primary volume and the read-write volume by using the snapmirror create command with the -type XDP parameter.

Convert the destination volume from a read-write volume to a SnapVault volume and establish the SnapVault relationship by using the snapmirror resync command, as shown below.
(Warning: all data newer than the common snapmirror.xxxxxx Snapshot copy will be lost. Also, the SnapVault destination should not be the source of another SnapVault relationship.)
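
Put together as commands, the conversion looks roughly like this (a hedged sketch: the svm/volume names are placeholders, snapmirror release runs on the source cluster, and exact paths depend on your setup):

dst::> snapmirror break -destination-path dst_svm:dst_vol
dst::> snapmirror delete -destination-path dst_svm:dst_vol
src::> snapmirror release -destination-path dst_svm:dst_vol
dst::> snapmirror create -source-path src_svm:src_vol -destination-path dst_svm:dst_vol -type XDP
dst::> snapmirror resync -destination-path dst_svm:dst_vol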

Posted in Uncategorized | Leave a comment

cdot mhost troubleshoot

1. go to the systemshell
set diag
systemshell -node cl1-01

2. unmount mroot
cd /etc
./netapp_mroot_unmount
logout

3. run cluster show a couple of times and see that health is false
cluster show

4. run cluster ring show to see that M-host is offline
cluster ring show
Node UnitName Epoch DB Epoch DB Trnxs Master Online
--------- -------- -------- -------- -------- --------- ---------
cl1-01 mgmt 6 6 699 cl1-01 master
cl1-01 vldb 7 7 84 cl1-01 master
cl1-01 vifmgr 9 9 20 cl1-01 master
cl1-01 bcomd 7 7 22 cl1-01 master
cl1-02 mgmt 0 6 692 - offline
cl1-02 vldb 7 7 84 cl1-01 secondary
cl1-02 vifmgr 9 9 20 cl1-01 secondary
cl1-02 bcomd 7 7 22 cl1-01 secondary

5. try to create a volume and see that the status of the aggregate
cannot be determined if you pick an aggregate on the node with the broken M-host.

6. now vldb will also be offline.

7. remount mroot by starting mgwd from the systemshell
set diag
systemshell -node cl1-01
/sbin/mgwd -z &

8. when you run cluster ring show, it should show vldb offline
cl1::*> cluster ring show
Node UnitName Epoch DB Epoch DB Trnxs Master Online
--------- -------- -------- -------- -------- --------- ---------
cl1-01 mgmt 6 6 738 cl1-01 master
cl1-01 vldb 7 7 87 cl1-01 master
cl1-01 vifmgr 9 9 24 cl1-01 master
cl1-01 bcomd 7 7 22 cl1-01 master
cl1-02 mgmt 6 6 738 cl1-01 secondary
cl1-02 vldb 0 7 84 - offline
cl1-02 vifmgr 0 9 20 - offline
cl1-02 bcomd 7 7 22 cl1-01 secondary

Notice that vifmgr has gone bad as well.

9. start vldb by running spmctl -s -h vldb
or run /sbin/vldb
in this case, do the same for vifmgr.

(this will open the databases again and the cluster will be healthy)
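
As a minimal sketch of that last step, reusing only the commands mentioned above (run them in the systemshell of the affected node; cl1-02 is the broken node in the outputs shown here):

cl1::*> systemshell -node cl1-02
% spmctl -s -h vldb
% spmctl -s -h vifmgr
% exit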

Posted in Uncategorized | Leave a comment

cdot lif troubleshoot

1. create a lif

net int create -vserver nfs1 -lif tester -role data -data-protocol cifs,nfs,fcache -home-node cl1-01 -home-port e0c -address 1.1.1.1 -netmask 255.0.0.0 -status-admin up

2. go to diag mode
set diag

3. view the owner of the new lif and delete the owner of the new lif
net int ids show -owner nfs1
net int ids delete -owner nfs1 -name tester
net int ids show -owner nfs1

4. run net int show and see that the lif is not there.
net int show

5. try to create the same lif again.
net int create -vserver nfs1 -lif tester -role data -data-protocol cifs,nfs,fcache -home-node cl1-01 -home-port e0c -address 1.1.1.1 -netmask 255.0.0.0 -status-admin up

(this will fail because the lif is still there, but has no owner)

6. debug the vifmgr table
debug smdb table vifmgr_virtual_interface show -role data -fields lif-name,lif-id
(this will show you the node, the lif-id and the lif-name)

7. using the lif-id from the previous output, delete the lif entry.
debug smdb table vifmgr_virtual_interface delete -node cl1-01 -lif-id 1030

8. see that the lif is gone.
debug smdb table vifmgr_virtual_interface show -role data -fields lif-name,lif-id

9. create the lif.
net int create -vserver nfs1 -lif tester -role data -data-protocol cifs,nfs,fcache -home-node cl1-01 -home-port e0c -address 1.1.1.1 -netmask 255.0.0.0 -status-admin up

Posted in Uncategorized | Leave a comment

CDOT 8.3 release notes

http://mysupport.netapp.com/documentation/docweb/index.html?productID=61898

Posted in Uncategorized | Leave a comment

openstack compute node on centos7

yum install -y https://rdo.fedorapeople.org/rdo-release.rpm

Posted in Uncategorized | Leave a comment

openstack storagebackends

storage backends

Posted in Uncategorized | Leave a comment

openstack lvm and cinder

If you do not specify a volume group, Cinder will create its own volume group
called cinder-volumes and use a loopback device as the physical volume.
If you do create a volume group, you should configure it as the default LVM
volume group for Cinder.

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
in cinder.conf
[DEFAULT]
volume_group=small
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
---
#
# Options defined in cinder.volume.drivers.lvm
#

# Name for the VG that will contain exported volumes (string
# value)
volume_group=small
---
[lvm]
volume_group=small
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

fdisk -l | grep sd
Disk /dev/sda: 17.2 GB, 17179869184 bytes, 33554432 sectors
/dev/sda1 * 2048 1026047 512000 83 Linux
/dev/sda2 1026048 33554431 16264192 8e Linux LVM
Disk /dev/sdb: 1073 MB, 1073741824 bytes, 2097152 sectors

pvcreate /dev/sdb
vgcreate small /dev/sdb

openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_group small
openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.lvm.LVMISCSIDriver

cinder create --display-name myvolume 1
list the id of the volume
cinder list
list the ids of the instances
nova list

attach the volume to an instance
nova volume-attach ba3ecb1c-53d4-4bea-b3db-d4099e204484 7d9034a8-7dce-447d-951b-5d4f7b947897 auto

On the instance you can use /dev/vdb as a local drive.
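
For example, to put a filesystem on it inside the instance (a generic sketch; /dev/vdb is the device name this lab reports, adjust if yours differs):

mkfs.ext4 /dev/vdb
mkdir -p /mnt/vol
mount /dev/vdb /mnt/vol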

Posted in Uncategorized | Leave a comment

CDOT 8.3 statistics catalog example

statistics catalog instance show -object lif
statistics catalog instance show -object volume

statistics catalog counter show -object lif
statistics catalog counter show -object volume

statistics start -object lif -counter recv_data
statistics stop
statistics show -object lif

Posted in Uncategorized | Leave a comment

CDOT 8.3 initiator ip in session

To view the ip-addresses in an iSCSI session:

cdot83::qos> iscsi session show -vserver iscsi -t

Vserver: iscsi
Target Portal Group: o9oi
Target Session ID: 2
Connection ID: 1
Connection State: Full_Feature_Phase
Connection Has session: true
Logical interface: o9oi
Target Portal Group Tag: 1027
Local IP Address: 192.168.4.206
Local TCP Port: 3260
Authentication Type: none
Data Digest Enabled: false
Header Digest Enabled: false
TCP/IP Recv Size: 131400
Initiator Max Recv Data Length: 65536
Remote IP address: 192.168.4.245
Remote TCP Port: 55063
Target Max Recv Data Length: 65536

Posted in Uncategorized | Leave a comment

CDOT 8.3 selective lunmapping

Selective LUN mapping (SLM) is a new SAN feature in cDOT 8.3. With SLM, only the
two nodes of the HA pair that owns the LUN map the LUN as 'reporting nodes'.
This reduces the number of paths visible to clients in large environments.

To list the reporting nodes:

This example is from the 8.3 simulator. There is no shared storage so
SFO is not possible. Hence only one node can report.

cdot83::qos> lun mapping show -vserver iscsi -fields reporting-nodes
vserver path igroup reporting-nodes
------- ------------------------------------------------ ------ ---------------
iscsi /vol/lun_17042015_182136_vol/lun_17042015_182136 i cdot83-02

When a volume moves from one HA pair to another, you should update the reporting
nodes with -remove-reporting-nodes and -add-reporting-nodes, as shown below.
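
A hedged example of updating the reporting nodes around a volume move (the vserver, path, and igroup names are taken from this simulator example; the destination aggregate name is a placeholder, and the exact selectors these commands accept may differ per release). Before the move, add the nodes of the destination HA pair; after the move, remove the old ones:

cdot83::> lun mapping add-reporting-nodes -vserver iscsi -path /vol/lun_17042015_182136_vol/lun_17042015_182136 -igroup i -destination-aggregate aggr_dest
cdot83::> lun mapping remove-reporting-nodes -vserver iscsi -path /vol/lun_17042015_182136_vol/lun_17042015_182136 -igroup i -remote-nodes true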

Posted in Uncategorized | Leave a comment

openstack on centos 7

quickstart

1. change selinux to permissive
setenforce 0

2. run
systemctl stop NetworkManager
systemctl disable NetworkManager
systemctl enable network
sudo yum update -y
sudo yum install -y https://rdo.fedorapeople.org/rdo-release.rpm
sudo yum install -y openstack-packstack
packstack --allinone

Posted in Uncategorized | Leave a comment

openstack restart all

To restart all openstack services:

# for svc in api cert compute conductor network scheduler; do
service openstack-nova-$svc restart
done

Redirecting to /bin/systemctl restart openstack-nova-api.service
Redirecting to /bin/systemctl restart openstack-nova-cert.service
Redirecting to /bin/systemctl restart openstack-nova-compute.service
Redirecting to /bin/systemctl restart openstack-nova-conductor.service
Redirecting to /bin/systemctl restart openstack-nova-network.service
Failed to issue method call: Unit openstack-nova-network.service failed to load: No such file or directory.
Redirecting to /bin/systemctl restart openstack-nova-scheduler.service

(note: openstack-nova-network fails because this setup uses Neutron instead of nova-network)

Restart the Neutron Open vSwitch agent and httpd manually:
/usr/bin/systemctl stop neutron-openvswitch-agent
/usr/bin/systemctl start neutron-openvswitch-agent
/usr/bin/systemctl status neutron-openvswitch-agent

service httpd stop
service httpd start

Posted in Uncategorized | Leave a comment

ZFS shadow migration exercise

Setting up shadow migration.
Solaris 10 has ip-address 192.168.4.159
ZFS appliance has ip-address 192.168.4.220

1. On Solaris10.

Create a directory called /mnt/data and put some files in it.

# mkdir /mnt/data
# cd /mnt/data
# cp /var/log/* .

Share /mnt/data with NFS.
# echo "share /mnt/data" >> /etc/dfs/dfstab
# svcadm enable nfs/server
# dfshares
RESOURCE SERVER ACCESS TRANSPORT
solaris10:/mnt/data solaris10 - -

2. On the storage appliance.

"Shares" "+Filesystems"
In the "Create Filesystem" screen, specify the name: "mig".
Set the Data migration source to "NFS".
Specify the source: "192.168.4.159:/mnt/data".
Apply

3. On Solaris 10.
Check whether /export/mig is shared.
# dfshares 192.168.4.220
RESOURCE SERVER ACCESS TRANSPORT
192.168.4.220:/export/mig 192.168.4.220 - -

Mount the share.
# cd /net/192.168.4.220/export/mig
# ls
Xorg.0.log snmpd.log syslog.1 syslog.5
Xorg.0.log.old sysidconfig.log syslog.2 syslog.6
authlog syslog syslog.3 syslog.7
postrun.log syslog.0 syslog.4

Done.

Create a share in the CLI:

zfs1:> shares
zfs1:shares> select default
zfs1:shares default> filesystem mig3
zfs1:shares default/mig3 (uncommitted)> set shadow=nfs://192.168.4.159/mnt/data
shadow = nfs://192.168.4.159/mnt/data (uncommitted)
zfs1:shares default/mig3 (uncommitted)> commit

Done.

Posted in Uncategorized | Leave a comment

ZFS replication_exercise

Project zfs1_proj on controller ZFS1 (192.168.4.220) is replicated
to controller ZFS2 (192.168.4.230).

On ZFS1, zfs1_proj has a share zfs1_proj_fs1 that is mounted
by Solaris 10 on mountpoint /mnt/fs1.

On ZFS1
* create project zfs1_proj and filesystem zfs1_proj_fs1 *
* make sure the filesystem is created in zfs1_proj! *
(this is not described)

1. Create replication target.
"Configuration" "Services" "Remote Replication" "+target"
In the Add Replication Target enter:
Name: zfs2_target
Hostname: 192.168.4.230
Root Password: *******

2. Setup replication to ZFS2.
"Shares" -> "Projects" edit "zfs_proj" -> "Replication"
"+Actions"
In the Add Replication Action window enter:
Target: zfs2_target
Pool: p0
Scheduled

On Solaris 10

Mount the share on /mnt/fs1 and create a file.
# mount 192.168.4.220:/export/zfs1_proj_fs1 /mnt/fs1
# cd /mnt/fs1; touch a

On ZFS1
"Projects" edit "zfs_proj" -> "Replication"
Click on "Sync now" ... watch the status bar.

On ZFS2
"Shares" "Projects" "Replica"
edit "zfs_proj"
Try to add a filesystem and note the errormessage.
Ok

On ZFS2
"Shares" "Projects" "Replica"
edit "zfs_proj"
"Replication"
Note the four icons: enable/disable, clone, sever, reverse.

Click on the "sever" icon.
In the Sever Replication windows enter:
"zfs2_proj"

Now the replication is stopped and the share on ZFS2 is accessible
by clients.

On Solaris
# umount -f /mnt/fs1
# mount 192.168.4.230:/export/zfs1_proj_fs1 /mnt/fs1
# ls /mnt/fs1
a

Do the same exercise a second time, but now click on the icon "reverse".
What happens?

Posted in Uncategorized | Leave a comment

ZFS solaris 10 iscsi initiator exercise

On Solaris

1. Determine whether the required software is installed.

-bash-3.2# pkginfo |grep iscsi
system SUNWiscsir Sun iSCSI Device Driver (root)
system SUNWiscsitgtr Sun iSCSI Target (Root)
system SUNWiscsitgtu Sun iSCSI Target (Usr)
system SUNWiscsiu Sun iSCSI Management Utilities (usr)

2. Enable iscsi_initiator

-bash-3.2# svcadm enable iscsi/initiator

-bash-3.2# svcs -a|grep iscsi
disabled Mar_17 svc:/system/iscsitgt:default
online Mar_17 svc:/network/iscsi/initiator:default

3. Determine the Solaris 10 iqn.
-bash-3.2# iscsiadm list initiator-node
Initiator node name: iqn.1986-03.com.sun:01:7ac1dcf7ffff.5300d745
Initiator node alias: solaris10
Login Parameters (Default/Configured):
Header Digest: NONE/-
Data Digest: NONE/-
Authentication Type: NONE
RADIUS Server: NONE
RADIUS access: unknown
Tunable Parameters (Default/Configured):
Session Login Response Time: 60/-
Maximum Connection Retry Time: 180/-
Login Retry Time Interval: 60/-
Configured Sessions: 0

4. Enable static and/or send targets
-bash-3.2# iscsiadm list discovery
Discovery:
Static: disabled
Send Targets: disabled
iSNS: disabled

iscsiadm modify discovery --static enable
iscsiadm modify discovery --sendtargets enable

----

On zfs

1. In the BUI create target
"configuration" "san" "+target"
In the create iscsi target window enter:
Alias: "zfs_t1"
Network Interfaces: "e1000g0"
Ok

Drag the target to below the Target Group "default"
A new targetgroup "targets-0" is created.
Click apply

Create Initiator
"configuration" "san" "+initiators"
In the Identify iSCSI Initiator window enter
Initiator IQN: "iqn.1986-03.com.sun:01:7ac1dcf7ffff.5300d745"
Alias: "solaris"
Ok

Drag the initiator to below the Initiator Group "default"
A new initiatorgroup "initiators-0" is created.
Click apply

2. Create a lun
"shares" "luns" "+luns"
In the Create Lun window enter:
Name: solarislun
Size: 2G
Target group: targets-0
Initiator group: initiators-0
Apply

---

On Solaris

-bash-3.2# iscsiadm add discovery-address 192.168.4.230:3260
-bash-3.2# iscsiadm modify discovery --sendtargets enable
-bash-3.2# devfsadm

Now the disk should show up with format.

Posted in Uncategorized | Leave a comment

solaris 10 initiator

# svcadm enable iscsi_initiator

# iscsiadm add discovery-address 192.168.248.213:3260

# iscsiadm modify discovery --sendtargets enable

# devfsadm -i iscsi

Posted in Uncategorized | Leave a comment

7000 iscsi

On 7000 cli:
configuration san iscsi targets create
set alias=a1
set interfaces=e1000g0
commit

On linux:
cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:3add18fcc55c

On 7000 cli:
configuration san iscsi initiators create
set alias=lin1
set initiator=iqn.1994-05.com.redhat:3add18fcc55c
commit

On 7000 cli:
shares
select default
lun lin1
set volsize=1g
commit

On linux:
iscsiadm -m discovery -t sendtargets -p 192.168.4.221
/etc/init.d/iscsi restart
fdisk -l

Posted in Uncategorized | Leave a comment

7000 cli network interfaces

log in to cli using ssh.

configuration net datalinks device
set label=replication
set links=e1000g2
commit

configuration net interfaces ip
set label=replication
set links=e1000g2
set v4addrs=192.168.4.223/24
commit

Posted in oracle | Leave a comment

linux btrfs

http://www.funtoo.org/BTRFS_Fun

BTRFS Fun
Important
BTRFS is still experimental even with latest Linux kernels (3.4-rc at date of writing) so be prepared to lose some data sooner or later or hit a severe issue/regressions/"itchy" bugs. Subliminal message: Do not put critical data on BTRFS partitions.

Introduction
BTRFS is an advanced filesystem mostly contributed by Sun/Oracle, whose origins date back to 2007. A good summary is given in [1]. BTRFS aims to provide a modern answer for making storage more flexible and efficient. According to its main contributor, Chris Mason, the goal was "to let Linux scale for the storage that will be available. Scaling is not just about addressing the storage but also means being able to administer and to manage it with a clean interface that lets people see what's being used and makes it more reliable." (Ref. http://en.wikipedia.org/wiki/Btrfs).

Btrfs, often compared to ZFS, is offering some interesting features like:

Using very few fixed location metadata, thus allowing an existing ext2/ext3 filesystem to be "upgraded" in-place to BTRFS.
Operations are transactional
Online volume defragmentation (online filesystem check is on the radar but is not yet implemented).
Built-in storage pool capabilities (no need for LVM)
Built-in RAID capabilities (both for the data and filesystem metadata). RAID-5/6 is planned for 3.5 kernels
Capabilities to grow/shrink the volume
Subvolumes and snapshots (extremely powerful, you can "rollback" to a previous filesystem state as if nothing had happened).
Copy-On-Write
Usage of B-Trees to store the internal filesystem structures (B-Trees are known to have a logarithmic growth in depth, thus making them more efficient when scanning)
Requirements
A recent Linux kernel (the BTRFS metadata format evolves from time to time, and mounting with a recent Linux kernel can make the BTRFS volume unreadable with an older kernel revision, e.g. Linux 2.6.31 vs Linux 2.6.30). You must also use sys-fs/btrfs-progs (0.19, or better, use -9999 which points to the git repository).

Playing with BTRFS storage pool capabilities
Whereas it would be possible to use btrfs just as you are used to under a non-LVM system, it shines in using its built-in storage pool capabilities. Tired of playing with LVM? 🙂 Good news: you do not need it anymore with btrfs.

Setting up a storage pool
BTRFS terminology is a bit confusing. If you already have used another 'advanced' filesystem like ZFS or some mechanism like LVM, it's good to know that there are many correlations. In the BTRFS world, the word volume corresponds to a storage pool (ZFS) or a volume group (LVM). Ref. http://www.rkeene.org/projects/info/wiki.cgi/165

The test bench uses disk images through loopback devices. Of course, in a real world case, you will use local drives or units through a SAN. To start with, 5 devices of 1 GiB are allocated:

# dd if=/dev/zero of=/tmp/btrfs-vol0.img bs=1G count=1
# dd if=/dev/zero of=/tmp/btrfs-vol1.img bs=1G count=1
# dd if=/dev/zero of=/tmp/btrfs-vol2.img bs=1G count=1
# dd if=/dev/zero of=/tmp/btrfs-vol3.img bs=1G count=1
# dd if=/dev/zero of=/tmp/btrfs-vol4.img bs=1G count=1
Then attached:

# losetup /dev/loop0 /tmp/btrfs-vol0.img
# losetup /dev/loop1 /tmp/btrfs-vol1.img
# losetup /dev/loop2 /tmp/btrfs-vol2.img
# losetup /dev/loop3 /tmp/btrfs-vol3.img
# losetup /dev/loop4 /tmp/btrfs-vol4.img
Creating the initial volume (pool)
BTRFS uses different strategies to store data and for the filesystem metadata (ref. https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices).

By default the behavior is:

metadata is replicated on all of the devices. If a single device is used the metadata is duplicated inside this single device (useful in case of corruption or bad sector, there is a higher chance that one of the two copies is clean). To tell btrfs to maintain a single copy of the metadata, just use single. Remember: dead metadata = dead volume with no chance of recovery.
data is spread amongst all of the devices (this means no redundancy; any data block left on a defective device will be inaccessible)
To create a BTRFS volume made of multiple devices with default options, use:

# mkfs.btrfs /dev/loop0 /dev/loop1 /dev/loop2
To create a BTRFS volume made of a single device with a single copy of the metadata (dangerous!), use:

# mkfs.btrfs -m single /dev/loop0
To create a BTRFS volume made of multiple devices with metadata spread amongst all of the devices, use:

# mkfs.btrfs -m raid0 /dev/loop0 /dev/loop1 /dev/loop2
To create a BTRFS volume made of multiple devices, with metadata spread amongst all of the devices and data mirrored on all of the devices (you probably don't want this in a real setup), use:

# mkfs.btrfs -m raid0 -d raid1 /dev/loop0 /dev/loop1 /dev/loop2
To create a fully redundant BTRFS volume (data and metadata mirrored amongst all of the devices), use:

# mkfs.btrfs -d raid1 /dev/loop0 /dev/loop1 /dev/loop2
Note
Technically you can use anything as a physical volume: you can have a volume composed of 2 local hard drives, 3 USB keys, 1 loopback device pointing to a file on a NFS share and 3 logical devices accessed through your SAN (you would be an idiot, but you can, nevertheless). Having different physical volume sizes would lead to issues, but it works :-).
Checking the initial volume
To verify the devices of which a BTRFS volume is composed, just use btrfs-show device (old style) or btrfs filesystem show device (new style). You need to specify one of the devices (the metadata has been designed to keep track of which device is linked to which other device). If the initial volume was set up like this:

# mkfs.btrfs /dev/loop0 /dev/loop1 /dev/loop2

WARNING! - Btrfs Btrfs v0.19 IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before using

adding device /dev/loop1 id 2
adding device /dev/loop2 id 3
fs created label (null) on /dev/loop0
nodesize 4096 leafsize 4096 sectorsize 4096 size 3.00GB
Btrfs Btrfs v0.19
It can be checked with one of these commands (They are equivalent):

# btrfs filesystem show /dev/loop0
# btrfs filesystem show /dev/loop1
# btrfs filesystem show /dev/loop2
The result is the same for all commands:

Label: none uuid: 0a774d9c-b250-420e-9484-b8f982818c09
Total devices 3 FS bytes used 28.00KB
devid 3 size 1.00GB used 263.94MB path /dev/loop2
devid 1 size 1.00GB used 275.94MB path /dev/loop0
devid 2 size 1.00GB used 110.38MB path /dev/loop1
To show all of the volumes that are present:

# btrfs filesystem show
Label: none uuid: 0a774d9c-b250-420e-9484-b8f982818c09
Total devices 3 FS bytes used 28.00KB
devid 3 size 1.00GB used 263.94MB path /dev/loop2
devid 1 size 1.00GB used 275.94MB path /dev/loop0
devid 2 size 1.00GB used 110.38MB path /dev/loop1

Label: none uuid: 1701af39-8ea3-4463-8a77-ec75c59e716a
Total devices 1 FS bytes used 944.40GB
devid 1 size 1.42TB used 1.04TB path /dev/sda2

Label: none uuid: 01178c43-7392-425e-8acf-3ed16ab48813
Total devices 1 FS bytes used 180.14GB
devid 1 size 406.02GB used 338.54GB path /dev/sda4
Warning
The BTRFS wiki mentions that btrfs device scan should be performed; the consequence of not doing this incantation may be that the volume is not seen.
Mounting the initial volume
BTRFS volumes can be mounted like any other filesystem. The cool stuff at the top on the sundae is that the design of the BTRFS metadata makes it possible to use any of the volume devices. The following commands are equivalent:

# mount /dev/loop0 /mnt
# mount /dev/loop1 /mnt
# mount /dev/loop2 /mnt
For every physical device used for mounting the BTRFS volume df -h reports the same (in all cases 3 GiB of "free" space is reported):

# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/loop1 3.0G 56K 1.8G 1% /mnt
The following command prints very useful information (like how the BTRFS volume has been created):

# btrfs filesystem df /mnt
Data, RAID0: total=409.50MB, used=0.00
Data: total=8.00MB, used=0.00
System, RAID1: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, RAID1: total=204.75MB, used=28.00KB
Metadata: total=8.00MB, used=0.00
By the way, as you can see, for the btrfs command the mount point should be specified, not one of the physical devices.

Shrinking the volume
A common practice in system administration is to leave some head space, instead of using the whole capacity of a storage pool (just in case). With btrfs one can easily shrink volumes. Let's shrink the volume a bit (about 25%):

# btrfs filesystem resize -500m /mnt
# df -h
/dev/loop1 2.6G 56K 1.8G 1% /mnt
And yes, it is an on-line resize, there is no need to umount/shrink/mount. So no downtimes! 🙂 However, a BTRFS volume requires a minimal size... if the shrink is too aggressive the volume won't be resized:

# btrfs filesystem resize -1g /mnt
Resize '/mnt' of '-1g'
ERROR: unable to resize '/mnt'
Growing the volume
This is the opposite operation, you can make a BTRFS grow to reach a particular size (e.g. 150 more megabytes):

# btrfs filesystem resize +150m /mnt
Resize '/mnt' of '+150m'
# df -h
/dev/loop1 2.7G 56K 1.8G 1% /mnt
You can also take an "all you can eat" approach via the max option, meaning all of the possible space will be used for the volume:

# btrfs filesystem resize max /mnt
Resize '/mnt' of 'max'
# df -h
/dev/loop1 3.0G 56K 1.8G 1% /mnt
Adding a new device to the BTRFS volume
To add a new device to the volume:

# btrfs device add /dev/loop4 /mnt
oxygen ~ # btrfs filesystem show /dev/loop4
Label: none uuid: 0a774d9c-b250-420e-9484-b8f982818c09
Total devices 4 FS bytes used 28.00KB
devid 3 size 1.00GB used 263.94MB path /dev/loop2
devid 4 size 1.00GB used 0.00 path /dev/loop4
devid 1 size 1.00GB used 275.94MB path /dev/loop0
devid 2 size 1.00GB used 110.38MB path /dev/loop1
Again, no need to umount the volume first as adding a device is an on-line operation (the device has no space used yet hence the '0.00'). The operation is not finished as we must tell btrfs to prepare the new device (i.e. rebalance/mirror the metadata and the data between all devices):

# btrfs filesystem balance /mnt
# btrfs filesystem show /dev/loop4
Label: none uuid: 0a774d9c-b250-420e-9484-b8f982818c09
Total devices 4 FS bytes used 28.00KB
devid 3 size 1.00GB used 110.38MB path /dev/loop2
devid 4 size 1.00GB used 366.38MB path /dev/loop4
devid 1 size 1.00GB used 378.38MB path /dev/loop0
devid 2 size 1.00GB used 110.38MB path /dev/loop1
Note
Depending on the sizes and what is in the volume a balancing operation could take several minutes or hours.
Removing a device from the BTRFS volume
# btrfs device delete /dev/loop2 /mnt
# btrfs filesystem show /dev/loop0
Label: none uuid: 0a774d9c-b250-420e-9484-b8f982818c09
Total devices 4 FS bytes used 28.00KB
devid 4 size 1.00GB used 264.00MB path /dev/loop4
devid 1 size 1.00GB used 268.00MB path /dev/loop0
devid 2 size 1.00GB used 0.00 path /dev/loop1
*** Some devices missing
# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/loop1 3.0G 56K 1.5G 1% /mnt
Here again, removing a device is totally dynamic and can be done as an on-line operation! Note that when a device is removed, its content is transparently redistributed among the other devices.

Obvious points:

** DO NOT UNPLUG THE DEVICE BEFORE THE END OF THE OPERATION, DATA LOSS WILL RESULT**
If you have used raid0 for either metadata or data at BTRFS volume creation, you will end up with an unusable volume if one of the devices fails before being properly removed from the volume, as some stripes will be lost.
Once you add a new device to the BTRFS volume as a replacement for a removed one, you can cleanup the references to the missing device:

# btrfs device delete missing /mnt
Using a BTRFS volume in degraded mode
Warning
It is not possible to use a volume in degraded mode if raid0 has been used for data/metadata and the device had not been properly removed with btrfs device delete (some stripes will be missing). The situation is even worse if RAID0 is used for the metadata: trying to mount a BTRFS volume in read/write mode while not all the devices are accessible will simply kill the remaining metadata, hence making the BTRFS volume totally unusable... you have been warned! 🙂
If you use raid1 or raid10 for data AND metadata and you have a usable submirror accessible (consisting of 1 drive in case of RAID1 or the two drives of the same RAID0 array in case of RAID10), you can mount the array in degraded mode in case some devices are missing (e.g. dead SAN link or dead drive):

# mount -o degraded /dev/loop0 /mnt
If you use RAID0 for the metadata (and have one of your drives inaccessible), or RAID10 but not enough drives are online to even make a degraded mode possible, btrfs will refuse to mount the volume:

# mount /dev/loop0 /mnt
mount: wrong fs type, bad option, bad superblock on /dev/loop0,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
The situation is no better if you have used RAID1 for the metadata and RAID0 for the data: you can mount the volume in degraded mode, but you will encounter problems while accessing your files:

# cp /mnt/test.dat /tmp
cp: reading `/mnt/test.dat': Input/output error
cp: failed to extend `/tmp/test.dat': Input/output error
Playing with subvolumes and snapshots
A story of boxes....
When you think about subvolumes in BTRFS, think about boxes. Each one of those can contain items and other smaller boxes ("sub-boxes") which in turn can also contain items and boxes (sub-sub-boxes) and so on. Each box and item has a number and a name, except for the top level box, which has only a number (zero). Now imagine that all of the boxes are semi-opaque: you can see what they contain if you are outside the box but you can't see outside when you are inside the box. Thus, depending on the box you are in you can view either all of the items and sub-boxes (top level box) or only a part of them (any other box but the top level one). To give you a better idea of this somewhat abstract explanation let's illustrate a bit:

(0) --+-> Item A (1)
|
+-> Item B (2)
|
+-> Sub-box 1 (3) --+-> Item C (4)
| |
| +-> Sub-sub-box 1.1 (5) --+-> Item D (6)
| | |
| | +-> Item E (7)
| | |
| | +-> Sub-Sub-sub-box 1.1.1 (8) ---> Item F (9)
| +-> Item F (10)
|
+-> Sub-box 2 (11) --> Item G (12)
What you see in the hierarchy depends on where you are (note that the top level box numbered 0 doesn't have a name, you will see why later). So:

If you are in the top level box (numbered 0) you see everything, i.e. things numbered 1 to 12
If you are in "Sub-sub-box 1.1" (numbered 5), you see only things 6 to 9
If you are in "Sub-box 2" (numbered 11), you only see what is numbered 12
Did you notice? We have two items named 'F' (respectively numbered 9 and 10). This is not a typographic error, this is just to illustrate the fact that every item lives its own peaceful existence in its own box. Although they have the same name, 9 and 10 are two distinct and unrelated objects (of course it is impossible to have two objects named 'F' in the same box, even though they would be numbered differently).

... applied to BTRFS! (or, "What is a volume/subvolume?")
BTRFS subvolumes work in the exact same manner, with some nuances:

First, imagine a frame that surrounds the whole hierarchy (represented in dots below). This is your BTRFS volume. A bit abstract at first glance, but BTRFS volumes have no tangible existence, they are just an aggregation of devices tagged as being clustered together (that fellowship is created when you invoke mkfs.btrfs or btrfs device add).
Second, the first level of hierarchy contains only a single box numbered zero which can never be destroyed (because everything it contains would also be destroyed).
If in our analogy of a nested boxes structure we used the word "box", in the real BTRFS world we use the word "subvolume" (box => subvolume). Like in our boxes analogy, all subvolumes hold a unique number greater than zero and a name, with the exception of the root subvolume located at the very first level of the hierarchy, which is always numbered zero and has no name (BTRFS tools destroy subvolumes by their name not their number, so no name = no possible destruction. This is a totally intentional architectural choice, not a flaw).

Here is a typical hierarchy:

.....BTRFS Volume................................................................................................................................
.
. Root subvolume (0) --+-> Subvolume SV1 (258) ---> Directory D1 --+-> File F1
. | |
. | +-> File F2
. |
. +-> Directory D1 --+-> File F1
. | |
. | +-> File F2
. | |
. | +-> File F3
. | |
. | +-> Directory D11 ---> File F4
. +-> File F1
. |
. +-> Subvolume SV2 (259) --+-> Subvolume SV21 (260)
. |
. +-> Subvolume SV22 (261) --+-> Directory D2 ---> File F4
. |
. +-> Directory D3 --+-> Subvolume SV221 (262) ---> File F5
. | |
. | +-> File F6
. | |
. | +-> File F7
. |
. +-> File F8
.
.....................................................................................................................................
Maybe you have a question: "Okay, what is the difference between a directory and a subvolume? Both can contain something!". To further confuse you, here is what users get if they reproduce the first level hierarchy on a real machine:

# ls -l
total 0
drwx------ 1 root root 0 May 23 12:48 SV1
drwxr-xr-x 1 root root 0 May 23 12:48 D1
-rw-r--r-- 1 root root 0 May 23 12:48 F1
drwx------ 1 root root 0 May 23 12:48 SV2
Although subvolumes SV1 and SV2 have been created with special BTRFS commands they appear just as if they were ordinary directories! A subtle nuance exists, however: think again at our boxes analogy we did before and map the following concepts in the following manner:

a subvolume : the semi-opaque box
a directory : a sort of item (that can contain something even another subvolume)
a file : another sort of item
So, in the internal filesystem metadata SV1 and SV2 are stored in a different manner than D1 (although this is transparently handled for users). You can, however see SV1 and SV2 for what they are (subvolumes) by running the following command (subvolume numbered (0) has been mounted on /mnt):

# btrfs subvolume list /mnt
ID 258 top level 5 path SV1
ID 259 top level 5 path SV2
What would we get if we create SV21 and SV22 inside of SV2? Let's try! Before going further you should be aware that a subvolume is created by invoking the magic command btrfs subvolume create:

# cd /mnt/SV2
# btrfs subvolume create SV21
Create subvolume './SV21'
# btrfs subvolume create SV22
Create subvolume './SV22'
# btrfs subvolume list /mnt
ID 258 top level 5 path SV1
ID 259 top level 5 path SV2
ID 260 top level 5 path SV2/SV21
ID 261 top level 5 path SV2/SV22
Again, invoking ls in /mnt/SV2 will report the subvolumes as being directories:

# ls -l
total 0
drwx------ 1 root root 0 May 23 13:15 SV21
drwx------ 1 root root 0 May 23 13:15 SV22
Changing the point of view on the subvolumes hierarchy
At some point in our boxes analogy we have talked about what we see and what we don't see depending on our location in the hierarchy. Here lies a big important point: whereas most BTRFS users mount the root subvolume (subvolume id = 0, we will retain the root subvolume terminology) in their VFS hierarchy, thus making visible the whole hierarchy contained in the BTRFS volume, it is absolutely possible to mount only a subset of it. How could that be possible? Simple: just specify the subvolume number when you invoke mount. For example, to mount the hierarchy in the VFS starting at subvolume SV22 (261) do the following:

# mount -o subvolid=261 /dev/loop0 /mnt
Here lies an important notion not disclosed in the previous paragraph: although both directories and subvolumes can act as containers, only subvolumes can be mounted in a VFS hierarchy. It is a fundamental aspect to remember: you cannot mount a sub-part of a subvolume in the VFS; you can only mount the subvolume in itself. Considering the hierarchy schema in the previous section, if you want to access the directory D3 you have three possibilities:

Mount the non-named subvolume (numbered 0) and access D3 through /mnt/SV2/SV22/D3 if the non-named subvolume is mounted in /mnt
Mount the subvolume SV2 (numbered 259) and access D3 through /mnt/SV22/D3 if the subvolume SV2 is mounted in /mnt
Mount the subvolume SV22 (numbered 261) and access D3 through /mnt/D3 if the subvolume SV22 is mounted in /mnt
This is accomplished by the following commands, respectively:

# mount -o subvolid=0 /dev/loop0 /mnt
# mount -o subvolid=259 /dev/loop0 /mnt
# mount -o subvolid=261 /dev/loop0 /mnt
Note
When a subvolume is mounted in the VFS, everything located "above" the subvolume is hidden. Concretely, if you mount the subvolume numbered 261 in /mnt, you only see what is under SV22, you won't see what is located above SV22 like SV21, SV2, D1, SV1, etc.
The default subvolume
$100 questions: 1. "If I don't put 'subvolid' on the command line, how does the kernel know which one of the subvolumes it has to mount?" 2. "Does omitting 'subvolid' automatically mean 'mount the subvolume numbered 0'?". Answers: 1. BTRFS magic! 😉 2. No, not necessarily; you can choose something other than the non-named subvolume.

When you create a brand new BTRFS filesystem, the system not only creates the initial root subvolume (numbered 0) but also tags it as being the default subvolume. When you ask the operating system to mount a subvolume contained in a BTRFS volume without specifying a subvolume number, it determines which of the existing subvolumes has been tagged as "default subvolume" and mounts it. If none of the existing subvolumes has the tag "default subvolume" (e.g. because the default subvolume has been deleted), the mount command gives up with a rather cryptic message:

# mount /dev/loop0 /mnt
mount: No such file or directory
It is also possible to change at any time which subvolume contained in a BTRFS volume is considered the default volume. This is accomplished with btrfs subvolume set-default. The following tags the subvolume 261 as being the default:

# btrfs subvolume set-default 261 /mnt
After that operation, doing the following is exactly the same:

# mount /dev/loop0 /mnt
# mount -o subvolid=261 /dev/loop0 /mnt
Note
The chosen new default subvolume must be visible in the VFS when you invoke btrfs subvolume set-default.
Deleting subvolumes
Question: "As subvolumes appear like directories, can I delete a subvolume by doing an rm -rf on it?". Answer: Yes, you can, but that way is not the most elegant, especially when it contains several gigabytes of data scattered on thousands of files, directories and maybe other subvolumes located in the one you want to remove. It isn't elegant because rm -rf could take several minutes (or even hours!) to complete whereas something else can do the same job in the fraction of a second.

"Huh?" Yes perfectly possible, and here is the cool goodie for the readers who arrived at this point: when you want to remove a subvolume, use btrfs subvolume delete instead of rm -rf. That btrfs command will remove the snapshots in a fraction of a second, even it contains several gigabytes of data!

Warning
You can never remove the root subvolume of a BTRFS volume, as btrfs subvolume delete expects a subvolume name (again: this is not a flaw in the design of BTRFS; removing the subvolume numbered 0 would destroy the entirety of a BTRFS volume...too dangerous).
If the subvolume you delete was tagged as the default subvolume, you will have to designate another default subvolume or explicitly tell the system which one of the subvolumes has to be mounted.
An example: considering our initial example given above and supposing you have mounted non-named subvolume numbered 0 in /mnt, you can remove SV22 by doing:

# btrfs subvolume delete /mnt/SV2/SV22
Obviously the BTRFS volume will look like this after the operation:

.....BTRFS Volume................................................................................................................................
.
. (0) --+-> Subvolume SV1 (258) ---> Directory D1 --+-> File F1
. | |
. | +-> File F2
. |
. +-> Directory D1 --+-> File F1
. | |
. | +-> File F2
. | |
. | +-> File F3
. | |
. | +-> Directory D11 ---> File F4
. +-> File F1
. |
. +-> Subvolume SV2 (259) --+-> Subvolume SV21 (260)
.....................................................................................................................................
Snapshot and subvolumes
If you have a good comprehension of what a subvolume is, understanding what a snapshot is won't be a problem: a snapshot is a subvolume with some initial contents. "Some initial contents" here means an exact copy.

When you think about snapshots, think about copy-on-write: the data blocks are not duplicated between a mounted subvolume and its snapshot unless you start making changes to the files (a snapshot can occupy nearly zero extra space on the disk). As time goes on, more and more data blocks will be changed, making snapshots "occupy" more and more space on the disk. It is therefore recommended to keep only a minimal set of them and to remove unnecessary ones to avoid wasting space on the volume.
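
A quick way to keep an eye on the space actually consumed on the volume is to use the generic BTRFS space reporting commands on a mounted copy (a minimal sketch; the mount point is taken from the examples above):

# btrfs filesystem df /mnt
# btrfs filesystem show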

The following illustrates how to take a snapshot of the VFS root:

# btrfs subvolume snapshot / /snap-2011-05-23
Create a snapshot of '/' in '//snap-2011-05-23'
Once created, the snapshot will persist in /snap-2011-05-23 as long as you don't delete it. Note that the snapshot contents will remain exactly as they were at the time it was taken (as long as you don't make changes... BTRFS snapshots are writable!). A drawback of having snapshots: if you delete some files in the original filesystem, the snapshot still contains them and the disk blocks can't be reclaimed as free space. Remember to remove unwanted snapshots and keep only a bare minimal set of them.

Listing and deleting snapshots
As there is no distinction between a snapshot and a subvolume, snapshots are managed with the exact same commands, especially when the time has come to delete some of them. An interesting feature of BTRFS is that snapshots are writable: you can take a snapshot and make changes in the files/directories it contains. A word of caution: there are no undo capabilities! What has been changed has been changed forever... If you need to do several tests, just take several snapshots or, better yet, snapshot your snapshot and then do whatever you need in this copy-of-the-copy :-).
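
A minimal sketch, reusing the snapshot taken above (the ID shown will of course differ on your system):

# btrfs subvolume list /
ID 257 top level 5 path snap-2011-05-23
# btrfs subvolume delete /snap-2011-05-23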

Using snapshots for system recovery (aka Back to the Future)
Here is where BTRFS can literally be a lifeboat. Suppose you want to apply some updates via emerge -uaDN @world but you want to be sure that you can jump back into the past in case something goes seriously wrong after the system update (does libpng14 remind you of anything?!). Here is the "putting-things-together part" of the article!

The following only applies if your VFS root and system directories containing /sbin, /bin, /usr, /etc.... are located on a BTRFS volume. To make things simple, the whole structure is supposed to be located in the SAME subvolume of the same BTRFS volume.

To jump back into the past you have at least two options:

Fiddle with the default subvolume numbers
Use the kernel command line parameters in the bootloader configuration files
In both cases you must take a snapshot of your VFS root *before* updating the system:

# btrfs subvolume snapshot / /before-updating-2011-05-24
Create a snapshot of '/' in '//before-updating-2011-05-24'
Note
Hint: You can create an empty file at the root of your snapshot with the name of your choice to help you easily identify which subvolume is the currently mounted one (e.g. if the snapshot has been named before-updating-2011-05-24, you can use a slightly different name like current-is-before-updating-2011-05-24 => touch /before-updating-2011-05-24/current-is-before-updating-2011-05-24). This is extremely useful if you are dealing with several snapshots.
There is no "better" way; it's just a question of personal preference.

Way #1: Fiddle with the default subvolume number
Hypothesis:

Your "production" VFS root partition resides in the root subvolume (subvolid=0),
Your /boot partition (where the bootloader configuration files are stored) is on another standalone partition
First search for the newly created subvolume number:

# btrfs subvolume list /
ID 256 top level 5 path before-updating-2011-05-24
'256' is the ID to be retained (of course, this ID will differ in your case).

Now, change the default subvolume of the BTRFS volume to designate the subvolume (snapshot) before-updating-2011-05-24 rather than the root subvolume, then reboot:

# btrfs subvolume set-default 256 /
Once the system has rebooted, and if you followed the advice in the previous paragraph that suggests creating an empty file with the same name as the snapshot, you should be able to see whether the mounted VFS root is the copy held by the snapshot before-updating-2011-05-24:

# ls -l /
...
-rw-rw-rw- 1 root root 0 May 24 20:33 current-is-before-updating-2011-05-24
...
The correct subvolume has been used for mounting the VFS! Excellent! Now is the time to mount your "production" VFS root (remember the root subvolume can only be accessed via its identification number, i.e. 0):

# mount -o subvolid=0 /dev/sda2 /mnt
# mount
...
/dev/sda2 on /mnt type btrfs (rw,subvolid=0)
Oh by the way, as the root subvolume is now mounted in /mnt let's try something, just for the sake of the demonstration:

# ls /mnt
...
drwxr-xr-x 1 root root 0 May 24 20:33 current-is-before-updating-2011-05-24
...
# btrfs subvolume list /mnt
ID 256 top level 5 path before-updating-2011-05-24
No doubt possible 🙂 Time to roll back! For this, rsync will be used in the following way:

# rsync --progress -aHAX --exclude=/proc --exclude=/dev --exclude=/sys --exclude=/mnt / /mnt
Basically we are asking rsync to:

preserve timestamps, hard and symbolic links, owner/group IDs, ACLs and any extended attributes (refer to the rsync manual page for further details on the options used) and report its progress
ignore the mount points where virtual filesystems are mounted (procfs, sysfs...)
avoid recursing back into /mnt itself (you can speed up the process by adding some extra excludes for directories you are sure don't hold any important changes, or any changes at all, like /var/tmp/portage for example; see the sketch below).
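
As mentioned in the last point, extra excludes can simply be appended to the command; a minimal sketch:

# rsync --progress -aHAX --exclude=/proc --exclude=/dev --exclude=/sys --exclude=/mnt --exclude=/var/tmp/portage / /mnt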
Be patient! The rsync may take several minutes or hours depending on the amount of data to process...

Once finished, you will have to set the default subvolume to be the root subvolume:

# btrfs subvolume set-default 0 /mnt
ID 256 top level 5 path before-updating-2011-05-24
Warning
DO NOT ENTER / instead of /mnt in the above command; it won't work and you will be under the snapshot before-updating-2011-05-24 the next time the machine reboots.

The reason is that the subvolume number must be "visible" from the path given at the end of the btrfs subvolume set-default command line. Again, refer to the box analogy: in our context we are in a sub-box numbered 256 which is located *inside* the box numbered 0 (so it can neither see nor interfere with it). [TODO: better explain]
Now just reboot and you should be in business again! Once you have rebooted just check if you are really under the right subvolume:

# ls /
...
drwxr-xr-x 1 root root 0 May 24 20:33 current-is-before-updating-2011-05-24
...
# btrfs subvolume list /
ID 256 top level 5 path before-updating-2011-05-24
At the right place? Excellent! You can now delete the snapshot if you wish, or better: keep it as a lifeboat of "last good known system state."

Way #2: Change the kernel command line in the bootloader configuration files
First search for the newly created subvolume number:

# btrfs subvolume list /
ID 256 top level 5 path before-updating-2011-05-24
'256' is the ID to be retained (can differ in your case).

Now, with your favourite text editor, edit the adequate kernel command line in your bootloader configuration (/etc/boot.conf). This file is typically organized in several sections (one per kernel present on the system, plus some global settings), like the excerpt below:

set timeout=5
set default=0

# Production kernel
menuentry "Funtoo Linux production kernel (2.6.39-gentoo x86/64)" {
insmod part_msdos
insmod ext2
...
set root=(hd0,1)
linux /kernel-x86_64-2.6.39-gentoo root=/dev/sda2
initrd /initramfs-x86_64-2.6.39-gentoo
}
...
Find the correct kernel line and add one of the following statements after root=/dev/sdX:

rootflags=subvol=before-updating-2011-05-24
- Or -
rootflags=subvolid=256
Warning
If the kernel you want to use has been generated with Genkernel, you MUST use real_rootflags=subvol=... instead of rootflags=subvol=...; otherwise your rootflags will not be taken into consideration by the kernel on reboot.

Applied to the previous example, you will get the following if you referred to the subvolume by its name:

set timeout=5
set default=0

# Production kernel
menuentry "Funtoo Linux production kernel (2.6.39-gentoo x86/64)" {
insmod part_msdos
insmod ext2
...
set root=(hd0,1)
linux /kernel-x86_64-2.6.39-gentoo root=/dev/sda2 rootflags=subvol=before-updating-2011-05-24
initrd /initramfs-x86_64-2.6.39-gentoo
}
...
Or you will get the following if you referred to the subvolume by its identification number:

set timeout=5
set default=0

# Production kernel
menuentry "Funtoo Linux production kernel (2.6.39-gentoo x86/64)" {
insmod part_msdos
insmod ext2
...
set root=(hd0,1)
linux /kernel-x86_64-2.6.39-gentoo root=/dev/sda2 rootflags=subvolid=256
initrd /initramfs-x86_64-2.6.39-gentoo
}
...
Once the modifications are done, save your changes and take the necessary extra steps to commit the configuration changes to the first sectors of the disk if needed (this mostly applies to users of LILO; GRUB and SILO do not need to be refreshed), then reboot.

Once the system has rebooted, and if you followed the advice in the previous paragraph that suggests creating an empty file with the same name as the snapshot, you should be able to see whether the mounted VFS root is the copy held by the snapshot before-updating-2011-05-24:

# ls -l /
...
-rw-rw-rw- 1 root root 0 May 24 20:33 current-is-before-updating-2011-05-24
...
The correct subvolume has been used for mounting the VFS! Excellent! Now is the time to mount your "production" VFS root (remember the root subvolume can only be accessed via its identification number 0):

# mount -o subvolid=0 /dev/sda2 /mnt
# mount
...
/dev/sda2 on /mnt type btrfs (rw,subvolid=0)
Time to rollback! For this rsync will be used in the following way:

# rsync --progress -aHAX --exclude=/proc --exclude=/dev --exclude=/sys --exclude=/mnt / /mnt
Here, please refer to what has been said in Way #1 concerning the rsync options used. Once everything is in place again, edit your bootloader configuration to remove the rootflags/real_rootflags kernel parameter, reboot, and check if you are really under the right subvolume:

# ls /
...
drwxr-xr-x 1 root root 0 May 24 20:33 current-is-before-updating-2011-05-24
...
# btrfs subvolume list /
ID 256 top level 5 path before-updating-2011-05-24
At the right place? Excellent! You can now delete the snapshot if you wish, or better: keep it as a lifeboat of "last good known system state."

Some BTRFS practices / feedback from experience / gotchas
Although BTRFS is constantly evolving, at the time of writing it is (still) an experimental filesystem and should not be used for production systems or for storing critical data (even if the data is non-critical, having backups on a partition formatted with a "stable" filesystem like ReiserFS or ext3/4 is recommended).
From time to time, changes are made to the metadata (the BTRFS on-disk format is not final at the time of writing) and a BTRFS partition may no longer be usable with older Linux kernels (this happened with Linux 2.6.31).
More and more Linux distributions are proposing the filesystem as an alternative to ext4.
Some reported gotchas: https://btrfs.wiki.kernel.org/index.php/Gotchas
Playing around with BTRFS can be a bit tricky, especially when dealing with default subvolumes and mount points (again: the box analogy).
Using compression (e.g. LZO => mount -o compress=lzo) on the filesystem can improve throughput performance; however, many files nowadays are already compressed at the application level (music, pictures, videos...).
Using the space caching capabilities (mount -o space_cache) seems to bring some slight extra performance improvements.
There is a very interesting discussion on LKML about BTRFS design limitations related to B-trees; we strongly encourage you to read it.
Deploying a Funtoo instance in a subvolume other than the root subvolume
Some Funtoo core devs have used BTRFS for many months and no major glitches have been reported so far, except a non-aligned memory access trap on SPARC64 in a checksum calculation routine (minor; recent kernels may have brought a correction) and, a long time ago, an issue that turned out to be a kernel crash caused by a bug corrupting some internal data rather than a problem in the filesystem code itself.

The following can simplify your life in case of recovery (not tested):

When you prepare the disk space that will hold the root of your future Funtoo instance (and so will hold /usr, /bin, /sbin, /etc, etc...), don't use the root subvolume but take an extra step to define a subvolume, as illustrated below:

# fdisk /dev/sda2
....
# mkfs.btrfs /dev/sda2
# mount /dev/sda2 /mnt/funtoo
# btrfs subvolume create /mnt/funtoo/live-vfs-root-20110523
# chroot /mnt/funtoo/live-vfs-root-20110523 /bin/bash
Then either:

Set the subvolume live-vfs-root-20110523 as the default subvolume (btrfs subvolume set-default...; remember to look up the subvolume identification number first)
Use rootflags / real_rootflags (always use real_rootflags for a kernel generated with Genkernel) on the kernel command line in your bootloader configuration file; see the sketch below
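
A minimal sketch of both options, assuming the new subvolume received ID 256 (check it with btrfs subvolume list; the ID will differ on your system) and reusing the device and kernel names from the earlier GRUB excerpt:

# btrfs subvolume list /mnt/funtoo
ID 256 top level 5 path live-vfs-root-20110523
# btrfs subvolume set-default 256 /mnt/funtoo

Or, with the kernel command line, the GRUB entry would become:

linux /kernel-x86_64-2.6.39-gentoo root=/dev/sda2 rootflags=subvol=live-vfs-root-20110523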
Technically speaking, it won't change your day-to-day life, BUT it will at system recovery time: when you want to roll back to a functional VFS root copy because something happened (buggy system package, an overly aggressive cleanup that removed Python, a dead compiling toolchain...), you can avoid a time-costly rsync, at the cost of a little extra overhead when taking a snapshot.

Here again you have two ways to recover the system:

fiddling with the default subvolume:
Mount the non-named (root) subvolume somewhere (e.g. mount -o subvolid=0 /dev/sdX /mnt)
Take a snapshot (remember to check its identification number) of your current subvolume and store it under the root subvolume you have just mounted (btrfs subvolume snapshot / /mnt/before-updating-20110524) -- (Where is the "frontier"? If 0 is mounted, do its contents also appear in the snapshot located on the same volume?)
Update your system or perform whatever other "dangerous" operation
If you need to return to the last known good system state, just set the default subvolume to the snapshot you just took (btrfs subvolume set-default <snapshot-ID> /mnt)
Reboot
Once you have rebooted, just mount the root subvolume again and delete the subvolume that corresponds to the failed system update (btrfs subvolume delete /mnt/<failed-subvolume>)
fiddling with the kernel command line:
Mount the non-named (root) subvolume somewhere (e.g. mount -o subvolid=0 /dev/sdX /mnt)
Take a snapshot (remember to check its identification number) of your current subvolume and store it under the root subvolume you have just mounted (btrfs subvolume snapshot / /mnt/before-updating-20110524) -- (Where is the "frontier"? If 0 is mounted, do its contents also appear in the snapshot located on the same volume?)
Update your system or perform whatever other "dangerous" operation
If you need to return to the last known good system state, just set the rootflags/real_rootflags as demonstrated in the previous paragraphs in your bootloader configuration file
Reboot
Once you have rebooted, just mount the root subvolume again and delete the subvolume that corresponds to the failed system update (btrfs subvolume delete /mnt/<failed-subvolume>); see the sketch below
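
A minimal end-to-end sketch of this second recipe, assuming the BTRFS volume is /dev/sda2, the deployed root subvolume is live-vfs-root-20110523 and the snapshot is named before-updating-20110524 (all of these are placeholders; adapt them to your own setup):

# mount -o subvolid=0 /dev/sda2 /mnt
# btrfs subvolume snapshot / /mnt/before-updating-20110524
(... the "dangerous" operation goes wrong ...)
Add rootflags=subvol=before-updating-20110524 (or real_rootflags=... with Genkernel) to the kernel line in the bootloader configuration and reboot; afterwards:
# mount -o subvolid=0 /dev/sda2 /mnt
# btrfs subvolume delete /mnt/live-vfs-root-20110523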
Space recovery / defragmenting the filesystem
Tip
From time to time it is advisable to re-optimize the filesystem structures and data blocks in a subvolume. In BTRFS terminology this is called a defragmentation, and it can only be performed when the subvolume is mounted in the VFS (online defragmentation):
# btrfs filesystem defrag /mnt
You can still access the subvolume, even change its contents, while a defragmentation is running.

It is also a good idea to remove the snapshots you don't use anymore, especially if huge files and/or lots of files are changed, because those snapshots will still hold blocks that could otherwise be reused.

SSE 4.2 boost
If your CPU supports hardware calculation of CRC32c (e.g. Intel Nehalem series and later, AMD Bulldozer series), you are encouraged to enable that support in your kernel, since BTRFS makes aggressive use of it. Just check that you have enabled CRC32c INTEL hardware acceleration in the Cryptographic API, either as a module or as a built-in feature.
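
A quick way to verify this on a running kernel (a minimal sketch; it assumes the kernel exposes its configuration through /proc/config.gz, otherwise look at /boot/config-<version>):

# zgrep CRC32C /proc/config.gz
CONFIG_CRYPTO_CRC32C=y
CONFIG_CRYPTO_CRC32C_INTEL=m
# modprobe crc32c-intel        (only needed if it was built as a module)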

Recovering an apparent dead BTRFS filesystem
Keeping filesystem metadata coherent is critical in filesystem design. Losing some data blocks (i.e. having some corrupted files) is less critical than ending up with a screwed-up and unmountable filesystem, especially if you do backups on a regular basis (the rule with BTRFS is *do backups*; BTRFS has no mature filesystem repair tool and you *will* end up having to re-create your filesystem from scratch again sooner or later).

Mounting with recovery option (Linux 3.2 and beyond)
If you are using Linux 3.2 or later (only!), you can use the recovery option to make BTRFS look for a usable copy of the tree root (several copies of it exist on the disk). Just mount your filesystem as:

# mount -o recovery /dev/yourBTRFSvolume /mount/point
btrfs-select-super / btrfs-zero-log
Two other handy tools exist but they are not deployed by default by the sys-fs/btrfs-progs ebuilds (even btrfs-progs-9999) because they originally lived only in the "next" branch of the btrfs-progs Git repository:

btrfs-select-super
btrfs-zero-log
Building the btrfs-progs goodies
The two tools this section is about are not built by default, and the Funtoo ebuilds do not build them either for the moment, so you must build them manually:

# mkdir ~/src
# cd ~/src
# git clone git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-progs.git
# cd btrfs-progs
# make && make btrfs-select-super && make btrfs-zero-log
Note
In the past, btrfs-select-super and btrfs-zero-log lived in the git "next" branch; this is no longer the case and those tools are now available in the master branch.
Fixing dead superblock
In case of a corrupted superblock, start by asking btrfsck to use an alternate copy of the superblock instead of superblock #0. This is achieved via the -s option followed by the number of the alternate copy you wish to use. In the following example we ask it to use superblock copy #2 of /dev/sda7:

# ./btrfsck -s 2 /dev/sda7
When btrfsck is happy, use btrfs-select-super to restore the default superblock (copy #0) with a clean copy. In the following example we restore the superblock of /dev/sda7 from its copy #2:

# ./btrfs-select-super -s 2 /dev/sda7
Note that this will overwrite all the other supers on the disk, which means you really only get one shot at it.

If you run btrfs-select-super prior to figuring out which copy is good, you've lost your chance to find a good one.

Clearing the BTRFS journal
This will only help with one specific problem!

If you are unable to mount a BTRFS partition after a hard shutdown, crash or power loss, it may be due to faulty log playback in kernels prior to 3.2. The first thing to try is updating your kernel, and mounting. If this isn't possible, an alternate solution lies in truncating the BTRFS journal, but only if you see "replay_one_*" functions in the oops callstack.

To truncate the journal of a BTRFS partition (and thereby lose any changes that only exist in the log!), just give btrfs-zero-log the filesystem to process:

# ./btrfs-zero-log /dev/sda7
This is not a generic technique, and works by permanently throwing away a small amount of potentially good data.

Using btrfsck
Warning
Extremely experimental...
If one thing is famous in the BTRFS world, it would be the long-wished-for fully functional btrfsck. A read-only version of the tool had existed out there for years; however, at the beginning of 2012, the BTRFS developers made a public and very experimental release: the secret jewel lies in the branch dangerdonteveruse of the BTRFS Git repository held by Chris Mason on kernel.org.

# git clone git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-progs.git
# cd btrfs-progs
# git checkout dangerdonteveruse
# make
So far the tool can:

Fix errors in the extents tree and in block group accounting
Wipe the CRC tree and create a brand new one (you can then mount the filesystem with CRC checking disabled)
To repair:

# btrfsck --repair /dev/yourBTRFSvolume
To wipe the CRC tree:

# btrfsck --init-csum-tree /dev/yourBTRFSvolume
Two other options exist in the source code: --super (an equivalent of btrfs-select-super?) and --init-extent-tree (clears out the extent tree?)

Final words
We have only outlined the broad lines here, but BTRFS can be very tricky, especially when several subvolumes coming from several BTRFS volumes are used. And remember: BTRFS is still experimental at the time of writing 🙂

Lessons learned
Very interesting, but it still lacks some important features present in ZFS like RAID-Z, virtual volumes, management by attributes, filesystem streaming, etc.
Extremely interesting for Gentoo/Funtoo system partitions (snapshot/rollback capabilities). However, it is not integrated in Portage yet.
If possible, use a file monitoring tool like Tripwire; this is handy to see which files have been corrupted once the filesystem is recovered, or if a bug happens.
It is highly advised not to use the root subvolume when deploying a new Funtoo instance or, more generally, to put any kind of data on it. Rolling back a data snapshot will be much easier and much less error prone (no copy process, just a matter of 'swapping' the subvolumes).
Backup, backup, backup your data! 😉

Posted in Uncategorized | Leave a comment

linux ifcfg keywords

/usr/share/doc/initscripts*/sysconfig.txt

11.1 About Network Interfaces

Each physical and virtual network device on an Oracle Linux system has an associated configuration file named ifcfg-interface in the /etc/sysconfig/network-scripts directory, where interface is the name of the interface. For example:

# cd /etc/sysconfig/network-scripts
# ls ifcfg-*
ifcfg-eth0 ifcfg-eth1 ifcfg-lo
In this example, there are two configuration files for Ethernet interfaces, ifcfg-eth0 and ifcfg-eth1, and one for the loopback interface, ifcfg-lo. The system reads the configuration files at boot time to configure the network interfaces.

The following are sample entries from an ifcfg-eth0 file for a network interface that obtains its IP address using the Dynamic Host Configuration Protocol (DHCP):

DEVICE="eth0"
NM_CONTROLLED="yes"
ONBOOT=yes
USERCTL=no
TYPE=Ethernet
BOOTPROTO=dhcp
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
UUID=5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03
HWADDR=08:00:27:16:C3:33
PEERDNS=yes
PEERROUTES=yes
If the interface is configured with a static IP address, the file contains entries such as the following:

DEVICE="eth0"
NM_CONTROLLED="yes"
ONBOOT=yes
USERCTL=no
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
UUID=5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03
HWADDR=08:00:27:16:C3:33
IPADDR=192.168.1.101
NETMASK=255.255.255.0
BROADCAST=192.168.1.255
PEERDNS=yes
PEERROUTES=yes
The following configuration parameters are typically used in interface configuration files:

BOOTPROTO
How the interface obtains its IP address:

bootp
Bootstrap Protocol (BOOTP).

dhcp
Dynamic Host Configuration Protocol (DHCP).

none
Statically configured IP address.

BROADCAST
IPv4 broadcast address.

DEFROUTE
Whether this interface is the default route.

DEVICE
Name of the physical network interface device (or a PPP logical device).

HWADDR
Media access control (MAC) address of an Ethernet device.

IPADDR
IPv4 address of the interface.

IPV4_FAILURE_FATAL
Whether the device is disabled if IPv4 configuration fails.

IPV6_FAILURE_FATAL
Whether the device is disabled if IPv6 configuration fails.

IPV6ADDR
IPv6 address of the interface in CIDR notation. For example: IPV6ADDR="2001:db8:1e11:115b::1/32"

IPV6INIT
Whether to enable IPv6 for the interface.

MASTER
Specifies the name of the master bonded interface of which this interface is a slave.

NAME
Name of the interface as displayed in the Network Connections GUI.

NETMASK
IPv4 network mask of the interface.

NETWORK
IPv4 address of the network.

NM_CONTROLLED
Whether the network interface device is controlled by the network management daemon, NetworkManager.

ONBOOT
Whether the interface is activated at boot time.

PEERDNS
Whether the /etc/resolv.conf file used for DNS resolution contains information obtained from the DHCP server.

PEERROUTES
Whether the information for the routing table entry that defines the default gateway for the interface is obtained from the DHCP server.

SLAVE
Specifies that this interface is a component of a bonded interface.

TYPE
Interface type.

USERCTL
Whether users other than root can control the state of this interface.

UUID
Universally unique identifier for the network interface device.
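
After editing an ifcfg file, a typical way to apply the change is to restart the interface or the network service (a minimal sketch; it assumes the legacy network service is in use rather than a NetworkManager-only setup):

# cd /etc/sysconfig/network-scripts
# ifdown eth0 && ifup eth0        (restart only this interface)
# service network restart         (or restart all interfaces at once)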


Posted in Uncategorized | Leave a comment

cdot vifproblems

You may have a situation where an interface is configured in the RDB (VifMgr)
but is no longer present in the local user interface.

1. create a new interface
2. delete the user interface entry
3. try to create it again (this fails because the LIF is still in VifMgr)
4. view VifMgr entry
5. delete the VifMgr entry

1.
net int create -vserver student2 -lif test2 -role data -data-protocol nfs,cifs,fcache -home-node cluster1-02 -home-port e0c -address 192.168.81.201 -netmask 255.255.255.0 -status-admin up

2.
set diag; net int ids delete -owner student2 -name test2

3.
net int create -vserver student2 -lif data1 -role data -data-protocol nfs,cifs,fcache -home-node cluster1-02 -home-port e0c -address 192.168.81.215 -netmask 255.255.255.0 -status-admin up
(network interface create)
Info: Failed to create virtual interface
Error: command failed: LIF "test2" with ID 1034 (on Vserver ID 6), IP address 192.168.81.201, is configured in the VIFMgr but is not visible in the user interface. Contact support personnel for further assistance.

4.
debug smdb table vifmgr_virtual_interface show -role data
-fields lif-name,lif-id

5.
debug smdb table vifmgr_virtual_interface delete -node cluster1-02 -lif-id lif_ID

Posted in Uncategorized | Leave a comment

cdot aggr troubleshoot with debug vreport

If there is an inconsistency between VLDB and WAFL on a local node,
you can fix that with "debug"

Create an aggregate in the nodeshell
This aggregate will be known in WAFL but not in VLDB

cl1::*> node run -node cl1-01
Type 'exit' or 'Ctrl-D' to return to the CLI
cl1-01> options nodescope.reenabledcmds "aggr=create,destroy,status,offline,online"
cl1-01> aggr create aggrwafl 5
Creation of an aggregate with 5 disks has completed.

Create an aggregate in the clustershell and remove it from WAFL.
This aggregate will then be known in VLDB but removed from WAFL.

cl1::>aggr create rdbaggr -diskcount 5 -node cl1-01
cl1::> node run -node cl1-01
Type 'exit' or 'Ctrl-D' to return to the CLI
cl1-01> aggr offline rdbaggr
Aggregate 'rdbaggr' is now offline.
cl1-01> aggr destroy rdbaggr
Are you sure you want to destroy this aggregate? y
Aggregate 'rdbaggr' destroyed.
cl1-01> exit
logout

Run debug vreport.

cl1::> set diag
cl1::*> debug vreport show

aggregate Differences:
Name Reason Attributes
-------- ------- ---------------------------------------------------
aggrwafl(8e836c6d-76a0-4202-bb40-22694424f847)
Present in WAFL Only
Node Name: cl1-01
Aggregate UUID: 8e836c6d-76a0-4202-bb40-22694424f847
Aggregate State: online
Aggregate Raid Status: raid_dp

rdbaggr Present in VLDB Only
Node Name: cl1-01
Aggregate UUID: d7cfb17e-a374-43c6-b721-debfcf549191
Aggregate State: unknown
Aggregate Raid Status:
2 entries were displayed.

Fix VLDB ( aggregate will be added to VLDB )
cl1::debug*> debug vreport fix -type aggregate -object aggrwafl(8e836c6d-76a0-4202-bb40-22694424f847)

Fix VLDB ( aggregate will be removed from VLDB )
cl1::debug*> debug vreport fix -type aggregate -object rdbaggr
cl1::debug*> debug vreport show
This table is currently empty.
Info: WAFL and VLDB volume/aggregate records are consistent.

=================================================================

Posted in netapp | Leave a comment

Mac Locale warnings after OS update

vi $HOME/.ssh/config
SendEnv LANG LC_*

Posted in Uncategorized | Leave a comment

netapp 7-mode lun usage -s and lun clone dependency

create a lun and create a busy lun situation.
by default snapshot_clone_dependency is switched to off

If you switch it to on before the below actions then
you can delete a snapshot even if there are more recent
snapshots having pointers to lunblocks.
vol options volume_name snapshot_clone_dependency [on][off]

filer1> snap create lunvol_n1 snapbeforelun

filer1> options licensed_feature.iscsi.enable on
filer1> iscsi start
filer1> lun setup
This setup will take you through the steps needed to create LUNs
and to make them accessible by initiators. You can type ^C (Control-C)
at any time to abort the setup and no unconfirmed changes will be made
to the system.
---output skipped---
Do you want to create another LUN? [n]:

filer1> lun show -m
LUN path Mapped to LUN ID Protocol
-----------------------------------------------------------------------
/vol/lunvol_n1/lun1 win 0 iSCSI
On the Windows system, connect to the filer target with the iSCSI initiator and
use Computer -> Manage -> Storage to rescan disks, initialize the new disk and
put an NTFS filesystem on it.

filer1> snap create lunvol_n1 snapafterlun

filer1> lun clone create /vol/lunvol_n1/lunclone -b /vol/lunvol_n1/lun1 snapafterlun

filer1> lun map /vol/lunvol_n1/lunclone win
On Windows, rescan disks, mark the partition as active and assign a drive letter.
Now you can restore any files from the new (cloned) lun to the already existing lun.

filer1> snap create lunvol_n1 snaprandom

filer1>snap delete lunvol_n1 snapafterlun
Snapshot snapafterlun is busy because of snapmirror, sync mirror, volume clone, snap restore, dump, CIFS share, volume copy, ndmp, WORM volume, SIS Clone, LUN clone

filer1> lun snap usage -s lunvol_n1 snapafterlun
You need to delete the following LUNs before deleting the snapshot
/vol/lunvol_n1/lunclone
You need to delete the following snapshots before deleting the snapshot
Snapshot - hourly.0
Snapshot - snaprandom
 

Posted in Uncategorized | Leave a comment

linux mount cifs share

mkdir /mnt/share

mount -t cifs //windowsserver/pathname /mnt/share -o username=administrator,password=********

 

Posted in linux | Leave a comment

netapp 7-mode snapvault restore files from backuplun

1. filer1 is primary filer and has aggr aggr1.

2. filer2 is secondary filer and has aggr aggr1.

3. windows is connected with iSCSI to filer1 and filer2
the igroup on both filers is called wingroup.

4. create a sourcevolume,qtree,lun on filer1 and map the lun.
- vol create volsource aggr1 500m
- qtree create /vol/volsource/q1
- lun create -s 100m -t windows /vol/volsource/q1/l1
- lun map /vol/volsource/q1/l1 wingroup
(the lun should now be visible on windows after a
rescan of the disks)

5. create a destinationvolume on filer2.
- vol create voldest aggr1 500m

6. setup a snapvault snapshot schedule on filer1.
- snapvault snap sched volsource snap1 2@0-23

7. setup a snapvault snapshot schedule on filer2.
- snapvault snap sched -x voldest snap1 40@0-23

8. Start the snapvault relation from filer2.
- snapvault start -S filer1:/vol/volsource/q1 filer2:/vol/voldest/q1

9. View the snapshot on filer2.
- snap list voldest
filer2(4079432752)_voldest-base.0 (busy,snapvault)
(this is the snapshot to clone the lun from)

10. Clone the lun on filer2.
- lun clone create /vol/voldest/backlun -b /vol/voldest/q1/l1 filer2(4079432752)_voldest-base.0

11. Online the lun and map it to wingroup.
- lun online /vol/voldest/backlun
- lun map /vol/voldest/backlun wingroup

12. On windows, rescan disks and find the drive to restore files
from.

Posted in Uncategorized | Leave a comment

solaris 11 zfs ARC

1. The Adaptive Replacement Cache
An ordered list of recently used resource entries; the most recently
used are at the top and the least recently used at the bottom. Entries used
a second time are moved to the top of the list, and the bottom
entries are evicted first. This used to be a plain LRU (Least Recently
Used) cache, where the least recently used entry is evicted first.
ARC is an improvement because it keeps two lists, T1 and T2:
T1 for recent cache entries
T2 for frequent entries, referenced at least twice

2. Solaris 11 allows for locked pages by ZFS that cannot be vacated.
This may cause a problem for other programs in need of memory, or
if another filesystem than ZFS is used as well.

3. ZFS claims all memory except for 1 GB, or at least 3/4 of it.

4. How much memory do we have?
prtconf | grep Mem
Memory size: 3072 Megabytes

5. See ARC statistics:
mdb -k
>::arc
(output skipped)
c_max = 2291 MB

6. How much memory is in use by ZFS?
mdb -k
>::memstat
Page Summary______Pages______MB____%Tot
Kernel____________136490_____533____17%
ZFS File Data_____94773______370____12%


7. Is the tunable parameter to limit ARC for ZFS set?
0 means no, 1 means yes.
mdb -k
> zfs_arc_max/E
zfs_arc_max:
zfs_arc_max: 0

8. What is the target max size for ARC?
kstat -p zfs::arcstats:c_max
zfs:0:arcstats:c_max 2402985984

(2402985984 bytes is 2291 MB)

9. How much does ZFS use at the moment?
kstat -p zfs::arcstats:size 5 (show results every 5 seconds)
zfs:0:arcstats:size 539604016
zfs:0:arcstats:size 539604016

(now start reading files and see the number increase)

for i in `seq 1 100`; do cat /var/adm/messages >>/test/data;
cat /test/data > /test/data$i; done

kstat -p zfs::arcstats:size 5 (show results every 5 seconds)
zfs:0:arcstats:size 539604016
zfs:0:arcstats:size 1036895816
zfs:0:arcstats:size 1036996288
zfs:0:arcstats:size 1037086728

10. How much memory is in use by ZFS now?
mdb -k
> ::memstat
Page Summary___Pages___MB___%Tot
------------ ---------------- ---------------- ----
Kernel_________140147__547___18%
ZFS File Data__236377__923___30%

11. To set the target tunable parameter, modify /etc/system and reboot.
vi /etc/system and add:
set zfs:zfs_arc_max= 1073741824
(after the reboot it will be 1 GB instead of 2.2 GB.)
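
A quick way to verify the new limit after the reboot (a sketch; the byte value should match what was set in /etc/system):
kstat -p zfs::arcstats:c_max
zfs:0:arcstats:c_max 1073741824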

-----

brendan gregg's archits script.
# cat archits.sh
#!/usr/bin/sh

interval=${1:-5}    # 5 secs by default

kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses $interval | awk '
BEGIN {
    printf "%12s %12s %9s\n", "HITS", "MISSES", "HITRATE"
}
/hits/ {
    hits = $2 - hitslast
    hitslast = $2
}
/misses/ {
    misses = $2 - misslast
    misslast = $2
    rate = 0
    total = hits + misses
    if (total)
        rate = (hits * 100) / total
    printf "%12d %12d %8.2f%%\n", hits, misses, rate
}
'
---------------------------

Posted in Uncategorized | Leave a comment

Performance seconds

cpu regs 300ns
SSL 25us 250us
Disk 5 ms - 20ms
Optical 100ms

nanosecond to second is as second to 31.710 years
microsecond to second is as second to 11.574 days
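
These ratios can be reproduced quickly with bc (a sketch, assuming 365-day years):

echo "scale=3; 10^9 / (3600*24*365)" | bc     (10^9 seconds expressed in years, ~31.7)
echo "scale=3; 10^6 / (3600*24)" | bc         (10^6 seconds expressed in days, ~11.57)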


Posted in Uncategorized | Leave a comment

Solaris remove empty lines in vi

:v/./d

Posted in Uncategorized | Leave a comment

netapp sim upgrade

By Ron Havenaar/Dirk Oogjen

This is how it works with a new simulator:

1. Download the NetApp CDOT 8.2.1 simulator from support.netapp.com
2. Unzip it to a directory of choice.
3. From that directory, import the DataONTAP.vmx file in VMware.
That can be a recent version of VMware workstation, Fusion or ESX, whatever you have.
4. The name of the imported vsim is now: vsim_netapp-cm. For this first node, you might first want to rename it to something like vsim_netapp-cm1.
5. The default setting for memory usage in the vm is 1.6 Gbyte. However, that is not enough for the upgrade.
You need to set this to at least 4 Gbyte. Later, when done with the upgrade, you can turn that back to 1.6 Gbyte. I tried, but less than 4 Gbyte won’t work…
6. Power On the simulator, but make sure you interrupt the boot process immediately!
You need the VLOADER> prompt.
In case you booted it before (when it is not a new machine), or you accidentally booted the machine all the way through from scratch, you cannot continue with it.
You MUST start with a new machine, and you MUST have the VLOADER> prompt at first boot if you want to change the serial numbers.
You CANNOT change the serial numbers on a machine that has been booted through before. (well… there is always a way.. but that takes much longer than starting from scratch again)
Just start over again if in doubt.
7. Two things need to happen now:
a. The serial numbers have to be set.
Each simulator in the set must have its own serial numbers. (more on this later)
b. You need to choose what kind of disks you want with this node.
These are the procedures for this: (next steps)
1. First the serial number:
Take for example the Serial Number 4082368507.
VLOADER> setenv SYS_SERIAL_NUM 4082368507
VLOADER> setenv bootarg.nvram.sysid 4082368507
2. Then choose the disk set to use for this node.
AGGR0 must be at least 10Gbyte for the upgrade, so be aware if you use small disks you have to add a lot of disks to AGGR0.

Pick a simdisk inventory and enter the corresponding setenv commands at the VLOADER> prompt.

The default simdisk inventory is 28x1gb 15k:
setenv bootarg.vm.sim.vdevinit "23:14:0,23:14:1"
setenv bootarg.sim.vdevinit "23:14:0,23:14:1"

This inventory enables the simulation of multiple disk tiers and/or flash pools: (28x1gb 15k+14x2gb 7.2k+14x100mb SSD)
setenv bootarg.vm.sim.vdevinit "23:14:0,23:14:1,32:14:2,34:14:3"
setenv bootarg.sim.vdevinit "23:14:0,23:14:1,32:14:2,34:14:3"

This one makes the usable capacity of the DataONTAP-sim.vmdk even larger (54x4gb 15k)
setenv bootarg.vm.sim.vdevinit "31:14:0,31:14:1,31:14:2,31:14:3"
setenv bootarg.sim.vdevinit "31:14:0,31:14:1,31:14:2,31:14:3"

Using type 36 drives you can create the largest capacity.

Explanation of the above:
31:14:0 means: type 31 (4 Gbyte), 14 disks, shelf 0
Type 23: 1 Gbyte FC
Type 30: 2 Gbyte FC
Type 31: 4 Gbyte FC
Type 35: 500 Mbyte SSD (simulated of course, for flashpool)
Type 36: 9 Gbyte SAS

Choose any combination of disks, but the maximum number of disks a simulator would allow is 56, no matter what size or type.
9. Then Boot the simulator.
VLOADER> boot
10. Wait until you see the menu announcement.
Press Ctrl-C for the menu. Do NOT press enter after ctrl-C, ONLY ctrl-c !
If you press ctrl-c too quick, you get the VLOADER> instead.
If so, type: boot and wait for the menu announcement.
11. From the menu, choose option 4.
Confirm twice that you indeed would like to wipe all disks.
The system reboots.
12. Wait for the zeroing process to finish. That could take a while… Follow the dots… 😉
13. When done, follow the setup procedure and build your first Clustered Data Ontap node.
14. The cluster base license key in a cdot vsim is:
SMKQROWJNQYQSDAAAAAAAAAAAAAA (that is 14 x A)
Further licenses can be easily added later using putty, since with putty you can cut and paste (on the VMware console in many cases you cannot), so you can skip the other licenses for now.
15. Turn off auto support:
Cluster::> autosupport modify -support disable -node *
16. Assign all disks:
Cluster::> disk assign -all true -node <nodename>
17. Now add enough disks to Aggr0 to allow for enough space for the upgrade.
It has to be at least 10 Gbyte.
a. Cluster::> Storage aggregate show
b. Cluster::> Storage aggregate add-disks <aggrname> <# disks>
example: Cluster::> stor aggr add-disks aggr0 2
c. Cluster::> Storage aggregate show
d. Keep adding disks until you reach at least 10 Gbyte for Aggr0.
1. Log in with Putty and add all licenses for this node.
2. Check if the cluster is healthy:
Cluster::> cluster show
It should show the word ‘healthy’
3. Configure NTP time:
a. This is how to check it:
Cluster::> system services ntp server show
Cluster::> system date show (or: cluster date show)
b. This is an example how to set it:
Cluster::> timezone Europe/Amsterdam
Cluster::> system services ntp server create -server <ntpserver> -node <nodename>
Cluster::> cluster date show
1. This is how you can determine which image (OnTap version) is used on the simulator:
Cluster::> system node image show
2. Now it is time to make a VM Snapshot of the simulator, before you do the actual update.
A snapshot would allow you to revert to this state, start again with the update process, or update to a different version. Wait for the VM snapshot to finish before you continue.
3. If you don’t have it yet, download the Clustered Data Ontap image from the download.netapp.com website. You need an account there to do so.
You can use the FAS6290 image, e.g. 822_q_image.tgz or 83_q_image.tgz (or whatever you need)
And yes, the simulator can be updated with this image! Neat huh!
4. We are going to use HTTP File Server (or HFS) to do the upgrade.
If you don’t have it yet, this is a free and very easy to use program, excellent for this job.
You can download it from http://www.rejetto.com/hfs/
5. Add the .tgz image to HFS. Drag and drop will do. Do NOT unzip the image.
6. Make sure HFS is using the right local IP address.
If you have VMware on your local machine or any other program using a different IP range, HFS might use the wrong IP address. In HFS, go to the menu, under IP Address and check if the IP address is correct.
7. In the first line in HFS you see the URL you need to use. Write down this information.
8. Now we are going to do the upgrade:
Cluster::> system node image update -node * -package <package_URL> -replace-package true
Example: Cluster::> system node image update -node * -package http://192.168.178.88:8080/822_q_image.tgz -replace-package true

9. Since we only have one node, you cannot check if this is working fine.
If you decided to set up two or more nodes to start with, this is what you can do to check the progress on a different node:
Cluster::> system node image show-update-progress -node *
You have to keep doing this until the system says: exited. Exit status Success
Otherwise, wait about half an hour. It takes quite a while! Be patient….!!!
10. When done, check if everything went okay:
Cluster::> system node image show
You should see the new version as image 2, but as you can see, that is not the default yet.

11. If it is indeed image2, choose to set this image as default:
Cluster::> system node image modify -node <nodename> -image image2 -isdefault true
12. Check if that is okay now:
Cluster::> system node image show
Under 'Is default' it should say 'true'
13. Reboot the node
Cluster::> reboot -node *
14. Just let it reboot completely. Do not interrupt. Wait for the login prompt.
The system would probably tell you the update process is not ready yet. No problem.
Just wait a few minutes. That will be done automatically.
15. If you want to check if it is still busy with the upgrade:
Cluster::> set -privilege advanced
Cluster::> system node upgrade-revert show
If you see the message: upgrade successful, it is ready.
16. You can check to see if everything has been upgraded correctly with:
Cluster::> system node image show
Cluster::> version
17. In VMware, you can set the memory usage back to 1.6 Gbyte now.
18. Now add the second node and follow the same procedure.
19. Alternatively, you could have set up two or more nodes with 8.2.1. first and do the upgrade afterwards, whatever you like better.

Notes (not with detailed instructions):
The same procedure applies to 7-mode. The upgrade instructions are a bit different, but the procedure is the same. What we found out in 7-mode, however, is that we needed to set the /cfcard directory to read/write first, since it was read-only. I did not have that issue with the cdot upgrade.
If you encounter that issue, this is what you need to do before the upgrade:
1. Go to the systemshell
2. sudo chmod -R 777 /cfcard

Furthermore, if you want to run the simulator on a Vmware ESX-i box you need to change something there too:
1. Allow remote access with SSH and use putty to log in on your ESX-i server:
2. Edit the file /etc/rc.local.d/local.sh
3. Add the following text as last line:
vmkload_mod multiextent
4. Reboot the ESX box.

Posted in Uncategorized | Leave a comment

clustermode temproot

1. Bootmenu
2. create_temp_root tempvol disk1

Posted in Uncategorized | Leave a comment

clustermode dns loadbalancing

Setup DNS loadbalancing clustermode.
The lifs belonging to the same dns-zone should be in the same vserver.
The vserver is in fact the dns-zone.

1. Create vserver
vserver create nfs -rootvolume rootvol -aggregate aggr1_n2 -ns-switch file -nm-switch file -rootvolume-security-style unix

2. net int create -vserver nfs -lif lif1 -role data -data-protocol nfs -home-node cluster1-01 -home-port e0c -address
192.168.0.108 -netmask 255.255.255.0 -status-admin up -dns-zone nfs.learn.netapp.local -listen-for-dns-query true

net int create -vserver nfs -lif lif2 -role data -data-protocol nfs -home-node cluster1-01 -home-port e0c -address
192.168.0.109 -netmask 255.255.255.0 -status-admin up -dns-zone nfs.learn.netapp.local -listen-for-dns-query true

3. In Windows DNS.
DNS -> learn.netapp.local -> new delegation -> nfs -> ip 192.168.0.109 (resolve) -> next -> finish

4. Open cmd tool in Windows.
nslookup nfs

Done.

Posted in Uncategorized | Leave a comment

clustermode upgrade

1. put image.tgz on webserver.
clustershell: system image update -node <nodename> http://<webserver>/image.tgz

or

2. put image.tgz in /mroot/pkg/
clustershell: system image update -node file:///mroot/pkg/image.tgz

Posted in Uncategorized | Leave a comment

solaris 10 nocacheflush

Solaris 10
Set Dynamically (using the debugger):
echo zfs_nocacheflush/W0t1 | mdb -kw

Revert to Default:
echo zfs_nocacheflush/W0t0 | mdb -kw

Set the following parameter in the /etc/system file:
set zfs:zfs_nocacheflush = 1

Posted in Uncategorized | Leave a comment

7-mode upgrade 32bit to 64bit

filer1> priv set diag
filer1*> aggr 64bit-upgrade start aggr4 -mode grow-all
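
To follow the progress of the expansion, something like the following should work (a sketch; 7-mode offers a status subcommand for the 64-bit upgrade):

filer1*> aggr 64bit-upgrade status aggr4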

Posted in Uncategorized | Leave a comment

solaris 11 who uses which port.

# mkdir /scripts

# vi /scripts/ports

#!/bin/ksh

line='---------------------------------------------'
pids=$(/usr/bin/ps -ef | sed 1d | awk '{print $2}')

if [ $# -eq 0 ]; then
   read ans?"Enter port you would like to know pid for: "
else
   ans=$1
fi

for f in $pids
do
   /usr/proc/bin/pfiles $f 2>/dev/null | /usr/xpg4/bin/grep -q "port: $ans"
   if [ $? -eq 0 ]; then
      echo $line
      echo "Port: $ans is being used by PID:\c"
      /usr/bin/ps -ef -o pid -o args | egrep -v "grep|pfiles" | grep $f
   fi
done
exit 0

 # chmod +x /scripts/ports

# /scripts/ports

Enter port you would like to know pid for: 2049
---------------------------------------------
Port: 2049 is being used by PID:18855 /usr/lib/nfs/nfsd

Posted in Uncategorized | Leave a comment

solaris 11 custom zone install

Adding Additional Packages in a Zone by Using a Custom AI Manifest

The process of adding extra software in a zone at installation time can be automated by revising the AI manifest. The specified packages, and the packages on which they depend, will be installed. The default list of packages is obtained from the AI manifest. The default AI manifest is /usr/share/auto_install/manifest/zone_default.xml. See Adding and Updating Software in Oracle Solaris 11.2 for information on locating and working with packages.

Example 9-1  Revising the Manifest

The following procedure adds mercurial and a full installation of the vim editor to a configured zone named my-zone. (Note that only the minimal vim-core that is part of solaris-small-server is installed by default.)

  1. Copy the default AI manifest to the location where you will edit the file, and make the file writable.
    # cp /usr/share/auto_install/manifest/zone_default.xml ~/my-zone-ai.xml
    # chmod 644 ~/my-zone-ai.xml
  2. Edit the file, adding the mercurial and vim packages to the software_data section as follows:
          <software_data action="install">
                   <name>pkg:/group/system/solaris-small-server</name>
                   <name>pkg:/developer/versioning/mercurial</name>
                   <name>pkg:/editor/vim</name>
                </software_data>
  3. Install the zone.
    # zoneadm -z my-zone install -m ~/my-zone-ai.xml
Posted in Uncategorized | Leave a comment

solaris 11 zonereplication and start

Migrate the zone to other machine using replication.
example: zonename azone; zonepath root zones/azone
# zfs snapshot -r zones/azone@0
# zfs send -R zones/azone@0 | ssh root@othermachine zfs receive zones/azone

Run script on other machine
#./script zones/azone/rpool
--------------------------------------
#!/bin/bash

zfsfs=$1
root=${zfsfs}/ROOT
zbe=${root}/solaris

for i in $zbe $root $zfsfs ; do
for j in zoned mountpoint ; do
zfs inherit $j $i
done
done

zfs set mountpoint=legacy $root
zfs set zoned=on $root
zfs set canmount=noauto $zbe
zfs set org.opensolaris.libbe:active=on $zbe

rbe=`zfs list -H -o name /`
uuid=`zfs get -H -o value org.opensolaris.libbe:uuid $rbe`

zfs set org.opensolaris.libbe:parentbe=$uuid $zbe

-----------------------------------------------
# mount -F zfs zones/azone/rpool/ROOT/solaris /zones/azone/root

Add the zone to /etc/zones/index

Create the /etc/zones/azone.xml file
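
A minimal sketch of those last two steps, assuming it is acceptable to copy the zone configuration from the machine the zone was migrated from (the hostname "sourcehost" is a placeholder):

# scp root@sourcehost:/etc/zones/azone.xml /etc/zones/azone.xml
# ssh root@sourcehost grep '^azone:' /etc/zones/index >> /etc/zones/index
(an index entry has the form zonename:state:zonepath:uuid)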

 

# zoneadm -z azone boot

Posted in Uncategorized | Leave a comment

solaris 10 keytable

/usr/openwin/share/etc/keytables/keytable.map file to force the use of  Denmark_x86.kt or Denmark6.kt as default.

Current

Type    Layout  Filename

0       0       US4.kt                  # Default keytable

 

change to

Type    Layout  Filename

0       0       Denmark_x86.kt                  # Default keytable

Remove all lines below that.

 

Posted in solaris | Leave a comment

netapp 7-mode linux iscsi

Steps:
1. on filer create a volume (lunvol1)
2. on filer create a lun in the volume (lunlin)
3. on linux install the iscsi-initiator-utils
4. on linux determine initiatorname
5. on filer create igroup with linux iqn (lingroup)
6. on linux enable iscsi for after reboots
7. on linux discover and login to target
8. on filer map lun to igroup
9. on linux restart iscsi
10. on linux get lun devicename
11. on linux partition lun (primary partition 1)
12. on linux create filestystem on lun
13. on linux create mountpoint (/mnt/sdh1)
14. on linux mount filesystem

Commands:
1. filer1> vol create lunvol1 aggr1 150m
2. filer1> lun create -s 130m -t linux /vol/lunvol1/lunlin

3. linux# yum install iscsi-initiator-utils
4. linux# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:3add18fcc55c

5. filer1> igroup create -i -t linux lingroup iqn.1994-05.com.redhat:3add18fcc55c

6. linux# chkconfig iscsi on
7. linux# iscsiadm -m discovery -t sendtargets -p 192.168.4.128

8. filer1> lun map /vol/lunvol1/lunlin lingroup iqn.1994-05.com.redhat:3add18fcc55c

9. linux1> /etc/init.d/iscsi restart
10. linux1> grep -i "attached scsi disk" /var/log/messages
Jun 25 20:42:52 fritz kernel: sd 9:0:0:0: [sdh] Attached SCSI disk

11. linux# fdisk /dev/sdh
12. linux# mkfs /dev/sdh1
13. linux# mkdir /mnt/sdh1
14. linux# mount /dev/sdh1 /mnt/sdh1

done.

Posted in Uncategorized | Leave a comment

clustermode domain-tunnel

To have the administrator of the AD domain login to the administration vserver and manage the cluster, create a domain-tunnel.

The name of the admin-vserver: cl1

security login create -vserver cl1 -username netapp\administrator -application ssh -authmethod domain -role admin

create a vserver with vserver setup

vservername: vscifs,
protocols: cifs
create datavolume: no
create lif: no
configure dns: yes
domainname: netapp.local
nameserver: 192.168.4.244
configure cifs: yes
cifs servername: vscifs-cl1
active directory domain: netapp.local
ad adminname: administrator
ad adminpassword: *******

security login domain-tunnel create vscifs

On windows in cmd-tool: plink netapp\administrator@cl1

Posted in netapp, Uncategorized | Leave a comment

clustermode networkdiagram

netdiagram

Posted in Uncategorized | Leave a comment

Netapp filefolding

file folding original link

WAFL File Folding Explained
While recently delivering training, one of my partners inquired about a WAFL feature called “file folding”, so I thought I’d take a moment to detail this lesser-known NetApp feature.

File Folding is a Data ONTAP feature that saves space when a user re-writes a file with the same data.

More specifically, it checks the data in the most recent Snapshot copy. If this data is identical to the Snapshot copy currently being created, it references the previous Snapshot copy -- instead of taking up disk space writing the same data in a new Snapshot copy.

Although File Folding saves disk space, it can also impact system performance, as the system must compare block contents when folding a file. If the folding process reaches a maximum limit on memory usage, it is suspended. When memory usage falls below the limit, the processes that were halted are restarted.

This feature has long been part of Data ONTAP and currently exists within 7-Mode (not Cluster-Mode) of Data ONTAP 8.1. It can be enabled with the command:
options cifs.snapshot_file_folding.enable on
But what exactly is going on behind the scenes?

There are two stages to this process: 1.) determining which files are candidates for folding and 2.) actually folding the files into the Snapshot copy. Let’s walk through both of these stages in detail:

Once WAFL has received a message for a file that may be a candidate for folding, it retrieves the inode for the candidate file via pathname (directory) lookups, parsing the directory, and mapping it to a root inode.

In other words, we’re now able to construct a file handle in order to retrieve the block (inode) from disk.

WAFL then grabs the corresponding inode of that file from the most recent snapshot (remember, when snapshots are created, a new root inode is created for that snapshot). If no corresponding file to inode match is found, then the file cannot be in the most recent snapshot and thus no folding will occur.

But let’s assume that WAFL does find a snapshot inode corresponding to the active file.

As the blocks are loaded into the buffer, the file folding process compares the blocks of the active file system vs. the snapshot blocks – on a block-for-block basis. Assuming that each block is identical, it’s time to transition to the second stage.

In the second stage of File Folding, a data block is “freed” by updating the corresponding bit of an active map of the active file system (indicating block is unallocated). Next, the block in the snapshot is allocated by updating a bit within the active map corresponding to the snapshot block (indicating it’s now being used). WAFL then updates the block pointer of the parent buffer to reference the block number of the block in the snapshot.

From this point onward, the parent buffer is tagged as “dirty” to ensure everything else is written out to disk on the next Consistency Point (CP).

And now you know probably more than you’ve ever wanted to know about File Folding!

Posted in netapp | Leave a comment

solaris zdb

Original link

zdb: Examining ZFS At Point-Blank Range
01 Nov '08 - 08:13 by benr

ZFS is amazing in its simplicity and beauty; however, it is also deceptively complex. The chance that you'll ever be forced to peer behind the veil is unlikely outside of the storage-enthusiast ranks, but as it proliferates, more questions will come up regarding its internals. We have been given a tool to help us investigate the inner workings, zdb, but it is, somewhat intentionally I think, undocumented. Only two others that I know of have had the courage to talk about it publicly: Max Bruning, who is perhaps the single most authoritative voice on ZFS outside of Sun, and Marcelo Leal.

In this post, we'll look only at the basics of ZDB to establish a baseline for its use. Running "zdb -h" will produce a summary of its syntax.

In its most basic form, zdb poolname, zdb outputs several bits of information about our pool, including:

Cached pool configuration (-C)
Uberblock (-u)
Datasets (-d)
Report stats on zdb's I/O (-s), this is similar to the first interval of zpool iostat
Thus, zdb testpool is the same as zdb -Cuds testpool. Let's look at the output. The pool we'll be using is actually a 256MB pre-allocated file with a single dataset... as simple as it can come.

root@quadra /$ zdb testpool
version=12
name='testpool'
state=0
txg=182
pool_guid=1019414024587234776
hostid=446817667
hostname='quadra'
vdev_tree
type='root'
id=0
guid=1019414024587234776
children[0]
type='file'
id=0
guid=6723707841658505514
path='/zdev/disk002'
metaslab_array=23
metaslab_shift=21
ashift=9
asize=263716864
is_log=0
Uberblock

magic = 0000000000bab10c
version = 12
txg = 184
guid_sum = 7743121866245740290
timestamp = 1225486684 UTC = Fri Oct 31 13:58:04 2008

Dataset mos [META], ID 0, cr_txg 4, 87.0K, 49 objects
Dataset testpool/dataset01 [ZPL], ID 30, cr_txg 6, 19.5K, 5 objects
Dataset testpool [ZPL], ID 16, cr_txg 1, 19.0K, 5 objects
capacity operations bandwidth ---- errors ----
description used avail read write read write read write cksum
testpool 139K 250M 638 0 736K 0 0 0 0
/zdev/disk002 139K 250M 638 0 736K 0 0 0 0
And so we see a variety of useful information, including:

Zpool (On Disk Format) Version Number
State
Host ID & Hostname
GUID (this is the numeric value you use when zpool import doesn't like the name)
Children VDEV's that make up the pool
Uberblock magic number (read that hex value as "uba-bloc", get it, 0bab10c, it's funny!)
Timestamp
List of datasets
Summary of IO stats
So this information is interesting, but frankly not terribly useful if you already have the pool imported. It would likely be of more value if you couldn't, or wouldn't, import the pool, but those cases are rare, and 99% of the time zpool import will tell you what you want to know even if you don't actually import.

There are 3 arguments that are really the core ones of interest, but before we get to them, you absolutely must understand something unique about zdb. ZDB is like a magnifying glass: at default magnification you can see that it's tissue, turn up the magnification and you see that it has veins, turn it up again and you see how intricate the system is, crank it up one more time and you can see the blood cells themselves. With zdb, each time we repeat an argument we increase the verbosity and thus dig deeper. For instance, zdb -d will list the datasets of a pool, but zdb -dd will output the list of objects within the pool. Thus, when you really zoom in you'll see commands that look really odd, like zdb -ddddddddd. This takes a little practice, so please toy around on a small test pool until you get the hang of it.

Now, here are summaries of the 3 primary arguments you'll use and how things change as you crank up the verbosity:

zdb -b pool: This will traverse blocks looking for leaks like the default form.
-bb: Outputs a breakdown of space (block) usage for various ZFS object types.
-bbb: Same as above, but includes breakdown by DMU/SPA level (L0-L6).
-bbbb: Same as above, but includes one line per object with details about it, including compression, checksum, DVA, object ID, etc.
-bbbbb...: Same as above.
zdb -d dataset: This will output a list of objects within a dataset. More d's means more verbosity:
-d: Output list of datasets, including ID, cr_txg, size, and number of objects.
-dd: Output concise list of objects within the dataset, with object id, lsize, asize, type, etc.
-ddd: Same as dd.
-dddd: Outputs list of datasets and objects in detail, including objects path (filename), a/c/r/mtime, mode, etc.
-ddddd: Same as previous, but includes indirect block addresses (DVAs) as well.
-dddddd....: Same as above.
zdb -R pool:vdev_specifier:offset:size[:flags]: Given a DVA, outputs object contents in hex display format. If given the :r flag it will output in raw binary format. This can be used for manual recovery of files.
So let's play with the first form above, block traversal. This will sweep the blocks of your pool or dataset, adding up what it finds and then producing a report of any leakage and how the space breakdown works. This is extremely useful information, but given that it traverses all blocks it's going to take a long time depending on how much data you have. On a home box this might take minutes or a couple of hours; on a large storage subsystem it could take hours or days. Let's look at both -b and -bb for my simple test pool:

root@quadra ~$ zdb -b testpool

Traversing all blocks to verify nothing leaked ...

No leaks (block sum matches space maps exactly)

bp count: 50
bp logical: 464896 avg: 9297
bp physical: 40960 avg: 819 compression: 11.35
bp allocated: 102912 avg: 2058 compression: 4.52
SPA allocated: 102912 used: 0.04%

root@quadra ~$ zdb -bb testpool

Traversing all blocks to verify nothing leaked ...

No leaks (block sum matches space maps exactly)

bp count: 50
bp logical: 464896 avg: 9297
bp physical: 40960 avg: 819 compression: 11.35
bp allocated: 102912 avg: 2058 compression: 4.52
SPA allocated: 102912 used: 0.04%

Blocks LSIZE PSIZE ASIZE avg comp %Total Type
3 12.0K 1.50K 4.50K 1.50K 8.00 4.48 deferred free
1 512 512 1.50K 1.50K 1.00 1.49 object directory
1 512 512 1.50K 1.50K 1.00 1.49 object array
1 16K 1K 3.00K 3.00K 16.00 2.99 packed nvlist
- - - - - - - packed nvlist size
1 16K 1K 3.00K 3.00K 16.00 2.99 bplist
- - - - - - - bplist header
- - - - - - - SPA space map header
3 12.0K 1.50K 4.50K 1.50K 8.00 4.48 SPA space map
- - - - - - - ZIL intent log
16 256K 18.0K 40.0K 2.50K 14.22 39.80 DMU dnode
3 3.00K 1.50K 3.50K 1.17K 2.00 3.48 DMU objset
- - - - - - - DSL directory
4 2K 2K 6.00K 1.50K 1.00 5.97 DSL directory child map
3 1.50K 1.50K 4.50K 1.50K 1.00 4.48 DSL dataset snap map
4 2K 2K 6.00K 1.50K 1.00 5.97 DSL props
- - - - - - - DSL dataset
- - - - - - - ZFS znode
- - - - - - - ZFS V0 ACL
1 512 512 512 512 1.00 0.50 ZFS plain file
3 1.50K 1.50K 3.00K 1K 1.00 2.99 ZFS directory
2 1K 1K 2K 1K 1.00 1.99 ZFS master node
2 1K 1K 2K 1K 1.00 1.99 ZFS delete queue
- - - - - - - zvol object
- - - - - - - zvol prop
- - - - - - - other uint8[]
- - - - - - - other uint64[]
- - - - - - - other ZAP
- - - - - - - persistent error log
1 128K 4.50K 13.5K 13.5K 28.44 13.43 SPA history
- - - - - - - SPA history offsets
- - - - - - - Pool properties
- - - - - - - DSL permissions
- - - - - - - ZFS ACL
- - - - - - - ZFS SYSACL
- - - - - - - FUID table
- - - - - - - FUID table size
1 512 512 1.50K 1.50K 1.00 1.49 DSL dataset next clones
- - - - - - - scrub work queue
50 454K 40.0K 101K 2.01K 11.35 100.00 Total
Here we can see the "zooming in" effect I described earlier. Here "BP" stands for "Block Pointer". The most common "Type" you'll see is "ZFS plain file", that is, a normal data file like an image or text file or something... the data you care about.

Moving on to the second form, -d to output datasets and their objects. This is where introspection really occurs. With a simple -d we can see a recursive list of datasets, but as we turn up the verbosity (-dd) we zoom into the objects within the dataset, and then just get more and more detail about those objects.

root@quadra ~$ zdb -d testpool/dataset01
Dataset testpool/dataset01 [ZPL], ID 30, cr_txg 6, 18.5K, 5 objects

root@quadra ~$ zdb -dd testpool/dataset01
Dataset testpool/dataset01 [ZPL], ID 30, cr_txg 6, 18.5K, 5 objects

Object lvl iblk dblk lsize asize type
0 7 16K 16K 16K 14.0K DMU dnode
1 1 16K 512 512 1K ZFS master node
2 1 16K 512 512 1K ZFS delete queue
3 1 16K 512 512 1K ZFS directory
4 1 16K 512 512 512 ZFS plain file
So let's pause here. We can see the list of objects in my testpool/dataset01 by object ID. This is important because we can use those IDs to dig deeper on an individual object later. But for now, let's zoom in a little bit more (-dddd) on this dataset.

root@quadra ~$ zdb -dddd testpool/dataset01
Dataset testpool/dataset01 [ZPL], ID 30, cr_txg 6, 18.5K, 5 objects, rootbp [L0 DMU objset] 400L/200P DVA[0]=<0:12200:200> DVA[1]=<0:3014c00:200> fletcher4 lzjb LE contiguous birth=8 fill=5 cksum=a525c6edf:45d1513a8c8:ef844ac0e80e:22b9de6164dd69

Object lvl iblk dblk lsize asize type
0 7 16K 16K 16K 14.0K DMU dnode

Object lvl iblk dblk lsize asize type
1 1 16K 512 512 1K ZFS master node
microzap: 512 bytes, 6 entries

casesensitivity = 0
normalization = 0
DELETE_QUEUE = 2
ROOT = 3
VERSION = 3
utf8only = 0

Object lvl iblk dblk lsize asize type
2 1 16K 512 512 1K ZFS delete queue
microzap: 512 bytes, 0 entries

Object lvl iblk dblk lsize asize type
3 1 16K 512 512 1K ZFS directory
264 bonus ZFS znode
path /
uid 0
gid 0
atime Fri Oct 31 12:35:30 2008
mtime Fri Oct 31 12:35:51 2008
ctime Fri Oct 31 12:35:51 2008
crtime Fri Oct 31 12:35:30 2008
gen 6
mode 40755
size 3
parent 3
links 2
xattr 0
rdev 0x0000000000000000
microzap: 512 bytes, 1 entries

testfile01 = 4 (type: Regular File)

Object lvl iblk dblk lsize asize type
4 1 16K 512 512 512 ZFS plain file
264 bonus ZFS znode
path /testfile01
uid 0
gid 0
atime Fri Oct 31 12:35:51 2008
mtime Fri Oct 31 12:35:51 2008
ctime Fri Oct 31 12:35:51 2008
crtime Fri Oct 31 12:35:51 2008
gen 8
mode 100644
size 21
parent 3
links 1
xattr 0
rdev 0x0000000000000000
Now, this output is short because the dataset includes only a single file. In the real world this output will be gigantic and should be redirected to a file. When I did this on the dataset containing my home directory the output file was 750MB... it's a lot of data.

Look specifically at Object 4, a "ZFS plain file". Notice that I can see that file's pathname, uid, gid, a/m/c/crtime, mode, size, etc. This is where things can get really interesting!

In zdb's 3rd form above (-R) we can actually display the contents of a file, however we need its Device Virtual Address (DVA) and size to do so. In order to get that information, we can zoom in using -d a little further, but this time just on Object 4:

root@quadra /$ zdb -ddddd testpool/dataset01 4
Dataset testpool/dataset01 [ZPL], ID 30, cr_txg 6, 19.5K, 5 objects, rootbp [L0 DMU objset] 400L/200P DVA[0]=<0:172e000:200> DVA[1]=<0:460e000:200> fletcher4 lzjb LE contiguous birth=168 fill=5 cksum=a280728d9:448b88156d8:eaa0ad340c25:21f1a0a7d45740

Object lvl iblk dblk lsize asize type
4 1 16K 512 512 512 ZFS plain file
264 bonus ZFS znode
path /testfile01
uid 0
gid 0
atime Fri Oct 31 12:35:51 2008
mtime Fri Oct 31 12:35:51 2008
ctime Fri Oct 31 12:35:51 2008
crtime Fri Oct 31 12:35:51 2008
gen 8
mode 100644
size 21
parent 3
links 1
xattr 0
rdev 0x0000000000000000
Indirect blocks:
0 L0 0:11600:200 200L/200P F=1 B=8

segment [0000000000000000, 0000000000000200) size 512
Now, see that "Indirect block" 0? Following L0 (Level 0) is a tuple: "0:11600:200". This is the DVA and Size, or more specifically it is the triple: vdev:offset:size. We can use this information to request its contents directly.

And so, the -R form can display individual blocks from a device. To do so, we need to know the pool name, vdev/offset (DVA) and the size. Given what we did above, we now know all of that, so let's try it:

root@quadra /$ zdb -R testpool:0:11600:200
Found vdev: /zdev/disk002

testpool:0:11600:200
0 1 2 3 4 5 6 7 8 9 a b c d e f 0123456789abcdef
000000: 2073692073696854 6620747365742061 This is a test f
000010: 0000000a2e656c69 0000000000000000 ile.............
000020: 0000000000000000 0000000000000000 ................
000030: 0000000000000000 0000000000000000 ................
000040: 0000000000000000 0000000000000000 ................
000050: 0000000000000000 0000000000000000 ................
...
w00t! We can read the file contents!

You'll notice in the zdb syntax ("zdb -h") that this syntax above accepts flags as well. We can find these in the ZDB source. The most interesting is the "r" flag which rather than display the data as above, actually dumps the data in raw form to STDERR.

So why is this useful? Try this on for size:

root@quadra /$ rm /testpool/dataset01/testfile01
root@quadra /$ sync;sync
root@quadra /$ zdb -dd testpool/dataset01
Dataset testpool/dataset01 [ZPL], ID 30, cr_txg 6, 18.0K, 4 objects

Object lvl iblk dblk lsize asize type
0 7 16K 16K 16K 14.0K DMU dnode
1 1 16K 512 512 1K ZFS master node
2 1 16K 512 512 1K ZFS delete queue
3 1 16K 512 512 1K ZFS directory

....... THE FILE IS REALLY GONE! ..........

root@quadra /$ zdb -R testpool:0:11600:200:r 2> /tmp/output
Found vdev: /zdev/disk002
root@quadra /$ ls -lh /tmp/output
-rw-r--r-- 1 root root 512 Nov 1 01:54 /tmp/output
root@quadra /$ cat /tmp/output
This is a test file.
How sweet is that! We delete a file, verify with zdb -dd that it really and truly is gone, and then bring it back out based on its DVA. Super sweet!

Now, before you get overly excited, some things to note... firstly, if you delete a file in the real world you probably don't have its DVA and size already recorded, so you're out of luck. Also, notice that the original file was 21 bytes, but the "recovered" file is 512... it's been padded, so if you recovered a file and tried using an MD5 hash or something to verify the content it wouldn't match, even though the data was valid. In other words, the best "undelete" option is snapshots... they are quick and easy, so use them. Using zdb for file recovery isn't practical.
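
One small addition to the article: if you do know the original size (the znode above reports size 21), you can trim the 512-byte padding off the raw dump with dd. A sketch, with /tmp/testfile01.recovered as a made-up output name:

root@quadra /$ dd if=/tmp/output of=/tmp/testfile01.recovered bs=1 count=21
root@quadra /$ cat /tmp/testfile01.recovered
This is a test file.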

I recently discovered and used this method to deal with a server that suffered extensive corruption as a result of a shitty (Sun Adaptec rebranded STK) RAID controller gone berserk following a routine disk replacement. I had several "corrupt" files that I could not read or reach; if I tried to do so I'd get a long pause, lots of errors to syslog, and then an "I/O Error" return. Hopeless; this is a "restore from backups" situation. Regardless, I wanted to learn from the experience. Here is an example of the result:

[root@server ~]$ ls -l /xxxxxxxxxxxxxx/images/logo.gif
/xxxxxxxxxxxxxx/images/logo.gif: I/O error

[root@server ~]$ zdb -ddddd pool/xxxxx 181359
Dataset pool/xxx [ZPL], ID 221, cr_txg 1281077, 3.76G, 187142 objects, rootbp [L0 DMU objset] 400L/200P DVA[0]=<0:1803024c00:200> DVA[1]=<0:45007ade00:200> fletcher4 lzjb LE contiguous birth=4543000 fill=187142 cksum=8cc6b0fec:3a1b508e8c0:c36726aec831:1be1f0eee0e22c

Object lvl iblk dblk lsize asize type
181359 1 16K 1K 1K 1K ZFS plain file
264 bonus ZFS znode
path /xxxxxxxxxxxxxx/images/logo.gif
atime Wed Aug 27 07:42:17 2008
mtime Wed Apr 16 01:19:06 2008
ctime Thu May 1 00:18:34 2008
crtime Thu May 1 00:18:34 2008
gen 1461218
mode 100644
size 691
parent 181080
links 1
xattr 0
rdev 0x0000000000000000
Indirect blocks:
0 L0 0:b043f0c00:400 400L/400P F=1 B=1461218

segment [0000000000000000, 0000000000000400) size 1K

[root@server ~]$ zdb -R pool:0:b043f0c00:400:r 2> out
Found vdev: /dev/dsk/c0t1d0s0
[root@server ~]$ file out
out: GIF file, v89
Because real data is involved I had to cover up most of the above, but you can see how the methods we learned above were used to gain a positive result. Normal means of accessing the file failed miserably, but using zdb -R I dumped the file out. As a verification I opened the GIF in an image viewer and sure enough it looks perfect!

This is a lot to digest, but this is about as simple a primer to zdb as you're going to find. Hopefully I've given you a solid grasp of the fundamentals so that you can experiment on your own.

Where do you go from here? As noted before, I recommend you now check out the following:

Max Bruning's ZFS On-Disk Format Using mdb and zdb: Video presentation from the OpenSolaris Developer Conference in Prague on June 28, 2008. An absolute must-watch for the hardcore ZFS enthusiast. Warning, may cause your head to explode!
Marcelo Leal's 5 Part ZFS Internals Series. Leal has tremendous courage to post these; he's doing tremendous work! Read it!
Good luck and happy zdb'ing.... don't tell Sun. 🙂

Posted in solaris | Leave a comment

clustermode export-policies and rules

policies-vs1Click IMAGE

Posted in netapp | Leave a comment

NPIV N_port ID Virtualization

Scott Lowe link

Understanding NPIV and NPV
Friday, November 27, 2009 in Gestalt, Networking, Storage by slowe | 71 comments
Two technologies that seem to have come to the fore recently are NPIV (N_Port ID Virtualization) and NPV (N_Port Virtualization). Judging just by the names, you might think that these two technologies are the same thing. While they are related in some aspects and can be used in a complementary way, they are quite different. What I’d like to do in this post is help explain these two technologies, how they are different, and how they can be used. I hope to follow up in future posts with some hands-on examples of configuring these technologies on various types of equipment.

First, though, I need to cover some basics. This is unnecessary for those of you that are Fibre Channel experts, but for the rest of the world it might be useful:

N_Port: An N_Port is an end node port on the Fibre Channel fabric. This could be an HBA (Host Bus Adapter) in a server or a target port on a storage array.
F_Port: An F_Port is a port on a Fibre Channel switch that is connected to an N_Port. So, the port into which a server’s HBA or a storage array’s target port is connected is an F_Port.
E_Port: An E_Port is a port on a Fibre Channel switch that is connected to another Fibre Channel switch. The connection between two E_Ports forms an Inter-Switch Link (ISL).
There are other types of ports as well—NL_Port, FL_Port, G_Port, TE_Port—but for the purposes of this discussion these three will get us started. With these definitions in mind, I’ll start by discussing N_Port ID Virtualization (NPIV).

N_Port ID Virtualization (NPIV)

Normally, an N_Port would have a single N_Port_ID associated with it; this N_Port_ID is a 24-bit address assigned by the Fibre Channel switch during the FLOGI process. The N_Port_ID is not the same as the World Wide Port Name (WWPN), although there is typically a one-to-one relationship between WWPN and N_Port_ID. Thus, for any given physical N_Port, there would be exactly one WWPN and one N_Port_ID associated with it.

What NPIV does is allow a single physical N_Port to have multiple WWPNs, and therefore multiple N_Port_IDs, associated with it. After the normal FLOGI process, an NPIV-enabled physical N_Port can subsequently issue additional commands to register more WWPNs and receive more N_Port_IDs (one for each WWPN). The Fibre Channel switch must also support NPIV, as the F_Port on the other end of the link would “see” multiple WWPNs and multiple N_Port_IDs coming from the host and must know how to handle this behavior.

Once all the applicable WWPNs have been registered, each of these WWPNs can be used for SAN zoning or LUN presentation. There is no distinction between the physical WWPN and the virtual WWPNs; they all behave in exactly the same fashion and you can use them in exactly the same ways.

So why might this functionality be useful? Consider a virtualized environment, where you would like to be able to present a LUN via Fibre Channel to a specific virtual machine only:

Without NPIV, it’s not possible because the N_Port on the physical host would have only a single WWPN (and N_Port_ID). Any LUNs would have to be zoned and presented to this single WWPN. Because all VMs would be sharing the same WWPN on the one single physical N_Port, any LUNs zoned to this WWPN would be visible to all VMs on that host because all VMs are using the same physical N_Port, same WWPN, and same N_Port_ID.
With NPIV, the physical N_Port can register additional WWPNs (and N_Port_IDs). Each VM can have its own WWPN. When you build SAN zones and present LUNs using the VM-specific WWPN, then the LUNs will only be visible to that VM and not to any other VMs.
Virtualization is not the only use case for NPIV, although it is certainly one of the easiest to understand.

Now that I’ve discussed NPIV, I’d like to turn the discussion to N_Port Virtualization (NPV).

N_Port Virtualization

While NPIV is primarily a host-based solution, NPV is primarily a switch-based technology. It is designed to reduce switch management and overhead in larger SAN deployments. Consider that every Fibre Channel switch in a fabric needs a different domain ID, and that the total number of domain IDs in a fabric is limited. In some cases, this limit can be fairly low depending upon the devices attached to the fabric. The problem, though, is that you often need to add Fibre Channel switches in order to scale the size of your fabric. There is therefore an inherent conflict between trying to reduce the overall number of switches in order to keep the domain ID count low while also needing to add switches in order to have a sufficiently high port count. NPV is intended to help address this problem.

NPV introduces a new type of Fibre Channel port, the NP_Port. The NP_Port connects to an F_Port and acts as a proxy for other N_Ports on the NPV-enabled switch. Essentially, the NP_Port “looks” like an NPIV-enabled host to the F_Port on the other end. An NPV-enabled switch will register additional WWPNs (and receive additional N_Port_IDs) via NPIV on behalf of the N_Ports connected to it. The physical N_Ports don’t have any knowledge this is occurring and don’t need any support for it; it’s all handled by the NPV-enabled switch.

Obviously, this means that the upstream Fibre Channel switch must support NPIV, since the NP_Port “looks” and “acts” like an NPIV-enabled host to the upstream F_Port. Additionally, because the NPV-enabled switch now looks like an end host, it no longer needs a domain ID to participate in the Fibre Channel fabric. Using NPV, you can add switches and ports to your fabric without adding domain IDs.

So why is this functionality useful? There is the immediate benefit of being able to scale your Fibre Channel fabric without having to add domain IDs, yes, but in what sorts of environments might this be particularly useful? Consider a blade server environment, like an HP c7000 chassis, where there are Fibre Channel switches in the back of the chassis. By using NPV on these switches, you can add them to your fabric without having to assign a domain ID to each and every one of them.

Here’s another example. Consider an environment where you are mixing different types of Fibre Channel switches and are concerned about interoperability. As long as there is NPIV support, you can enable NPV on one set of switches. The NPV-enabled switches will then act like NPIV-enabled hosts, and you won’t have to worry about connecting E_Ports and creating ISLs between different brands of Fibre Channel switches.

I hope you’ve found this explanation of NPIV and NPV helpful and accurate. In the future, I hope to follow up with some additional posts—including diagrams—that show how these can be used in action. Until then, feel free to post any questions, thoughts, or corrections in the comments below. Your feedback is always welcome!

Disclosure: Some industry contacts at Cisco Systems provided me with information regarding NPV and its operation and behavior, but this post is neither sponsored nor endorsed by anyone.

Posted in Virtualization | Leave a comment

Ontap 8.2 licenses

CLUSTERED SIMULATE ONTAP LICENSES
+++++++++++++++++++++++++++++++++

These are the licenses that you use with the clustered Data ONTAP version
of Simulate ONTAP to enable Data ONTAP features.

There are four groups of licenses in this file:

- cluster base license
- feature licenses for the ESX build
- feature licenses for the non-ESX build
- feature licenses for the second node of a 2-node cluster

Cluster Base License (Serial Number 1-80-000008)
================================================

You use the cluster base license when setting up the first simulator in a cluster.

Cluster Base license = SMKQROWJNQYQSDAAAAAAAAAAAAAA

Clustered Data ONTAP Feature Licenses
=====================================

You use the feature licenses to enable unique Data ONTAP features on your simulator.

Licenses for the ESX build (Serial Number 4082368511)
-----------------------------------------------------

Use these licenses with the VMware ESX build.

Feature License Code Description
------------------- ---------------------------- --------------------------------------------

CIFS CAYHXPKBFDUFZGABGAAAAAAAAAAA CIFS protocol
FCP APTLYPKBFDUFZGABGAAAAAAAAAAA Fibre Channel Protocol
FlexClone WSKTAQKBFDUFZGABGAAAAAAAAAAA FlexClone
Insight_Balance CGVTEQKBFDUFZGABGAAAAAAAAAAA OnCommand Insight and Balance products
iSCSI OUVWXPKBFDUFZGABGAAAAAAAAAAA iSCSI protocol
NFS QFATWPKBFDUFZGABGAAAAAAAAAAA NFS protocol
SnapLock UHGXBQKBFDUFZGABGAAAAAAAAAAA SnapLock Compliance
SnapLock_Enterprise QLXEEQKBFDUFZGABGAAAAAAAAAAA SnapLock Enterprise
SnapManager GCEMCQKBFDUFZGABGAAAAAAAAAAA SnapManager and SnapDrive products
SnapMirror KYMEAQKBFDUFZGABGAAAAAAAAAAA SnapMirror, including synchronous SnapMirror
SnapProtect SWBBDQKBFDUFZGABGAAAAAAAAAAA SnapProtect Applications
SnapRestore YDPPZPKBFDUFZGABGAAAAAAAAAAA SnapRestore
SnapVault INIIBQKBFDUFZGABGAAAAAAAAAAA SnapVault primary and secondary

Licenses for the non-ESX build (Serial Number 4082368507)
---------------------------------------------------------

Use these licenses with the VMware Workstation, VMware Player, and VMware Fusion build.

Feature License Code Description
------------------- ---------------------------- --------------------------------------------

CIFS YVUCRRRRYVHXCFABGAAAAAAAAAAA CIFS protocol
FCP WKQGSRRRYVHXCFABGAAAAAAAAAAA Fibre Channel Protocol
FlexClone SOHOURRRYVHXCFABGAAAAAAAAAAA FlexClone
Insight_Balance YBSOYRRRYVHXCFABGAAAAAAAAAAA OnCommand Insight and Balance products
iSCSI KQSRRRRRYVHXCFABGAAAAAAAAAAA iSCSI protocol
NFS MBXNQRRRYVHXCFABGAAAAAAAAAAA NFS protocol
SnapLock QDDSVRRRYVHXCFABGAAAAAAAAAAA SnapLock Compliance
SnapLock_Enterprise MHUZXRRRYVHXCFABGAAAAAAAAAAA SnapLock Enterprise
SnapManager CYAHWRRRYVHXCFABGAAAAAAAAAAA SnapManager and SnapDrive products
SnapMirror GUJZTRRRYVHXCFABGAAAAAAAAAAA SnapMirror, including synchronous SnapMirror
SnapProtect OSYVWRRRYVHXCFABGAAAAAAAAAAA SnapProtect Applications
SnapRestore UZLKTRRRYVHXCFABGAAAAAAAAAAA SnapRestore
SnapVault EJFDVRRRYVHXCFABGAAAAAAAAAAA SnapVault primary and secondary

Licenses for the second node in a cluster (Serial Number 4034389062)
--------------------------------------------------------------------

Use these licenses with the second simulator in a cluster (either the ESX or non-ESX build).

Feature License Code Description
------------------- ---------------------------- --------------------------------------------

CIFS MHEYKUNFXMSMUCEZFAAAAAAAAAAA CIFS protocol
FCP KWZBMUNFXMSMUCEZFAAAAAAAAAAA Fibre Channel Protocol
FlexClone GARJOUNFXMSMUCEZFAAAAAAAAAAA FlexClone
Insight_Balance MNBKSUNFXMSMUCEZFAAAAAAAAAAA OnCommand Insight and Balance products
iSCSI YBCNLUNFXMSMUCEZFAAAAAAAAAAA iSCSI protocol
NFS ANGJKUNFXMSMUCEZFAAAAAAAAAAA NFS protocol
SnapLock EPMNPUNFXMSMUCEZFAAAAAAAAAAA SnapLock Compliance
SnapLock_Enterprise ATDVRUNFXMSMUCEZFAAAAAAAAAAA SnapLock Enterprise
SnapManager QJKCQUNFXMSMUCEZFAAAAAAAAAAA SnapManager and SnapDrive products
SnapMirror UFTUNUNFXMSMUCEZFAAAAAAAAAAA SnapMirror, including synchronous SnapMirror
SnapProtect CEIRQUNFXMSMUCEZFAAAAAAAAAAA SnapProtect Applications
SnapRestore ILVFNUNFXMSMUCEZFAAAAAAAAAAA SnapRestore
SnapVault SUOYOUNFXMSMUCEZFAAAAAAAAAAA SnapVault primary and secondary

Posted in netapp | Leave a comment

solaris 11 automated installer grub entries

When running installadm create-client,
the client grub.conf.01macaddress file is
generated from the following file:
/var/ai/service/"service-name"/menu.conf

To change the menu order from the text installer to
the automated installer, change the following entry:

[meta]
order = SolarisNetBootInstance|0, SolarisNetBootInstance|1

to
[meta]
order = SolarisNetBootInstance|1, SolarisNetBootInstance|0

Now the first entry will be the automated installer.

Posted in solaris | Leave a comment

solaris 11 zones repository IPS

ips timfoster ips

Posted in Uncategorized | Leave a comment

solaris 11 bad login count

How many bad logins?

With additional thanks to Lambert Rots (ASP4ALL)

original url

What's Up With the flag Field in /etc/shadow on Solaris 11.1? 03Jan13
If you're running Solaris 11.1, and you happen to check your /etc/shadow file, you may notice there's been a change to the flags field (the last one)...

bob:$5$GKM8z8qP$ho7oJF3ceAoFo9sH5f.jy4UP16TvzoO7XmSYS81o6QA:15708::::::9874

Prior to Solaris 11.1, this field contained only a few easy-to-read digits, which the man page explained as...

flag Failed login count in low order four bits;
remainder reserved for future use, set to zero.
... and this started at 0 and incremented by one every time there was a failed login attempt. Now I'll let you in on a secret: the above excerpt was actually taken from Solaris 11.1, which means the documentation hasn't been updated to reflect what you now see in the shadow file. That's correct.

The documentation has deliberately NOT been updated at this stage (Jan 2013) as it is still an unstable/private interface and thus not really ready for public consumption. That said, you can easily work out what the rest of the information stored in this field is by looking at the /usr/include/shadow.h file...

/*
* The spwd structure is used in the retreval of information from
* /etc/shadow. It is used by routines in the libos library.
*/
struct spwd {
char *sp_namp; /* user name */
char *sp_pwdp; /* user password */
int sp_lstchg; /* password lastchanged date */
int sp_min; /* minimum number of days between password changes */
int sp_max; /* number of days password is valid */
int sp_warn; /* number of days to warn user to change passwd */
int sp_inact; /* number of days the login may be inactive */
int sp_expire; /* date when the login is no longer valid */
unsigned int sp_flag; /* currently low 15 bits are used */

/* low 4 bits of sp_flag for counting failed login attempts */
#define FAILCOUNT_MASK 0xF
/* next 11 bits of sp_flag for precise time of last change */
#define TIME_MASK 0x7FF0
};
And there's your answer. The last line tells us that the rest of the flag field is used to store the time of the last password change, with the date of that change being stored in the lastchg (3rd) field.

So how do you use that figure?

Well, before I tell you, I must warn...

This is an unstable interface. It can and will most likely change at any time without any notice, so do NOT come to rely on this information.

Right, with that out of the way, let's see how we can interpret this field.

From the shadow.h file we know the last 4 bits are the number of failed login attempts, which can be obtained using (all commands are run at a Bash shell prompt):

$ echo "obase=2;9874" | bc
10011010010010
convert the last 4 bits to decimal
echo $((9874 & 15))
2

$
The last 4 bits are 0010. It should be obvious how many failed login attempts there have been, but let's switch these back to decimal to be sure:

$ echo "ibase=2;0010" | bc
2
$
Now for the next 11 bits. To get these we shift up 4 bits:

$ a=9874;((a>>=4));echo $a
617
$
This doesn't tell us much, but I can tell you this is the number of minutes into the day that the password was changed, so let's print this number in base-60:

$ echo "obase=60;617" | bc
10 17
$
Which is correct. I changed this user's password today at 10h17, aka 10:17am.

The last two steps can be put into a single command:

$ a=9874;((a>>=4));echo "obase=60;$a" | bc
10 17
$
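
As a further convenience, a small sketch (assuming the same flag value of 9874 from the shadow entry above) that prints both the failed-login count and the time of the last change:

$ flag=9874
$ echo "failed logins: $((flag & 15))"
failed logins: 2
$ printf "last change: %02d:%02d\n" $(( (flag >> 4) / 60 )) $(( (flag >> 4) % 60 ))
last change: 10:17
$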
And there you have it. That is what is going on with the flag field in the /etc/shadow file on Solaris 11.1 and let me reiterate...

This is an unstable interface. It can and will most likely change at any time without any notice, so do NOT come to rely on this information.

I am providing this information just for information's sake and to provide you with a little explanation of what you might see.

Posted in solaris | Leave a comment

solaris 11 zfs recordsize

oracle blog

Posted in solaris, Uncategorized | Leave a comment

solaris 11 exercise ips (4) add local repository

(with thanks to LAM)

In this exercise you will setup a local repository

and add a test package to the repository.

You should have a zpool mounted on /software

1. create directory
mkdir /software/site

2. create repository
pkgrepo create /software/site

3. check
pkgrepo info -s /software/site
(it should show online)

4. in /software/site/pkg5.repository
set prefix = site

5. add smf entries
svccfg -s pkg/server add site
svccfg
select pkg/server:site
setprop pkg/inst_root=/software/site
setprop pkg/port=10083
exit

svcadm enable pkg/server:site
svcs pkg/server:site
STATE STIME FMRI
online 18:27:04 svc:/application/pkg/server:site

6. rebuild the repository
pkgrepo rebuild -s /software/site

7. set publisher
pkg set-publisher -g file:///software/site site

8. list your publishers
pkg publisher

9. add software packages to the new repository

mkdir /var/tmp/software
echo "a package" > /var/tmp/software/testpkg

eval $(pkgsend -s file:///software/site open testpkg@1.0.1)

(pkgsend open prints an "export PKG_TRANS_ID=..." line; eval-ing it sets the
transaction ID in your shell, for example:)
export PKG_TRANS_ID=1399221077_pkg%3A%2F%2Fsite%2Ftestpkg%401.0.1%2C5.11%3A20140504T163117Z

pkgsend -s file:///software/site add dir mode=0555 owner=root group=bin path=/export/testpkg

pkgsend -s file:///software/site add file /var/tmp/software/testpkg mode=0555 owner=root group=bin path=/export/testpkg/testpkg

pkgsend -s file:///software/site add set name=description value="testpkg"

pkgsend -s file:///software/site close
pkg://site/testpkg@1.0.1,5.11:20140504T163117Z
PUBLISHED

pkgrepo info -s file:///software/site
PUBLISHER PACKAGES STATUS UPDATED
site 1 online 2014-05-04T16:37:26.996595Z

pkg install testpkg
Packages to install: 1
Create boot environment: No
Create backup boot environment: No

DOWNLOAD PKGS FILES XFER (MB) SPEED
Completed 1/1 1/1 0.0/0.0 0B/s

PHASE ITEMS
Installing new actions 4/4
Updating package state database Done
Updating image state Done
Creating fast lookup database Done

pkg list testpkg
NAME (PUBLISHER) VERSION IFO
testpkg 1.0.1 i--
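
As an optional check, not part of the original listing, you can ask pkg what the package delivered and where it landed:

pkg info testpkg
pkg contents testpkg
ls /export/testpkg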

Posted in solaris | Leave a comment

solaris 11 change hostname and ip

1. change the identity:node

root@sol11-1:/tmp# svccfg
svc:> select system/identity
svc:/system/identity> select system/identity:node
svc:/system/identity:node> listprop config/nodename
config/nodename astring sol11-1
svc:/system/identity:node> setprop config/nodename="sol11"
svc:/system/identity:node> refresh
svc:/system/identity:node> exit

2. nwam should be disabled.

root@sol11-1:/tmp# svcs svc:/network/physical:nwam
STATE STIME FMRI
disabled Sep_22 svc:/network/physical:nwam

3. list the current address.

root@sol11-1:/tmp# ipadm show-addr
ADDROBJ TYPE STATE ADDR
lo0/v4 static ok 127.0.0.1/8
net0/v4 static ok 192.168.4.22/24
lo0/v6 static ok ::1/128
net0/v6 addrconf ok fe80::a00:27ff:febd:7496/10
root@vbox3:/tmp# ipadm delete-addr net0/v4
root@vbox3:/tmp# ipadm create-addr -T static -a 192.168.4.23/24 net0/v4
root@vbox3:/tmp# ipadm show-addr
ADDROBJ TYPE STATE ADDR
lo0/v4 static ok 127.0.0.1/8
net0/v4 static ok 192.168.4.23/24
lo0/v6 static ok ::1/128
net0/v6 addrconf ok fe80::a00:27ff:febd:7496/10
root@vbox3:/tmp# exit

Posted in solaris | Leave a comment

solaris 11 exercise ips (10) add second pkg server instance

In this exercise you will setup a second instance of the application/pkg/server service.

This means that you will have a second environment on the
same machine from which you can publish your own software
packages.

1. Add second instance.
# svccfg -s pkg/server add s11ReleaseRepo

2. Set the service's port to 10081; set the inst_root and set readonly to true.
# svccfg -s pkg/server:s11ReleaseRepo setprop pkg/port=10081
# svccfg -s pkg/server:s11ReleaseRepo setprop pkg/inst_root=/export/S11ReleaseRepo
# svccfg -s pkg/server:s11ReleaseRepo setprop pkg/readonly=true

3. Set the proxy_base and the number of threads.
# svccfg -s pkg/server:s11ReleaseRepo setprop pkg/proxy_base = astring: http://pkg.example.com/s11ReleaseRepo
# svccfg -s pkg/server:s11ReleaseRepo setprop pkg/threads = 200

4. Refresh and enable
# svcadm refresh application/pkg/server:s11ReleaseRepo
# svcadm enable application/pkg/server:s11ReleaseRepo
Listing 1. Configuring the Release Repository Service
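
A suggested verification (not part of the original listing), assuming the repository under /export/S11ReleaseRepo has already been created with pkgrepo create:

# svcs pkg/server:s11ReleaseRepo
# pkgrepo info -s /export/S11ReleaseRepo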

Posted in solaris | Leave a comment

solaris 11 ips repository

original link: blog

Posted in solaris | Leave a comment

solaris 11 exercise recover a lost rootpool

here

Posted in solaris | Leave a comment

solaris 11 exercise users

Log in to your system and switch user to root.

user1@solaris11-1:~$ su -
Password:
Oracle Corporation SunOS 5.11 11.0 November 2011
You have new mail.
root@solaris11-1:~#

1. Add a TERMINAL setting and alias to /etc/profile and test it.

root@solaris11-1:~# echo "export TERM=vt100" >> /etc/profile
root@solaris11-1:~# echo "alias c=clear" >> /etc/profile
root@solaris11-1:~# su - user1
Oracle Corporation SunOS 5.11 11.0 November 2011
user1@solaris11-1:~$ c

2. Add a testuser to your system.

root@solaris11-1:~# useradd -m -d /export/home/testuser testuser
80 blocks
root@solaris11-1:~#

3. Add an alias to the .profile of the testuser, and test it.

root@solaris11-1:~# echo "alias ll='ls -l'" >> /export/home/testuser/.profile
root@solaris11-1:~# su - testuser
Oracle Corporation SunOS 5.11 11.0 November 2011
testuser@solaris11-1:~$ ll
total 6
-rw-r--r-- 1 testuser staff 165 Apr 30 11:29 local.cshrc
-rw-r--r-- 1 testuser staff 170 Apr 30 11:29 local.login
-rw-r--r-- 1 testuser staff 130 Apr 30 11:29 local.profile
testuser@solaris11-1:~$ exit

4. Set user-quota for the testuser.

root@solaris11-1:~# zfs set quota=1M rpool/export/home/testuser
root@solaris11-1:~# zfs get quota rpool/export/home/testuser
NAME PROPERTY VALUE SOURCE
rpool/export/home/testuser quota 1M local
root@solaris11-1:~# zfs userspace rpool/export/home/testuser
TYPE NAME USED QUOTA
POSIX User root 1.50K none
POSIX User testuser 8K none
root@solaris11-1:~#
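
Note that the quota property above caps the whole dataset, which is why zfs userspace still shows QUOTA as none for both users. ZFS also supports per-user quotas; a sketch of that variant, using the same 1M limit:

root@solaris11-1:~# zfs set userquota@testuser=1m rpool/export/home/testuser
root@solaris11-1:~# zfs userspace rpool/export/home/testuser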

5. Switch user to the testuser and create a file to exceed the quota.

root@solaris11-1:~# su - testuser
testuser@solaris11-1:~$ mkfile 2m file1
file1: initialized 917504 of 2097152 bytes: Disc quota exceeded
testuser@solaris11-1:~$ exit

Posted in solaris | Leave a comment

solaris 11 exercise processes

Exercise Managing System Processes and Managing Tasks

1. Log in to your system and switch user to root.

user1@solaris11-1:~$ su -
Password:
Oracle Corporation SunOS 5.11 11.0 November 2011
You have new mail.
root@solaris11-1:~#

2. Use the ps command to view your processes.

root@solaris11-1:~# ps
PID TTY TIME CMD
1435 pts/1 0:00 ps
1431 pts/1 0:00 bash
1430 pts/1 0:00 su
root@solaris11-1:~#

What is your shell?
Is the 'ps' command also running?

3. Use the ps command to display all processes one page at a time.

root@solaris11-1:~# ps -ef|more
UID PID PPID C STIME TTY TIME CMD
root 0 0 0 10:44:42 ? 0:01 sched
root 5 0 0 10:44:41 ? 0:13 zpool-rpool
root 6 0 0 10:44:43 ? 0:00 kmem_task
root 1 0 0 10:44:43 ? 0:00 /usr/sbin/init
root 2 0 0 10:44:43 ? 0:00 pageout
root 3 0 0 10:44:43 ? 0:00 fsflush
root 7 0 0 10:44:43 ? 0:00 intrd
(snipped)

4. What does the prstat command do?

root@solaris11-1:~# prstat
prstat: failed to load terminal info, defaulting to -c option
Please wait...
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
939 root 38M 7596K run 59 0 0:00:00 0.2% pkg.depotd/64
1213 gdm 136M 27M sleep 59 0 0:00:00 0.1% gdm-simple-gree/1
1438 root 11M 3140K cpu0 59 0 0:00:00 0.1% prstat/1
911 root 31M 23M sleep 59 0 0:00:01 0.0% Xorg/3
1431 root 10M 2380K sleep 49 0 0:00:00 0.0% bash/1
1423 user1 18M 5704K run 59 0 0:00:00 0.0% sshd/1
45 netcfg 3784K 2764K sleep 59 0 0:00:00 0.0% netcfgd/4
129 root 13M 2796K sleep 59 0 0:00:00 0.0% syseventd/18
461 root 8860K 1180K sleep 59 0 0:00:00 0.0% cron/1
70 daemon 14M 4624K sleep 59 0 0:00:00 0.0% kcfd/3
1116 gdm 3768K 1396K sleep 59 0 0:00:00 0.0% dbus-launch/1
46 root 3868K 2560K sleep 59 0 0:00:00 0.0% dlmgmtd/6
118 root 2124K 1176K sleep 59 0 0:00:00 0.0% pfexecd/3
13 root 21M 20M sleep 59 0 0:00:12 0.0% svc.configd/47
11 root 21M 12M sleep 59 0 0:00:03 0.0% svc.startd/12
Total: 101 processes, 935 lwps, load averages: 0.05, 0.41, 0.25

PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP

5. Use prstat to view the highest CPU usage every 5 seconds 10 times.

root@solaris11-1:~# prstat -s cpu 5 10
prstat: failed to load terminal info, defaulting to -c option
Please wait...
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
939 root 38M 7596K sleep 59 0 0:00:00 0.2% pkg.depotd/64
1213 gdm 136M 27M sleep 59 0 0:00:00 0.1% gdm-simple-gree/1
911 root 31M 23M sleep 59 0 0:00:01 0.0% Xorg/3
1441 root 11M 3040K cpu0 59 0 0:00:00 0.0% prstat/1
(snipped)

6. What is the difference between -s and -S in the following command?

root@solaris11-1:~# prstat -S rss 2 2
prstat: failed to load terminal info, defaulting to -c option
Please wait...
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
697 root 0K 0K sleep 60 - 0:00:00 0.0% nfsd_kproc/2
542 root 0K 0K sleep 60 - 0:00:00 0.0% lockd_kproc/2
229 root 0K 0K sleep 99 -20 0:00:00 0.0% zpool-p1/136
248 root 0K 0K sleep 99 -20 0:00:00 0.0% zpool-software/136
533 root 0K 0K sleep 60 - 0:00:00 0.0% nfs4cbd_kproc/2
181 root 0K 0K sleep 99 -20 0:00:00 0.0% zpool-kanweg/136

root@solaris11-1:~# prstat -s rss 2 2
prstat: failed to load terminal info, defaulting to -c option
Please wait...
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
1206 gdm 141M 31M sleep 59 0 0:00:00 0.0% gnome-settings-/1
1213 gdm 136M 27M sleep 59 0 0:00:01 0.1% gdm-simple-gree/1
911 root 31M 23M sleep 59 0 0:00:01 0.0% Xorg/3
13 root 21M 20M sleep 59 0 0:00:12 0.0% svc.configd/47
1212 gdm 129M 18M sleep 59 0 0:00:00 0.0% gnome-power-man/1

7. Check whether sendmail is running.

root@solaris11-1:~# pgrep -l sendmail
1416 sendmail
1418 sendmail

8. Kill sendmail.

root@solaris11-1:~# pkill sendmail
root@solaris11-1:~# pgrep -l sendmail
1464 sendmail
1460 sendmail
root@solaris11-1:~#

Note that sendmail comes straight back with new PIDs: it runs under SMF
(svc:/network/smtp:sendmail), and the restarter starts it again automatically.
To stop it for good you would disable the service with svcadm instead.

9. Start a program in the background that runs for 10000 seconds.

root@solaris11-1:~# sleep 10000 &
[1] 1471

10. Suspend the sleep program, then resume it in the background.

root@solaris11-1:~# kill -STOP %1
root@solaris11-1:~# jobs
[1]+ Stopped sleep 10000
root@solaris11-1:~# bg %1
[1]+ sleep 10000 &
root@solaris11-1:~#

11. List your scheduled tasks.

root@solaris11-1:~# crontab -l
#ident "%Z%%M% %I% %E% SMI"
#
# Copyright 2007 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
# The root crontab should be used to perform accounting data collection.
#
#
10 3 * * * /usr/sbin/logadm
15 3 * * 0 [ -x /usr/lib/fs/nfs/nfsfind ] && /usr/lib/fs/nfs/nfsfind
30 3 * * * [ -x /usr/lib/gss/gsscred_clean ] && /usr/lib/gss/gsscred_clean
30 0,9,12,18,21 * * * /usr/lib/update-manager/update-refresh.sh
root@solaris11-1:~#

12. Create a script that sleeps for 30 seconds and add it to your tasks. The
script should run every minute.

root@solaris11-1:~# echo "/usr/bin/sleep 30&" > /usr/bin/sleeper
root@solaris11-1:~# chmod +x /usr/bin/sleeper
#ident "%Z%%M% %I% %E% SMI"
#
# Copyright 2007 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
# The root crontab should be used to perform accounting data collection.
#
#
10 3 * * * /usr/sbin/logadm
15 3 * * 0 [ -x /usr/lib/fs/nfs/nfsfind ] && /usr/lib/fs/nfs/nfsfind
30 3 * * * [ -x /usr/lib/gss/gsscred_clean ] && /usr/lib/gss/gsscred_clean
30 0,9,12,18,21 * * * /usr/lib/update-manager/update-refresh.sh
* * * * * /usr/bin/sleeper
:wq!

Check whether the task gets started.
root@solaris11-1:~# pgrep -lf sleep
1513 /usr/bin/sleep 30

13. Configure cron.deny so that user1 (or any other user) is not allowed
to use cron, and test it.

root@solaris11-1:~# echo "user1" >> /etc/cron.d/cron.deny
root@solaris11-1:~# su - user1
Oracle Corporation SunOS 5.11 11.0 November 2011
user1@solaris11-1:~$ crontab -e
crontab: you are not authorized to use cron. Sorry.
user1@solaris11-1:~$ exit

14. Where are the cron files stored?

root@solaris11-1:~# ls /var/spool/cron/crontabs
adm root root.au sys

Posted in solaris | Leave a comment

solaris 11 exercise smf (2) add smf-service

In this exercise you will create a new service.
It is a dummy service that only sleeps for a long
time and is started from a script. The script will
be the start-method that you supply in the service
manifest file.

1. Login to your solaris 11 vm and switch to the root user.
-bash-4.1$ su -
Password: e1car0
Oracle Corporation SunOS 5.11 11.0 November 2011
You have new mail.
root@solaris11-1:~#

2. Change directory to /var/svc/manifest/site
root@solaris11-1:/# cd /var/svc/manifest/site
root@solaris11-1:/var/svc/manifest/site#

3. Export the cron-service to mysvc.xml
root@solaris11-1:/var/svc/manifest/site# svccfg export cron > mysvc.xml

4. Make the following changes to mysvc.xml
old: service name='system/cron' type='service' version='0'
new: service name='site/mysvc' type='service' version='0'

old: dependency and dependent lines
new: (removed)

old: exec_method name='start' type='method' exec='/lib/svc/method/svc-cron' timeout_seconds='60'
new: exec_method name='start' type='method' exec='/lib/svc/method/mysvc' timeout_seconds='60'

5. Create a script called mysvc is /lib/svc/method and make it executable.
root@solaris11-1:~# echo "sleep 10000&" > /lib/svc/method/mysvc
root@solaris11-1:~# chmod +x /lib/svc/method/mysvc

6. Restart the manifest-import service.
root@solaris11-1:~# svcadm restart svc:/system/manifest-import

7. Check the status of your service
root@solaris11-1:~# svcs mysvc
STATE STIME FMRI
online 14:39:16 svc:/site/mysvc:default
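
Two optional checks (editor's suggestion, not part of the original exercise): validate the edited manifest before the import, and confirm the start method really launched the sleeper:

root@solaris11-1:~# svccfg validate /var/svc/manifest/site/mysvc.xml
root@solaris11-1:~# pgrep -fl "sleep 10000"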

Posted in solaris | 1 Comment

solaris 11 exercise smf (1) administering services

In this exercise you will work with dependencies between
services. You will check the status of cron, and check
its dependencies. Then you will disable a service that
cron depends on and see how that reflects on cron.

user1@solaris11-1:~$ su -
Password:
Oracle Corporation SunOS 5.11 11.0 November 2011
You have new mail.

root@solaris11-1:~# pgrep -fl cron
466 /usr/sbin/cron

root@solaris11-1:~# svcs cron
STATE STIME FMRI
online 11:30:29 svc:/system/cron:default

root@solaris11-1:~# svcs -p cron
STATE STIME FMRI
online 11:30:29 svc:/system/cron:default
11:30:29 466 cron
root@solaris11-1:~#

root@solaris11-1:~# svcs -d cron
STATE STIME FMRI
online 11:30:25 svc:/milestone/name-services:default
online 11:30:29 svc:/system/filesystem/local:default
root@solaris11-1:~#

root@solaris11-1:~# svcs -D cron
STATE STIME FMRI
online 11:30:42 svc:/milestone/multi-user:default
root@solaris11-1:~#

root@solaris11-1:~# svcadm disable name-services
root@solaris11-1:~# svcs -d cron
STATE STIME FMRI
disabled 20:22:45 svc:/milestone/name-services:default
online 11:30:29 svc:/system/filesystem/local:default

root@solaris11-1:~# svcs -p cron
STATE STIME FMRI
online 11:30:29 svc:/system/cron:default
11:30:29 466 cron

root@solaris11-1:~# svcadm refresh cron

root@solaris11-1:~# svcs -p cron
STATE STIME FMRI
online 20:23:13 svc:/system/cron:default
11:30:29 466 cron

root@solaris11-1:~# svcadm disable cron
root@solaris11-1:~# svcadm enable cron

root@solaris11-1:~# svcs -p cron
STATE STIME FMRI
offline 20:23:40 svc:/system/cron:default
root@solaris11-1:~#
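
Cron ends up offline rather than online because its name-services dependency is still disabled; enabling cron only leaves it waiting on that dependency. Re-enabling the milestone brings it back (a suggested follow-up, not part of the original capture):

root@solaris11-1:~# svcadm enable name-services
root@solaris11-1:~# svcs -p cron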

Posted in solaris | Leave a comment

clustermode ports failovergroups interfaces roles firewall-policy

show image

Posted in Uncategorized | Leave a comment

ceph

ceph

Posted in Uncategorized | Leave a comment

solaris 11 exercise zones (1)

1. The zones will have /software as root.

# df -h | grep software
software 20G 33K 16G 1% /software

2. Create a vnic for a new zone.
# dladm show-phys
LINK MEDIA STATE SPEED DUPLEX DEVICE
net0 Ethernet up 1000 full e1000g0
net1 Ethernet unknown 0 unknown e1000g1
net2 Ethernet unknown 0 unknown e1000g2
net3 Ethernet unknown 0 unknown e1000g3
# dladm create-vnic vnic20 -l net3

3. Create a zone called zone20
# zonecfg -z zone20
zone20: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zone20> create
create: Using system default template 'SYSdefault'
zonecfg:zone20> set zonepath=/software/zone20
zonecfg:zone20> add net
zonecfg:zone20:net> set physical=vnic20
zonecfg:zone20:net> end
zonecfg:zone20> commit
zonecfg:zone20> exit
zoneadm -z zone20 install
(wait...)

Boot the zone and login to the console.

# zoneadm -z zone20 boot
# zlogin -C zone20
(in the sysidtool select manual network configuration and
select vnic20)
use 192.168.0.20 as the IP-Address.
use the default netmask
use router 192.168.0.1

DNS server: 192.168.4.1

root password : e1car0

4. Migrate a Solaris 10 zone to Solaris 11.
You will use a prepared cpio file from a Solaris 10 VM
to host a solaris 10 zone on Solaris 11.

Use the automounter to access the preconfigured file.
# cd /net/192.168.4.159/zones
# ls
zone10 zone10.cpio.gz

Create a new zone.
# zonecfg -z zone10
zone10: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zone10> create -t SYSsolaris10
zonecfg:zone10> set zonepath=/zones/zone1
zonecfg:zone10> set autoboot=true
zonecfg:zone10> select anet linkname=net0
zonecfg:zone10:anet> set allowed-address=192.168.0.30/24
zonecfg:zone10:anet> set configure-allowed-address=true
zonecfg:zone10:anet> end
zonecfg:zone10> set hostid=2ee3a870
zonecfg:zone10> verify
zonecfg:zone10> commit
zonecfg:zone10> exit

Attach the cpio file to the new zone
# zoneadm -z zone10 attach -a /net/192.168.4.159/zones/zone10.cpio.gz
A ZFS file system has been created for this zone.
Progress being logged to /var/log/zones/zoneadm.20140301T180221Z.zone10.attach
Log File: /var/log/zones/zoneadm.20140301T180221Z.zone10.attach
Attaching...
Installing: This may take several minutes...

# zoneadm -z zone10 boot

5. Delegate zonemanagement of zone3 to user peter.

# zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
2 zone3 running /rpool/zones/zone3 solaris excl

# zonecfg -z zone3
zonecfg:zone3> add admin
zonecfg:zone3:admin> set user=peter
zonecfg:zone3:admin> set auths=manage
zonecfg:zone3:admin> end
zonecfg:zone3> commit
zonecfg:zone3> exit

# su - peter
# pfexec bash
# zlogin zone3

[Connected to zone 'zone3' pts/7]
Oracle Corporation SunOS 5.11 11.0 August 2012
root@zone3:~# exit

# zoneadm -z zone3 halt

note: another way of setting authorizations
# usermod -P+"Zone Management" -A+solaris.zone.manage/zone1 peter
# usermod -A+solaris.zone.login/zone2 peter

note: use pfexec bash to test because bash is not RBAC aware.

Optional exercise.
6. Create an additional zone called zone21.
The zone will have two vnic interfaces, vnic30 and vnic31.
Vnic30 will be connected to an etherstub. Vnic31 will be
connected to net0. (A possible command sketch follows the diagram below.)

Zone20 has one vnic called vnic20. This vnic will also be
connected to the etherstub. Zone20 will use zone21
as a router.

|zone20-vnic20 | 192.168.0.0 | vnic30-zone21-vnic31 | 192.168.4.0 |
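
One possible way to wire up the optional exercise (a sketch only; the etherstub name stub0 is made up, and the forwarding step assumes zone21 is meant to route between the 192.168.0.0 and 192.168.4.0 networks):

(in the global zone, with zone20 halted, move vnic20 onto a new etherstub)
# dladm create-etherstub stub0
# dladm delete-vnic vnic20
# dladm create-vnic -l stub0 vnic20
# dladm create-vnic -l stub0 vnic30
# dladm create-vnic -l net0 vnic31

(assign vnic30 and vnic31 to zone21 with zonecfg, then enable routing inside zone21)
root@zone21:~# ipadm set-prop -p forwarding=on ipv4

(zone20 then uses zone21's vnic30 address, e.g. 192.168.0.1, as its default router)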

Posted in solaris | Leave a comment

clustermode SFO

In CMode, when a failover or takeover has taken place, the root-aggregate
of the partnernode is owned by the surviving partner.

How to get to the rootvolume of the partner's root-aggregate?

1. Log in to the systemshell.
2. Run the command 'mount_partner'

The root-volume of the partner is then mounted on /partner

(done)

Posted in netapp | Leave a comment

clustermode switch to switchless

view pdf

Posted in netapp, Uncategorized | Leave a comment

netapp hwassist

There are specific Data ONTAP commands for configuring the hardware-assisted takeover feature.

If you want to...                                    Use this command...
---------------------------------------------------  --------------------------------------------------------
Disable or enable hardware-assisted takeover          storage failover modify -hwassist
Set the partner address                               storage failover modify -hwassist-partner-ip
Set the partner port                                  storage failover modify -hwassist-partner-port
Specify the interval between heartbeats               storage failover modify -hwassist-health-check-interval
Specify how many times takeover alerts are sent       storage failover modify -hwassist-retry-count
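
For example, enabling hardware-assisted takeover and pointing it at the partner's service processor address might look like this (a sketch; the node name and IP address are placeholders):

storage failover modify -node cl1-01 -hwassist true -hwassist-partner-ip 192.168.4.112
storage failover hwassist show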

Posted in netapp | Leave a comment

cdot snapvault example

setting up snapvault cmode example

1. create vserverA and vserverB
cl1::> vserver create vserverA -rootvolume roota -aggregate aggr1_n1 -ns-switch file -nm-switch file -rootvolume-security-style unix

cl1::> vserver create vserverB -rootvolume rootb -aggregate aggr1_n2 -ns-switch file -nm-switch file -rootvolume-security-style unix

for vserverA
2. create a source datavolume in vserverA
cl1::> vol create -vserver vserverA -volume datasource -aggregate aggr1_n1 -size 100m -state online

3. create a destination datavolume in vserverB with type DP
cl1::> vol create -vserver vserverB -volume datadest -aggregate aggr1_n2 -size 100m -type DP

Using the 5min schedule for snapshots on the source

Create new snapshot-policy for vserverA with the 5min schedule and a count of 2.
Also specify the snapmirror-label for the snapshots.
4. cl1::> volume snapshot policy create -vserver vserverA -policy 5minpol -enabled true -schedule1 5min -count1 2 -prefix1 5min -snapmirror-label1 vserverA-vault-5min

Connect snapshot-policy to the vserverA datasource volume.
5. cl1::> volume modify -vserver vserverA -volume datasource -snapshot-policy 5minpol

for vserverB.
Create snapmirror policy and rule
6. cl1::> snapmirror policy create -vserver vserverB -policy vserverA-vault
7. cl1::> snapmirror policy add-rule -vserver vserverB -policy vserverA-vault -snapmirror-label vserverA-vault-5min -keep 40

Setup a peer relationship
8. cl1::> vserver peer create -vserver vserverA -peer-vserver vserverB -applications snapmirror

Create the snapmirror relation
9. cl1::> snapmirror create -source-path vserverA:datasource -destination-path vserverB:datadest -type XDP -policy vserverA-vault -schedule 5min

Initialize.
10. cl1::> snapmirror initialize -destination-path vserverB:datadest

Monitor:
View the lag time.
cl1::> snapmirror show -destination-path vserverB:datadest -field lag

View the relationship
cl1::> snapmirror show

View the snapshots
cl1::> snapshot show -vserver vserverA -volume datasource -instance
cl1::> snapshot show -vserver vserverB -volume datadest
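
To pull data back out of the vault you would reverse the path with snapmirror restore; a sketch, assuming the relationship created above (check the exact options for your ONTAP release):

cl1::> snapmirror restore -source-path vserverB:datadest -destination-path vserverA:datasource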

Posted in netapp | Leave a comment

solaris 11 exercise zfs (1)

Basic operations.

Your machine has 3 disks of 300MB
If your machine has no available disks you
can create 3 files of 300MB in the /dev/dsk
directory and use those.
Perform the following 3 commands only if your
machine has no available disks.
# mkfile 300m /dev/dsk/disk1
# mkfile 300m /dev/dsk/disk2
# mkfile 300m /dev/dsk/disk3

You will perform the following basic operations:
1. Create a mirrored zpool of 2 disks and 1 spare.
2. Create 2 zfs filesystems in the pool.
3. Set quota and reservations.
4. Rename the pool by exporting and importing.

Locate the 3 disks first with the format command.

1. Create a mirrored zpool of 2 disks with 1 spare.
# zpool create pool1 mirror c4t1d0 c4t3d0 spare c4t4d0
# zpool status pool1
pool: pool1
state: ONLINE
scan: none requested
config:

NAME STATE READ WRITE CKSUM
pool1 ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c4t1d0 ONLINE 0 0 0
c4t3d0 ONLINE 0 0 0
spares
c4t4d0 AVAIL

errors: No known data errors

2. Create 2 zfs filesystems in the pool.
# zfs create pool1/fs1
# zfs create pool1/fs2

3. Set quota and reservations
# zfs set quota=100m pool1/fs1
# zfs set reservation=100m pool1/fs1

check the available space in the pool

# df -h |grep pool1
pool1 254M 33K 154M 1% /pool1
pool1/fs1 100M 31K 100M 1% /pool1/fs1
pool1/fs2 254M 31K 154M 1% /pool1/fs2

4. Rename the pool.
# zpool export pool1
# zpool import pool1 newpool

Shadow Migration.
user3 and user4 work together.

user3:
create a zfs filesystem and put some files in it and
share it with NFS.

# zfs create software/source
# cp -r /var/adm/* /software/source/
# share -F nfs -o ro /software/source
# dfshares
RESOURCE SERVER ACCESS TRANSPORT
solaris11-3:/software/source solaris11-3 - -

user4:
Check whether shadow-migration is installed.
# pkg list shadow-migration
NAME (PUBLISHER) VERSION IFO
system/file-system/shadow-migration 0.5.11-0.175.0.0.0.2.1 i--

# svcadm enable shadowd
# svcs shadowd
STATE STIME FMRI
online 21:42:19 svc:/system/filesystem/shadowd:default

Create a zfs filesystem and mount it with the shadow argument.

# zfs create -o shadow=nfs://192.168.4.153/software/source \
software/destination

# ls /software/destination
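
The directory fills up as the data is migrated in the background. To monitor the
progress of the migration you can use shadowstat (from the shadow-migration package):

# shadowstat
(output skipped)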

Replication.

setup simple replication.
user3 and user4 work together.
# zfs create software/replisource
# zfs snapshot software/replisource@snap1
# zfs send software/replisource@snap1 | ssh root@192.168.4.153 \
zfs receive software/replidest

setup automatic replication.
user3 and user4 work together.

1. setup automatic rootlogin at the destination.
user4:
# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
26:d7:aa:4e:50:6d:26:ac:78:e8:56:d7:27:d9:de:29 root@solaris11-4

2. copy the new public key to the destination server.
# cd $HOME/.ssh
# scp id_rsa.pub root@192.168.4.153:/root/.ssh/authorized_keys
Password: ******
id_rsa.pub 100% |******************************************************************| 398 00:00

3. Create a simple script to automate the replication.
- source filesystem is software/replisource
- destination filesystem is software/replidest
- atime should be set to off on the destination

user4:
=============================================
#!/usr/bin/bash
# create the baseline transfer
zfs snapshot software/replisource@snap0
zfs send software/replisource@snap0 | ssh root@192.168.4.153 zfs recv -F software/replidest
# switch off atime on the destination
ssh root@192.168.4.153 zfs set atime=off software/replidest

# loop to send incrementals
while true
do

    # manage destination snaps
    echo manage destination snaps
    ssh root@192.168.4.153 zfs destroy software/replidest@snap1
    ssh root@192.168.4.153 zfs rename software/replidest@snap0 software/replidest@snap1
    echo done

    # manage local snaps
    echo manage source snaps
    zfs destroy software/replisource@snap1
    zfs rename software/replisource@snap0 software/replisource@snap1
    zfs snapshot software/replisource@snap0
    echo done

    # incremental replication
    echo increment
    zfs send -i software/replisource@snap1 software/replisource@snap0 | ssh root@192.168.4.153 zfs receive -F software/replidest
    echo done
    sleep 5
done
=============================================

Run the script.
# chmod +x repli.sh
# ./repli.sh

Now add files to the source filesystem and read the contents of the
destination to check the replication.
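
For example (the filename is illustrative):

On the source machine:
# touch /software/replisource/testfile

On the destination machine, after a replication cycle or two:
# ls /software/replidest
testfile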

Posted in solaris | Leave a comment

netapp clustered ontap 8.2.1RC

CLUSTERED SIMULATE ONTAP LICENSES
+++++++++++++++++++++++++++++++++

These are the licenses that you use with the clustered Data ONTAP version
of Simulate ONTAP to enable Data ONTAP features.

There are four groups of licenses in this file:

- cluster base license
- feature licenses for the ESX build
- feature licenses for the non-ESX build
- feature licenses for the second node of a 2-node cluster

Cluster Base License (Serial Number 1-80-000008)
================================================

You use the cluster base license when setting up the first simulator in a cluster.

Cluster Base license = SMKQROWJNQYQSDAAAAAAAAAAAAAA

Clustered Data ONTAP Feature Licenses
=====================================

You use the feature licenses to enable unique Data ONTAP features on your simulator.

Licenses for the ESX build (Serial Number 4082367724)
-----------------------------------------------------

Use these licenses with the VMware ESX build.

Feature License Code Description
------------------- ---------------------------- --------------------------------------------

CIFS CEASNGAUTXKZOFABGAAAAAAAAAAA CIFS protocol
FCP ATVVOGAUTXKZOFABGAAAAAAAAAAA Fibre Channel Protocol
FlexClone WWMDRGAUTXKZOFABGAAAAAAAAAAA FlexClone
Insight_Balance CKXDVGAUTXKZOFABGAAAAAAAAAAA OnCommand Insight and Balance products
iSCSI OYXGOGAUTXKZOFABGAAAAAAAAAAA iSCSI protocol
NFS QJCDNGAUTXKZOFABGAAAAAAAAAAA NFS protocol
SnapLock ULIHSGAUTXKZOFABGAAAAAAAAAAA SnapLock Compliance
SnapLock_Enterprise QPZOUGAUTXKZOFABGAAAAAAAAAAA SnapLock Enterprise
SnapManager GGGWSGAUTXKZOFABGAAAAAAAAAAA SnapManager and SnapDrive products
SnapMirror KCPOQGAUTXKZOFABGAAAAAAAAAAA SnapMirror, including synchronous SnapMirror
SnapProtect SAELTGAUTXKZOFABGAAAAAAAAAAA SnapProtect Applications
SnapRestore YHRZPGAUTXKZOFABGAAAAAAAAAAA SnapRestore
SnapVault IRKSRGAUTXKZOFABGAAAAAAAAAAA SnapVault primary and secondary

Licenses for the non-ESX build (Serial Number 4082367722)
---------------------------------------------------------

Use these licenses with the VMware Workstation, VMware Player, and VMware Fusion build.

Feature License Code Description
------------------- ---------------------------- --------------------------------------------

CIFS APLPKUDPDUEVQEABGAAAAAAAAAAA CIFS protocol
FCP YDHTLUDPDUEVQEABGAAAAAAAAAAA Fibre Channel Protocol
FlexClone UHYAOUDPDUEVQEABGAAAAAAAAAAA FlexClone
Insight_Balance AVIBSUDPDUEVQEABGAAAAAAAAAAA OnCommand Insight and Balance products
iSCSI MJJELUDPDUEVQEABGAAAAAAAAAAA iSCSI protocol
NFS OUNAKUDPDUEVQEABGAAAAAAAAAAA NFS protocol
SnapLock SWTEPUDPDUEVQEABGAAAAAAAAAAA SnapLock Compliance
SnapLock_Enterprise OALMRUDPDUEVQEABGAAAAAAAAAAA SnapLock Enterprise
SnapManager ERRTPUDPDUEVQEABGAAAAAAAAAAA SnapManager and SnapDrive products
SnapMirror INAMNUDPDUEVQEABGAAAAAAAAAAA SnapMirror, including synchronous SnapMirror
SnapProtect QLPIQUDPDUEVQEABGAAAAAAAAAAA SnapProtect Applications
SnapRestore WSCXMUDPDUEVQEABGAAAAAAAAAAA SnapRestore
SnapVault GCWPOUDPDUEVQEABGAAAAAAAAAAA SnapVault primary and secondary

Licenses for the second node in a cluster (Serial Number 4034389062)
--------------------------------------------------------------------

Use these licenses with the second simulator in a cluster (either the ESX or non-ESX build).

Feature License Code Description
------------------- ---------------------------- --------------------------------------------

CIFS MHEYKUNFXMSMUCEZFAAAAAAAAAAA CIFS protocol
FCP KWZBMUNFXMSMUCEZFAAAAAAAAAAA Fibre Channel Protocol
FlexClone GARJOUNFXMSMUCEZFAAAAAAAAAAA FlexClone
Insight_Balance MNBKSUNFXMSMUCEZFAAAAAAAAAAA OnCommand Insight and Balance products
iSCSI YBCNLUNFXMSMUCEZFAAAAAAAAAAA iSCSI protocol
NFS ANGJKUNFXMSMUCEZFAAAAAAAAAAA NFS protocol
SnapLock EPMNPUNFXMSMUCEZFAAAAAAAAAAA SnapLock Compliance
SnapLock_Enterprise ATDVRUNFXMSMUCEZFAAAAAAAAAAA SnapLock Enterprise
SnapManager QJKCQUNFXMSMUCEZFAAAAAAAAAAA SnapManager and SnapDrive products
SnapMirror UFTUNUNFXMSMUCEZFAAAAAAAAAAA SnapMirror, including synchronous SnapMirror
SnapProtect CEIRQUNFXMSMUCEZFAAAAAAAAAAA SnapProtect Applications
SnapRestore ILVFNUNFXMSMUCEZFAAAAAAAAAAA SnapRestore
SnapVault SUOYOUNFXMSMUCEZFAAAAAAAAAAA SnapVault primary and secondary

Posted in netapp | Leave a comment

solaris 11 exercise ip (1)

Your machine has 4 network interfaces.
Adapter 1 is configured up.

1. View the dladm subcommands.
# dladm
usage: dladm ...
rename-link
(output skipped)

2. View your physical network interfaces.
# dladm show-phys
LINK MEDIA STATE SPEED DUPLEX DEVICE
net1 Ethernet unknown 1000 full e1000g1
net2 Ethernet unknown 0 unknown e1000g2
net0 Ethernet up 1000 full e1000g0
net3 Ethernet unknown 0 unknown e1000g3

View your links.
# dladm show-link
LINK CLASS MTU STATE OVER
net1 phys 1500 unknown --
net2 phys 1500 unknown --
net0 phys 1500 up --
net3 phys 1500 unknown --
vnic1 vnic 1500 up net0
zone1/vnic1 vnic 1500 up net0
zone1/net0 vnic 1500 up net0

View your interfaces
# ipadm show-if
IFNAME CLASS STATE ACTIVE OVER
lo0 loopback ok yes --
net0 ip ok yes --

View your addresses
# ipadm show-addr
ADDROBJ TYPE STATE ADDR
lo0/v4 static ok 127.0.0.1/8
net0/v4 static ok 192.168.4.154/24
lo0/v6 static ok ::1/128
net0/v6 addrconf ok fe80::250:56ff:fe96:d471/10

How is your network service configured?
# svcs network/physical
STATE STIME FMRI
disabled 17:34:14 svc:/network/physical:nwam
online 17:34:19 svc:/network/physical:upgrade
online 17:34:21 svc:/network/physical:default

3. Pick a free physical interface to create an ip-interface.
# ipadm create-ip net1
# ipadm show-if|grep net1
net1 ip down no --

Configure an ip-address on the new interface.
Make sure you select a unique address in your network.
# ipadm create-addr -T static -a 192.168.4.140/24 net1/v4

# ipadm show-addr
ADDROBJ TYPE STATE ADDR
lo0/v4 static ok 127.0.0.1/8
net0/v4 static ok 192.168.4.154/24
net1/v4 static ok 192.168.4.140/24
lo0/v6 static ok ::1/128
net0/v6 addrconf ok fe80::250:56ff:fe96:d471/10

Down the interface and remove it.
# ipadm down-addr net1/v4
# ipadm delete-ip net1

4. Create a vnic, then create an ip-interface and an address on it.
# dladm create-vnic -l net1 vnic10
# ipadm create-ip vnic10
# ipadm create-addr -T static -a 192.168.4.140/24 vnic10/v4
# ipadm down-addr vnic10/v4
# ipadm delete-ip vnic10
# dladm delete-vnic vnic10

5. Setup Link Aggregation.
Use 2 of the available physical interfaces to create an aggregate.
# dladm create-aggr -l net2 -l net3 aggr1
# dladm show-aggr
LINK POLICY ADDRPOLICY LACPACTIVITY LACPTIMER FLAGS
aggr1 L4 auto off short -----

Setup an ip-address on the aggregate.
# ipadm create-ip aggr1
# ipadm create-addr -T static -a 192.168.4.140/24 aggr1/v4aggr
# ipadm show-if|grep aggr1
aggr1 ip ok yes --

Remove the link-aggregation
# ipadm down-addr aggr1/v4aggr
# ipadm delete-addr aggr1/v4aggr
# ipadm delete-ip aggr1
# dladm delete-aggr aggr1

6. Create an etherstub and connect vnics to the stub.
These vnics could be connected to zones.

# dladm create-etherstub stub0
# dladm create-vnic -l stub0 vnic5
# dladm create-vnic -l stub0 vnic10
# dladm create-vnic -l stub0 vnic0
# dladm show-vnic
LINK OVER SPEED MACADDRESS MACADDRTYPE VID
vnic1 net0 1000 2:8:20:25:26:38 random 0
zone1/vnic1 net0 1000 2:8:20:25:26:38 random 0
zone1/net0 net0 1000 2:8:20:80:87:6 random 0
vnic0 stub0 0 2:8:20:46:6a:77 random 0
vnic5 stub0 0 2:8:20:9b:e1:3b random 0
vnic10 stub0 0 2:8:20:7:6e:5d random 0

7. Setup two zones to use vnic5 and vnic10
(setup zone5 and zone6 and set the zonepath in /software)

# zonecfg -z zone5
zonecfg:zone5> add net
zonecfg:zone5:net> set physical=vnic5
zonecfg:zone5:net> end
zonecfg:zone5> commit
zonecfg:zone5> exit

Repeat this for zone6

Login to zone5 and setup an ip-address
# zlogin zone5
# dladm show-link
LINK CLASS MTU STATE OVER
vnic5 vnic 9000 up ?
# ipadm create-ip vnic5
# ipadm create-addr -T static -a 192.168.4.140/24 vnic5/v4

Repeat this in zone6, using a different ip-address (for example 192.168.4.141).

Login to zone5 and ping the zone6 ip-address.
Can you also ping your global zone?

8. Optional exercise: NWAM

In this exercise you will create a classroom ncp with a physical
interface and an ip-address.

1. Create a new ncp called classncp

root@solaris11-1:~# netcfg
netcfg> create ncp classncp
netcfg:ncp:classncp> create ncu phys net1
Created ncu 'net1'. Walking properties ...
activation-mode (manual) [manual|prioritized]> manual
link-mac-addr> (press Enter)
link-autopush> (press Enter)
link-mtu> (press Enter)
netcfg:ncp:classncp:ncu:net1> end
Committed changes
netcfg:ncp:classncp> create ncu ip net1
Created ncu 'net1'. Walking properties ...
ip-version (ipv4,ipv6) [ipv4|ipv6]> ipv4
ipv4-addrsrc (dhcp) [dhcp|static]> static
ipv4-addr> 192.168.4.110 (Note, ask your instructor for the ipaddress)
ipv4-default-route> (press Enter)
netcfg:ncp:classncp:ncu:net1> end
Committed changes
netcfg:ncp:classncp> end
netcfg> end
root@solaris11-1:~#

2. Enable the new ncp

root@solaris11-1:~# netadm enable -p ncp classncp

Do you still have connection to your system?

Posted in solaris | Leave a comment

solaris 11 exercise bootenvironments (1)

A boot environment is a bootable Oracle Solaris environment consisting of a root dataset and, optionally,
other datasets mounted underneath it. Exactly one boot environment can be active at a time.

A dataset is a generic name for ZFS entities such as clones, file systems, or snapshots. In the context
of boot environment administration, the dataset more specifically refers to the file
system specifications for a particular boot environment or snapshot.

A snapshot is a read-only image of a dataset or boot environment at a given point in time.
A snapshot is not bootable.

A clone of a boot environment is created by copying another boot environment. A clone is bootable.

Shared datasets are user-defined directories, such as /export, that contain the same mount point
in both the active and inactive boot environments. Shared datasets are located outside the root
dataset area of each boot environment.

Note - A clone of the boot environment includes everything hierarchically under the main root
dataset of the original boot environment. Shared datasets are not under the root dataset and are
not cloned. Instead, the boot environment accesses the original, shared dataset.

A boot environment's critical datasets are included within the root dataset area for that environment.

1. Create a new bootenvironment
# beadm create newbe
# beadm list
BE Active Mountpoint Space Policy Created
-- ------ ---------- ----- ------ -------
newbe - - 226.0K static 2014-02-17 17:20
solaris NR / 6.40G static 2014-02-16 14:02

2. View the new bootenvironment filesystems
# zfs list
(output skipped)
rpool/ROOT/newbe 142K 12.5G 3.35G /
rpool/ROOT/newbe/var 84K 12.5G 221M /var
(output skipped)

3. Activate the new bootenvironment
# beadm activate newbe
# beadm list
BE Active Mountpoint Space Policy Created
-- ------ ---------- ----- ------ -------
newbe R - 6.40G static 2014-02-17 17:20
solaris N / 41.0K static 2014-02-16 14:02

4. Mount the new bootenvironment.
# beadm mount newbe /newbe
Add a file to the mounted bootenvironment and reboot your system.
# touch /newbe/added
# shutdown -y -g0 -i6
Log in and check the file that you added.

5. Activate the original bootenvironment.
# beadm activate solaris
# shutdown -y -g0 -i6
Log in and destroy the new bootenvironment.
# beadm destroy newbe
Are you sure you want to destroy newbe? This action cannot be undone(y/[n]): y

6. Before installing a package, you can have a backup bootenvironment created.
First remove the telnet package and then install it again.
# pkg uninstall pkg://solaris/network/telnet
# pkg install --require-backup-be pkg://solaris/network/telnet
# beadm list | grep backup
solaris-backup-1 - - 145.0K static 2014-02-16 16:18

7. If there is no zone yet, create one.
# zonecfg -z zone1
zone1: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zone1> create
create: Using system default template 'SYSdefault'
zonecfg:zone1> set zonepath=/software/zone1
zonecfg:zone1> exit
# zoneadm -z zone1 install
A ZFS file system has been created for this zone.
Progress being logged to /var/log/zones/zoneadm.20140217T224151Z.zone1.install
Image: Preparing at /software/zone1/root.

8. Create a bootenvironment in the zone.
# zlogin zone1
# beadm create newbe
# beadm list
BE Active Mountpoint Space Policy Created
-- ------ ---------- ----- ------ -------
newbe - - 58.0K static 2014-02-17 22:30
solaris NR / 421.67M static 2014-02-17 17:08
# exit

Log in using the console of the zone.
# zlogin -C zone1
solaris console login: root
Password: *******

Change the rootpassword, then activate the new bootenvironment and reboot.
# passwd
New Password:
Re-enter new Password:
# beadm activate newbe
# reboot

Can you use the new rootpassword to login?

Posted in solaris | Leave a comment

solaris 11 exercise ips (1)

1. Log in to your machine.
ssh user1@192.168.4.151

switch to root
# sudo bash
Password: e1car0

2. Your machine has the Full Repository iso mounted.
# df -h | grep media
/dev/dsk/c3t0d0s2 6.8G 6.8G 0K 100% /media/SOL_11_1_REPO_FULL

Check the available disks.
# format
AVAILABLE DISK SELECTIONS:
0. c4t0d0
/pci@0,0/pci15ad,1976@10/sd@0,0
1. c4t2d0
/pci@0,0/pci15ad,1976@10/sd@2,0

Use disk c4t2d0 (for example) to create a zpool for your repository.
# zpool create software c4t2d0
# zpool list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
rpool 19.9G 6.96G 12.9G 35% 1.00x ONLINE -
software 19.9G 112K 19.9G 0% 1.00x ONLINE -

Create a zfs filesystem for your repository.
# zfs create software/ips

3. Check your publisher.
# pkg publisher
PUBLISHER TYPE STATUS URI
solaris origin online http://pkg.oracle.com/solaris/release/

Create a repository.
# pkgrepo create /software/ips/
# ls /software/ips
pkg5.repository

To populate your repository, choose method 4a or 4b.
4a. Populate your new repository.
# rsync -aP /media/SOL_11_1_REPO_FULL/repo /software/ips
(this will take some time 6.8GB)

Refresh your repository.
# pkgrepo refresh -s /software/ips

4b.
# pkgrepo create /software/ips
# pkgrecv -s http://pkg.oracle.com/solaris/release/ -d /software/ips '*'

5. Make your repository accessible for others.
# zfs set share=name=s11repo,path=/software/ips,prot=nfs software/ips
# zfs set sharenfs=on software/ips
# dfshares
RESOURCE SERVER ACCESS TRANSPORT
solaris11-4:/software/ips solaris11-4 - -

6. Set the inst_root for the pkg service.
# svccfg -s application/pkg/server setprop pkg/inst_root=/software/ips
# svcadm refresh pkg/server
# svcadm disable pkg/server
# svcadm enable pkg/server

Check your colleague's work and use his repository.
# pkg set-publisher -G'*' -M'*' -g /net/192.168.4.154/software/ips/ solaris

7. Remove the xkill package from your system.
# pkg uninstall xkill

Install the package again.
# pkg install xkill

8. Change the package, verify and repair.
# pkg contents xkill
PATH
usr/X11/bin/xkill
usr/bin/xkill
usr/share/man/man1/xkill.1

# chmod 000 /usr/bin/xkill
# pkg verify xkill
PACKAGE STATUS
pkg://solaris/x11/xkill ERROR
file: usr/bin/xkill
Mode: 0000 should be 0555

# pkg fix xkill

9. Create your own package and publish it.

# svcadm disable application/pkg/server
# svccfg -s application/pkg/server setprop pkg/readonly=false
# svcadm refresh application/pkg/server
# svcadm enable application/pkg/server

Create a directory to hold your software.
# mkdir -p /var/tmp/new
# cd /var/tmp/new
# vi newpackage
This is a new package.
:wq!

# eval 'pkgsend -s http://192.168.4.154 open newpackage@1.0-1'
export PKG_TRANS_ID=1392650521_pkg%3A%2F%2Fsolaris%2Fnewpackage%401.0%2C5.11-1%3A20140217T152201Z

# export PKG_TRANS_ID=1392650521_pkg%3A%2F%2Fsolaris%2Fnewpackage%401.0%2C5.11-1%3A20140217T152201Z

# pkgsend -s \
> http://192.168.4.154 add dir mode=0555 owner=root \
> group=bin path=/export/newpackage

# pkgsend -s http://192.168.4.154 \
> add file /var/tmp/new/newpackage mode=0555 owner=root group=bin \
> path=/export/newpackage/newpackage

# pkgsend -s http://192.168.4.154 \
> add set name=description value="MyPackage"

# pkgsend -s http://192.168.4.154 \
> close
PUBLISHED
pkg://solaris/newpackage@1.0,5.11-1:20140217T152201Z

# svccfg -s pkg/server setprop pkg/readonly=true
# svcadm enable pkg/server

# pkg search newpackage
INDEX ACTION VALUE PACKAGE
basename file software/ips/new/newpackage pkg:/newpackage@1.0-1
pkg.fmri set solaris/newpackage pkg:/newpackage@1.0-1

# pkg install newpackage

More IPS

In a terminal window on the Sol11-Desktop virtual machine, determine if the apptrace
software package is currently installed.
# pkg list apptrace
pkg list: no packages matching 'apptrace' installed

Search the IPS package repository for the apptrace software package.
# pkg search apptrace
INDEX ACTION VALUE PACKAGE
pkg.description set Apptrace utility for application tracing,
including shared objects pkg:/developer/apptrace@0.5.11-0.175.0.0.0.2.1

Display detailed information about the apptrace package from the remote repository by
using the -r option
# pkg info -r apptrace
Name: developer/apptrace
Summary: Apptrace Utility
Description: Apptrace utility for application tracing,
including shared objects
Category: Development/System
State: Not installed
Publisher: solaris

Perform a dry run on the apptrace package installation.
# pkg install -nv apptrace

The dry run shows that one package will be installed. The package installation will not affect the boot environment. No currently installed packages will be changed. Note that
an FMRI is the fault management resource identifier. The FMRI is the identifier for this
package. The FMRI includes the package publisher, name, and version. The pkg
command uses FMRIs, or portions of FMRIs, to operate on packages.
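
For example, a full FMRI might look like this (the version string and timestamp are illustrative):

pkg://solaris/developer/apptrace@0.5.11,5.11-0.175.0.0.0.2.1:20111019T082311Z

Here 'solaris' is the publisher, 'developer/apptrace' is the package name, and the
remainder is the version with its branch and packaging timestamp.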

Install the apptrace package.
# pkg install apptrace

Verify the apptrace package installation.
# pkg verify -v apptrace
PACKAGE STATUS
pkg://solaris/developer/apptrace OK

Remove the apptrace package from the system image on your host.
# pkg uninstall apptrace

Posted in solaris, Uncategorized | Leave a comment

solaris 11 vnc

1. Install the Solaris Desktop environment.

# pkg install slim_install
# /usr/bin/vncserver
# vi /etc/gdm/custom.conf
[daemon]
[security]
[xdmcp]
Enable=true
[greeter]
[chooser]
[debug]

# svcadm restart gdm

# inetadm -e xvnc-inetd
or
# svcadm enable xvnc-inetd

Posted in solaris | Leave a comment

centos and adito

setting up adito on centos

Posted in linux | Leave a comment

solaris zfs nfs share

root@solaris11-1:~# zfs set share=name=kanweg,path=/rpool/kanweg,prot=nfs rpool/kanweg
name=kanweg,path=/rpool/kanweg,prot=nfs
root@solaris11-1:~# zfs set sharenfs=on rpool/kanweg
root@solaris11-1:~# dfshares|grep kanweg
solaris11-1:/rpool/kanweg solaris11-1 - -

other example:
zfs set share=name=fs1,path=/temppool/fs1,prot=nfs,root=192.168.4.235,rw=192.168.4.235 temppool/fs1

Posted in solaris | Leave a comment

7000 factoryreset at boot

add "-c" as parameter to the boot line.

kernel$ /platform........... -c

Posted in solaris | Leave a comment

Windows MPIO iscsi

mpio

Posted in Uncategorized | Leave a comment

netapp 7-mode upgrade simulator (exercise)

This is just an exercise for installing a second image on your simulator and boot
from it.

First the existing image is tarred and zipped and put on the root volume.
Then the update is done and after that you will have the same image twice.
I know it is not very useful but as I said, it is just an exercise.

1. Log in to the filer and unlock the diaguser.
login: root
password: *****
priv set advanced
useradmin diaguser unlock
useradmin diaguser password

systemshell
login: diag
password: *****
sudo bash
cd /cfcard/x86_64/freebsd/image1
mkdir /mroot/etc/software
tar cvf /mroot/etc/software/8.tar .
cd /mroot/etc/software
gzip 8.tar
mv 8.tar.gz 8.tgz

(done)
exit
exit

(now you are back in the nodeshell)
software list
software update 8.tgz
software: You can cancel this operation by hitting Ctrl-C in the next 6 seconds.
software: Depending on system load, it may take many minutes
software: to complete this operation. Until it finishes, you will
software: not be able to use the console.
Software update started on node 7mode1. Updating image2 package: file://localhost/mroot/etc/software/8.tgz current image: image1
Listing package contents.
Decompressing package contents.
Invoking script (validation phase).
INSTALL running in check only mode
Mode of operation is UPDATE
Current image is image1
Alternate image is image2

reboot

Note: I also tried to run software update 82.tgz (a file I created from a running 8.2.1 simulator).
The update failed. Then I unzipped the 82.tgz and untarred it to the image2 directory. I booted
from the image2/kernel. Panic... Needs more work.

Posted in Uncategorized | Leave a comment

netapp add disks to simulator

f1> priv set advanced
( or in cdot: set d )
f1*> useradmin diaguser unlock
f1*> useradmin diaguser password
( or in cdot: security login unlock -username diag
security login password -username diag)

Enter a new password:*****
Enter it again:*****
f1*> systemshell -node nodename
login: diag
Password: *****
f1% sudo bash
bash-3.2# export PATH=${PATH}:/usr/sbin
bash-3.2# cd /sim/dev
bash-3.2# vsim_makedisks -h
(this will show all possible disk-types)

bash-3.2# vsim_makedisks -n 6 -t 23 -a 2
(this will create 6 drives of type 23 on
adapter 2. you can check the drives as follows)

bash-3.2# ls ,disks | more
,reservations
Shelf:DiskShelf14
v0.16:NETAPP__:VD-1000MB-FZ-520:11700900:2104448
v0.17:NETAPP__:VD-1000MB-FZ-520:11700901:2104448
v0.18:NETAPP__:VD-1000MB-FZ-520:11700902:2104448
v0.19:NETAPP__:VD-1000MB-FZ-520:11700903:2104448
(output snipped)
v1.22:NETAPP__:VD-1000MB-FZ-520:14091006:2104448
--More--(byte 1061)

(reboot your system)
bash-3.2# reboot
(the system reboots)
Password: *****
f1> disk show -n
(this will show the new unowned disks)

f1> disk assign all
(or in cdot: disk assign -all true -node )
(this will assign the new disks to the controller)

(done)

Posted in netapp | Leave a comment

netapp adding disks

7-mode and cmode

Posted in netapp | Leave a comment

linux allocate memory

#!/bin/bash

echo "Provide sleep time in the form of NUMBER[SUFFIX]"
echo " SUFFIX may be 's' for seconds (default), 'm' for minutes,"
echo " 'h' for hours, or 'd' for days."
read -p "> " delay

echo "begin allocating memory..."
for index in $(seq 1000); do
    # build a long string of concatenated numbers and keep it in a shell variable
    value=$(seq -w -s '' $index $(($index + 100000)))
    eval array$index=$value
done
echo "...end allocating memory"

echo "sleeping for $delay"
sleep $delay

Posted in linux | Leave a comment

cdot exercises

1. Your cluster has 2 cluster-interfaces per node.

Find an available network port per node and add
a third cluster-interface on each node to increase
the bandwidth on the cluster-network.

example
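
One possible approach, as a minimal sketch (port names, the address, and the node name
are assumptions; on 8.2 the cluster lifs live in the node vserver, on 8.3 in the Cluster vserver):

cl1::> network port show
cl1::> network interface create -vserver cl1-01 -lif clus3 -role cluster -home-node cl1-01 -home-port e0d -address 169.254.180.13 -netmask 255.255.0.0

Repeat the create for the other node with its own port and address.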

2. Create a new user (user1) in a vserver and allow
the user to create volumes.

example
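
A minimal sketch (the vserver name is an assumption; vsadmin-volume is a predefined role
that allows volume management, and on newer releases the parameter is -user-or-group-name
instead of -username):

cl1::> security login create -vserver vserverA -username user1 -application ssh -authmethod password -role vsadmin-volume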

3. Make sure you can login to your cluster without having
to specify a password.

example
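
A minimal sketch using public-key authentication (the key string is a placeholder for the
contents of your id_rsa.pub):

cl1::> security login create -username admin -application ssh -authmethod publickey -role admin
cl1::> security login publickey create -username admin -index 1 -publickey "ssh-rsa AAAA...== user@host"

After this, ssh admin@<cluster-mgmt-ip> should log you in without a password prompt.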

4. Find out which disks are in which raidgroups.

example
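
A minimal sketch (aggregate and node names are assumptions):

cl1::> storage aggregate show-status -aggregate aggr1_n1

or, from the nodeshell:

cl1::> run -node cl1-01 sysconfig -r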

5. Create a new rootaggregate

example

Posted in Uncategorized | Leave a comment

clustermode 8.2.1 qtree export

Support for qtree nfs-exports.

A new qtree
volume qtree create -vserver vserver_name -qtree-path /vol/volume_name/qtree_name -export-policy export_policy_name

An existing qtree
volume qtree modify -vserver vserver_name -qtree-path /vol/volume_name/qtree_name -export-policy export_policy_name
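
A worked example, assuming a vserver vs1, a volume vol1 junctioned at /vol1, and clients in 192.168.4.0/24:

vserver export-policy create -vserver vs1 -policyname qtree_pol
vserver export-policy rule create -vserver vs1 -policyname qtree_pol -clientmatch 192.168.4.0/24 -rorule sys -rwrule sys -superuser sys
volume qtree create -vserver vs1 -qtree-path /vol/vol1/qtree1 -export-policy qtree_pol

An NFS client can then mount the qtree directly, for example mount <lif-ip>:/vol1/qtree1 /mnt.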

Posted in Uncategorized | Leave a comment

esx power on vm

To power on a virtual machine from the command line:
List the inventory ID of the virtual machine with the command:

vim-cmd vmsvc/getallvms |grep <vm name>

Note: The first column of the output shows the vmid.

Check the power state of the virtual machine with the command:

vim-cmd vmsvc/power.getstate <vmid>

Power-on the virtual machine with the command:

vim-cmd vmsvc/power.on <vmid>
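
For example (the vm name, id, and output are illustrative):

# vim-cmd vmsvc/getallvms | grep testvm
12    testvm    [datastore1] testvm/testvm.vmx    ...
# vim-cmd vmsvc/power.getstate 12
Powered off
# vim-cmd vmsvc/power.on 12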

Posted in Virtualization | Leave a comment

mysql phpmyadmin install

phpmyadmin_installation

Step #1: Turn on EPEL repo

phpMyAdmin is not included in default RHEL / CentOS repo. So turn on EPEL repo as described here:
$ cd /tmp
$ wget http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
# rpm -ivh epel-release-6-8.noarch.rpm

Step #2: Install phpMyAdmin on a CentOS / RHEL Linux

Type the following yum command to download and install phpMyAdmin:
# yum search phpmyadmin
# yum -y install phpmyadmin

Install MySQL server on a CentOS/RHEL

You need download and install MySQL server on CentOS/RHEL using the following yum command:
# yum install mysql-server mysql

Turn on and start the mysql service, type:
# chkconfig mysqld on
# service mysqld start

Set root password and secure mysql installation by running the following command:
# mysql_secure_installation

Step #3: Configure phpMyAdmin

You need to edit /etc/httpd/conf.d/phpMyAdmin.conf file, enter:
# vi /etc/httpd/conf.d/phpMyAdmin.conf

It allows only localhost access by default. You can set up HTTPD SSL as described here (mod_ssl) and allow LAN / WAN users or a DBA user to manage the database over the web. Find the line that reads as follows:

Require ip 127.0.0.1

Replace it with your workstation IP address:

Require ip 10.1.3.53

Again, find the following line:

Allow from 127.0.0.1

Replace it as follows:

Allow from 10.1.3.53

Save and close the file. Restart the Apache / httpd server:
# service httpd restart

Open a web browser and type the following url:
https://your-server-ip/phpMyAdmin/

Posted in Uncategorized | Leave a comment

mysql reset lost rootpassword

source: reset lost rootpassword

Step # 1: Stop the MySQL server:

# /etc/init.d/mysql stop

Output:

Stopping MySQL database server: mysqld.

Step # 2: Start the MySQL server w/o password:

# mysqld_safe --skip-grant-tables &

Output:

[1] 5988
Starting mysqld daemon with databases from /var/lib/mysql
mysqld_safe[6025]: started

Step # 3: Connect to the mysql server using the mysql client:

# mysql -u root

Output:

Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 1 to server version: 4.1.15-Debian_1-log
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql>

Step # 4: Set up a new MySQL root user password

mysql> use mysql;
mysql> update user set password=PASSWORD("NEW-ROOT-PASSWORD") where User='root';
mysql> flush privileges;
mysql> quit

Step # 5: Stop MySQL Server:

# /etc/init.d/mysql stop

Output:

Stopping MySQL database server: mysqld
STOPPING server from pid file /var/run/mysqld/mysqld.pid
mysqld_safe[6186]: ended
[1]+ Done mysqld_safe --skip-grant-tables

Step # 6: Start the MySQL server and test it

# /etc/init.d/mysql start
# mysql -u root -p

----

or: /usr/bin/mysql_secure_installation

Posted in Uncategorized | Leave a comment

clustermode nice documents

documents

Posted in Uncategorized | Leave a comment

clustermode 8.2 vm setup

By Gerard Bosmann

Copy the unpacked simulator to the ESX or VMware Workstation host.

Edit the vmx file.

Look for the line scsi0.pciSlotNumber = "16"
and paste the following in its place:

pciBridge0.present = "TRUE"
pciBridge0.pciSlotNumber = "16"
scsi0.pciSlotNumber = "17"
ethernet0.pciSlotNumber = "18"
ethernet1.pciSlotNumber = "19"
ethernet2.pciSlotNumber = "20"
ethernet3.pciSlotNumber = "21"
ethernet4.pciSlotNumber = "22"
ethernet5.pciSlotNumber = "23"

Start node 1.
Interrupt the boot and open the loader:
setenv bootarg.nvram.sysid 4079432737
setenv SYS_SERIAL_NUM 4079432-73-7
boot
bootmenu 4
Create the cluster.
Cluster Base license = SMKQROWJNQYQSDAAAAAAAAAAAAAA

CIFS WKGVRYETVDDCMAXAGAAAAAAAAAAA CIFS protocol
FCP UZBZSYETVDDCMAXAGAAAAAAAAAAA Fibre Channel Protocol
FlexClone QDTGVYETVDDCMAXAGAAAAAAAAAAA FlexClone
Insight_Balance WQDHZYETVDDCMAXAGAAAAAAAAAAA OnCommand Insight and Balance products
iSCSI IFEKSYETVDDCMAXAGAAAAAAAAAAA iSCSI protocol
NFS KQIGRYETVDDCMAXAGAAAAAAAAAAA NFS protocol
SnapLock OSOKWYETVDDCMAXAGAAAAAAAAAAA SnapLock Compliance
SnapLock_Enterprise KWFSYYETVDDCMAXAGAAAAAAAAAAA SnapLock Enterprise
SnapManager ANMZWYETVDDCMAXAGAAAAAAAAAAA SnapManager and SnapDrive products
SnapMirror EJVRUYETVDDCMAXAGAAAAAAAAAAA SnapMirror, including synchronous SnapMirror
SnapProtect MHKOXYETVDDCMAXAGAAAAAAAAAAA SnapProtect Applications
SnapRestore SOXCUYETVDDCMAXAGAAAAAAAAAAA SnapRestore
SnapVault CYQVVYETVDDCMAXAGAAAAAAAAAAA SnapVault primary and secondary

Create the SSD disks
security login unlock -username diag
security login password -username diag ( choose a password )
set diag
systemshell -node node-x
login: diag
setenv PATH /sim/bin:$PATH
cd /sim/dev
sudo vsim_makedisks -t 35 -a 2 -n 14
exit
reboot -node node-x

After the reboot:
disk assign -all true -node node-x

Check that the node owns the licenses ( license show )
Check that the disks are present ( disk show -type ssd, disk show -type fcal )

Node 2

Edit the vmx file.
Look for the line scsi0.pciSlotNumber = "16"
and paste the following in its place:

pciBridge0.present = "TRUE"
pciBridge0.pciSlotNumber = "16"
scsi0.pciSlotNumber = "17"
ethernet0.pciSlotNumber = "18"
ethernet1.pciSlotNumber = "19"
ethernet2.pciSlotNumber = "20"
ethernet3.pciSlotNumber = "21"
ethernet4.pciSlotNumber = "22"
ethernet5.pciSlotNumber = "23"

Start node 2.
Interrupt the boot and open the loader:
setenv bootarg.nvram.sysid 4079432741
setenv SYS_SERIAL_NUM 4079432-74-1
bootmenu 4
Join the cluster.

CIFS APJAYWXCCLPKICXAGAAAAAAAAAAA CIFS protocol
FCP YDFEZWXCCLPKICXAGAAAAAAAAAAA Fibre Channel Protocol
FlexClone UHWLBXXCCLPKICXAGAAAAAAAAAAA FlexClone
Insight_Balance AVGMFXXCCLPKICXAGAAAAAAAAAAA OnCommand Insight and Balance products
iSCSI MJHPYWXCCLPKICXAGAAAAAAAAAAA iSCSI protocol
NFS OULLXWXCCLPKICXAGAAAAAAAAAAA NFS protocol
SnapLock SWRPCXXCCLPKICXAGAAAAAAAAAAA SnapLock Compliance
SnapLock_Enterprise OAJXEXXCCLPKICXAGAAAAAAAAAAA SnapLock Enterprise
SnapManager ERPEDXXCCLPKICXAGAAAAAAAAAAA SnapManager and SnapDrive products
SnapMirror INYWAXXCCLPKICXAGAAAAAAAAAAA SnapMirror, including synchronous SnapMirror
SnapProtect QLNTDXXCCLPKICXAGAAAAAAAAAAA SnapProtect Applications
SnapRestore WSAIAXXCCLPKICXAGAAAAAAAAAAA SnapRestore
SnapVault GCUACXXCCLPKICXAGAAAAAAAAAAA SnapVault primary and secondary

Create the SSD disks
security login unlock -username diag
security login password -username diag ( choose a password )
set diag
systemshell -node node-x
login: diag
setenv PATH /sim/bin:$PATH
cd /sim/dev
sudo vsim_makedisks -t 35 -a 2 -n 14
exit
reboot -node node-x

After the reboot:
disk assign -all true -node node-x

Check that the node owns the licenses ( license show )
Check that the disks are present ( disk show -type ssd, disk show -type fcal )

Posted in netapp | Leave a comment