netapp 7-mode snapvault restore files from backuplun

1. filer1 is primary filer and has aggr aggr1.

2. filer2 is secondary filer and has aggr aggr1.

3. windows is connected with iSCSI to filer1 and filer2.
The igroup on both filers is called wingroup.

4. create a source volume, qtree and lun on filer1 and map the lun.
- vol create volsource aggr1 500m
- qtree create /vol/volsource/q1
- lun create -s 100m -t windows /vol/volsource/q1/l1
- lun map /vol/volsource/q1/l1 wingroup
(the lun should now be visible on windows after a
rescan of the disks)

5. create a destination volume on filer2.
- vol create voldest aggr1 500m

6. setup a snapvault snapshot schedule on filer1.
- snapvault snap sched volsource snap1 2@0-23

7. setup a snapvault snapshot schedule on filer2.
- snapvault snap sched -x voldest snap1 40@0-23

8. Start the snapvault relation from filer2.
- snapvault start -S filer1:/vol/volsource/q1 filer2:/vol/voldest/q1

9. View the snapshot on filer2.
- snap list voldest
filer2(4079432752)_voldest-base.0 (busy,snapvault)
(this is the snapshot to clone the lun from)

10. Clone the lun on filer2.
- lun clone create /vol/voldest/backlun -b /vol/voldest/q1/l1 filer2(4079432752)_voldest-base.0

11. Online the lun and map it to wingroup.
- lun online /vol/voldest/backlun
- lun map /vol/voldest/backlun wingroup

12. On windows, rescan disks and find the drive to restore files
from.

Posted in Uncategorized | Leave a comment

solaris 11 zfs ARC

1. The Adaptive Replacement Cache
A plain LRU (Least Recently Used) cache keeps an ordered list of
recently used entries: most recently used at the top, least recently
used at the bottom. An entry that is referenced again moves back to
the top, and the bottom entries are evicted first.
ARC improves on LRU by keeping two lists, T1 and T2:
T1 for recent cache entries, seen once
T2 for frequent entries, referenced at least twice
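The T1/T2 split can be illustrated with a toy awk sketch (illustrative only, not the real ARC algorithm, which also tracks recency and ghost lists): a first reference lands an entry in T1, a second reference promotes it to T2.

```shell
printf 'a\nb\na\nc\na\nb\n' | awk '
{ seen[$1]++ }
END {
  for (k in seen)
    printf "%s -> %s\n", k, (seen[k] >= 2 ? "T2 (frequent)" : "T1 (recent)")
}' | sort
# → a -> T2 (frequent)
# → b -> T2 (frequent)
# → c -> T1 (recent)
```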

2. Solaris 11 allows ZFS to lock pages in memory that cannot be
vacated. This can cause problems for other programs that need memory,
or when another filesystem is used alongside ZFS.

3. By default, ZFS claims all memory except 1 GB, or 3/4 of memory,
whichever is larger.

4. How much memory do we have?
prtconf | grep Mem
Memory size: 3072 Megabytes

5. See ARC statistics:
mdb -k
>::arc
(output skipped)
c_max = 2291 MB

6. How much memory is in use by ZFS?
mdb -k
>::memstat
Page Summary      Pages     MB   %Tot
Kernel           136490    533    17%
ZFS File Data     94773    370    12%


7. Is the tunable parameter that limits the ZFS ARC set?
A value of 0 means it is not set; a nonzero value is the configured
limit in bytes.
mdb -k
> zfs_arc_max/E
zfs_arc_max:
zfs_arc_max: 0

8. What is the target max size for ARC?
kstat -p zfs::arcstats:c_max
zfs:0:arcstats:c_max 2402985984

(2402985984 bytes is 2291 MB)
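The byte-to-megabyte conversion can be checked with plain shell arithmetic (integer division):

```shell
bytes=2402985984
mb=$((bytes / 1024 / 1024))
echo "$mb MB"
# → 2291 MB
```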

9. How much does ZFS use at the moment?
kstat -p zfs::arcstats:size 5 (show results every 5 seconds)
zfs:0:arcstats:size 539604016
zfs:0:arcstats:size 539604016

(now start reading files and see the number increase)

for i in `seq 1 100`; do cat /var/adm/messages >>/test/data;
cat /test/data > /test/data$i; done

kstat -p zfs::arcstats:size 5 (show results every 5 seconds)
zfs:0:arcstats:size 539604016
zfs:0:arcstats:size 1036895816
zfs:0:arcstats:size 1036996288
zfs:0:arcstats:size 1037086728

10. How much memory is in use by ZFS now?
mdb -k
> ::memstat
Page Summary      Pages     MB   %Tot
Kernel           140147    547    18%
ZFS File Data    236377    923    30%

11. To set the tunable parameter, add a line to /etc/system and reboot.
vi /etc/system and add:
set zfs:zfs_arc_max=1073741824
(After the reboot the ARC target maximum will be 1 GB instead of 2.2 GB.)
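That value is exactly 1 GB expressed in bytes, which shell arithmetic confirms:

```shell
echo $((1024 * 1024 * 1024))
# → 1073741824
```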

-----

brendan gregg's archits script.
# cat archits.sh
#!/usr/bin/sh

interval=${1:-5}        # 5 secs by default

kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses $interval | awk '
BEGIN {
        printf "%12s %12s %9s\n", "HITS", "MISSES", "HITRATE"
}
/hits/ {
        hits = $2 - hitslast
        hitslast = $2
}
/misses/ {
        misses = $2 - misslast
        misslast = $2
        rate = 0
        total = hits + misses
        if (total)
                rate = (hits * 100) / total
        printf "%12d %12d %8.2f%%\n", hits, misses, rate
}
'
---------------------------
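The hit-rate logic of the script can be exercised offline by feeding the awk body synthetic kstat-style lines (a sketch; the counter values are made up):

```shell
printf 'zfs:0:arcstats:hits\t100\nzfs:0:arcstats:misses\t25\n' | awk '
/hits/   { hits = $2 - hitslast; hitslast = $2 }
/misses/ { misses = $2 - misslast; misslast = $2
           total = hits + misses
           rate = total ? hits * 100 / total : 0
           printf "%d %d %.2f%%\n", hits, misses, rate }'
# → 100 25 80.00%
```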


Performance seconds

cpu regs    300 ns
SSL         25 us - 250 us
Disk        5 ms - 20 ms
Optical     100 ms

A nanosecond is to a second as a second is to 31.710 years.
A microsecond is to a second as a second is to 11.574 days.
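Those scaling figures check out with a quick awk calculation (using a 365-day year):

```shell
awk 'BEGIN {
  printf "%.3f years\n", 1e9 / (365 * 24 * 60 * 60)   # 1e9 seconds in years
  printf "%.3f days\n",  1e6 / (24 * 60 * 60)         # 1e6 seconds in days
}'
# → 31.710 years
# → 11.574 days
```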



Solaris remove empty lines in vi

:v/./d
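The same effect can be had non-interactively (a sketch using sed; like :v/./d it removes only truly empty lines, not whitespace-only ones):

```shell
printf 'one\n\ntwo\n\nthree\n' | sed '/^$/d'
# → one
# → two
# → three
```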


netapp sim upgrade

By Ron Havenaar/Dirk Oogjen

This is how it works with a new simulator:

1. Download the NetApp CDOT 8.2.1 simulator from support.netapp.com
2. Unzip it to a directory of choice.
3. From that directory, import the DataONTAP.vmx file in VMware.
That can be a recent version of VMware workstation, Fusion or ESX, whatever you have.
4. The name of the imported vsim is now: vsim_netapp-cm. For this first node, you might first want to rename it to something like vsim_netapp-cm1.
5. The default setting for memory usage in the vm is 1.6 Gbyte. However, that is not enough for the upgrade.
You need to set this to at least 4 Gbyte. Later, when done with the upgrade, you can turn that back to 1.6 Gbyte. I tried, but less than 4 Gbyte won’t work…
6. Power On the simulator, but make sure you interrupt the boot process immediately!
You need the VLOADER> prompt.
If the machine has been booted before, or you accidentally let it boot all the way through, you cannot continue with it.
You MUST start with a new machine, and you MUST have the VLOADER> prompt at first boot if you want to change the serial numbers.
You CANNOT change the serial numbers on a machine that has been booted through before. (well… there is always a way.. but that takes much longer than starting from scratch again)
Just start over again if in doubt.
7. Two things need to happen now:
a. The serial numbers have to be set.
Each simulator in the set must have its own serial numbers. (more on this later)
b. You need to choose what kind of disks you want with this node.
These are the procedures for this: (next steps)
1. First the serial number:
Take for example the Serial Number 4082368507.
VLOADER> setenv SYS_SERIAL_NUM 4082368507
VLOADER> setenv bootarg.nvram.sysid 4082368507
2. Then choose the disk set to use for this node.
AGGR0 must be at least 10Gbyte for the upgrade, so be aware if you use small disks you have to add a lot of disks to AGGR0.

Pick a simdisk inventory and enter the corresponding setenv commands at the VLOADER> prompt.

The default simdisk inventory is 28x1gb 15k:
setenv bootarg.vm.sim.vdevinit "23:14:0,23:14:1"
setenv bootarg.sim.vdevinit "23:14:0,23:14:1"

This inventory enables the simulation of multiple disk tiers and/or flash pools: (28x1gb 15k+14x2gb 7.2k+14x100mb SSD)
setenv bootarg.vm.sim.vdevinit "23:14:0,23:14:1,32:14:2,34:14:3"
setenv bootarg.sim.vdevinit "23:14:0,23:14:1,32:14:2,34:14:3"

This one makes the usable capacity of the DataONTAP-sim.vmdk even larger (54x4gb 15k)
setenv bootarg.vm.sim.vdevinit "31:14:0,31:14:1,31:14:2,31:14:3"
setenv bootarg.sim.vdevinit "31:14:0,31:14:1,31:14:2,31:14:3"

Using type 36 drives you can create the largest capacity.

Explanation of the above:
31:14:0 means: type 31 (4 Gbyte), 14 disks, shelf 0
Type 23: 1 Gbyte FC
Type 30: 2 Gbyte FC
Type 31: 4 Gbyte FC
Type 35: 500 Mbyte SSD (simulated of course, for flashpool)
Type 36: 9 Gbyte SAS

Choose any combination of disks, but the maximum number of disks a simulator would allow is 56, no matter what size or type.
9. Then Boot the simulator.
VLOADER> boot
10. Wait until you see the menu announcement.
Press Ctrl-C for the menu. Do NOT press enter after ctrl-C, ONLY ctrl-c !
If you press ctrl-c too quickly, you get the VLOADER> prompt instead.
If so, type: boot and wait for the menu announcement.
11. From the menu, choose option 4.
Confirm twice that you indeed would like to wipe all disks.
The system reboots.
12. Wait for the zeroing process to finish. That could take a while… Follow the dots… ;-)
13. When done, follow the setup procedure and build your first Clustered Data Ontap node.
14. The cluster base license key in a cdot vsim is:
SMKQROWJNQYQSDAAAAAAAAAAAAAA (that is 14 x A)
Further licenses can be easily added later using putty, since with putty you can cut and paste (on the VMware console in many cases you cannot), so you can skip the other licenses for now.
15. Turn off auto support:
Cluster::> autosupport modify -support disable -node *
16. Assign all disks:
Cluster::> disk assign -all true -node <nodename>
17. Now add enough disks to Aggr0 to allow for enough space for the upgrade.
It has to be at least 10 Gbyte.
a. Cluster::> Storage aggregate show
b. Cluster::> Storage aggregate add-disks <aggr> <# disks>
example: Cluster::> stor aggr add-disks aggr0 2
c. Cluster::> Storage aggregate show
d. Keep adding disks until you reach at least 10 Gbyte for Aggr0.
1. Log in with Putty and add all licenses for this node.
2. Check if the cluster is healthy:
Cluster::> cluster show
It should show the word ‘healthy’
3. Configure NTP time:
a. This is how to check it:
Cluster::> system services ntp server show
Cluster::> system date show (or: cluster date show)
b. This is an example how to set it:
Cluster::> timezone Europe/Amsterdam
Cluster::> system services ntp server create -server <ntpserver> -node <nodename>
Cluster::> cluster date show
1. This is how you can determine which image (OnTap version) is used on the simulator:
Cluster::> system node image show
2. Now it is time to make a VM Snapshot of the simulator, before you do the actual update.
A snapshot would allow you to revert to this state, start again with the update process, or update to a different version. Wait for the VM snapshot to finish before you continue.
3. If you don’t have it yet, download the Clustered Data Ontap image from the download.netapp.com website. You need an account there to do so.
You can use the FAS6290 image, e.g. 822_q_image.tgz or 83_q_image.tgz (or whatever you need)
And yes, the simulator can be updated with this image! Neat huh!
4. We are going to use HTTP File Server (or HFS) to do the upgrade.
If you don’t have it yet, this is a free and very easy to use program, excellent for this job.
You can download it from http://www.rejetto.com/hfs/
5. Add the .tgz image to HFS. Drag and drop will do. Do NOT unzip the image.
6. Make sure HFS is using the right local IP address.
If you have VMware on your local machine or any other program using a different IP range, HFS might use the wrong IP address. In HFS, go to the menu, under IP Address and check if the IP address is correct.
7. In the first line in HFS you see the URL you need to use. Write down this information.
8. Now we are going to do the upgrade:
Cluster::> system node image update -node * -package <url> -replace-package true
Example: Cluster::> system node image update -node * -package http://192.168.178.88:8080/822_q_image.tgz -replace-package true

9. Since we only have one node, you cannot check if this is working fine.
If you decided to set up two or more nodes to start with, this is what you can do to check the progress on a different node:
Cluster::>> system node image show-update-progress -node *
You have to keep doing this until the system says: exited. Exit status Success
Otherwise, wait about half an hour. It takes quite a while! Be patient….!!!
10. When done, check if everything went okay:
Cluster::> system node image show
You should see the new version as image 2, but as you can see, that is not the default yet.

11. If it is indeed image2, choose to set this image as default:
Cluster::> system node image modify -node <nodename> -image image2 -isdefault true
12. Check if that is okay now:
Cluster::> system node image show
Under ‘Is default’ it should say ‘true’
13. Reboot the node
Cluster::> reboot -node *
14. Just let it reboot completely. Do not interrupt. Wait for the login prompt.
The system would probably tell you the update process is not ready yet. No problem.
Just wait a few minutes. That will be done automatically.
15. If you want to check if it is still busy with the upgrade:
Cluster::> set -privilege advanced
Cluster::> system node upgrade-revert show
If you see the message: upgrade successful, it is ready.
16. You can check to see if everything has been upgraded correctly with:
Cluster::> system node image show
Cluster::> version
17. In VMware, you can set the memory usage back to 1.6 Gbyte now.
18. Now add the second node and follow the same procedure.
19. Alternatively, you could have set up two or more nodes with 8.2.1. first and do the upgrade afterwards, whatever you like better.

Notes (not with detailed instructions):
The same procedure applies to 7-mode. The upgrade instructions are a bit different, but the procedure is the same. What we found out in 7-mode, however, is that we needed to set the /cfcard directory to read/write first, since it was read-only. I did not have that issue with the cdot upgrade.
If you encounter that issue, this is what you need to do before the upgrade:
1. Go to the systemshell
2. sudo chmod -R 777 /cfcard

Furthermore, if you want to run the simulator on a VMware ESXi box you need to change something there too:
1. Allow remote access with SSH and use putty to log in on your ESXi server:
2. Edit the file /etc/rc.local.d/local.sh
3. Add the following text as last line:
vmkload_mod multiextent
4. Reboot the ESX box.


clustermode temproot

1. Bootmenu
2. create_temp_root tempvol disk1


clustermode dns loadbalancing

Setup DNS loadbalancing clustermode.
The lifs belonging to the same dns-zone should be in the same vserver.
The vserver is in fact the dns-zone.

1. Create vserver
vserver create nfs -rootvolume rootvol -aggregate aggr1_n2 -ns-switch file -nm-switch file -rootvolume-security-style unix

2. net int create -vserver nfs -lif lif1 -role data -data-protocol nfs -home-node cluster1-01 -home-port e0c -address
192.168.0.108 -netmask 255.255.255.0 -status-admin up -dns-zone nfs.learn.netapp.local -listen-for-dns-query true

net int create -vserver nfs -lif lif2 -role data -data-protocol nfs -home-node cluster1-01 -home-port e0c -address
192.168.0.109 -netmask 255.255.255.0 -status-admin up -dns-zone nfs.learn.netapp.local -listen-for-dns-query true

3. In Windows DNS.
DNS -> learn.netapp.local -> new delegation -> nfs -> ip 192.168.0.109 (resolve) -> next -> finish

4. Open cmd tool in Windows.
nslookup nfs

Done.


clustermode upgrade

1. put image.tgz on webserver.
clustershell: system image update -node <nodename> http://<webserver>/image.tgz

or

2. put image.tgz in /mroot/pkg/
clustershell: system image update -node file:///mroot/pkg/image.tgz


solaris 10 nocacheflush

Solaris 10
Set Dynamically (using the debugger):
echo zfs_nocacheflush/W0t1 | mdb -kw

Revert to Default:
echo zfs_nocacheflush/W0t0 | mdb -kw

Set the following parameter in the /etc/system file:
set zfs:zfs_nocacheflush = 1


7-mode upgrade 32bit to 64bit

filer1> priv set diag
filer1*> aggr 64bit-upgrade start aggr4 -mode grow-all


solaris 11 who uses which port.

# mkdir /scripts

# vi /scripts/ports

#!/bin/ksh
# Report which process is using a given port (Solaris).

line='---------------------------------------------'
pids=$(/usr/bin/ps -ef | sed 1d | awk '{print $2}')

if [ $# -eq 0 ]; then
   read ans?"Enter port you would like to know pid for: "
else
   ans=$1
fi

for f in $pids
do
   # pfiles lists a process's open files, including socket port numbers
   /usr/proc/bin/pfiles $f 2>/dev/null | /usr/xpg4/bin/grep -q "port: $ans"
   if [ $? -eq 0 ]; then
      echo $line
      echo "Port: $ans is being used by PID:\c"
      # -w avoids matching PIDs that merely contain $f as a substring
      /usr/bin/ps -ef -o pid -o args | egrep -v "grep|pfiles" | grep -w $f
   fi
done
exit 0

 # chmod +x /scripts/ports

# /scripts/ports

Enter port you would like to know pid for: 2049
---------------------------------------------
Port: 2049 is being used by PID:18855 /usr/lib/nfs/nfsd


solaris 11 custom zone install

Adding Additional Packages in a Zone by Using a Custom AI Manifest

The process of adding extra software in a zone at installation can be automated by revising the AI manifest. The specified packages and the packages on which they depend will be installed. The default list of packages is obtained from the AI manifest. The default AI manifest is /usr/share/auto_install/manifest/zone_default.xml. See Adding and Updating Software in Oracle Solaris 11.2 for information on locating and working with packages.

Example 9-1  Revising the Manifest

The following procedure adds mercurial and a full installation of the vim editor to a configured zone named my-zone. (Note that only the minimal vim-core that is part of solaris-small-server is installed by default.)

  1. Copy the default AI manifest to the location where you will edit the file, and make the file writable.
    # cp /usr/share/auto_install/manifest/zone_default.xml ~/my-zone-ai.xml
    # chmod 644 ~/my-zone-ai.xml
  2. Edit the file, adding the mercurial and vim packages to the software_data section as follows:
          <software_data action="install">
                   <name>pkg:/group/system/solaris-small-server</name>
                   <name>pkg:/developer/versioning/mercurial</name>
                   <name>pkg:/editor/vim</name>
                </software_data>
  3. Install the zone.
    # zoneadm -z my-zone install -m ~/my-zone-ai.xml

solaris 11 zonereplication and start

Migrate the zone to other machine using replication.
example: zonename azone; zonepath root zones/azone
# zfs snapshot -r zones/azone@0
# zfs send -R zones/azone@0 | ssh root@othermachine zfs receive zones/azone

Run script on other machine
#./script zones/azone/rpool
--------------------------------------
#!/bin/bash
# Prepare a replicated zone dataset hierarchy so the zone can boot on this host.

zfsfs=$1                # e.g. zones/azone/rpool
root=${zfsfs}/ROOT
zbe=${root}/solaris     # the zone boot environment

# Clear the zoned and mountpoint properties inherited from the source machine.
for i in $zbe $root $zfsfs ; do
   for j in zoned mountpoint ; do
      zfs inherit $j $i
   done
done

zfs set mountpoint=legacy $root
zfs set zoned=on $root
zfs set canmount=noauto $zbe
zfs set org.opensolaris.libbe:active=on $zbe

# Tie the zone boot environment to the global zone's active BE.
rbe=`zfs list -H -o name /`
uuid=`zfs get -H -o value org.opensolaris.libbe:uuid $rbe`

zfs set org.opensolaris.libbe:parentbe=$uuid $zbe

-----------------------------------------------
# mount -F zfs zones/azone/rpool/ROOT/solaris /zones/azone/root

Add the zone to /etc/zones/index

Create the /etc/zones/azone.xml file

 

# zoneadm -z azone boot


solaris 10 keytable

Edit the /usr/openwin/share/etc/keytables/keytable.map file to force the use of Denmark_x86.kt or Denmark6.kt as the default.

Current

Type    Layout  Filename

0       0       US4.kt                  # Default keytable

 

change to

Type    Layout  Filename

0       0       Denmark_x86.kt                  # Default keytable

Remove all lines below that.

 


netapp 7-mode linux iscsi

Steps:
1. on filer create a volume (lunvol1)
2. on filer create a lun in the volume (lunlin)
3. on linux install the iscsi-initiator-utils
4. on linux determine initiatorname
5. on filer create igroup with linux iqn (lingroup)
6. on linux enable iscsi for after reboots
7. on linux discover and login to target
8. on filer map lun to igroup
9. on linux restart iscsi
10. on linux get lun devicename
11. on linux partition lun (primary partition 1)
12. on linux create filesystem on lun
13. on linux create mountpoint (/mnt/sdh1)
14. on linux mount filesystem

Commands:
1. filer1> vol create lunvol1 aggr1 150m
2. filer1> lun create -s 130m -t linux /vol/lunvol1/lunlin

3. linux# yum install iscsi-initiator-utils
4. linux# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:3add18fcc55c

5. filer1> igroup create -i -t linux lingroup iqn.1994-05.com.redhat:3add18fcc55c

6. linux# chkconfig iscsi on
7. linux# iscsiadm -m discovery -t sendtargets -p 192.168.4.128

8. filer1> lun map /vol/lunvol1/lunlin lingroup

9. linux# /etc/init.d/iscsi restart
10. linux# grep -i "attached scsi disk" /var/log/messages
Jun 25 20:42:52 fritz kernel: sd 9:0:0:0: [sdh] Attached SCSI disk

11. linux# fdisk /dev/sdh
12. linux# mkfs /dev/sdh1
13. linux# mkdir /mnt/sdh1
14. linux# mount /dev/sdh1 /mnt/sdh1

done.


clustermode domain-tunnel

To have the administrator of the AD domain log in to the administration vserver and manage the cluster, create a domain-tunnel.

The name of the admin-vserver: cl1

security login create -vserver cl1 -username netapp\administrator -application ssh -authmethod domain -role admin

create a vserver with vserver setup

vservername: vscifs,
protocols: cifs
create datavolume: no
create lif: no
configure dns: yes
domainname: netapp.local
nameserver: 192.168.4.244
configure cifs: yes
cifs servername: vscifs-cl1
active directory domain: netapp.local
ad adminname: administrator
ad adminpassword: *******

security login domain-tunnel create vscifs

On windows in cmd-tool: plink netapp\administrator@cl1


clustermode networkdiagram

netdiagram


Netapp filefolding

file folding original link

WAFL File Folding Explained
While recently delivering training, one of my partners inquired about a WAFL feature called “file folding”, so I thought I’d take a moment to detail this lesser-known NetApp feature.

File Folding is a Data ONTAP feature that saves space when a user re-writes a file with the same data.

More specifically, it checks the data in the most recent Snapshot copy. If this data is identical to the Snapshot copy currently being created, it references the previous Snapshot copy -- instead of taking up disk space writing the same data in a new Snapshot copy.

Although File Folding saves disk space, it can also impact system performance, as the system must compare block contents when folding a file. If the folding process reaches a maximum limit on memory usage, it is suspended. When memory usage falls below the limit, the processes that were halted are restarted.

This feature has long been part of Data ONTAP and currently exists within 7-Mode (not Cluster-Mode) of Data ONTAP 8.1. It can be enabled with the command:
options cifs.snapshot_file_folding.enable on
But what exactly is going on behind the scenes?

There are two stages to this process: 1.) determining which files are candidates for folding and 2.) actually folding the files into the Snapshot copy. Let’s walk through both of these stages in detail:

Once WAFL has received a message for a file that may be a candidate for folding, it retrieves the inode for the candidate file via pathname (directory) lookups, parsing the directory, and mapping it to a root inode.

In other words, we’re now able to construct a file handle in order to retrieve the block (inode) from disk.

WAFL then grabs the corresponding inode of that file from the most recent snapshot (remember, when snapshots are created, a new root inode is created for that snapshot). If no corresponding file to inode match is found, then the file cannot be in the most recent snapshot and thus no folding will occur.

But let’s assume that WAFL does find a snapshot inode corresponding to the active file.

As the blocks are loaded into the buffer, the file folding process compares the blocks of the active file system vs. the snapshot blocks – on a block-for-block basis. Assuming that each block is identical, it’s time to transition to the second stage.

In the second stage of File Folding, a data block is “freed” by updating the corresponding bit of an active map of the active file system (indicating block is unallocated). Next, the block in the snapshot is allocated by updating a bit within the active map corresponding to the snapshot block (indicating it’s now being used). WAFL then updates the block pointer of the parent buffer to reference the block number of the block in the snapshot.

From this point onward, the parent buffer is tagged as “dirty” to ensure everything else is written out to disk on the next Consistency Point (CP).

And now you know probably more than you’ve ever wanted to know about File Folding!


solaris zdb

Original link

zdb: Examining ZFS At Point-Blank Range
01 Nov '08 - 08:13 by benr

ZFS is amazing in its simplicity and beauty, however it is also deceptively complex. The chance that you'll ever be forced to peer behind the veil is unlikely outside of the storage enthusiast ranks, but as it proliferates more questions will come up regarding its internals. We have been given a tool to assist us in investigating the inner workings, zdb, but it is, somewhat intentionally I think, undocumented. Only two others that I know of have had the courage to talk about it publicly: Max Bruning, who is perhaps the single most authoritative voice regarding ZFS outside of Sun, and Marcelo Leal.

In this post, we'll look only at the basics of ZDB to establish a baseline for its use. Running "zdb -h" will produce a summary of its syntax.

In its most basic form, zdb poolname, several bits of information about our pool will be output, including:

Cached pool configuration (-C)
Uberblock (-u)
Datasets (-d)
Report stats on zdb's I/O (-s), this is similar to the first interval of zpool iostat
Thus, zdb testpool is the same as zdb -Cuds testpool. Let's look at the output. The pool we'll be using is actually a 256MB pre-allocated file with a single dataset... as simple as it can come.

root@quadra /$ zdb testpool
version=12
name='testpool'
state=0
txg=182
pool_guid=1019414024587234776
hostid=446817667
hostname='quadra'
vdev_tree
type='root'
id=0
guid=1019414024587234776
children[0]
type='file'
id=0
guid=6723707841658505514
path='/zdev/disk002'
metaslab_array=23
metaslab_shift=21
ashift=9
asize=263716864
is_log=0
Uberblock

magic = 0000000000bab10c
version = 12
txg = 184
guid_sum = 7743121866245740290
timestamp = 1225486684 UTC = Fri Oct 31 13:58:04 2008

Dataset mos [META], ID 0, cr_txg 4, 87.0K, 49 objects
Dataset testpool/dataset01 [ZPL], ID 30, cr_txg 6, 19.5K, 5 objects
Dataset testpool [ZPL], ID 16, cr_txg 1, 19.0K, 5 objects
               capacity     operations    bandwidth   ---- errors ----
description    used avail   read  write   read write  read write cksum
testpool       139K  250M    638      0   736K     0     0     0     0
/zdev/disk002  139K  250M    638      0   736K     0     0     0     0
And so we see a variety of useful information, including:

Zpool (On Disk Format) Version Number
State
Host ID & Hostname
GUID (This is that numeric value you use when zpool import doesn't like the name)
Children VDEV's that make up the pool
Uberblock magic number (read that hex value as "uba-bloc", get it, 0bab10c, its funny!)
Timestamp
List of datasets
Summary of IO stats
So this information is interesting, but frankly not terribly useful if you already have the pool imported. This would likely be of more value if you couldn't, or wouldn't, import the pool, but those cases are rare and 99% of the time zpool import will tell you what you want to know even if you don't actually import.

There are 3 arguments that are really the core ones of interest, but before we get to them, you absolutely must understand something unique about zdb. ZDB is like a magnifying glass: at default magnification you can see that it's tissue, turn up the magnification and you see that it has veins, turn it up again and you see how intricate the system is, crank it up one more time and you can see the blood cells themselves. With zdb, each time we repeat an argument we increase the verbosity and thus dig deeper. For instance, zdb -d will list the datasets of a pool, but zdb -dd will output the list of objects within the pool. Thus, when you really zoom in you'll see commands that look really odd, like zdb -ddddddddd. This takes a little practice to get the hang of, so please toy around on a small test pool.

Now, here are summaries of the 3 primary arguments you'll use and how things change as you crank up the verbosity:

zdb -b pool: This will traverse blocks looking for leaks like the default form.
-bb: Outputs a breakdown of space (block) usage for various ZFS object types.
-bbb: Same as above, but includes breakdown by DMU/SPA level (L0-L6).
-bbbb: Same as above, but includes one line per object with details about it, including compression, checksum, DVA, object ID, etc.
-bbbbb...: Same as above.
zdb -d dataset: This will output a list of objects within a dataset. More d's means more verbosity:
-d: Output list of datasets, including ID, cr_txg, size, and number of objects.
-dd: Output concise list of objects within the dataset, with object id, lsize, asize, type, etc.
-ddd: Same as dd.
-dddd: Outputs list of datasets and objects in detail, including objects path (filename), a/c/r/mtime, mode, etc.
-ddddd: Same as previous, but includes indirect block addresses (DVAs) as well.
-dddddd....: Same as above.
zdb -R pool:vdev_specifier:offset:size[:flags]: Given a DVA, outputs object contents in hex display format. If given the :r flag it will output in raw binary format. This can be used for manual recovery of files.
So let's play with the first form above, block traversal. This will sweep the blocks of your pool or dataset, adding up what it finds, and then produce a report of any leakage and how the space breaks down. This is extremely useful information, but given that it traverses all blocks it's going to take a long time depending on how much data you have. On a home box this might take minutes or a couple of hours; on a large storage subsystem it could take hours or days. Let's look at both -b and -bb for my simple test pool:

root@quadra ~$ zdb -b testpool

Traversing all blocks to verify nothing leaked ...

No leaks (block sum matches space maps exactly)

bp count: 50
bp logical: 464896 avg: 9297
bp physical: 40960 avg: 819 compression: 11.35
bp allocated: 102912 avg: 2058 compression: 4.52
SPA allocated: 102912 used: 0.04%

root@quadra ~$ zdb -bb testpool

Traversing all blocks to verify nothing leaked ...

No leaks (block sum matches space maps exactly)

bp count: 50
bp logical: 464896 avg: 9297
bp physical: 40960 avg: 819 compression: 11.35
bp allocated: 102912 avg: 2058 compression: 4.52
SPA allocated: 102912 used: 0.04%

Blocks LSIZE PSIZE ASIZE avg comp %Total Type
3 12.0K 1.50K 4.50K 1.50K 8.00 4.48 deferred free
1 512 512 1.50K 1.50K 1.00 1.49 object directory
1 512 512 1.50K 1.50K 1.00 1.49 object array
1 16K 1K 3.00K 3.00K 16.00 2.99 packed nvlist
- - - - - - - packed nvlist size
1 16K 1K 3.00K 3.00K 16.00 2.99 bplist
- - - - - - - bplist header
- - - - - - - SPA space map header
3 12.0K 1.50K 4.50K 1.50K 8.00 4.48 SPA space map
- - - - - - - ZIL intent log
16 256K 18.0K 40.0K 2.50K 14.22 39.80 DMU dnode
3 3.00K 1.50K 3.50K 1.17K 2.00 3.48 DMU objset
- - - - - - - DSL directory
4 2K 2K 6.00K 1.50K 1.00 5.97 DSL directory child map
3 1.50K 1.50K 4.50K 1.50K 1.00 4.48 DSL dataset snap map
4 2K 2K 6.00K 1.50K 1.00 5.97 DSL props
- - - - - - - DSL dataset
- - - - - - - ZFS znode
- - - - - - - ZFS V0 ACL
1 512 512 512 512 1.00 0.50 ZFS plain file
3 1.50K 1.50K 3.00K 1K 1.00 2.99 ZFS directory
2 1K 1K 2K 1K 1.00 1.99 ZFS master node
2 1K 1K 2K 1K 1.00 1.99 ZFS delete queue
- - - - - - - zvol object
- - - - - - - zvol prop
- - - - - - - other uint8[]
- - - - - - - other uint64[]
- - - - - - - other ZAP
- - - - - - - persistent error log
1 128K 4.50K 13.5K 13.5K 28.44 13.43 SPA history
- - - - - - - SPA history offsets
- - - - - - - Pool properties
- - - - - - - DSL permissions
- - - - - - - ZFS ACL
- - - - - - - ZFS SYSACL
- - - - - - - FUID table
- - - - - - - FUID table size
1 512 512 1.50K 1.50K 1.00 1.49 DSL dataset next clones
- - - - - - - scrub work queue
50 454K 40.0K 101K 2.01K 11.35 100.00 Total
Here we can see the "zooming in" effect I described earlier. "BP" stands for "Block Pointer". The most common "Type" you'll see is "ZFS plain file", that is, a normal data file like an image or a text file... the data you care about.
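As a sanity check, the ratios in the summary are just quotients of the bp totals above. A quick sketch (using the numbers from this particular run) confirms the arithmetic:

```python
# Numbers from the "zdb -b testpool" summary above.
bp_count = 50
bp_logical = 464896    # sum of uncompressed (logical) block sizes
bp_physical = 40960    # sum of compressed (physical) block sizes
bp_allocated = 102912  # sum of on-disk allocated bytes (incl. ditto copies)

# "compression: 11.35" on the physical line is logical/physical.
print(round(bp_logical / bp_physical, 2))

# "avg: 2058" on the allocated line is allocated bytes per block pointer.
print(bp_allocated // bp_count)

# "compression: 4.52" on the allocated line is logical/allocated.
print(round(bp_logical / bp_allocated, 2))
```

Note that the allocated size is larger than the physical size because metadata blocks carry redundant (ditto) copies, which is why the two "compression" figures differ.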

Moving on to the second form, -d to output datasets and their objects. This is where introspection really occurs. With a simple -d we can see a recursive list of datasets, but as we turn up the verbosity (-dd) we zoom into the objects within the dataset, and then just get more and more detail about those objects.

root@quadra ~$ zdb -d testpool/dataset01
Dataset testpool/dataset01 [ZPL], ID 30, cr_txg 6, 18.5K, 5 objects

root@quadra ~$ zdb -dd testpool/dataset01
Dataset testpool/dataset01 [ZPL], ID 30, cr_txg 6, 18.5K, 5 objects

Object lvl iblk dblk lsize asize type
0 7 16K 16K 16K 14.0K DMU dnode
1 1 16K 512 512 1K ZFS master node
2 1 16K 512 512 1K ZFS delete queue
3 1 16K 512 512 1K ZFS directory
4 1 16K 512 512 512 ZFS plain file
So let's pause here. We can see the list of objects in my testpool/dataset01 by object ID. This is important because we can use those IDs to dig deeper on an individual object later. But for now, let's zoom in a little bit more (-dddd) on this dataset.

root@quadra ~$ zdb -dddd testpool/dataset01
Dataset testpool/dataset01 [ZPL], ID 30, cr_txg 6, 18.5K, 5 objects, rootbp [L0 DMU objset] 400L/200P DVA[0]=<0:12200:200> DVA[1]=<0:3014c00:200> fletcher4 lzjb LE contiguous birth=8 fill=5 cksum=a525c6edf:45d1513a8c8:ef844ac0e80e:22b9de6164dd69

Object lvl iblk dblk lsize asize type
0 7 16K 16K 16K 14.0K DMU dnode

Object lvl iblk dblk lsize asize type
1 1 16K 512 512 1K ZFS master node
microzap: 512 bytes, 6 entries

casesensitivity = 0
normalization = 0
DELETE_QUEUE = 2
ROOT = 3
VERSION = 3
utf8only = 0

Object lvl iblk dblk lsize asize type
2 1 16K 512 512 1K ZFS delete queue
microzap: 512 bytes, 0 entries

Object lvl iblk dblk lsize asize type
3 1 16K 512 512 1K ZFS directory
264 bonus ZFS znode
path /
uid 0
gid 0
atime Fri Oct 31 12:35:30 2008
mtime Fri Oct 31 12:35:51 2008
ctime Fri Oct 31 12:35:51 2008
crtime Fri Oct 31 12:35:30 2008
gen 6
mode 40755
size 3
parent 3
links 2
xattr 0
rdev 0x0000000000000000
microzap: 512 bytes, 1 entries

testfile01 = 4 (type: Regular File)

Object lvl iblk dblk lsize asize type
4 1 16K 512 512 512 ZFS plain file
264 bonus ZFS znode
path /testfile01
uid 0
gid 0
atime Fri Oct 31 12:35:51 2008
mtime Fri Oct 31 12:35:51 2008
ctime Fri Oct 31 12:35:51 2008
crtime Fri Oct 31 12:35:51 2008
gen 8
mode 100644
size 21
parent 3
links 1
xattr 0
rdev 0x0000000000000000
Now, this output is short because the dataset includes only a single file. In the real world this output will be gigantic and should be redirected to a file. When I did this on the dataset containing my home directory the output file was 750MB... it's a lot of data.

Look specifically at Object 4, a "ZFS plain file". Notice that I can see that file's pathname, uid, gid, a/m/c/crtime, mode, size, etc. This is where things can get really interesting!

In zdb's 3rd form above (-R) we can actually display the contents of a file, but we need its Device Virtual Address (DVA) and size to do so. To get that information, we can zoom in with -d a little further, but this time just on Object 4:

root@quadra /$ zdb -ddddd testpool/dataset01 4
Dataset testpool/dataset01 [ZPL], ID 30, cr_txg 6, 19.5K, 5 objects, rootbp [L0 DMU objset] 400L/200P DVA[0]=<0:172e000:200> DVA[1]=<0:460e000:200> fletcher4 lzjb LE contiguous birth=168 fill=5 cksum=a280728d9:448b88156d8:eaa0ad340c25:21f1a0a7d45740

Object lvl iblk dblk lsize asize type
4 1 16K 512 512 512 ZFS plain file
264 bonus ZFS znode
path /testfile01
uid 0
gid 0
atime Fri Oct 31 12:35:51 2008
mtime Fri Oct 31 12:35:51 2008
ctime Fri Oct 31 12:35:51 2008
crtime Fri Oct 31 12:35:51 2008
gen 8
mode 100644
size 21
parent 3
links 1
xattr 0
rdev 0x0000000000000000
Indirect blocks:
0 L0 0:11600:200 200L/200P F=1 B=8

segment [0000000000000000, 0000000000000200) size 512
Now, see that "Indirect block" 0? Following L0 (Level 0) is a tuple: "0:11600:200". This is the DVA and size; more specifically, it is the triple vdev:offset:size. We can use this information to request the block's contents directly.
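All three fields in that triple are hexadecimal. Here's a small decoding sketch (my own illustration, not part of zdb) applied to the triple above:

```python
def parse_dva(triple: str):
    """Split a zdb DVA triple 'vdev:offset:size' (hex fields) into ints."""
    vdev, offset, size = (int(field, 16) for field in triple.split(":"))
    return vdev, offset, size

vdev, offset, size = parse_dva("0:11600:200")
print(vdev, hex(offset), size)  # 0 0x11600 512
```

So the block lives on vdev 0 at byte offset 0x11600 and is 0x200 = 512 bytes long, matching the 512-byte segment reported above.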

And so, the -R form can display individual blocks from a device. To do so, we need to know the pool name, the vdev/offset (DVA), and the size. Given what we did above, we now know all three, so let's try it:

root@quadra /$ zdb -R testpool:0:11600:200
Found vdev: /zdev/disk002

testpool:0:11600:200
0 1 2 3 4 5 6 7 8 9 a b c d e f 0123456789abcdef
000000: 2073692073696854 6620747365742061 This is a test f
000010: 0000000a2e656c69 0000000000000000 ile.............
000020: 0000000000000000 0000000000000000 ................
000030: 0000000000000000 0000000000000000 ................
000040: 0000000000000000 0000000000000000 ................
000050: 0000000000000000 0000000000000000 ................
...
w00t! We can read the file contents!
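Notice that the dump prints 64-bit words, and the "LE" in the block pointer output tells us the data is little-endian; that's why the hex words look byte-reversed next to the ASCII column. A quick sketch decoding the first two words from the dump above:

```python
import struct

# First two 64-bit words from the zdb -R dump above.
words = (0x2073692073696854, 0x6620747365742061)

# "<QQ" packs them as two little-endian unsigned 64-bit values.
data = struct.pack("<QQ", *words)
print(data.decode("ascii"))  # This is a test f
```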

You'll notice in the zdb syntax ("zdb -h") that this form accepts flags as well. We can find these in the zdb source. The most interesting is the "r" flag, which rather than displaying the data as above, dumps it in raw form to STDERR.

So why is this useful? Try this on for size:

root@quadra /$ rm /testpool/dataset01/testfile01
root@quadra /$ sync;sync
root@quadra /$ zdb -dd testpool/dataset01
Dataset testpool/dataset01 [ZPL], ID 30, cr_txg 6, 18.0K, 4 objects

Object lvl iblk dblk lsize asize type
0 7 16K 16K 16K 14.0K DMU dnode
1 1 16K 512 512 1K ZFS master node
2 1 16K 512 512 1K ZFS delete queue
3 1 16K 512 512 1K ZFS directory

....... THE FILE IS REALLY GONE! ..........

root@quadra /$ zdb -R testpool:0:11600:200:r 2> /tmp/output
Found vdev: /zdev/disk002
root@quadra /$ ls -lh /tmp/output
-rw-r--r-- 1 root root 512 Nov 1 01:54 /tmp/output
root@quadra /$ cat /tmp/output
This is a test file.
How sweet is that! We delete a file, verify with zdb -dd that it really and truly is gone, and then bring it back based on its DVA. Super sweet!

Now, before you get overly excited, some things to note... firstly, if you delete a file in the real world you probably don't have its DVA and size already recorded, so you're screwed. Also, notice that the original file was 21 bytes, but the "recovered" file is 512... it's been padded, so if you recovered a file and tried using an MD5 hash or something to verify the content, it wouldn't match even though the data was valid. In other words, the best "undelete" option is snapshots... they are quick and easy; use them. Using zdb for file recovery isn't practical.
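To illustrate that padding problem with made-up data: the recovered block is the original bytes followed by zero padding, so a checksum of the raw block never matches one taken of the original file. Trimming trailing NULs happens to work here, but only because this particular file doesn't end in a zero byte:

```python
import hashlib

original = b"This is a test file.\n"      # 21 bytes, as zdb reported
recovered = original.ljust(512, b"\x00")  # padded out to the 512-byte block

# Checksums of padded vs. original content don't match.
print(hashlib.md5(original).hexdigest() == hashlib.md5(recovered).hexdigest())

# Stripping the NUL padding recovers the original -- in this lucky case.
trimmed = recovered.rstrip(b"\x00")
print(trimmed == original)
```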

I recently discovered and used this method to deal with a server that suffered extensive corruption as a result of a shitty (Sun Adaptec rebranded STK) RAID controller gone berserk following a routine disk replacement. I had several "corrupt" files that I could not read or reach; if I tried to do so I'd get a long pause, lots of errors to syslog, and then an "I/O Error" return. Hopeless: this is a "restore from backups" situation. Regardless, I wanted to learn from the experience. Here is an example of the result:

[root@server ~]$ ls -l /xxxxxxxxxxxxxx/images/logo.gif
/xxxxxxxxxxxxxx/images/logo.gif: I/O error

[root@server ~]$ zdb -ddddd pool/xxxxx 181359
Dataset pool/xxx [ZPL], ID 221, cr_txg 1281077, 3.76G, 187142 objects, rootbp [L0 DMU objset] 400L/200P DVA[0]=<0:1803024c00:200> DVA[1]=<0:45007ade00:200> fletcher4 lzjb LE contiguous birth=4543000 fill=187142 cksum=8cc6b0fec:3a1b508e8c0:c36726aec831:1be1f0eee0e22c

Object lvl iblk dblk lsize asize type
181359 1 16K 1K 1K 1K ZFS plain file
264 bonus ZFS znode
path /xxxxxxxxxxxxxx/images/logo.gif
atime Wed Aug 27 07:42:17 2008
mtime Wed Apr 16 01:19:06 2008
ctime Thu May 1 00:18:34 2008
crtime Thu May 1 00:18:34 2008
gen 1461218
mode 100644
size 691
parent 181080
links 1
xattr 0
rdev 0x0000000000000000
Indirect blocks:
0 L0 0:b043f0c00:400 400L/400P F=1 B=1461218

segment [0000000000000000, 0000000000000400) size 1K

[root@server ~]$ zdb -R pool:0:b043f0c00:400:r 2> out
Found vdev: /dev/dsk/c0t1d0s0
[root@server ~]$ file out
out: GIF file, v89
Because real data is involved I had to cover up most of the above, but you can see how the methods we learned were used to get a positive result. Normal means of accessing the file failed miserably, but using zdb -R I dumped the file out. As verification I opened the GIF in an image viewer and sure enough it looks perfect!

This is a lot to digest, but this is about as simple a primer to zdb as you're going to find. Hopefully I've given you a solid grasp of the fundamentals so that you can experiment on your own.

Where do you go from here? As noted before, I recommend you now check out the following:

Max Bruning's ZFS On-Disk Format Using mdb and zdb: Video presentation from the OpenSolaris Developer Conference in Prague on June 28, 2008. An absolute must watch for the hardcore ZFS enthusiast. Warning, may cause your head to explode!
Marcelo Leal's 5-part ZFS Internals series. Leal has tremendous courage to post these; he's doing great work! Read it!
Good luck and happy zdb'ing.... don't tell Sun. :)

Posted in solaris | Leave a comment

clustermode export-policies and rules

policies-vs1 (click image)

Posted in netapp | Leave a comment

NPIV N_port ID Virtualization

Scott Lowe link

Understanding NPIV and NPV
Friday, November 27, 2009, by slowe
Two technologies that seem to have come to the fore recently are NPIV (N_Port ID Virtualization) and NPV (N_Port Virtualization). Judging just by the names, you might think that these two technologies are the same thing. While they are related in some aspects and can be used in a complementary way, they are quite different. What I’d like to do in this post is help explain these two technologies, how they are different, and how they can be used. I hope to follow up in future posts with some hands-on examples of configuring these technologies on various types of equipment.

First, though, I need to cover some basics. This is unnecessary for those of you that are Fibre Channel experts, but for the rest of the world it might be useful:

N_Port: An N_Port is an end node port on the Fibre Channel fabric. This could be an HBA (Host Bus Adapter) in a server or a target port on a storage array.
F_Port: An F_Port is a port on a Fibre Channel switch that is connected to an N_Port. So, the port into which a server’s HBA or a storage array’s target port is connected is an F_Port.
E_Port: An E_Port is a port on a Fibre Channel switch that is connected to another Fibre Channel switch. The connection between two E_Ports forms an Inter-Switch Link (ISL).
There are other types of ports as well—NL_Port, FL_Port, G_Port, TE_Port—but for the purposes of this discussion these three will get us started. With these definitions in mind, I’ll start by discussing N_Port ID Virtualization (NPIV).

N_Port ID Virtualization (NPIV)

Normally, an N_Port would have a single N_Port_ID associated with it; this N_Port_ID is a 24-bit address assigned by the Fibre Channel switch during the FLOGI process. The N_Port_ID is not the same as the World Wide Port Name (WWPN), although there is typically a one-to-one relationship between WWPN and N_Port_ID. Thus, for any given physical N_Port, there would be exactly one WWPN and one N_Port_ID associated with it.
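For reference, that 24-bit N_Port_ID is conventionally read as three bytes: Domain ID (the switch), Area, and Port. A quick sketch of the split (the example FCID value is illustrative, not from any real fabric):

```python
def split_fcid(fcid: int):
    """Split a 24-bit Fibre Channel N_Port_ID into (domain, area, port)."""
    return (fcid >> 16) & 0xFF, (fcid >> 8) & 0xFF, fcid & 0xFF

# e.g. FCID 0x010200: domain 1, area 2, port 0
print(split_fcid(0x010200))  # (1, 2, 0)
```

The Domain ID byte is the same one discussed under NPV below: every switch in the fabric consumes one, which is why the supply is limited.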

What NPIV does is allow a single physical N_Port to have multiple WWPNs, and therefore multiple N_Port_IDs, associated with it. After the normal FLOGI process, an NPIV-enabled physical N_Port can subsequently issue additional commands to register more WWPNs and receive more N_Port_IDs (one for each WWPN). The Fibre Channel switch must also support NPIV, as the F_Port on the other end of the link would “see” multiple WWPNs and multiple N_Port_IDs coming from the host and must know how to handle this behavior.

Once all the applicable WWPNs have been registered, each of these WWPNs can be used for SAN zoning or LUN presentation. There is no distinction between the physical WWPN and the virtual WWPNs; they all behave in exactly the same fashion and you can use them in exactly the same ways.

So why might this functionality be useful? Consider a virtualized environment, where you would like to be able to present a LUN via Fibre Channel to a specific virtual machine only:

Without NPIV, it’s not possible because the N_Port on the physical host would have only a single WWPN (and N_Port_ID). Any LUNs would have to be zoned and presented to this single WWPN. Because all VMs would be sharing the same WWPN on the one single physical N_Port, any LUNs zoned to this WWPN would be visible to all VMs on that host because all VMs are using the same physical N_Port, same WWPN, and same N_Port_ID.
With NPIV, the physical N_Port can register additional WWPNs (and N_Port_IDs). Each VM can have its own WWPN. When you build SAN zones and present LUNs using the VM-specific WWPN, then the LUNs will only be visible to that VM and not to any other VMs.
Virtualization is not the only use case for NPIV, although it is certainly one of the easiest to understand.

Now that I’ve discussed NPIV, I’d like to turn the discussion to N_Port Virtualization (NPV).

N_Port Virtualization

While NPIV is primarily a host-based solution, NPV is primarily a switch-based technology. It is designed to reduce switch management and overhead in larger SAN deployments. Consider that every Fibre Channel switch in a fabric needs a different domain ID, and that the total number of domain IDs in a fabric is limited. In some cases, this limit can be fairly low depending upon the devices attached to the fabric. The problem, though, is that you often need to add Fibre Channel switches in order to scale the size of your fabric. There is therefore an inherent conflict between trying to reduce the overall number of switches in order to keep the domain ID count low while also needing to add switches in order to have a sufficiently high port count. NPV is intended to help address this problem.

NPV introduces a new type of Fibre Channel port, the NP_Port. The NP_Port connects to an F_Port and acts as a proxy for other N_Ports on the NPV-enabled switch. Essentially, the NP_Port “looks” like an NPIV-enabled host to the F_Port on the other end. An NPV-enabled switch will register additional WWPNs (and receive additional N_Port_IDs) via NPIV on behalf of the N_Ports connected to it. The physical N_Ports don’t have any knowledge this is occurring and don’t need any support for it; it’s all handled by the NPV-enabled switch.

Obviously, this means that the upstream Fibre Channel switch must support NPIV, since the NP_Port “looks” and “acts” like an NPIV-enabled host to the upstream F_Port. Additionally, because the NPV-enabled switch now looks like an end host, it no longer needs a domain ID to participate in the Fibre Channel fabric. Using NPV, you can add switches and ports to your fabric without adding domain IDs.

So why is this functionality useful? There is the immediate benefit of being able to scale your Fibre Channel fabric without having to add domain IDs, yes, but in what sorts of environments might this be particularly useful? Consider a blade server environment, like an HP c7000 chassis, where there are Fibre Channel switches in the back of the chassis. By using NPV on these switches, you can add them to your fabric without having to assign a domain ID to each and every one of them.

Here’s another example. Consider an environment where you are mixing different types of Fibre Channel switches and are concerned about interoperability. As long as there is NPIV support, you can enable NPV on one set of switches. The NPV-enabled switches will then act like NPIV-enabled hosts, and you won’t have to worry about connecting E_Ports and creating ISLs between different brands of Fibre Channel switches.

I hope you’ve found this explanation of NPIV and NPV helpful and accurate. In the future, I hope to follow up with some additional posts—including diagrams—that show how these can be used in action. Until then, feel free to post any questions, thoughts, or corrections in the comments below. Your feedback is always welcome!

Disclosure: Some industry contacts at Cisco Systems provided me with information regarding NPV and its operation and behavior, but this post is neither sponsored nor endorsed by anyone.

Posted in Virtualization | Leave a comment

Ontap 8.2 licenses

CLUSTERED SIMULATE ONTAP LICENSES
+++++++++++++++++++++++++++++++++

These are the licenses that you use with the clustered Data ONTAP version
of Simulate ONTAP to enable Data ONTAP features.

There are four groups of licenses in this file:

- cluster base license
- feature licenses for the ESX build
- feature licenses for the non-ESX build
- feature licenses for the second node of a 2-node cluster

Cluster Base License (Serial Number 1-80-000008)
================================================

You use the cluster base license when setting up the first simulator in a cluster.

Cluster Base license = SMKQROWJNQYQSDAAAAAAAAAAAAAA

Clustered Data ONTAP Feature Licenses
=====================================

You use the feature licenses to enable unique Data ONTAP features on your simulator.

Licenses for the ESX build (Serial Number 4082368511)
-----------------------------------------------------

Use these licenses with the VMware ESX build.

Feature License Code Description
------------------- ---------------------------- --------------------------------------------

CIFS CAYHXPKBFDUFZGABGAAAAAAAAAAA CIFS protocol
FCP APTLYPKBFDUFZGABGAAAAAAAAAAA Fibre Channel Protocol
FlexClone WSKTAQKBFDUFZGABGAAAAAAAAAAA FlexClone
Insight_Balance CGVTEQKBFDUFZGABGAAAAAAAAAAA OnCommand Insight and Balance products
iSCSI OUVWXPKBFDUFZGABGAAAAAAAAAAA iSCSI protocol
NFS QFATWPKBFDUFZGABGAAAAAAAAAAA NFS protocol
SnapLock UHGXBQKBFDUFZGABGAAAAAAAAAAA SnapLock Compliance
SnapLock_Enterprise QLXEEQKBFDUFZGABGAAAAAAAAAAA SnapLock Enterprise
SnapManager GCEMCQKBFDUFZGABGAAAAAAAAAAA SnapManager and SnapDrive products
SnapMirror KYMEAQKBFDUFZGABGAAAAAAAAAAA SnapMirror, including synchronous SnapMirror
SnapProtect SWBBDQKBFDUFZGABGAAAAAAAAAAA SnapProtect Applications
SnapRestore YDPPZPKBFDUFZGABGAAAAAAAAAAA SnapRestore
SnapVault INIIBQKBFDUFZGABGAAAAAAAAAAA SnapVault primary and secondary

Licenses for the non-ESX build (Serial Number 4082368507)
---------------------------------------------------------

Use these licenses with the VMware Workstation, VMware Player, and VMware Fusion build.

Feature License Code Description
------------------- ---------------------------- --------------------------------------------

CIFS YVUCRRRRYVHXCFABGAAAAAAAAAAA CIFS protocol
FCP WKQGSRRRYVHXCFABGAAAAAAAAAAA Fibre Channel Protocol
FlexClone SOHOURRRYVHXCFABGAAAAAAAAAAA FlexClone
Insight_Balance YBSOYRRRYVHXCFABGAAAAAAAAAAA OnCommand Insight and Balance products
iSCSI KQSRRRRRYVHXCFABGAAAAAAAAAAA iSCSI protocol
NFS MBXNQRRRYVHXCFABGAAAAAAAAAAA NFS protocol
SnapLock QDDSVRRRYVHXCFABGAAAAAAAAAAA SnapLock Compliance
SnapLock_Enterprise MHUZXRRRYVHXCFABGAAAAAAAAAAA SnapLock Enterprise
SnapManager CYAHWRRRYVHXCFABGAAAAAAAAAAA SnapManager and SnapDrive products
SnapMirror GUJZTRRRYVHXCFABGAAAAAAAAAAA SnapMirror, including synchronous SnapMirror
SnapProtect OSYVWRRRYVHXCFABGAAAAAAAAAAA SnapProtect Applications
SnapRestore UZLKTRRRYVHXCFABGAAAAAAAAAAA SnapRestore
SnapVault EJFDVRRRYVHXCFABGAAAAAAAAAAA SnapVault primary and secondary

Licenses for the second node in a cluster (Serial Number 4034389062)
--------------------------------------------------------------------

Use these licenses with the second simulator in a cluster (either the ESX or non-ESX build).

Feature License Code Description
------------------- ---------------------------- --------------------------------------------

CIFS MHEYKUNFXMSMUCEZFAAAAAAAAAAA CIFS protocol
FCP KWZBMUNFXMSMUCEZFAAAAAAAAAAA Fibre Channel Protocol
FlexClone GARJOUNFXMSMUCEZFAAAAAAAAAAA FlexClone
Insight_Balance MNBKSUNFXMSMUCEZFAAAAAAAAAAA OnCommand Insight and Balance products
iSCSI YBCNLUNFXMSMUCEZFAAAAAAAAAAA iSCSI protocol
NFS ANGJKUNFXMSMUCEZFAAAAAAAAAAA NFS protocol
SnapLock EPMNPUNFXMSMUCEZFAAAAAAAAAAA SnapLock Compliance
SnapLock_Enterprise ATDVRUNFXMSMUCEZFAAAAAAAAAAA SnapLock Enterprise
SnapManager QJKCQUNFXMSMUCEZFAAAAAAAAAAA SnapManager and SnapDrive products
SnapMirror UFTUNUNFXMSMUCEZFAAAAAAAAAAA SnapMirror, including synchronous SnapMirror
SnapProtect CEIRQUNFXMSMUCEZFAAAAAAAAAAA SnapProtect Applications
SnapRestore ILVFNUNFXMSMUCEZFAAAAAAAAAAA SnapRestore
SnapVault SUOYOUNFXMSMUCEZFAAAAAAAAAAA SnapVault primary and secondary

Posted in netapp | Leave a comment

solaris 11 automated installer grub entries

When running installadm create-client,
the client grub.conf.01macaddress file is
generated from the following file:
/var/ai/service/"service- name"/menu.conf

to change the menu-order from text-installer to
automated-installer change the followin entry:

[meta]
order = SolarisNetBootInstance|0, SolarisNetBootInstance|1

to
[meta]
order = SolarisNetBootInstance|1, SolarisNetBootInstance|0

Now the first entry will be the automated installer.

Posted in solaris | Leave a comment

solaris 11 zones repository IPS

ips timfoster ips

Posted in Uncategorized | Leave a comment

solaris 11 bad login count

How many bad logins?

With additional thanks to Lambert Rots (ASP4ALL)

original url

What's Up With the flag Field in /etc/shadow on Solaris 11.1? 03Jan13
If you're running Solaris 11.1, and you happen check your /etc/shadow file, you may notice there's been a change to the flags field (the last one)...

bob:$5$GKM8z8qP$ho7oJF3ceAoFo9sH5f.jy4UP16TvzoO7XmSYS81o6QA:15708::::::9874

Prior to Solaris 11.1, this field contained only a few easy-to-read digits, which the man page explained as...

flag Failed login count in low order four bits;
remainder reserved for future use, set to zero.
... and this started at 0 and incremented by one every time there was a failed login attempt. Now I'll let you in on a secret: the above excerpt was actually taken from Solaris 11.1, which means the documentation hasn't been updated to reflect what you now see in the shadow file. That's correct.

The documentation has deliberately NOT been updated at this stage (Jan 2013) as this is still an unstable/private interface and thus not really ready for public consumption. That said, you can easily work out what the rest of the information stored in this field is by looking at the /usr/include/shadow.h file...

/*
* The spwd structure is used in the retreval of information from
* /etc/shadow. It is used by routines in the libos library.
*/
struct spwd {
char *sp_namp; /* user name */
char *sp_pwdp; /* user password */
int sp_lstchg; /* password lastchanged date */
int sp_min; /* minimum number of days between password changes */
int sp_max; /* number of days password is valid */
int sp_warn; /* number of days to warn user to change passwd */
int sp_inact; /* number of days the login may be inactive */
int sp_expire; /* date when the login is no longer valid */
unsigned int sp_flag; /* currently low 15 bits are used */

/* low 4 bits of sp_flag for counting failed login attempts */
#define FAILCOUNT_MASK 0xF
/* next 11 bits of sp_flag for precise time of last change */
#define TIME_MASK 0x7FF0
};
And there's your answer. The last line tells us that the rest of the flag field is used to store the time of the last password change, with the date of that change being stored in the lastchg (3rd) field.

So how do you use that figure?

Well, before I tell you, I must warn...

This is an unstable interface. It can and will most likely change at any time without any notice, so do NOT come to rely on this information.

Right, with that out of the way, let's see how we can interpret this field.

From the shadow.h file we know the low 4 bits are the number of failed login attempts, which can be obtained as follows (all commands are run at a Bash shell prompt):

$ echo "obase=2;9874" | bc
10011010010010
$
The last 4 bits are 0010. It should be obvious how many failed login attempts there have been, but let's switch these back to decimal to be sure:

$ echo "ibase=2;0010" | bc
2
$
You can also mask them off directly: echo $((9874 & 15)) likewise prints 2.
Now for the next 11 bits. To get these we shift up 4 bits:

$ a=9874;((a>>=4));echo $a
617
$
This doesn't tell us much by itself, but I can tell you it is the number of minutes into the day at which the password was changed, so let's print this number in base 60:

$ echo "obase=60;617" | bc
10 17
$
Which is correct. I changed this user's password today at 10h17, aka 10:17am.

The last two steps can be put into a single command:

$ a=9874;((a>>=4));echo "obase=60;$a" | bc
10 17
$
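The whole decode can also be sketched in a few lines, using the masks straight from shadow.h (again: unstable interface, illustration only):

```python
FAILCOUNT_MASK = 0xF    # low 4 bits: failed login attempts
TIME_MASK = 0x7FF0      # next 11 bits: minutes into the day of last change

flag = 9874  # value from the shadow entry above

failed = flag & FAILCOUNT_MASK
minutes = (flag & TIME_MASK) >> 4
print(failed, "%02d:%02d" % (minutes // 60, minutes % 60))  # 2 10:17
```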
And there you have it. That is what is going on with the flag field in the /etc/shadow file on Solaris 11.1 and let me reiterate...

This is an unstable interface. It can and will most likely change at any time without any notice, so do NOT come to rely on this information.

I am providing this information just for information's sake and to provide you with a little explanation of what you might see.

Posted in solaris | Leave a comment

solaris 11 zfs recordsize

oracle blog

Posted in solaris, Uncategorized | Leave a comment

solaris 11 exercise ips (4) add local repository

(with thanks to LAM)

In this exercise you will setup a local repository

and add a testpackage to the repository.

You should have a zpool mounted on /software

1. create directory
mkdir /software/site

2. create repository
pkgrepo create /software/site

3. check
pkgrepo info -s /software/site
(it should show online)

4. in /software/site/pkg5.repository
set prefix = site

5. add smf entries
svccfg -s pkg/server add site
svccfg
select pkg/server:site
setprop pkg/inst_root=/software/site
setprop pkg/port=10083
exit

svcadm enable pkg/server:site
svcs pkg/server:site
STATE STIME FMRI
online 18:27:04 svc:/application/pkg/server:site

6. rebuild the repository
pkgrepo rebuild -s /software/site

7. set publisher
pkg set-publisher -g file:///software/site site

8. list your publishers
pkg publisher

9. add softwarepackages to new repository

mkdir /var/tmp/software
echo "a package" > /var/tmp/software/testpkg

eval `pkgsend -s file:///software/site open testpkg@1.0.1`

(pkgsend open prints an export line which the eval then runs, setting the
transaction ID, e.g.:
export PKG_TRANS_ID=1399221077_pkg%3A%2F%2Fsite%2Ftestpkg%401.0.1%2C5.11%3A20140504T163117Z)

pkgsend -s file:///software/site add dir mode=0555 owner=root group=bin path=/export/testpkg

pkgsend -s file:///software/site add file /var/tmp/software/testpkg mode=0555 owner=root group=bin path=/export/testpkg/testpkg

pkgsend -s file:///software/site add set name=description value="testpkg"

pkgsend -s file:///software/site close
pkg://site/testpkg@1.0.1,5.11:20140504T163117Z
PUBLISHED

pkgrepo info -s file:///software/site
PUBLISHER PACKAGES STATUS UPDATED
site 1 online 2014-05-04T16:37:26.996595Z

pkg install testpkg
Packages to install: 1
Create boot environment: No
Create backup boot environment: No

DOWNLOAD PKGS FILES XFER (MB) SPEED
Completed 1/1 1/1 0.0/0.0 0B/s

PHASE ITEMS
Installing new actions 4/4
Updating package state database Done
Updating image state Done
Creating fast lookup database Done

pkg list testpkg
NAME (PUBLISHER) VERSION IFO
testpkg 1.0.1 i--

Posted in solaris | Leave a comment

solaris 11 change hostname and ip

1. change the identity:node

root@sol11-1:/tmp# svccfg
svc:> select system/identity
svc:/system/identity> select system/identity:node
svc:/system/identity:node> listprop config/nodename
config/nodename astring sol11-1
svc:/system/identity:node> setprop config/nodename="sol11"
svc:/system/identity:node> refresh
svc:/system/identity:node> exit

2. nwam should be disabled.

root@sol11-1:/tmp# svcs svc:/network/physical:nwam
STATE STIME FMRI
disabled Sep_22 svc:/network/physical:nwam

3. list the current address.

root@sol11-1:/tmp# ipadm show-addr
ADDROBJ TYPE STATE ADDR
lo0/v4 static ok 127.0.0.1/8
net0/v4 static ok 192.168.4.22/24
lo0/v6 static ok ::1/128
net0/v6 addrconf ok fe80::a00:27ff:febd:7496/10
root@vbox3:/tmp# ipadm delete-addr net0/v4
root@vbox3:/tmp# ipadm create-addr -T static -a 192.168.4.23/24 net0/v4
root@vbox3:/tmp# ipadm show-addr
ADDROBJ TYPE STATE ADDR
lo0/v4 static ok 127.0.0.1/8
net0/v4 static ok 192.168.4.23/24
lo0/v6 static ok ::1/128
net0/v6 addrconf ok fe80::a00:27ff:febd:7496/10
root@vbox3:/tmp# exit

Posted in solaris | Leave a comment

solaris 11 exercise ips (10) add second pkg server instance

In this exercise you will setup a second instance of the application/pkg/server service.

This means that you will have a second environment on the
same machine from which you can publish your own software
packages.

1. Add second instance.
# svccfg -s pkg/server add s11ReleaseRepo

2. Set the service's port to 10081; set the inst_root and set readonly to true.
# svccfg -s pkg/server:s11ReleaseRepo setprop pkg/port=10081
# svccfg -s pkg/server:s11ReleaseRepo setprop pkg/inst_root=/export/S11ReleaseRepo
# svccfg -s pkg/server:s11ReleaseRepo setprop pkg/readonly=true

3. Set the proxy_base and the number of threads.
# svccfg -s pkg/server:s11ReleaseRepo setprop pkg/proxy_base = astring: http://pkg.example.com/s11ReleaseRepo
# svccfg -s pkg/server:s11ReleaseRepo setprop pkg/threads = 200

4. Refresh and enable
# svcadm refresh application/pkg/server:s11ReleaseRepo
# svcadm enable application/pkg/server:s11ReleaseRepo

Posted in solaris | Leave a comment

solaris 11 ips repository

original link: blog

Posted in solaris | Leave a comment

solaris 11 exercise recover a lost rootpool

here

Posted in solaris | Leave a comment

solaris 11 exercise users

Log in to your system and switch user to root.

user1@solaris11-1:~$ su -
Password:
Oracle Corporation SunOS 5.11 11.0 November 2011
You have new mail.
root@solaris11-1:~#

1. Add a TERMINAL setting and alias to /etc/profile and test it.

root@solaris11-1:~# echo "export TERM=vt100" >> /etc/profile
root@solaris11-1:~# echo "alias c=clear" >> /etc/profile
root@solaris11-1:~# su - user1
Oracle Corporation SunOS 5.11 11.0 November 2011
user1@solaris11-1:~$ c

2. Add a testuser to your system.

root@solaris11-1:~# useradd -m -d /export/home/testuser testuser
80 blocks
root@solaris11-1:~#

3. Add an alias to the .profile of the testuser, and test it.

root@solaris11-1:~# echo "alias ll='ls -l'" >> /export/home/testuser/.profile
root@solaris11-1:~# su - testuser
Oracle Corporation SunOS 5.11 11.0 November 2011
testuser@solaris11-1:~$ ll
total 6
-rw-r--r-- 1 testuser staff 165 Apr 30 11:29 local.cshrc
-rw-r--r-- 1 testuser staff 170 Apr 30 11:29 local.login
-rw-r--r-- 1 testuser staff 130 Apr 30 11:29 local.profile
testuser@solaris11-1:~$ exit

4. Set user-quota for the testuser.

root@solaris11-1:~# zfs set quota=1M rpool/export/home/testuser
root@solaris11-1:~# zfs get quota rpool/export/home/testuser
NAME PROPERTY VALUE SOURCE
rpool/export/home/testuser quota 1M local
root@solaris11-1:~# zfs userspace rpool/export/home/testuser
TYPE NAME USED QUOTA
POSIX User root 1.50K none
POSIX User testuser 8K none
root@solaris11-1:~#

5. Switch user to the testuser and create a file to exceed the quota.

root@solaris11-1:~# su - testuser
testuser@solaris11-1:~$ mkfile 2m file1
file1: initialized 917504 of 2097152 bytes: Disc quota exceeded
testuser@solaris11-1:~$ exit

Posted in solaris | Leave a comment

solaris 11 exercise processes

Exercise Managing System Processes and Managing Tasks

1. Log in to your system and switch user to root.

user1@solaris11-1:~$ su -
Password:
Oracle Corporation SunOS 5.11 11.0 November 2011
You have new mail.
root@solaris11-1:~#

2. Use the ps command to view your processes.

root@solaris11-1:~# ps
PID TTY TIME CMD
1435 pts/1 0:00 ps
1431 pts/1 0:00 bash
1430 pts/1 0:00 su
root@solaris11-1:~#

What is your shell?
Is the 'ps' command also running?

3. Use the ps command to display all processes one page at a time.

root@solaris11-1:~# ps -ef|more
UID PID PPID C STIME TTY TIME CMD
root 0 0 0 10:44:42 ? 0:01 sched
root 5 0 0 10:44:41 ? 0:13 zpool-rpool
root 6 0 0 10:44:43 ? 0:00 kmem_task
root 1 0 0 10:44:43 ? 0:00 /usr/sbin/init
root 2 0 0 10:44:43 ? 0:00 pageout
root 3 0 0 10:44:43 ? 0:00 fsflush
root 7 0 0 10:44:43 ? 0:00 intrd
(snipped)

4. What does the prstat command do?

root@solaris11-1:~# prstat
prstat: failed to load terminal info, defaulting to -c option
Please wait...
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
939 root 38M 7596K run 59 0 0:00:00 0.2% pkg.depotd/64
1213 gdm 136M 27M sleep 59 0 0:00:00 0.1% gdm-simple-gree/1
1438 root 11M 3140K cpu0 59 0 0:00:00 0.1% prstat/1
911 root 31M 23M sleep 59 0 0:00:01 0.0% Xorg/3
1431 root 10M 2380K sleep 49 0 0:00:00 0.0% bash/1
1423 user1 18M 5704K run 59 0 0:00:00 0.0% sshd/1
45 netcfg 3784K 2764K sleep 59 0 0:00:00 0.0% netcfgd/4
129 root 13M 2796K sleep 59 0 0:00:00 0.0% syseventd/18
461 root 8860K 1180K sleep 59 0 0:00:00 0.0% cron/1
70 daemon 14M 4624K sleep 59 0 0:00:00 0.0% kcfd/3
1116 gdm 3768K 1396K sleep 59 0 0:00:00 0.0% dbus-launch/1
46 root 3868K 2560K sleep 59 0 0:00:00 0.0% dlmgmtd/6
118 root 2124K 1176K sleep 59 0 0:00:00 0.0% pfexecd/3
13 root 21M 20M sleep 59 0 0:00:12 0.0% svc.configd/47
11 root 21M 12M sleep 59 0 0:00:03 0.0% svc.startd/12
Total: 101 processes, 935 lwps, load averages: 0.05, 0.41, 0.25


5. Use prstat to view the highest CPU usage every 5 seconds 10 times.

root@solaris11-1:~# prstat -s cpu 5 10
prstat: failed to load terminal info, defaulting to -c option
Please wait...
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
939 root 38M 7596K sleep 59 0 0:00:00 0.2% pkg.depotd/64
1213 gdm 136M 27M sleep 59 0 0:00:00 0.1% gdm-simple-gree/1
911 root 31M 23M sleep 59 0 0:00:01 0.0% Xorg/3
1441 root 11M 3040K cpu0 59 0 0:00:00 0.0% prstat/1
(snipped)
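
On systems without prstat, the sort-by-CPU view can be approximated with POSIX tools; a rough sketch (not prstat's exact columns):

```shell
# Approximate 'prstat -s cpu': sort ps output on the %CPU column,
# descending, and keep the top consumers.
ps -eo pid,pcpu,comm | sort -k2 -rn | head -5
```

The header row carries no numeric %CPU value, so `sort -rn` pushes it to the bottom of the listing.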

6. What is the difference between -s and -S in the following command?

root@solaris11-1:~# prstat -S rss 2 2
prstat: failed to load terminal info, defaulting to -c option
Please wait...
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
697 root 0K 0K sleep 60 - 0:00:00 0.0% nfsd_kproc/2
542 root 0K 0K sleep 60 - 0:00:00 0.0% lockd_kproc/2
229 root 0K 0K sleep 99 -20 0:00:00 0.0% zpool-p1/136
248 root 0K 0K sleep 99 -20 0:00:00 0.0% zpool-software/136
533 root 0K 0K sleep 60 - 0:00:00 0.0% nfs4cbd_kproc/2
181 root 0K 0K sleep 99 -20 0:00:00 0.0% zpool-kanweg/136

root@solaris11-1:~# prstat -s rss 2 2
prstat: failed to load terminal info, defaulting to -c option
Please wait...
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
1206 gdm 141M 31M sleep 59 0 0:00:00 0.0% gnome-settings-/1
1213 gdm 136M 27M sleep 59 0 0:00:01 0.1% gdm-simple-gree/1
911 root 31M 23M sleep 59 0 0:00:01 0.0% Xorg/3
13 root 21M 20M sleep 59 0 0:00:12 0.0% svc.configd/47
1212 gdm 129M 18M sleep 59 0 0:00:00 0.0% gnome-power-man/1

(-s sorts descending on the given key, -S sorts ascending, as the two RSS listings above show)

7. Check whether sendmail is running.

root@solaris11-1:~# pgrep -l sendmail
1416 sendmail
1418 sendmail

8. Kill sendmail.

root@solaris11-1:~# pkill sendmail
root@solaris11-1:~# pgrep -l sendmail
1464 sendmail
1460 sendmail
root@solaris11-1:~#
(note the new PIDs: sendmail is an SMF-managed service, so it is restarted automatically after being killed)

9. Start a program in the background that runs for 10000 seconds.

root@solaris11-1:~# sleep 10000 &
[1] 1471

10. Suspend the sleep program, then resume it in the background.

root@solaris11-1:~# kill -STOP %1
root@solaris11-1:~# jobs
[1]+ Stopped sleep 10000
root@solaris11-1:~# bg %1
[1]+ sleep 10000 &
root@solaris11-1:~#
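
The %1 job-control syntax only works interactively; in a script the same suspend/resume cycle is done with the PID directly. A sketch (process state letters as reported by ps):

```shell
# Suspend and resume a background process by PID.
sleep 100 &
pid=$!
kill -STOP "$pid"           # suspend: state becomes T (stopped)
ps -o state= -p "$pid"
kill -CONT "$pid"           # resume: state returns to S (sleeping)
ps -o state= -p "$pid"
kill "$pid"                 # clean up
```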

11. List your scheduled tasks.

root@solaris11-1:~# crontab -l
#ident "%Z%%M% %I% %E% SMI"
#
# Copyright 2007 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
# The root crontab should be used to perform accounting data collection.
#
#
10 3 * * * /usr/sbin/logadm
15 3 * * 0 [ -x /usr/lib/fs/nfs/nfsfind ] && /usr/lib/fs/nfs/nfsfind
30 3 * * * [ -x /usr/lib/gss/gsscred_clean ] && /usr/lib/gss/gsscred_clean
30 0,9,12,18,21 * * * /usr/lib/update-manager/update-refresh.sh
root@solaris11-1:~#

12. Create a script that sleeps for 30 seconds and add it to your tasks. The
script should run every minute.

root@solaris11-1:~# echo "/usr/bin/sleep 30&" > /usr/bin/sleeper
root@solaris11-1:~# chmod +x /usr/bin/sleeper
root@solaris11-1:~# crontab -e
#ident "%Z%%M% %I% %E% SMI"
#
# Copyright 2007 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
# The root crontab should be used to perform accounting data collection.
#
#
10 3 * * * /usr/sbin/logadm
15 3 * * 0 [ -x /usr/lib/fs/nfs/nfsfind ] && /usr/lib/fs/nfs/nfsfind
30 3 * * * [ -x /usr/lib/gss/gsscred_clean ] && /usr/lib/gss/gsscred_clean
30 0,9,12,18,21 * * * /usr/lib/update-manager/update-refresh.sh
* * * * * /usr/bin/sleeper
:wq!

Check whether the task gets started.
root@solaris11-1:~# pgrep -lf sleep
1513 /usr/bin/sleep 30
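
A crontab entry is five time fields plus a command. A hypothetical sanity check (check_cron_line is not a system tool, just a sketch) that rejects lines with too few fields before you install them:

```shell
# Reject crontab lines with fewer than 5 time fields plus a command.
# Comments and blank lines pass through as ok.
check_cron_line() {
    echo "$1" | awk '/^#/ || /^[[:space:]]*$/ { print "ok"; next }
                     NF >= 6                  { print "ok"; next }
                                              { print "bad" }'
}
check_cron_line '* * * * * /usr/bin/sleeper'   # ok
check_cron_line '10 3 * * /usr/sbin/logadm'    # bad (only 4 time fields)
```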

13. Configure cron.deny so that user1 (or any other user) is not allowed
to use cron, and test it.

root@solaris11-1:~# echo "user1" >> /etc/cron.d/cron.deny
root@solaris11-1:~# su - user1
Oracle Corporation SunOS 5.11 11.0 November 2011
user1@solaris11-1:~$ crontab -e
crontab: you are not authorized to use cron. Sorry.
user1@solaris11-1:~$ exit

14. Where are the cron files stored?

root@solaris11-1:~# ls /var/spool/cron/crontabs
adm root root.au sys

Posted in solaris | Leave a comment

solaris 11 exercise smf (2) add smf-service

In this exercise you will create a new service.
It is a dummy service that only sleeps for a long
time and is started from a script. The script will
be the start method that you supply in the service
manifest file.

1. Login to your solaris 11 vm and switch to the root user.
-bash-4.1$ su -
Password: e1car0
Oracle Corporation SunOS 5.11 11.0 November 2011
You have new mail.
root@solaris11-1:~#

2. Change directory to /var/svc/manifest/site
root@solaris11-1:/# cd /var/svc/manifest/site
root@solaris11-1:/var/svc/manifest/site#

3. Export the cron-service to mysvc.xml
root@solaris11-1:~# svccfg export cron > mysvc.xml

4. Make the following changes to mysvc.xml
old: service name='system/cron' type='service' version='0'
new: service name='site/mysvc' type='service' version='0'

old: dependency and dependent lines
new: (removed)

old: exec_method name='start' type='method' exec='/lib/svc/method/svc-cron' timeout_seconds='60'
new: exec_method name='start' type='method' exec='/lib/svc/method/mysvc' timeout_seconds='60'

5. Create a script called mysvc in /lib/svc/method and make it executable.
root@solaris11-1:~# echo "sleep 10000&" > /lib/svc/method/mysvc
root@solaris11-1:~# chmod +x /lib/svc/method/mysvc

6. Restart the manifest-import service.
root@solaris11-1:~# svcadm restart svc:/system/manifest-import

7. Check the status of your service
root@solaris11-1:~# svcs mysvc
STATE STIME FMRI
online 14:39:16 svc:/site/mysvc:default
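
The one-line method script works for the exercise; a slightly fuller sketch of a start method (PIDFILE and the plain `exit 0` are assumptions here — production SMF methods normally source /lib/svc/share/smf_include.sh and return SMF_EXIT_OK):

```shell
#!/bin/sh
# Sketch of a start method: launch the dummy daemon in the background
# and record its PID so it can be found and stopped later.
PIDFILE=/tmp/mysvc.pid
sleep 10000 &
echo $! > "$PIDFILE"
exit 0
```

Recording the PID lets a matching stop method simply do `kill "$(cat /tmp/mysvc.pid)"`.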

Posted in solaris | 1 Comment

solaris 11 exercise smf (1) administering services

In this exercise you will work with dependencies between
services. You will check the status of cron, and check
its dependencies. Then you will disable a service that
cron depends on and see how that reflects on cron.

user1@solaris11-1:~$ su -
Password:
Oracle Corporation SunOS 5.11 11.0 November 2011
You have new mail.

root@solaris11-1:~# pgrep -fl cron
466 /usr/sbin/cron

root@solaris11-1:~# svcs cron
STATE STIME FMRI
online 11:30:29 svc:/system/cron:default

root@solaris11-1:~# svcs -p cron
STATE STIME FMRI
online 11:30:29 svc:/system/cron:default
11:30:29 466 cron
root@solaris11-1:~#

root@solaris11-1:~# svcs -d cron
STATE STIME FMRI
online 11:30:25 svc:/milestone/name-services:default
online 11:30:29 svc:/system/filesystem/local:default
root@solaris11-1:~#

root@solaris11-1:~# svcs -D cron
STATE STIME FMRI
online 11:30:42 svc:/milestone/multi-user:default
root@solaris11-1:~#

root@solaris11-1:~# svcadm disable name-services
root@solaris11-1:~# svcs -d cron
STATE STIME FMRI
disabled 20:22:45 svc:/milestone/name-services:default
online 11:30:29 svc:/system/filesystem/local:default

root@solaris11-1:~# svcs -p cron
STATE STIME FMRI
online 11:30:29 svc:/system/cron:default
11:30:29 466 cron

root@solaris11-1:~# svcadm refresh cron

root@solaris11-1:~# svcs -p cron
STATE STIME FMRI
online 20:23:13 svc:/system/cron:default
11:30:29 466 cron

root@solaris11-1:~# svcadm disable cron
root@solaris11-1:~# svcadm enable cron

root@solaris11-1:~# svcs -p cron
STATE STIME FMRI
offline 20:23:40 svc:/system/cron:default
root@solaris11-1:~#
(cron stays offline after the enable because its name-services dependency is still disabled)

Posted in solaris | Leave a comment

clustermode ports failovergroups interfaces roles firewall-policy

(image omitted)

Posted in Uncategorized | Leave a comment

ceph

ceph

Posted in Uncategorized | Leave a comment

solaris 11 exercise zones (1)

1. The zones will have /software as root.

# df -h | grep software
software 20G 33K 16G 1% /software

2. Create a vnic for a new zone.
# dladm show-phys
LINK MEDIA STATE SPEED DUPLEX DEVICE
net0 Ethernet up 1000 full e1000g0
net1 Ethernet unknown 0 unknown e1000g1
net2 Ethernet unknown 0 unknown e1000g2
net3 Ethernet unknown 0 unknown e1000g3
# dladm create-vnic vnic20 -l net3

3. Create a zone called zone20
# zonecfg -z zone20
zone20: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zone20> create
create: Using system default template 'SYSdefault'
zonecfg:zone20> set zonepath=/software/zone20
zonecfg:zone20> add net
zonecfg:zone20:net> set physical=vnic20
zonecfg:zone20:net> end
zonecfg:zone20> commit
zonecfg:zone20> exit
zoneadm -z zone20 install
(wait...)

Boot the zone and login to the console.

# zoneadm -z zone20 boot
# zlogin -C zone20
(in the sysidtool select manual network configuration and
select vnic20)
use 192.168.0.20 as the IP-Address.
use the default netmask
use router 192.168.0.1

DNS server: 192.168.4.1

root password : e1car0

4. Migrate a Solaris 10 zone to Solaris 11.
You will use a prepared cpio file from a Solaris 10 VM
to host a solaris 10 zone on Solaris 11.

Use the automounter to access the preconfigured file.
# cd /net/192.168.4.159/zones
# ls
zone10 zone10.cpio.gz

Create a new zone.
# zonecfg -z zone10
zone10: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zone10> create -t SYSsolaris10
zonecfg:zone10> set zonepath=/zones/zone1
zonecfg:zone10> set autoboot=true
zonecfg:zone10> select anet linkname=net0
zonecfg:zone10:anet> set allowed-address=192.168.0.30/24
zonecfg:zone10:anet> set configure-allowed-address=true
zonecfg:zone10:anet> end
zonecfg:zone10> set hostid=2ee3a870
zonecfg:zone10> verify
zonecfg:zone10> commit
zonecfg:zone10> exit

Attach the cpio file to the new zone
# zoneadm -z zone10 attach -a /net/192.168.4.159/zones/zone10.cpio.gz
A ZFS file system has been created for this zone.
Progress being logged to /var/log/zones/zoneadm.20140301T180221Z.zone10.attach
Log File: /var/log/zones/zoneadm.20140301T180221Z.zone10.attach
Attaching...
Installing: This may take several minutes...

# zoneadm -z zone10 boot

5. Delegate zonemanagement of zone3 to user peter.

# zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
2 zone3 running /rpool/zones/zone3 solaris excl

# zonecfg -z zone3
zonecfg:zone3> add admin
zonecfg:zone3:admin> set user=peter
zonecfg:zone3:admin> set auths=manage
zonecfg:zone3:admin> end
zonecfg:zone3> commit
zonecfg:zone3> exit

# su - peter
# pfexec bash
# zlogin zone3

[Connected to zone 'zone3' pts/7]
Oracle Corporation SunOS 5.11 11.0 August 2012
root@zone3:~# exit

# zoneadm -z zone3 halt

note: another way of setting authorizations
# usermod -P+"Zone Management" -A+solaris.zone.manage/zone1 peter
# usermod -A+solaris.zone.login/zone2 peter

note: use pfexec bash to test because bash is not RBAC aware.

Optional exercise.
6. Create an additional zone called zone21.
The zone will have two vnic interfaces, vnic30 and vnic31.
Vnic30 will be connected to an etherstub. Vnic31 will be
connected to net0.

Zone20 has one vnic called vnic20. This vnic will also be
connected to the etherstub. Zone20 will use zone21
as a router.

|zone20-vnic20 | 192.168.0.0 | vnic30-zone21-vnic31 | 192.168.4.0 |

Posted in solaris | Leave a comment

clustermode SFO

In CMode, when a failover or takeover has taken place, the root aggregate
of the partner node is owned by the surviving partner.

How do you get to the root volume of the partner's root aggregate?

1. Log in to the systemshell.
2. Run the command 'mount_partner'

The root-volume of the partner is then mounted on /partner

(done)

Posted in netapp | Leave a comment

clustermode switch to switchless

(PDF omitted)

Posted in netapp, Uncategorized | Leave a comment

netapp hwassist

There are specific Data ONTAP commands for configuring the hardware-assisted takeover feature.

If you want to...                              Use this command...
Disable or enable hardware-assisted takeover   storage failover modify -hwassist
Set the partner address                        storage failover modify -hwassist-partner-ip
Set the partner port                           storage failover modify -hwassist-partner-port
Specify the interval between heartbeats        storage failover modify -hwassist-health-check-interval
Specify the number of times the
hardware-assisted takeover alerts are sent     storage failover modify -hwassist-retry-count

Posted in netapp | Leave a comment

cdot snapvault example

setting up snapvault cmode example

1. create vserverA and vserverB
cl1::> vserver create vserverA -rootvolume roota -aggregate aggr1_n1 -ns-switch file -nm-switch file -rootvolume-security-style unix

cl1::> vserver create vserverB -rootvolume rootb -aggregate aggr1_n2 -ns-switch file -nm-switch file -rootvolume-security-style unix

for vserverA
2. create a source datavolume in vserverA
cl1::> vol create -vserver vserverA -volume datasource -aggregate aggr1_n1 -size 100m -state online

3. create a destination datavolume in vserverB with type DP
cl1::> vol create -vserver vserverB -volume datadest -aggregate aggr1_n2 -size 100m -type DP

Using the 5min schedule for snapshots on the source

Create new snapshot-policy for vserverA with the 5min schedule and a count of 2.
Also specify the snapmirror-label for the snapshots.
4. cl1::> volume snapshot policy create -vserver vserverA -policy 5minpol -enabled true -schedule1 5min -count1 2 -prefix1 5min -snapmirror-label1 vserverA-vault-5min

Connect snapshot-policy to the vserverA datasource volume.
5. cl1::> volume modify -vserver vserverA -volume datasource -snapshot-policy 5minpol

for vserverB.
Create snapmirror policy and rule
6. cl1::> snapmirror policy create -vserver vserverB -policy vserverA-vault
7. cl1::> snapmirror policy add-rule -vserver vserverB -policy vserverA-vault -snapmirror-label vserverA-vault-5min -keep 40

Setup a peer relationship
8. cl1::> vserver peer create -vserver vserverA -peer-vserver vserverB -applications snapmirror

Create the snapmirror relation
9. cl1::> snapmirror create -source-path vserverA:datasource -destination-path vserverB:datadest -type XDP -policy vserverA-vault -schedule 5min

Initialize.
10. cl1::> snapmirror initialize -destination-path vserverB:datadest

Monitor:
View the lag time.
cl1::> snapmirror show -destination-path vserverB:datadest -field lag

View the relationship
cl1::> snapmirror show

View the snapshots
cl1::> snapshot show -vserver vserverA -volume datasource -instance
cl1::> snapshot show -vserver vserverB -volume datadest

Posted in netapp | Leave a comment

solaris 11 exercise zfs (1)

Basic operations.

Your machine has 3 disks of 300MB
If your machine has no available disks you
can create 3 files of 300MB in the /dev/dsk
directory and use those.
Perform the following 3 commands only if your
machine has no available disks.
# mkfile 300m /dev/dsk/disk1
# mkfile 300m /dev/dsk/disk2
# mkfile 300m /dev/dsk/disk3
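
mkfile is Solaris-specific; if you are following along on another system, dd gives a file of the same size (writing to /tmp here rather than /dev/dsk, purely as an illustration):

```shell
# Create a 300 MB file of zeros without mkfile.
dd if=/dev/zero of=/tmp/disk1 bs=1024k count=300
wc -c < /tmp/disk1    # 314572800 bytes (300 * 1024 * 1024)
```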

You will perform the following basic operations:
1. Create a mirrored zpool of 2 disks and 1 spare.
2. Create 2 zfs filesystems in the pool.
3. Set quota and reservations.
4. Rename the pool by exporting and importing.

Locate the 3 disks first with the format command.

1. Create a mirrored zpool of 2 disks with 1 spare.
# zpool create pool1 mirror c4t1d0 c4t3d0 spare c4t4d0
# zpool status pool1
pool: pool1
state: ONLINE
scan: none requested
config:

NAME STATE READ WRITE CKSUM
pool1 ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c4t1d0 ONLINE 0 0 0
c4t3d0 ONLINE 0 0 0
spares
c4t4d0 AVAIL

errors: No known data errors

2. Create 2 zfs filesystems in the pool.
# zfs create pool1/fs1
# zfs create pool1/fs2

3. Set quota and reservations
# zfs set quota=100m pool1/fs1
# zfs set reservation=100m pool1/fs1

check the available space in the pool

# df -h |grep pool1
pool1 254M 33K 154M 1% /pool1
pool1/fs1 100M 31K 100M 1% /pool1/fs1
pool1/fs2 254M 31K 154M 1% /pool1/fs2

4. Rename the pool.
# zpool export pool1
# zpool import pool1 newpool

Shadow Migration.
user3 and user4 work together.

user3:
create a zfs filesystem and put some files in it and
share it with NFS.

# zfs create software/source
# cp -r /var/adm/* /software/source/
# share -F nfs -o ro /software/source
# dfshares
RESOURCE SERVER ACCESS TRANSPORT
solaris11-3:/software/source solaris11-3 - -

user4:
Check whether shadow-migration is installed.
# pkg list shadow-migration
NAME (PUBLISHER) VERSION IFO
system/file-system/shadow-migration 0.5.11-0.175.0.0.0.2.1 i--

# svcadm enable shadowd
# svcs shadowd
STATE STIME FMRI
online 21:42:19 svc:/system/filesystem/shadowd:default

Create a zfs filesystem and mount it with the shadow argument.

# zfs create -o shadow=nfs://192.168.4.153/software/source \
software/destination

# ls /software/destination

Replication.

setup simple replication.
user3 and user4 work together.
# zfs create software/replisource
# zfs snapshot software/replisource@snap1
# zfs send software/replisource@snap1 | ssh root@192.168.4.153 \
zfs receive software/replidest

setup automatic replication.
user3 and user4 work together.

1. setup automatic rootlogin at the destination.
user4:
# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
26:d7:aa:4e:50:6d:26:ac:78:e8:56:d7:27:d9:de:29 root@solaris11-4

2. copy the new public key to the destination server.
# cd $HOME/.ssh
# scp id_rsa.pub root@192.168.4.153:/root/.ssh/authorized_keys
Password: ******
id_rsa.pub 100% |******************************************************************| 398 00:00

3. Create a simple script to automate the replication.
- source filesystem is software/replisource
- destination filesystem is software/replidest
- atime should be set to off on the destination

user4:
=============================================
#!/usr/bin/bash
# create the baseline transfer
zfs snapshot software/replisource@snap0
zfs send software/replisource@snap0 | ssh root@192.168.4.153 zfs recv -F software/replidest
# switch off atime on the destination
ssh root@192.168.4.153 zfs set atime=off software/replidest

#loop to increment
while true
do

#manage destinationsnaps (destroy/rename fail harmlessly on the first pass)
echo manage destinationsnaps
ssh root@192.168.4.153 zfs destroy software/replidest@snap1
ssh root@192.168.4.153 zfs rename software/replidest@snap0 software/replidest@snap1
echo done

#manage localsnaps
echo manage source snaps
zfs destroy software/replisource@snap1
zfs rename software/replisource@snap0 software/replisource@snap1
zfs snapshot software/replisource@snap0
echo done

#incremental replication
echo increment
zfs send -i software/replisource@snap1 software/replisource@snap0 | ssh root@192.168.4.153 zfs receive -F software/replidest
echo done
sleep 5
done
==========================

Run the script.
# chmod +x repli.sh
# ./repli.sh

Now add files to the source filesystem and read the contents of the
destination to check the replication.
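
The rotation at the heart of the loop (destroy the old snap1, rename snap0 to snap1, take a fresh snap0) can be sanity-checked without ZFS; a sketch simulating the bookkeeping with plain marker files under /tmp:

```shell
# Simulate the snap0/snap1 rotation with marker files:
# snap.0 stands in for @snap0, snap.1 for @snap1.
d=/tmp/rotate.demo
mkdir -p "$d"
touch "$d/snap.0"                 # baseline snapshot exists
for cycle in 1 2 3; do
    rm -f "$d/snap.1"             # zfs destroy ...@snap1
    mv "$d/snap.0" "$d/snap.1"    # zfs rename @snap0 -> @snap1
    touch "$d/snap.0"             # zfs snapshot ...@snap0
done
ls "$d"                           # exactly snap.0 and snap.1 remain
rm -rf "$d"
```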

Posted in solaris | Leave a comment

netapp clustered ontap 8.2.1RC

CLUSTERED SIMULATE ONTAP LICENSES
+++++++++++++++++++++++++++++++++

These are the licenses that you use with the clustered Data ONTAP version
of Simulate ONTAP to enable Data ONTAP features.

There are four groups of licenses in this file:

- cluster base license
- feature licenses for the ESX build
- feature licenses for the non-ESX build
- feature licenses for the second node of a 2-node cluster

Cluster Base License (Serial Number 1-80-000008)
================================================

You use the cluster base license when setting up the first simulator in a cluster.

Cluster Base license = SMKQROWJNQYQSDAAAAAAAAAAAAAA

Clustered Data ONTAP Feature Licenses
=====================================

You use the feature licenses to enable unique Data ONTAP features on your simulator.

Licenses for the ESX build (Serial Number 4082367724)
-----------------------------------------------------

Use these licenses with the VMware ESX build.

Feature License Code Description
------------------- ---------------------------- --------------------------------------------

CIFS CEASNGAUTXKZOFABGAAAAAAAAAAA CIFS protocol
FCP ATVVOGAUTXKZOFABGAAAAAAAAAAA Fibre Channel Protocol
FlexClone WWMDRGAUTXKZOFABGAAAAAAAAAAA FlexClone
Insight_Balance CKXDVGAUTXKZOFABGAAAAAAAAAAA OnCommand Insight and Balance products
iSCSI OYXGOGAUTXKZOFABGAAAAAAAAAAA iSCSI protocol
NFS QJCDNGAUTXKZOFABGAAAAAAAAAAA NFS protocol
SnapLock ULIHSGAUTXKZOFABGAAAAAAAAAAA SnapLock Compliance
SnapLock_Enterprise QPZOUGAUTXKZOFABGAAAAAAAAAAA SnapLock Enterprise
SnapManager GGGWSGAUTXKZOFABGAAAAAAAAAAA SnapManager and SnapDrive products
SnapMirror KCPOQGAUTXKZOFABGAAAAAAAAAAA SnapMirror, including synchronous SnapMirror
SnapProtect SAELTGAUTXKZOFABGAAAAAAAAAAA SnapProtect Applications
SnapRestore YHRZPGAUTXKZOFABGAAAAAAAAAAA SnapRestore
SnapVault IRKSRGAUTXKZOFABGAAAAAAAAAAA SnapVault primary and secondary

Licenses for the non-ESX build (Serial Number 4082367722)
---------------------------------------------------------

Use these licenses with the VMware Workstation, VMware Player, and VMware Fusion build.

Feature License Code Description
------------------- ---------------------------- --------------------------------------------

CIFS APLPKUDPDUEVQEABGAAAAAAAAAAA CIFS protocol
FCP YDHTLUDPDUEVQEABGAAAAAAAAAAA Fibre Channel Protocol
FlexClone UHYAOUDPDUEVQEABGAAAAAAAAAAA FlexClone
Insight_Balance AVIBSUDPDUEVQEABGAAAAAAAAAAA OnCommand Insight and Balance products
iSCSI MJJELUDPDUEVQEABGAAAAAAAAAAA iSCSI protocol
NFS OUNAKUDPDUEVQEABGAAAAAAAAAAA NFS protocol
SnapLock SWTEPUDPDUEVQEABGAAAAAAAAAAA SnapLock Compliance
SnapLock_Enterprise OALMRUDPDUEVQEABGAAAAAAAAAAA SnapLock Enterprise
SnapManager ERRTPUDPDUEVQEABGAAAAAAAAAAA SnapManager and SnapDrive products
SnapMirror INAMNUDPDUEVQEABGAAAAAAAAAAA SnapMirror, including synchronous SnapMirror
SnapProtect QLPIQUDPDUEVQEABGAAAAAAAAAAA SnapProtect Applications
SnapRestore WSCXMUDPDUEVQEABGAAAAAAAAAAA SnapRestore
SnapVault GCWPOUDPDUEVQEABGAAAAAAAAAAA SnapVault primary and secondary

Licenses for the second node in a cluster (Serial Number 4034389062)
--------------------------------------------------------------------

Use these licenses with the second simulator in a cluster (either the ESX or non-ESX build).

Feature License Code Description
------------------- ---------------------------- --------------------------------------------

CIFS MHEYKUNFXMSMUCEZFAAAAAAAAAAA CIFS protocol
FCP KWZBMUNFXMSMUCEZFAAAAAAAAAAA Fibre Channel Protocol
FlexClone GARJOUNFXMSMUCEZFAAAAAAAAAAA FlexClone
Insight_Balance MNBKSUNFXMSMUCEZFAAAAAAAAAAA OnCommand Insight and Balance products
iSCSI YBCNLUNFXMSMUCEZFAAAAAAAAAAA iSCSI protocol
NFS ANGJKUNFXMSMUCEZFAAAAAAAAAAA NFS protocol
SnapLock EPMNPUNFXMSMUCEZFAAAAAAAAAAA SnapLock Compliance
SnapLock_Enterprise ATDVRUNFXMSMUCEZFAAAAAAAAAAA SnapLock Enterprise
SnapManager QJKCQUNFXMSMUCEZFAAAAAAAAAAA SnapManager and SnapDrive products
SnapMirror UFTUNUNFXMSMUCEZFAAAAAAAAAAA SnapMirror, including synchronous SnapMirror
SnapProtect CEIRQUNFXMSMUCEZFAAAAAAAAAAA SnapProtect Applications
SnapRestore ILVFNUNFXMSMUCEZFAAAAAAAAAAA SnapRestore
SnapVault SUOYOUNFXMSMUCEZFAAAAAAAAAAA SnapVault primary and secondary

Posted in netapp | Leave a comment

solaris 11 exercise ip (1)

Your machine has 4 network interfaces.
Adapter 1 is configured up.

1. View the dladm subcommands.
# dladm
usage: dladm ...
rename-link
(output skipped)

2. View your physical network interfaces.
# dladm show-phys
LINK MEDIA STATE SPEED DUPLEX DEVICE
net1 Ethernet unknown 1000 full e1000g1
net2 Ethernet unknown 0 unknown e1000g2
net0 Ethernet up 1000 full e1000g0
net3 Ethernet unknown 0 unknown e1000g3

View your links.
# dladm show-link
LINK CLASS MTU STATE OVER
net1 phys 1500 unknown --
net2 phys 1500 unknown --
net0 phys 1500 up --
net3 phys 1500 unknown --
vnic1 vnic 1500 up net0
zone1/vnic1 vnic 1500 up net0
zone1/net0 vnic 1500 up net0

View your interfaces
# ipadm show-if
IFNAME CLASS STATE ACTIVE OVER
lo0 loopback ok yes --
net0 ip ok yes --

View your addresses
# ipadm show-addr
ADDROBJ TYPE STATE ADDR
lo0/v4 static ok 127.0.0.1/8
net0/v4 static ok 192.168.4.154/24
lo0/v6 static ok ::1/128
net0/v6 addrconf ok fe80::250:56ff:fe96:d471/10

How is your network service configured?
# svcs network/physical
STATE STIME FMRI
disabled 17:34:14 svc:/network/physical:nwam
online 17:34:19 svc:/network/physical:upgrade
online 17:34:21 svc:/network/physical:default

3. Pick a free physical interface to create an ip-interface.
# ipadm create-ip net1
# ipadm show-if|grep net1
net1 ip down no --

Configure an ip-address on the new interface.
Make sure you select a unique address in your network.
# ipadm create-addr -T static -a 192.168.4.140/24 net1/v4

# ipadm show-addr
ADDROBJ TYPE STATE ADDR
lo0/v4 static ok 127.0.0.1/8
net0/v4 static ok 192.168.4.154/24
net1/v4 static ok 192.168.4.140/24
lo0/v6 static ok ::1/128
net0/v6 addrconf ok fe80::250:56ff:fe96:d471/10

Down the interface and remove it.
# ipadm down-addr net1/v4
# ipadm delete-ip net1

4. Create a vnic, create an ip-interface on it and address.
# dladm create-vnic -l net1 vnic10
# ipadm create-ip vnic10
# ipadm create-addr -T static -a 192.168.4.140/24 vnic10/v4
# ipadm down-addr vnic10/v4
# ipadm delete-ip vnic10
# dladm delete-vnic vnic10

5. Set up link aggregation.
Use 2 of the available physical interfaces to create an aggregate.
# dladm create-aggr -l net2 -l net3 aggr1
# dladm show-aggr
LINK POLICY ADDRPOLICY LACPACTIVITY LACPTIMER FLAGS
aggr1 L4 auto off short -----

Setup an ip-address on the aggregate.
# ipadm create-addr -T static -a 192.168.4.140/24 aggr1/v4aggr
# ipadm show-if|grep aggr1
aggr1 ip ok yes --

Remove the link-aggregation
# ipadm down-addr aggr1/v4aggr
# ipadm delete-addr aggr1/v4aggr
# ipadm delete-ip aggr1
# dladm delete-aggr aggr1

6. Create an etherstub and connect vnics to the stub.
These vnics could be connected to zones.

# dladm create-etherstub stub0
# dladm create-vnic -l stub0 vnic5
# dladm create-vnic -l stub0 vnic10
# dladm create-vnic -l stub0 vnic0
# dladm show-vnic
LINK OVER SPEED MACADDRESS MACADDRTYPE VID
vnic1 net0 1000 2:8:20:25:26:38 random 0
zone1/vnic1 net0 1000 2:8:20:25:26:38 random 0
zone1/net0 net0 1000 2:8:20:80:87:6 random 0
vnic0 stub0 0 2:8:20:46:6a:77 random 0
vnic5 stub0 0 2:8:20:9b:e1:3b random 0
vnic10 stub0 0 2:8:20:7:6e:5d random 0

7. Setup two zones to use vnic5 and vnic10
(setup zone5 and zone6 and set the zonepath in /software)

# zonecfg -z zone5
zonecfg:zone5> add net
zonecfg:zone5:net> set physical=vnic5
zonecfg:zone5:net> end
zonecfg:zone5> commit
zonecfg:zone5> exit

Repeat this for zone6

Login to zone5 and setup an ip-address
# zlogin zone5
# dladm show-link
LINK CLASS MTU STATE OVER
vnic5 vnic 9000 up ?
# ipadm create-addr -T static -a 192.168.4.140/24 vnic5/v4

Repeat this in zone6.

Login to zone5 and ping the zone6 ip-address.
Can you also ping your global zone?

8. Optional exercise: NWAM

In this exercise you will create a classroom ncp with a physical
interface and ip-address.

Create a new ncp called classncp

root@solaris11-1:~# netcfg
netcfg> create ncp classncp
netcfg:ncp:classncp> create ncu phys net1
Created ncu 'net1'. Walking properties ...
activation-mode (manual) [manual|prioritized]> manual
link-mac-addr>enter
link-autopush>enter
link-mtu>enter
netcfg:ncp:classncp:ncu:net1> end
Committed changes
netcfg:ncp:classncp> create ncu ip net1
Created ncu 'net1'. Walking properties ...
ip-version (ipv4,ipv6) [ipv4|ipv6]> ipv4
ipv4-addrsrc (dhcp) [dhcp|static]> static
ipv4-addr> 192.168.4.110 (Note, ask your instructor for the ipaddress)
ipv4-default-route>enter
netcfg:ncp:classncp:ncu:net1> end
Committed changes
netcfg:ncp:classncp> end
netcfg> end
root@solaris11-1:~#

2. Enable the new ncp

root@solaris11-1:~# netadm enable -p ncp classncp

Do you still have connection to your system?

Posted in solaris | Leave a comment

solaris 11 exercise bootenvironments (1)

A boot environment is a bootable Oracle Solaris environment consisting of a root dataset and, optionally,
other datasets mounted underneath it. Exactly one boot environment can be active at a time.

A dataset is a generic name for ZFS entities such as clones, file systems, or snapshots. In the context
of boot environment administration, the dataset more specifically refers to the file
system specifications for a particular boot environment or snapshot.

A snapshot is a read-only image of a dataset or boot environment at a given point in time.
A snapshot is not bootable.

A clone of a boot environment is created by copying another boot environment. A clone is bootable.

Shared datasets are user-defined directories, such as /export, that contain the same mount point
in both the active and inactive boot environments. Shared datasets are located outside the root
dataset area of each boot environment.

Note - A clone of the boot environment includes everything hierarchically under the main root
dataset of the original boot environment. Shared datasets are not under the root dataset and are
not cloned. Instead, the boot environment accesses the original, shared dataset.

A boot environment's critical datasets are included within the root dataset area for that environment.

1. Create a new bootenvironment
# beadm create newbe
# beadm list
BE Active Mountpoint Space Policy Created
-- ------ ---------- ----- ------ -------
newbe - - 226.0K static 2014-02-17 17:20
solaris NR / 6.40G static 2014-02-16 14:02

2. View the new bootenvironment filesystems
# zfs list
(output skipped)
rpool/ROOT/newbe 142K 12.5G 3.35G /
rpool/ROOT/newbe/var 84K 12.5G 221M /var
(output skipped)

3. Activate the new bootenvironment
# beadm activate newbe
# beadm list
BE Active Mountpoint Space Policy Created
-- ------ ---------- ----- ------ -------
newbe R - 6.40G static 2014-02-17 17:20
solaris N / 41.0K static 2014-02-16 14:02

4. Mount the new bootenvironment.
# beadm mount newbe /newbe
Add a file to the mounted bootenvironment and reboot your system.
# touch /newbe/added
# shutdown -y -g0 -i6
Log in and check the file that you added.

5. Activate the original bootenvironment.
# beadm activate solaris
# shutdown -y -g0 -i6
Log in and destroy the new bootenvironment.
# beadm destroy newbe
Are you sure you want to destroy newbe? This action cannot be undone(y/[n]): y

6. Before installing a package, you can have a backup bootenvironment created.
First remove the telnet package and then install it again.
# pkg uninstall pkg://solaris/network/telnet
# pkg install --require-backup-be pkg://solaris/network/telnet
# beadm list | grep backup
solaris-backup-1 - - 145.0K static 2014-02-16 16:18

7. If there is no zone yet, create one.
# zonecfg -z zone1
zone1: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zone1> create
create: Using system default template 'SYSdefault'
zonecfg:zone1> set zonepath=/software/zone1
zonecfg:zone1> exit
# zoneadm -z zone1 install
A ZFS file system has been created for this zone.
Progress being logged to /var/log/zones/zoneadm.20140217T224151Z.zone1.install
Image: Preparing at /software/zone1/root.

8. Create a boot environment in the zone.
# zlogin zone1
# beadm create newbe
# beadm list
BE Active Mountpoint Space Policy Created
-- ------ ---------- ----- ------ -------
newbe - - 58.0K static 2014-02-17 22:30
solaris NR / 421.67M static 2014-02-17 17:08
# exit

Log in using the console of the zone.
# zlogin -C zone1
solaris console login: root
Password: *******

Change the root password, then activate the new boot environment and reboot.
# passwd
New Password:
Re-enter new Password:
# beadm activate newbe
# reboot

Can you use the new root password to log in?

Posted in solaris | Leave a comment

solaris 11 exercise ips (1)

1. Log in to your machine.
ssh user1@192.168.4.151

switch to root
# sudo bash
Password: e1car0

2. Your machine has the Full Repository iso mounted.
# df -h | grep media
/dev/dsk/c3t0d0s2 6.8G 6.8G 0K 100% /media/SOL_11_1_REPO_FULL

Check the available disks.
# format
AVAILABLE DISK SELECTIONS:
0. c4t0d0
/pci@0,0/pci15ad,1976@10/sd@0,0
1. c4t2d0
/pci@0,0/pci15ad,1976@10/sd@2,0

Use disk c4t2d0 (for example) to create a zpool for your repository.
# zpool create software c4t2d0
# zpool list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
rpool 19.9G 6.96G 12.9G 35% 1.00x ONLINE -
software 19.9G 112K 19.9G 0% 1.00x ONLINE -

Create a zfs filesystem for your repository.
# zfs create software/ips

3. Check your publisher.
# pkg publisher
PUBLISHER TYPE STATUS URI
solaris origin online http://pkg.oracle.com/solaris/release/

Create a repository.
# pkgrepo create /software/ips/
# ls /software/ips
pkg5.repository

To populate your repository, choose method 4a or 4b.
4a. Populate your new repository.
# rsync -aP /media/SOL_11_1_REPO_FULL/repo /software/ips
(this will take some time: 6.8GB)

Refresh your repository.
# pkgrepo refresh -s /software/ips

4b.
# pkgrepo create /software/ips
# pkgrecv -s http://pkg.oracle.com/solaris/release/ -d /software/ips '*'

5. Make your repository accessible for others.
# zfs set share=name=s11repo,path=/software/ips,prot=nfs software/ips
# zfs set sharenfs=on software/ips
# dfshares
RESOURCE SERVER ACCESS TRANSPORT
solaris11-4:/software/ips solaris11-4 - -

6. Set the inst_root for the pkg service.
# svccfg -s application/pkg/server setprop pkg/inst_root=/software/ips
# svcadm refresh pkg/server
# svcadm disable pkg/server
# svcadm enable pkg/server

Check your colleague's work and use his repository.
# pkg set-publisher -G'*' -M'*' -g /net/192.168.4.154/software/ips/ solaris

7. Remove the xkill package from your system.
# pkg uninstall xkill

Install the package again.
# pkg install xkill

8. Change the package, verify and repair.
# pkg contents xkill
PATH
usr/X11/bin/xkill
usr/bin/xkill
usr/share/man/man1/xkill.1

# chmod 000 /usr/bin/xkill
# pkg verify xkill
PACKAGE STATUS
pkg://solaris/x11/xkill ERROR
file: usr/bin/xkill
Mode: 0000 should be 0555

# pkg fix xkill
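
What pkg verify flags here is simply a file-mode mismatch, and pkg fix restores the packaged mode. The same break-and-repair cycle can be seen with GNU stat on a scratch file (the path and the demo itself are illustrative; run it on a Linux box, since Solaris lacks GNU stat):

```shell
#!/bin/bash
# Recreate the mode break and repair from step 8 on a scratch file.
f=/tmp/xkill.demo
touch "$f"
chmod 555 "$f"        # the mode pkg expects for usr/bin/xkill
chmod 000 "$f"        # break it, as done above with chmod 000
stat -c %a "$f"       # prints 0
chmod 555 "$f"        # the by-hand equivalent of 'pkg fix xkill'
stat -c %a "$f"       # prints 555
```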

9. Create your own package and publish it.

# svcadm disable application/pkg/server
# svccfg -s application/pkg/server setprop pkg/readonly=false
# svcadm refresh application/pkg/server
# svcadm enable application/pkg/server

Create a directory to hold your software.
# mkdir -p /var/tmp/new
# cd /var/tmp/new
# vi newpackage
This is a new package.
:wq!

# pkgsend -s http://192.168.4.154 open newpackage@1.0-1
export PKG_TRANS_ID=1392650521_pkg%3A%2F%2Fsolaris%2Fnewpackage%401.0%2C5.11-1%3A20140217T152201Z

(the open command prints the export line; run it yourself to set the transaction ID,
or wrap the whole open command in eval to set it in one step)
# export PKG_TRANS_ID=1392650521_pkg%3A%2F%2Fsolaris%2Fnewpackage%401.0%2C5.11-1%3A20140217T152201Z

# pkgsend -s \
> http://192.168.4.154 add dir mode=0555 owner=root \
> group=bin path=/export/newpackage

# pkgsend -s http://192.168.4.154 \
> add file /var/tmp/new/newpackage mode=0555 owner=root group=bin \
> path=/export/newpackage/newpackage

# pkgsend -s http://192.168.4.154 \
> add set name=description value="MyPackage"

# pkgsend -s http://192.168.4.154 \
> close
PUBLISHED
pkg://solaris/newpackage@1.0,5.11-1:20140217T152201Z

# svccfg -s pkg/server setprop pkg/readonly=true
# svcadm enable pkg/server

# pkg search newpackage
INDEX ACTION VALUE PACKAGE
basename file software/ips/new/newpackage pkg:/newpackage@1.0-1
pkg.fmri set solaris/newpackage pkg:/newpackage@1.0-1

# pkg install newpackage

More IPS

In a terminal window on the Sol11-Desktop virtual machine, determine if the apptrace
software package is currently installed.
# pkg list apptrace
pkg list: no packages matching 'apptrace' installed

Search the IPS package repository for the apptrace software package.
# pkg search apptrace
INDEX ACTION VALUE PACKAGE
pkg.description set Apptrace utility for application tracing,
including shared objects pkg:/developer/apptrace@0.5.11-0.175.0.0.0.2.1

Display detailed information about the apptrace package from the remote repository by
using the -r option
# pkg info -r apptrace
Name: developer/apptrace
Summary: Apptrace Utility
Description: Apptrace utility for application tracing,
including shared objects
Category: Development/System
State: Not installed
Publisher: solaris

Perform a dry run of the apptrace package installation.
# pkg install -nv apptrace

The dry run shows that one package will be installed. The package installation will not affect the boot environment. No currently installed packages will be changed. Note that
an FMRI is the fault management resource identifier. The FMRI is the identifier for this
package. The FMRI includes the package publisher, name, and version. The pkg
command uses FMRIs, or portions of FMRIs, to operate on packages.
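
Since an FMRI is just publisher, name, and version glued together, it can be pulled apart with plain shell parameter expansion; a small sketch using the FMRI shown in the pkg search output above:

```shell
#!/bin/bash
# Split a package FMRI into publisher, name and version with
# shell parameter expansion. The FMRI is the one shown above.
fmri='pkg://solaris/developer/apptrace@0.5.11-0.175.0.0.0.2.1'

rest=${fmri#pkg://}     # drop the scheme
publisher=${rest%%/*}   # everything before the first '/'
name_ver=${rest#*/}     # everything after the first '/'
name=${name_ver%@*}     # everything before the '@'
version=${name_ver#*@}  # everything after the '@'

echo "publisher=$publisher name=$name version=$version"
```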

Install the apptrace package.
# pkg install apptrace

Verify the apptrace package installation.
# pkg verify -v apptrace
PACKAGE STATUS
pkg://solaris/developer/apptrace OK

Remove the apptrace package from the system image on your host.
# pkg uninstall apptrace

Posted in solaris, Uncategorized | Leave a comment

solaris 11 vnc

1. Install the Solaris Desktop environment.

# pkg install slim_install
# /usr/bin/vncserver
# vi /etc/gdm/custom.conf
[daemon]
[security]
[xdmcp]
Enable=true
[greeter]
[chooser]
[debug]

# svcadm restart gdm

# inetadm -e xvnc-inetd
or
# svcadm enable xvnc-inetd

Posted in solaris | Leave a comment

centos and adito

setting up adito on centos

Posted in linux | Leave a comment

solaris zfs nfs share

root@solaris11-1:~# zfs set share=name=kanweg,path=/rpool/kanweg,prot=nfs rpool/kanweg
name=kanweg,path=/rpool/kanweg,prot=nfs
root@solaris11-1:~# zfs sharenfs=on rpool/kanweg
root@solaris11-1:~# dfshares|grep kanweg
solaris11-1:/rpool/kanweg solaris11-1 - -

other example:
zfs set share=name=fs1,path=/temppool/fs1,prot=nfs,root=192.168.4.235,rw=192.168.4.235 temppool/fs1

Posted in solaris | Leave a comment

7000 factoryreset at boot

add "-c" as parameter to the boot line.

kernel$ /platform........... -c

Posted in solaris | Leave a comment

Windows MPIO iscsi

mpio

Posted in Uncategorized | Leave a comment

netapp 7-mode upgrade simulator (exercise)

This is just an exercise for installing a second image on your simulator and booting
from it.

First the existing image is tarred and zipped and put on the root volume.
Then the update is done and after that you will have the same image twice.
I know it is not very useful but as I said, it is just an exercise.
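
The packaging half of the procedure (tar the image directory, gzip the archive, rename it to .tgz) can be rehearsed locally before touching the simulator; the paths below are throwaway stand-ins, not the real /cfcard and /mroot locations:

```shell
#!/bin/bash
# Rehearse: archive a directory, gzip the tarball, rename it to .tgz.
src=/tmp/image1.demo        # stand-in for /cfcard/x86_64/freebsd/image1
out=/tmp/software.demo      # stand-in for /mroot/etc/software
mkdir -p "$src" "$out"
echo kernel > "$src/kernel"

cd "$src"
tar cf "$out/8.tar" .       # tar the image directory contents
cd "$out"
gzip 8.tar                  # produces 8.tar.gz
mv 8.tar.gz 8.tgz           # the name 'software update' expects
ls 8.tgz
```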

1. Log in to the filer and unlock the diaguser.
login: root
password: *****
priv set advanced
useradmin diaguser unlock
useradmin diaguser password

systemshell
login: diag
password: *****
sudo bash
cd /cfcard/x86_64/freebsd/image1
mkdir /mroot/etc/software
tar cvf /mroot/etc/software/8.tar .
cd /mroot/etc/software
gzip 8.tar
mv 8.tar.gz 8.tgz

(done)
exit
exit

(now you are back in the nodeshell)
software list
software update 8.tgz
software: You can cancel this operation by hitting Ctrl-C in the next 6 seconds.
software: Depending on system load, it may take many minutes
software: to complete this operation. Until it finishes, you will
software: not be able to use the console.
Software update started on node 7mode1. Updating image2 package: file://localhost/mroot/etc/software/8.tgz current image: image1
Listing package contents.
Decompressing package contents.
Invoking script (validation phase).
INSTALL running in check only mode
Mode of operation is UPDATE
Current image is image1
Alternate image is image2

reboot

Note: I also tried to run software update 82.tgz (a file I created from a running 8.2.1 simulator).
The update failed. Then I unzipped the 82.tgz and untarred it to the image2 directory. I booted
from the image2/kernel. Panic... Needs more work.

Posted in Uncategorized | Leave a comment

netapp add disks to simulator

f1> priv set advanced
( or in cdot: set d )
f1*> useradmin diaguser unlock
f1*> useradmin diaguser password
( or in cdot: security login unlock -username diag
security login password -username diag)

Enter a new password:*****
Enter it again:*****
f1*> systemshell -node nodename
login: diag
Password: *****
f1% sudo bash
bash-3.2# export PATH=${PATH}:/usr/sbin
bash-3.2# cd /sim/dev
bash-3.2# vsim_makedisks -h
(this will show all possible disk-types)

bash-3.2# vsim_makedisks -n 6 -t 23 -a 2
(this will create 6 drives of type 23 on
adapter 2. you can check the drives as follows)

bash-3.2# ls ,disks | more
,reservations
Shelf:DiskShelf14
v0.16:NETAPP__:VD-1000MB-FZ-520:11700900:2104448
v0.17:NETAPP__:VD-1000MB-FZ-520:11700901:2104448
v0.18:NETAPP__:VD-1000MB-FZ-520:11700902:2104448
v0.19:NETAPP__:VD-1000MB-FZ-520:11700903:2104448
(output snipped)
v1.22:NETAPP__:VD-1000MB-FZ-520:14091006:2104448
--More--(byte 1061)

(reboot your system)
bash-3.2# reboot
(the system reboots)
Password: *****
f1> disk show -n
(this will show the new unowned disks)

f1> disk assign all
(or in cdot: disk assign -all true -node )
(this will assign the new disks to the controller)

(done)

Posted in netapp | Leave a comment

netapp adding disks

7-mode and cmode

Posted in netapp | Leave a comment

linux allocate memory

#!/bin/bash

echo "Provide sleep time in the form of NUMBER[SUFFIX]"
echo " SUFFIX may be 's' for seconds (default), 'm' for minutes,"
echo " 'h' for hours, or 'd' for days."
read -p "> " delay

echo "begin allocating memory..."
for index in $(seq 1000); do
value=$(seq -w -s '' $index $(($index + 100000)))
eval array$index=$value
done
echo "...end allocating memory"

echo "sleeping for $delay"
sleep $delay
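
The loop eats memory because each seq call builds one long digit string that eval then stores in a fresh numbered variable. A scaled-down single iteration (range shrunk from 100000 to 4 so the output is readable):

```shell
#!/bin/bash
# One scaled-down iteration of the allocation loop: seq builds a
# joined digit string and eval stores it in a numbered variable.
index=3
value=$(seq -w -s '' $index $(($index + 4)))  # numbers 3..7, no separator
eval array$index=$value
echo "$array3"   # 34567
```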

Posted in linux | Leave a comment

cdot exercises

1. Your cluster has 2 cluster-interfaces per node.

Find an available network port per node and add
a third cluster-interface on each node to increase
the bandwidth on the cluster-network.

example

2. Create a new user (user1) in a vserver and allow
the user to create volumes.

example

3. Make sure you can login to your cluster without having
to specify a password.

example

4. Find out which disks are in which raidgroups.

example

5. Create a new rootaggregate

example

Posted in Uncategorized | Leave a comment

clustermode 8.2.1 qtree export

Support for qtree nfs-exports.

A new qtree
volume qtree create -vserver vserver_name -qtree-path /vol/volume_name/qtree_name -export-policy export_policy_name

An existing qtree
volume qtree modify -vserver vserver_name -qtree-path /vol/volume_name/qtree_name -export-policy export_policy_name

Posted in Uncategorized | Leave a comment

esx power on vm

To power on a virtual machine from the command line:
List the inventory ID of the virtual machine with the command:

vim-cmd vmsvc/getallvms | grep <vm-name>

Note: The first column of the output shows the vmid.

Check the power state of the virtual machine with the command:

vim-cmd vmsvc/power.getstate <vmid>

Power-on the virtual machine with the command:

vim-cmd vmsvc/power.on <vmid>

Posted in Virtualization | Leave a comment

mysql phpmyadmin install

phpmyadmin_installation

Step #1: Turn on EPEL repo

phpMyAdmin is not included in default RHEL / CentOS repo. So turn on EPEL repo as described here:
$ cd /tmp
$ wget http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
# rpm -ivh epel-release-6-8.noarch.rpm

Step #2: Install phpMyAdmin on a CentOS / RHEL Linux

Type the following yum command to download and install phpMyAdmin:
# yum search phpmyadmin
# yum -y install phpmyadmin

Install MySQL server on a CentOS/RHEL

You need to download and install the MySQL server on CentOS/RHEL using the following yum command:
# yum install mysql-server mysql

Turn on and start the mysql service, type:
# chkconfig mysqld on
# service mysqld start

Set root password and secure mysql installation by running the following command:
# mysql_secure_installation

Step #3: Configure phpMyAdmin

You need to edit /etc/httpd/conf.d/phpMyAdmin.conf file, enter:
# vi /etc/httpd/conf.d/phpMyAdmin.conf

It allows only localhost by default. You can set up HTTPD SSL as described here (mod_ssl) and allow LAN / WAN users or a DBA user to manage the database over the web. Find the line that reads as follows:

Require ip 127.0.0.1
Replace with your workstation IP address:

Require ip 10.1.3.53
Again find the following line:

Allow from 127.0.0.1
Replace as follows:

Allow from 10.1.3.53
Save and close the file. Restart Apache / httpd server:
# service httpd restart
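
The two find-and-replace edits above can also be scripted with sed; a sketch run against a throwaway copy of the file (the IP 10.1.3.53 is the example address from this post — substitute your own, and point at the real /etc/httpd/conf.d/phpMyAdmin.conf when doing it for real):

```shell
#!/bin/bash
# Apply the two access-rule edits with sed, against a throwaway
# copy instead of the real /etc/httpd/conf.d/phpMyAdmin.conf.
conf=/tmp/phpMyAdmin.conf.demo
printf 'Require ip 127.0.0.1\nAllow from 127.0.0.1\n' > "$conf"

sed -i -e 's/^Require ip 127\.0\.0\.1/Require ip 10.1.3.53/' \
       -e 's/^Allow from 127\.0\.0\.1/Allow from 10.1.3.53/' "$conf"

cat "$conf"
```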

Open a web browser and type the following url:
https://your-server-ip/phpMyAdmin/

Posted in Uncategorized | Leave a comment

mysql reset lost rootpassword

source: reset lost rootpassword

Step # 1: Stop the MySQL server:

# /etc/init.d/mysql stop

Output:

Stopping MySQL database server: mysqld.
Step # 2: Start the MySQL server w/o password:

# mysqld_safe --skip-grant-tables &

Output:

[1] 5988
Starting mysqld daemon with databases from /var/lib/mysql
mysqld_safe[6025]: started
Step # 3: Connect to mysql server using mysql client:

# mysql -u root

Output:

Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 1 to server version: 4.1.15-Debian_1-log
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql>
Step # 4: Set up a new MySQL root user password

mysql> use mysql;
mysql> update user set password=PASSWORD("NEW-ROOT-PASSWORD") where User='root';
mysql> flush privileges;
mysql> quit

Step # 5: Stop MySQL Server:

# /etc/init.d/mysql stop

Output:

Stopping MySQL database server: mysqld
STOPPING server from pid file /var/run/mysqld/mysqld.pid
mysqld_safe[6186]: ended
[1]+ Done mysqld_safe --skip-grant-tables
Step # 6: Start MySQL server and test it

# /etc/init.d/mysql start
# mysql -u root -p

----

or: /usr/bin/mysql_secure_installation

Posted in Uncategorized | Leave a comment

clustermode nice documents

documents

Posted in Uncategorized | Leave a comment

clustermode 8.2 vm setup

By Gerard Bosmann

Copy the unpacked simulator to ESX or VMware Workstation.

Edit the vmx file.

Look for the line scsi0.pciSlotNumber = "16"
and paste the following in its place:

pciBridge0.present = "TRUE"
pciBridge0.pciSlotNumber = "16"
scsi0.pciSlotNumber = "17"
ethernet0.pciSlotNumber = "18"
ethernet1.pciSlotNumber = "19"
ethernet2.pciSlotNumber = "20"
ethernet3.pciSlotNumber = "21"
ethernet4.pciSlotNumber = "22"
ethernet5.pciSlotNumber = "23"

Start node 1
Interrupt the boot and open the loader.
setenv bootarg.nvram.sysid 4079432737
setenv SYS_SERIAL_NUM 4079432-73-7
boot
bootmenu 4
Create the cluster.
Cluster Base license = SMKQROWJNQYQSDAAAAAAAAAAAAAA

CIFS WKGVRYETVDDCMAXAGAAAAAAAAAAA CIFS protocol
FCP UZBZSYETVDDCMAXAGAAAAAAAAAAA Fibre Channel Protocol
FlexClone QDTGVYETVDDCMAXAGAAAAAAAAAAA FlexClone
Insight_Balance WQDHZYETVDDCMAXAGAAAAAAAAAAA OnCommand Insight and Balance products
iSCSI IFEKSYETVDDCMAXAGAAAAAAAAAAA iSCSI protocol
NFS KQIGRYETVDDCMAXAGAAAAAAAAAAA NFS protocol
SnapLock OSOKWYETVDDCMAXAGAAAAAAAAAAA SnapLock Compliance
SnapLock_Enterprise KWFSYYETVDDCMAXAGAAAAAAAAAAA SnapLock Enterprise
SnapManager ANMZWYETVDDCMAXAGAAAAAAAAAAA SnapManager and SnapDrive products
SnapMirror EJVRUYETVDDCMAXAGAAAAAAAAAAA SnapMirror, including synchronous SnapMirror
SnapProtect MHKOXYETVDDCMAXAGAAAAAAAAAAA SnapProtect Applications
SnapRestore SOXCUYETVDDCMAXAGAAAAAAAAAAA SnapRestore
SnapVault CYQVVYETVDDCMAXAGAAAAAAAAAAA SnapVault primary and secondary

Create the SSD disks.
security login unlock -username diag
security login password -username diag ( choose a password )
set diag
systemshell -node node-x
login: diag
setenv PATH /sim/bin:$PATH
cd /sim/dev
sudo vsim_makedisks -t 35 -a 2 -n 14
exit
reboot -node node-x

After the reboot:
disk assign -all true -node node-x

Check that the node owns the licenses ( license show ).
Check that the disks are present ( disk show -type ssd, disk show -type fcal ).

Node 2

Edit the vmx file.
Look for the line scsi0.pciSlotNumber = "16"
and paste the following in its place:

pciBridge0.present = "TRUE"
pciBridge0.pciSlotNumber = "16"
scsi0.pciSlotNumber = "17"
ethernet0.pciSlotNumber = "18"
ethernet1.pciSlotNumber = "19"
ethernet2.pciSlotNumber = "20"
ethernet3.pciSlotNumber = "21"
ethernet4.pciSlotNumber = "22"
ethernet5.pciSlotNumber = "23"

Start node 2
Interrupt the boot and open the loader.
setenv bootarg.nvram.sysid 4079432741
setenv SYS_SERIAL_NUM 4079432-74-1
bootmenu 4
Join the cluster.

CIFS APJAYWXCCLPKICXAGAAAAAAAAAAA CIFS protocol
FCP YDFEZWXCCLPKICXAGAAAAAAAAAAA Fibre Channel Protocol
FlexClone UHWLBXXCCLPKICXAGAAAAAAAAAAA FlexClone
Insight_Balance AVGMFXXCCLPKICXAGAAAAAAAAAAA OnCommand Insight and Balance products
iSCSI MJHPYWXCCLPKICXAGAAAAAAAAAAA iSCSI protocol
NFS OULLXWXCCLPKICXAGAAAAAAAAAAA NFS protocol
SnapLock SWRPCXXCCLPKICXAGAAAAAAAAAAA SnapLock Compliance
SnapLock_Enterprise OAJXEXXCCLPKICXAGAAAAAAAAAAA SnapLock Enterprise
SnapManager ERPEDXXCCLPKICXAGAAAAAAAAAAA SnapManager and SnapDrive products
SnapMirror INYWAXXCCLPKICXAGAAAAAAAAAAA SnapMirror, including synchronous SnapMirror
SnapProtect QLNTDXXCCLPKICXAGAAAAAAAAAAA SnapProtect Applications
SnapRestore WSAIAXXCCLPKICXAGAAAAAAAAAAA SnapRestore
SnapVault GCUACXXCCLPKICXAGAAAAAAAAAAA SnapVault primary and secondary

Create the SSD disks.
security login unlock -username diag
security login password -username diag ( choose a password )
set diag
systemshell -node node-x
login: diag
setenv PATH /sim/bin:$PATH
cd /sim/dev
sudo vsim_makedisks -t 35 -a 2 -n 14
exit
reboot -node node-x

After the reboot:
disk assign -all true -node node-x

Check that the node owns the licenses ( license show ).
Check that the disks are present ( disk show -type ssd, disk show -type fcal ).

Posted in netapp | Leave a comment

clustermode get rid of snap because of smaller disks in vm environment

system node run -node cluster1-01 vol options vol0 nosnap on
system node run -node cluster1-02 vol options vol0 nosnap on

cluster1::> system node run -node cluster1-01 snap reserve vol0 0
cluster1::> system node run -node cluster1-02 snap reserve vol0 0

Posted in Uncategorized | Leave a comment

7-mode nfsmounts

From any linux/unix client, check which clients have mounted
which exported volumes from a particular filer.

1. On the filer
filer> options nfs.mountd.trace on

2. On any client
mount an exported volume from the filer.

3. On client
client> ssh root@filer rdfile /etc/messages | grep "in mount"| \
awk '{print $8 " "$18}'
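
The awk step only picks out two whitespace-separated fields from each matching syslog line. You can dry-run it locally against a fabricated trace line; the message wording below is made up purely to put a client address in field 8 and a path in field 18 as the command above assumes, and the real mountd trace text may differ:

```shell
#!/bin/bash
# Dry-run the awk field extraction against one fabricated trace line:
# field 8 holds the client address, field 18 the mounted path.
line='Jan 10 12:00:00 filer mountd: in mount 192.168.4.10 f9 f10 f11 f12 f13 f14 f15 f16 f17 /vol/vol1'
echo "$line" | awk '{print $8 " "$18}'   # 192.168.4.10 /vol/vol1
```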

Posted in netapp | Leave a comment

netapp 7-mode licenses 8.2

7-Mode Data ONTAP Feature Licenses
==================================

Licenses for the ESX build (Serial Number 4079432752)
-----------------------------------------------------

Use these licenses with the VMware ESX build.

Feature License Code Description
------------------- ---------------------------- --------------------------------------------

CIFS WMNZAUTQACAAAAXAGAAAAAAAAAAA CIFS protocol
FCP UBJDCUTQACAAAAXAGAAAAAAAAAAA Fibre Channel Protocol
FlexClone QFALEUTQACAAAAXAGAAAAAAAAAAA FlexClone
Insight_Balance WSKLIUTQACAAAAXAGAAAAAAAAAAA OnCommand Insight and Balance products
iSCSI IHLOBUTQACAAAAXAGAAAAAAAAAAA iSCSI protocol
NFS KSPKAUTQACAAAAXAGAAAAAAAAAAA NFS protocol
SnapLock OUVOFUTQACAAAAXAGAAAAAAAAAAA SnapLock Compliance
SnapLock_Enterprise KYMWHUTQACAAAAXAGAAAAAAAAAAA SnapLock Enterprise
SnapManager APTDGUTQACAAAAXAGAAAAAAAAAAA SnapManager and SnapDrive products
SnapMirror ELCWDUTQACAAAAXAGAAAAAAAAAAA SnapMirror, including synchronous SnapMirror
SnapProtect MJRSGUTQACAAAAXAGAAAAAAAAAAA SnapProtect Applications
SnapRestore SQEHDUTQACAAAAXAGAAAAAAAAAAA SnapRestore
SnapVault CAYZEUTQACAAAAXAGAAAAAAAAAAA SnapVault primary and secondary

Licenses for the non-ESX build (Serial Number 4079432748)
---------------------------------------------------------

Use these licenses with the VMware Workstation, VMware Player, and VMware Fusion build.

Feature License Code Description
------------------- ---------------------------- --------------------------------------------

CIFS UPIWINYTTXKZOFXAGAAAAAAAAAAA CIFS protocol
FCP SEEAKNYTTXKZOFXAGAAAAAAAAAAA Fibre Channel Protocol
FlexClone OIVHMNYTTXKZOFXAGAAAAAAAAAAA FlexClone
Insight_Balance UVFIQNYTTXKZOFXAGAAAAAAAAAAA OnCommand Insight and Balance products
iSCSI GKGLJNYTTXKZOFXAGAAAAAAAAAAA iSCSI protocol
NFS IVKHINYTTXKZOFXAGAAAAAAAAAAA NFS protocol
SnapLock MXQLNNYTTXKZOFXAGAAAAAAAAAAA SnapLock Compliance
SnapLock_Enterprise IBITPNYTTXKZOFXAGAAAAAAAAAAA SnapLock Enterprise
SnapManager YROAONYTTXKZOFXAGAAAAAAAAAAA SnapManager and SnapDrive products
SnapMirror COXSLNYTTXKZOFXAGAAAAAAAAAAA SnapMirror, including synchronous SnapMirror
SnapProtect KMMPONYTTXKZOFXAGAAAAAAAAAAA SnapProtect Applications
SnapRestore QTZDLNYTTXKZOFXAGAAAAAAAAAAA SnapRestore
SnapVault ADTWMNYTTXKZOFXAGAAAAAAAAAAA SnapVault primary and secondary

Posted in Uncategorized | 1 Comment

vmware vmkload_mod multiextent

ESXi no longer automatically loads this module. This results in the following:

You are unable to power on some virtual machines.
Powering on a virtual machine fails with the error:

File [VMFS volume] VM-name/VM-name.vmdk was not found.

When you view the details of this message, you see entries similar to:

Error Stack:
An error was received from the ESX host while powering on VM VM-name
Cannot open the disk '/vmfs/volumes/Datastore/VM-name/VM-name.vmdk' or one of the snapshot disks it depends on.
The system cannot find the file specified.
VMware ESX cannot find the virtual disk '/vmfs/volumes/Datastore/VM-name/VM-name.vmdk'. Verify the path is valid and try again.


Solution:
Log in to your ESX server and run this command:
vmkload_mod multiextent

Posted in Virtualization | Leave a comment

netapp 7-mode systemshell

7-mode systemshell

Posted in Uncategorized | Leave a comment

solaris 11 integrated load balancer example

ilb example

Posted in solaris | Leave a comment

solaris 11 lacp dlmp link-aggregate

1. create an aggregate (aggr0) over net1 net2 and net3
( in vbox Adapter1=net3 Adapter2=net0 Adapter3=net1 Adapter4=net2 )

dladm create-aggr -l net1 -l net2 -l net3 aggr0
dladm modify-aggr -m trunk aggr0
(trunk is the default though)

dladm show-aggr -L
(what is the policy?
L2: source and destination MAC addresses
L3: source and destination IP addresses
L4: upper-layer protocol information contained in the packet (TCP/UDP port))

2. dladm modify-aggr -m dlmp aggr0
dladm show-aggr -L
(see the mode has changed to dlmp)

3. create an ip interface
ipadm create-ip aggr0

4. create an address
ipadm create-addr -T static -a 192.168.0.120/24 aggr0/v4

5. from another machine run 'ping -s 192.168.0.120'

6. remove cables from vm interfaces one by one.
your pings will succeed until you remove the last cable.

7. with dladm show-link you will see that the removed cables
have an effect.

Posted in solaris | Leave a comment

solaris11 distro_const

distribution constructor

pfexec pkg install SUNWdistro-const

mkdir -p /ips/manifests
cp /usr/share/distro_const/dc_text_x86.xml /ips/manifests/

distro_const build /ips/manifests/dc_text_x86.xml

Posted in Uncategorized | Leave a comment

solaris 11 exercise pkg search and install

To search for a package in a particular publisher:

# pkg search -s http://192.168.4.150 <package_name>
(output skipped)

To install a package from a particular publisher:

# pkg install -g http://192.168.4.150 <package_name>
(output skipped)

Posted in solaris | Leave a comment

solaris script to create and delete a zone

1. Create a filesystem and install a zone,
then snapshot the filesystem and use it
for cloning at zone creation.

zonecfg -z basezone
create
set zonepath=/zonepool/basezone
exit
zoneadm -z basezone install

zfs snapshot zonepool/basezone@base

script:
zcreate
==========================================
#!/usr/bin/bash

if test $# -lt 1
then
echo "usage : zcreate zonename "
exit
fi

#put config in place
zfs clone zonepool/basezone@base zonepool/$1
zonecfg -z $1 < /etc/zones/index.new
cat /etc/zones/index.orig /etc/zones/index.new > /etc/zones/index

zoneadm -z $1 boot
=============================================

script:
rzone
==============================================
cp /etc/zones/index.orig /etc/zones/index
rm /etc/zones/${1}.xml
zfs destroy zonepool/${1}

==============================================
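
The argument guard at the top of zcreate can be tried on its own; a minimal stand-in (zdemo is a made-up name, and the echo replaces the real clone/boot work):

```shell
#!/bin/bash
# Reproduce the argument guard from zcreate as a function.
zdemo() {
    if test $# -lt 1
    then
        echo "usage : zdemo zonename"
        return 1
    fi
    echo "would create zone $1"
}

zdemo || true    # no argument: prints the usage message
zdemo myzone     # prints: would create zone myzone
```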

Posted in solaris | Leave a comment

testje

ssh hello

Posted in Uncategorized | Leave a comment

esxi mac address

vim-cmd hostsvc/net/info | grep "mac ="
mac = (string) [
mac = "00:1b:78:59:eb:52",
mac = "00:1b:78:59:eb:53",
mac = "80:ee:73:63:4e:1c",
mac = "00:1b:78:59:eb:52",

Posted in Virtualization | Leave a comment

solaris11 network stack

solaris 11 network stack

Posted in solaris | Leave a comment

solaris11 networking getting started

oracle html

New Features of Oracle Solaris 11 Network Configuration
Manual and Automatic Networking Modes
Manual Network Configuration
Name Service Configuration Using SMF
Setting the Host Name
Changes to /etc/hosts
Automatic Network Configuration Using Profiles
Network Profiles
Creating a Network Configuration Profile
Summary
See Also
About the Author

The Oracle Solaris 11 network architecture is significantly different from previous releases of Oracle Solaris. Not only has the implementation changed, but so have the names of network interfaces and the commands and methods for administering and configuring them.

These changes were introduced to bring a more consistent and integrated experience to network administration, particularly as administrators add more-complex configurations including link aggregation, bridging, load balancing, or virtual networks. In addition to the traditional fixed networking configuration, Oracle Solaris 11 introduced automatic network configuration through network profiles.

New Features of Oracle Solaris 11 Network Configuration
Oracle Solaris 11 introduced two new commands for manually administering networks, dladm and ipadm, and both supersede ifconfig. Unlike ifconfig, changes made by dladm and ipadm are persistent across reboots. They share a common, consistent command format and, unlike ifconfig, they have parseable output that can be used in scripts.

dladm performs data-link (layer 2) administration to configure physical links, aggregations, VLANs, IP tunnels, and InfiniBand partitions. It also manages link-layer properties.

ipadm configures IP interfaces, IP addresses, and TCP/IP protocol properties. It also replaces the use of ndd for network and transport layer tuning.

Data-link names are no longer the same as the physical interface, which might be a virtual device. Instead, they have generic names, such as net0 or net1, or administrators can give them descriptive names. This allows the underlying hardware to be changed without impacting the network configuration.

In addition, Oracle Solaris 11 adds automatic network configuration using network profiles. Profiles are managed with two administrative commands—netadm and netcfg—and describe the configuration of network interfaces, name services, routing, and IP filter and IPsec policies in a single entity.

Manual and Automatic Networking Modes
Oracle Solaris 11 uses profile-based network configuration, which comprises two network configuration modes: manual and automatic.

Depending on which mode you chose during installation, either the DefaultFixed network configuration profile (NCP) or the Automatic NCP is activated on the system.

The Automatic NCP uses DHCP to obtain a basic network configuration (IP address, router, and DNS server) from any of the connected Ethernet interfaces. If this fails, it will try connecting to the best wireless network in the list of known networks.

The DefaultFixed NCP effectively disables automatic network configuration and requires the network interfaces to be manually configured using dladm and ipadm and the name services to be configured using the Oracle Solaris Service Management Facility (SMF).

It is easier to manage Oracle Solaris 11 networking by creating your own NCPs rather than using the DefaultFixed NCP and manually configuring the network.

The DefaultFixed NCP should be used on systems that will be reconfigured using Oracle Solaris Dynamic Reconfiguration or where hot-swappable interfaces are used. It must be used for IP multipathing, which is not supported when using the Automatic NCP.

You can use netadm to find out what network profiles are active on a system:

root@solaris:~# netadm list
TYPE PROFILE STATE
ncp Automatic online
ncu:phys net0 online
ncu:ip net0 online
loc Automatic online
loc NoNet offline
loc User online

Without going into too much detail now (we will cover this in a later section), the output above shows that the Automatic NCP is enabled.

To switch to the DefaultFixed NCP and, thus, enable manual networking, run the following command:

root@solaris:~# netadm enable -p ncp DefaultFixed
root@solaris:~# netadm list
netadm: DefaultFixed NCP is enabled; automatic network management is not available.
'netadm list' is only supported when automatic network management is active.

And to switch back to the Automatic NCP, use the following command:

root@solaris:~# netadm enable -p ncp Automatic
root@solaris:~# netadm list
TYPE PROFILE STATE
ncp Automatic uninitialized
ncu:phys net0 uninitialized
ncu:ip net0 uninitialized
loc Automatic uninitialized

As the system starts to configure the data links and receives an IP address from the DHCP server, we soon get back to our original online state:

root@solaris:~# netadm list
TYPE PROFILE STATE
ncp Automatic online
ncu:phys net0 online
ncu:ip net0 online
loc Automatic online
loc NoNet offline
loc User online

Manual Network Configuration
In the following example, we will manually configure our server to have a static IPv4 address of 10.163.198.20.

First of all, we will switch to the DefaultFixed NCP, if that hasn't been done already:

root@solaris:~# netadm enable -p ncp DefaultFixed

On a machine with multiple physical networks, you can use dladm to determine how network interface names are mapped to physical interfaces.

root@solaris:~# dladm show-phys
LINK MEDIA STATE SPEED DUPLEX DEVICE
net0 Ethernet up 1000 full e1000g0
net1 Ethernet unknown 0 unknown pcn0

Creating a static IP address is a two-step process, and it involves creating an IP interface and an IP address. There can be multiple IP addresses associated with an IP interface. IP address objects have names in the form interface/description.

In the example shown in Listing 1, we use acme as the description.

root@solaris:~# ipadm create-ip net0
root@solaris:~# ipadm show-if
IFNAME CLASS STATE ACTIVE OVER
lo0 loopback ok yes ---
net0 ip down no ---
root@solaris:~# ipadm create-addr -T static -a 10.163.198.20/24 net0/acme
root@solaris:~# ipadm show-if
IFNAME CLASS STATE ACTIVE OVER
lo0 loopback ok yes ---
net0 ip ok yes ---
root@solaris:~# ipadm show-addr
ADDROBJ TYPE STATIC ADDR
lo0/v4 static ok 127.0.0.1/8
net0/acme static ok 10.163.198.20/24
lo0/v6 static ok ::1/128
Listing 1. Configuring a Static IP Address

We can then add a persistent default route:

root@solaris:~# route -p add default 10.163.198.1
add net default: gateway 10.163.198.1
add persistent net default: gateway 10.163.198.1

Name Service Configuration Using SMF
The name service configuration is now stored and configured via SMF services instead of via configuration files in /etc. This change is part of a wider set of configuration changes in Oracle Solaris 11, which provides a greater degree of administrative auditability and control over system configuration, particularly during system updates.

The SMF service svc:/network/dns/client manages configuration information that used to be in /etc/resolv.conf. The SMF service svc:/system/name-service/switch manages configuration information that used to be in /etc/nsswitch.conf. In both cases, the configuration information is also stored in the legacy files for compatibility with other applications that might read them. You should not directly edit these legacy files. Changes made to properties are not reflected in the legacy files until the service is refreshed, restarted, or enabled.

Note: Specifying lists and strings as SMF properties requires quoting them or escaping parentheses and quotation marks to prevent the shell from interpreting them.
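
The note above can be made concrete with a small sketch: a hypothetical helper (not part of svccfg) that builds the quoted list value for a multi-valued astring property such as config/search, so the parentheses and quotation marks survive the shell:

```shell
# smf_list is a hypothetical convenience function, not a Solaris command:
# it emits a parenthesized, quoted list suitable for pasting (inside
# single quotes) into a svccfg setprop invocation.
smf_list() {
  out="("
  for v in "$@"; do
    out="$out\"$v\" "
  done
  printf '%s)' "${out% }"   # trim the trailing space, close the list
}

smf_list uk.acme.com us.acme.com acme.com
# emits: ("uk.acme.com" "us.acme.com" "acme.com")
```

Passing the result single-quoted to svccfg keeps the shell from interpreting the parentheses, which is exactly what the note warns about.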

Example: Configuring a DNS Client Using SMF
In the following example, we configure Domain Name Service (DNS) using the svccfg command on the svc:/network/dns/client SMF service. This will give us the ability to look up IP addresses for host names and vice versa:

root@solaris:~# svccfg -s svc:/network/dns/client setprop \
config/search='("uk.acme.com" "us.acme.com" "acme.com")'

root@solaris:~# svccfg -s svc:/network/dns/client listprop config/search
config/search astring "uk.acme.com" "us.acme.com" "acme.com"

root@solaris:~# svccfg -s svc:/network/dns/client setprop \
config/nameserver=net_address: '(10.167.162.20 10.167.162.36)'

root@solaris:~# svccfg -s svc:/network/dns/client listprop config/nameserver
config/nameserver net_address 10.167.162.20 10.167.162.36

After we have made the configuration changes, we refresh the SMF service:

root@solaris:~# svcadm refresh svc:/network/dns/client
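
To confirm the refresh took effect, you can read the property back with svcprop (the read-only companion of svccfg) and check the regenerated legacy file:

```shell
# show the nameserver list as stored in the SMF repository
svcprop -p config/nameserver svc:/network/dns/client
# after the refresh, the legacy file should list the same servers
grep nameserver /etc/resolv.conf
```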

It is not necessary to set the properties for every name service database. You can use the special property config/default to provide a default value. You can individually customize entries that can't use the default value.

Example: Configuring /etc/switch.conf Using SMF
In the following example, we use the name service switch mechanism to allow our system to search through the DNS, LDAP, NIS, or local file sources for naming information. We again use the svccfg command on the svc:/system/name-service/switch SMF service:

root@solaris:~# svccfg -s svc:/system/name-service/switch setprop config/default = "files nis"
root@solaris:~# svccfg -s svc:/system/name-service/switch setprop config/host = "files dns nis"
root@solaris:~# svccfg -s svc:/system/name-service/switch setprop config/password = "files nis"
root@solaris:~# svcadm refresh svc:/system/name-service/switch

Note: The config/host property defines both the hosts and ipnodes entries in /etc/nsswitch.conf, while the config/password property defines the passwd entry. The remaining properties have the same name as their /etc/nsswitch.conf entries.

Setting the Host Name
In Oracle Solaris 11, /etc/nodename has been removed and replaced with the config/nodename property of the svc:/system/identity:node service.

To set the host name, we again use svccfg:

root@solaris:~# svccfg -s svc:/system/identity:node setprop config/nodename = astring: hostname
root@solaris:~# svcadm refresh svc:/system/identity:node
root@solaris:~# svcadm restart identity:node

Setting the host name this way will work for both automatic and manual network configurations.

Changes to /etc/hosts
In Oracle Solaris 11, the host's own entry in /etc/hosts is now the same as that of localhost. In previous versions of Oracle Solaris, this entry was associated with the first network interface.

root@solaris:~# cat /etc/hosts
#
# Copyright 2009 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
# Internet host table
#
::1 solaris localhost
127.0.0.1 solaris localhost loghost

Note: Some application installers might fail due to changes in the /etc/hosts file. If you experience this, you might have to edit /etc/hosts directly.

Automatic Network Configuration Using Profiles
In Oracle Solaris 11, network profiles aggregate network configuration that was scattered across multiple configuration files in previous versions of Oracle Solaris. Switching network profiles applies a set of changes to different parts of the network configuration in a single administrative operation.

The traditional configuration files still exist for compatibility reasons only, but you should not directly edit any of these files because any modifications will be overwritten when a profile is activated or the system is rebooted.

Network Profiles
A network profile contains a Network Configuration Profile (NCP) and a Location Profile at a minimum, and it optionally contains External Network Modifiers (ENMs) and Known Wireless Networks (WLANs).

NCPs define a set of data links and IP interfaces as Network Configuration Units (NCUs). A Location Profile defines additional configuration, such as name service, IP filter rules, and IPsec policies that can be configured only after basic IP configuration.

ENMs are applications or services that directly modify the network configuration when a profile is activated or deactivated. An ENM would be needed to configure a virtual private network (VPN), for example. The use of ENMs or the configuration of wireless networks is not covered in this article.

Profiles have an activation mode that is either manual or automatic. When an automatic profile is active, external network events cause Oracle Solaris to re-evaluate which is the "best" automatic profile and make that profile active. External events include connecting or disconnecting an Ethernet cable, obtaining or losing a DHCP lease, or discovering a wireless network. There is always an active NCP and Location Profile. It is not possible to disable networking by disabling the current profile.

Creating a Network Configuration Profile
Without modification, the Automatic profile is generally unsuitable for most corporate networks, which are either static or provide more configuration information via DHCP than the Automatic profile uses.

If your network has statically allocated IP addresses, you will need to create an NCP and a Location Profile.

In this example, we look at a typical corporate network of a fictional Acme corporation. It has statically allocated network addresses, uses a combination of NIS and DNS, and does not use IPv6.

To configure a system on the Acme network, we need to create an NCP and a Location Profile.

Example: Creating an NCP
To create the NCP and its component NCUs, we use netcfg. For the physical link, we accept the defaults provided by netcfg. For the IP configuration, we want IPv4 addressing and static IP address allocation, as shown in Listing 2.

root@solaris:~# netcfg
netcfg> create ncp acme.corp.ncp
netcfg:ncp:acme.corp.ncp> create ncu phys net0
Created ncu 'net0'. Walking properties ...
activation-mode (manual) [manual|prioritized]>
link-mac-addr>
link-autopush>
link-mtu>
netcfg:ncp:acme.corp.ncp:ncu:net0> list
ncu:net0
type link
class phys
parent "acme.corp.ncp"
activation-mode manual
enabled true
netcfg:ncp:acme.corp.ncp:ncu:net0> end
Committed changes
netcfg:ncp:acme.corp.ncp> create ncu ip net0
Created ncu 'net0'. Walking properties ...
ip-version (ipv4,ipv6) [ipv4|ipv6]> ipv4
ipv4-addrsrc (dhcp) [dhcp|static]> static
ipv4-addr> 10.163.198.20/24
ipv4-default-route> 10.163.198.1
netcfg:ncp:acme.corp.ncp:ncu:net0> list
ncu:net0
type interface
class ip
parent "acme.corp.ncp"
enabled true
ip-version ipv4
ipv4-addrsrc static
ipv4-addr "10.163.198.20/24"
ipv4-default-route "10.163.198.1"
ipv6-addrsrc dhcp,autoconf
netcfg:ncp:acme.corp.ncp:ncu:net0> end
Committed changes
netcfg:ncp:acme.corp.ncp> end
netcfg> end
Listing 2. Creating the NCP

Now we need to create the Location Profile, as shown in Listing 3. We associate the Location Profile with the NCP through its activation mode: the Location Profile will activate automatically as long as the NCP is active.

Since Acme uses a combination of NIS and DNS name services, we need to provide our own /etc/nsswitch.conf, which we will call /etc/nsswitch.acme.

root@solaris:~# netcfg
netcfg> create loc acme.corp.loc
Created loc 'acme.corp.loc'. Walking properties ...
activation-mode (manual) [manual|conditional-any|conditional-all]> conditional-all
conditions> ncp acme.corp.ncp is active
nameservices (dns) [dns|files|nis|ldap]> dns,nis
nameservices-config-file ("/etc/nsswitch.dns")> /etc/nsswitch.acme
dns-nameservice-configsrc (dhcp) [manual|dhcp]> manual
dns-nameservice-domain>
dns-nameservice-servers> 10.167.162.20,10.167.162.36
dns-nameservice-search> acme.com,uk.acme.com,us.acme.com
dns-nameservice-sortlist>
dns-nameservice-options>
nis-nameservice-configsrc [manual|dhcp]> manual
nis-nameservice-servers> 10.167.162.21
default-domain> acme.com
nfsv4-domain>
ipfilter-config-file>
ipfilter-v6-config-file>
ipnat-config-file>
ippool-config-file>
ike-config-file>
ipsecpolicy-config-file>
netcfg:loc:acme.corp.loc> list
loc:acme.corp.loc
activation-mode conditional-all
conditions "ncp acme.corp.ncp is active"
enabled false
nameservices dns,nis
nameservices-config-file "/etc/nsswitch.acme"
dns-nameservice-configsrc manual
dns-nameservice-servers "10.167.162.20","10.167.162.36"
dns-nameservice-search "acme.com","uk.acme.com","us.acme.com"
nis-nameservice-configsrc manual
nis-nameservice-servers "10.167.162.21"
default-domain "acme.com"
netcfg:loc:acme.corp.loc> end
Committed changes
netcfg> end
Listing 3. Creating the Location Profile

Now we can activate the NCP, as shown in Listing 4, and the Location Profile will be automatically activated.

root@solaris:~# netadm enable acme.corp.ncp
Enabling ncp 'acme.corp.ncp'
root@solaris:~# netadm list
TYPE PROFILE STATE
ncp acme.corp.ncp online
ncu:phys net0 online
ncu:ip net0 online
ncp Automatic disabled
loc acme.corp.loc online
loc Automatic offline
loc NoNet offline
loc User disabled
Listing 4. Activating the NCP

Editing an NCP
There are two ways to edit an existing NCP with netcfg. The set command lets you set individual properties, while the walkprop command walks you through all the properties.

netcfg automatically performs a walkprop command when you create a profile.

In the example shown in Listing 5, we add a third DNS server to the existing acme.corp.loc Location Profile.

root@solaris:~# netcfg
netcfg> select loc acme.corp.loc
netcfg:loc:acme.corp.loc> list
loc:acme.corp.loc
activation-mode conditional-all
conditions "ncp acme.corp.ncp is active"
enabled false
nameservices dns,nis
nameservices-config-file "/etc/nsswitch.acme"
dns-nameservice-configsrc manual
dns-nameservice-servers "10.167.162.20","10.167.162.36"
dns-nameservice-search "acme.com", "uk.acme.com","us.acme.com"
nis-nameservice-configsrc manual
nis-nameservice-servers "10.167.162.21"
default-domain "acme.com"
netcfg:loc:acme.corp.loc>
Listing 5. Adding a DNS Server

The list command shows only properties that have been set; list -a shows all the properties of the profile, as shown in Listing 6.

netcfg:loc:acme.corp.loc> list -a
loc:acme.corp.loc
activation-mode conditional-all
conditions "ncp acme.corp.ncp is active"
enabled false
nameservices dns,nis
nameservices-config-file "/etc/nsswitch.acme"
dns-nameservice-configsrc manual
dns-nameservice-domain
dns-nameservice-servers "10.167.162.20","10.167.162.36"
dns-nameservice-search "acme.com","uk.acme.com","us.acme.com"
dns-nameservice-sortlist
dns-nameservice-options
nis-nameservice-configsrc manual
nis-nameservice-servers "10.167.162.21"
ldap-nameservice-configsrc
ldap-nameservice-servers
default-domain "acme.com"
nfsv4-domain
ipfilter-config-file
ipfilter-v6-config-file
ipnat-config-file
ippool-config-file
ike-config-file
ipsecpolicy-config-file
netcfg:loc:acme.corp.loc>

netcfg:loc:acme.corp.loc> set dns-nameservice-servers = "10.167.162.20","10.167.162.36","192.135.82.44"
netcfg:loc:acme.corp.loc> list
loc:acme.corp.loc
activation-mode conditional-all
conditions "ncp acme.corp.ncp is active"
enabled false
nameservices dns,nis
nameservices-config-file "/etc/nsswitch.acme"
dns-nameservice-configsrc manual
dns-nameservice-servers "10.167.162.20","10.167.162.36","192.135.82.44"
dns-nameservice-search "acme.com","uk.acme.com","us.acme.com"
nis-nameservice-configsrc manual
nis-nameservice-servers "10.167.162.21"
netcfg:loc:acme.corp.loc> verify
All properties verified
netcfg:loc:acme.corp.loc> commit
Committed changes
netcfg:loc:acme.corp.loc> end
netcfg> end
root@solaris:~#
Listing 6. Showing All Properties

Summary
Network configuration has substantially changed in Oracle Solaris 11 with the introduction of network configuration profiles and consolidated administration across the different facets of networking fabrics in the data center. By using network configuration profiles, administrators can simplify complex configurations and apply them as a single unit of change.

See Also
For more information related to Oracle Solaris 11 network administration, see the following administration guides:

Oracle Solaris Administration: IP Services
Oracle Solaris Administration: Naming and Directory Services
Oracle Solaris Administration: Network Interfaces and Network Virtualization
Transitioning From Oracle Solaris 10 to Oracle Solaris 11
Here are some additional Oracle Solaris 11 resources:

Download Oracle Solaris 11
Access Oracle Solaris 11 product documentation
Access all Oracle Solaris 11 how-to articles
Learn more with Oracle Solaris 11 training and support
See the official Oracle Solaris blog
Check out The Observatory and OTN Garage blogs for Oracle Solaris tips and tricks
Follow Oracle Solaris on Facebook and Twitter
About the Author
Andrew Walton is a senior engineer in the ISV group at Oracle and has over 20 years experience in the UNIX industry working at Silicon Graphics, Sun, and Oracle. He specializes in application performance tuning and porting C and C++ code.

Revision 1.0, 05/16/2012

Posted in solaris | Leave a comment

solaris11 ai_installer non_global zones

installing non-global zones with ai

Posted in solaris, Uncategorized | Leave a comment

solaris 11 exercise zones (2) resource control

Situation: global zone, zone1 and zone2.

First bring all processes under FSS control
dispadmin -d FSS
this sets the default scheduling class to FSS at the next reboot
(recorded in /etc/dispadmin.conf).

To switch a running system without a reboot:
priocntl -s -c FSS -i all
priocntl -s -c FSS -i pid 1

prctl -n zone.cpu-shares -v 10 -r -i zone global
prctl -n zone.cpu-shares -i zone global
prctl -n zone.cpu-shares -v 10 -r -i zone zone1
prctl -n zone.cpu-shares -v 40 -r -i zone zone2
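
Under FSS, a zone's CPU entitlement when every zone is busy is its shares divided by the total shares in play. A quick sketch of the arithmetic for the shares set above:

```shell
# entitlement = zone shares / total shares (integer percent)
fss_pct() {  # fss_pct <zone-shares> <total-shares>
  echo $(( $1 * 100 / $2 ))
}
total=$(( 10 + 10 + 40 ))   # global + zone1 + zone2
fss_pct 10 $total           # global and zone1: 16% each (truncated)
fss_pct 40 $total           # zone2: 66%
```

Shares only matter under contention; on an idle system any single zone may still use all available CPU.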

script:
#!/usr/bin/bash
# note: the assignment is just a string assignment (bash does no
# floating-point math); the infinite loop alone keeps a CPU spinning
while true
do
a=121.22/3.34
done

copy the script to the zone root dir of both zones
and run it from within the zone.

use: prstat -Z from within the global zone
or
use: zonestat -z zone1,zone2 -R total,high 5 1m

to watch the cpu usage of each zone.

to change the cpu-shares in the zoneconfig:

zonecfg:zone1> add rctl
zonecfg:zone1:rctl> set name=zone.cpu-shares
zonecfg:zone1:rctl> add value (priv=privileged,limit=10,action=none)
zonecfg:zone1:rctl> end
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit

other resource controls:

zonecfg:my-zone> add dedicated-cpu
zonecfg:my-zone:dedicated-cpu> set ncpus=1-3
zonecfg:my-zone:dedicated-cpu> set importance=2
zonecfg:my-zone:dedicated-cpu> end
capped-cpu (property: ncpus)

Specify the number of CPUs. The following example specifies a CPU cap of 3.5 CPUs for the zone my-zone.

zonecfg:my-zone> add capped-cpu
zonecfg:my-zone:capped-cpu> set ncpus=3.5
zonecfg:my-zone:capped-cpu> end
capped-memory (properties: physical, swap, locked)

Specify the memory limits for the zone my-zone. Each limit is optional, but at least one must be set.

zonecfg:my-zone> add capped-memory
zonecfg:my-zone:capped-memory> set physical=50m
zonecfg:my-zone:capped-memory> set swap=100m
zonecfg:my-zone:capped-memory> set locked=30m
zonecfg:my-zone:capped-memory> end
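
The m suffixes above are zonecfg size shorthand. A small sketch for converting them to bytes when sanity-checking caps (the helper is hypothetical, not a zonecfg feature):

```shell
# to_bytes: hypothetical helper that expands zonecfg-style k/m/g
# size suffixes into bytes
to_bytes() {
  n=${1%?}; suf=${1#"$n"}
  case $suf in
    [kK]) echo $(( n * 1024 ));;
    [mM]) echo $(( n * 1024 * 1024 ));;
    [gG]) echo $(( n * 1024 * 1024 * 1024 ));;
    *)    echo "$1";;   # plain number: already bytes
  esac
}
to_bytes 50m   # physical cap above: 52428800 bytes
```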

Posted in solaris | Leave a comment

distro_const example

distro_const

Posted in solaris | Leave a comment

solaris 11 exercise zones (3) clone zone

1. create webzone-1
root@global:~# zonecfg -z webzone-1 "create ; set zonepath=/zones/webzone-1"

2. install webzone-1
root@global:~# zoneadm -z webzone-1 install

3. login and configure webzone-1
root@global:~# zoneadm -z webzone-1 boot; zlogin -C webzone-1

4. create template-profile in webzone-1
root@global:~# zlogin webzone-1
root@webzone-1:~# sysconfig create-profile -o /root/webzone-2-template.xml

5. shutdown webzone-1
root@global:~# zoneadm -z webzone-1 shutdown

6. export webzone-1 to profile
root@global:~# zonecfg -z webzone-1 export -f /zones/webzone-2-profile

7. edit the zonepath in profile
root@global:~# cat /zones/webzone-2-profile
create -b
set zonepath=/zones/webzone-2
set brand=solaris
set autoboot=true
set ip-type=exclusive
add anet
set linkname=net0
set lower-link=auto
set configure-allowed-address=false
set link-protection=mac-nospoof
set mac-address=random
set auto-mac-address=2:8:20:f1:e4:b7
end

8. copy xml sysconfig file from webzone-1 to /zones
root@global:~# cp /zones/webzone-1/root/root/webzone-2-template.xml /zones

9. create webzone-2
root@global:~# zonecfg -z webzone-2 -f /zones/webzone-2-profile

10. clone webzone-2 from webzone-1
root@global:/zones# time zoneadm -z webzone-2 clone -c /zones/webzone-2-template.xml webzone-1

11. login to webzone-2
root@global:~# zoneadm -z webzone-2 boot; zlogin -C webzone-2

Posted in solaris | Leave a comment

solaris 11 exercise smf (3) system identity

# svccfg -s svc:/system/identity:node setprop config/nodename = "myhost"

# svcadm refresh svc:/system/identity:node
# svcadm restart svc:/system/identity:node

Configuring console keyboard layout:
# svccfg -s keymap:default setprop keymap/layout = UK-English
# svcadm refresh keymap
# svcadm restart keymap

Configuring the system time zone:
# svccfg -s timezone:default setprop timezone/localtime = astring: US/Mountain
# svcadm refresh timezone:default

Posted in Uncategorized | Leave a comment

solaris11 create multiple repositoryservers

original url: omnios.omniti

Creating Repos #

Why? Because it's easy. It's also a good way to separate packages with different dispositions, such as core OS vs. site-specific.
First, create the repo. Any directory will do, but it's usually a good idea to make a dedicated filesystem for your repo; that's trivial with ZFS, but UFS or NFS works just as well.
# zfs create data/myrepo
# pkgrepo create /data/myrepo
# pkgrepo set -s /data/myrepo publisher/prefix=myrepo.example.com
The last command sets the default publisher to be "myrepo.example.com". A publisher is an entity that builds packages. Publishers are named for uniqueness among a list of possible software providers. Using Internet domain-style names or registered trademarks provides a natural namespace.
At this point there is a fully-functioning pkg repository at file:///data/myrepo. The local machine can use this repository, but it's more likely that you'll want other machines to be able to access this repo.
Configure pkg.depotd to provide remote access. pkg.depotd provides an HTTP interface to a pkg repo. Here we are going to make the repo server listen on port 10000, and use the repo dir we created as its default.
# svcadm disable pkg/server
# svccfg -s pkg/server setprop pkg/inst_root = /data/myrepo
# svccfg -s pkg/server setprop pkg/port = 10000
# svcadm refresh pkg/server
# svcadm enable pkg/server
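
From another machine, a client can now be pointed at the depot and asked what it serves (the host name below is an assumption):

```shell
# register the new depot as a publisher on a client
pkg set-publisher -g http://repohost.example.com:10000 myrepo.example.com
# list the packages available from that origin
pkg list -a -g http://repohost.example.com:10000
```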
Additional Depot Servers #

To create additional depot servers, create a new instance of the pkg/server service for each repository you wish to serve. You'll need to change the filesystem path to the root of the repository, and optionally the port to listen on and whether to allow publishing.
# svccfg -s pkg/server
svc:/application/pkg/server> add mycoolsw
svc:/application/pkg/server> select mycoolsw
svc:/application/pkg/server:mycoolsw> addpg pkg application
svc:/application/pkg/server:mycoolsw> setprop pkg/inst_root = astring: "/data/mycoolsw"
svc:/application/pkg/server:mycoolsw> setprop pkg/port = count: 10003
svc:/application/pkg/server:mycoolsw> setprop pkg/readonly = false
svc:/application/pkg/server:mycoolsw> exit
# svcadm refresh pkg/server:mycoolsw
# svcadm enable pkg/server:mycoolsw
There is now a depot server running at port 10003 that allows publishing (the default is read-only). Note that pkg.depotd provides no authentication, so you may wish to put a reverse-proxy server in front of it if you are going to expose the service publicly. The proxy would need to limit request methods to HEAD and GET for untrusted users.

Posted in solaris | Leave a comment

linux install vmwaretools (CentOS)

create /etc/yum.repos.d/vmware.repo
with the following content:

[vmware-tools]
name=VMware Tools
#baseurl=http://packages.vmware.com/tools/esx/5.1latest/rhel5/i386
#baseurl=http://packages.vmware.com/tools/esx/5.1latest/rhel5/x86_64
#baseurl=http://packages.vmware.com/tools/esx/4.0latest/rhel6/x86_64
#baseurl=http://packages.vmware.com/tools/esx/4.0latest/rhel5/i686
baseurl=http://packages.vmware.com/tools/esx/4.0latest/rhel6/i686
enabled=1
gpgcheck=1
gpgkey=http://packages.vmware.com/tools/keys/VMWARE-PACKAGING-GPG-RSA-KEY.pub

run the following command:
yum install vmware-tools-esx-nox

Posted in linux, Virtualization | Leave a comment

solaris 11 zones (4) delegation example

Delegate zone management of zone3 to user peter.

# zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
2 zone3 running /rpool/zones/zone3 solaris excl

# zonecfg -z zone3
zonecfg:zone3> add admin
zonecfg:zone3:admin> set user=peter
zonecfg:zone3:admin> set auths=manage
zonecfg:zone3:admin> end
zonecfg:zone3> commit
zonecfg:zone3> exit

# su - peter
# pfexec bash
# zlogin zone3

[Connected to zone 'zone3' pts/7]
Oracle Corporation SunOS 5.11 11.0 August 2012
root@zone3:~# exit

# zoneadm -z zone3 halt

note: another way of setting authorizations
# usermod -P+"Zone Management" -A+solaris.zone.manage/zone1 peter
# usermod -A+solaris.zone.login/zone2 peter

note: use pfexec bash to test because bash is not RBAC aware.

Posted in solaris | Leave a comment

solaris 11 exercise zfs (2) and intentlog (zil)

sync=standard
This is the default option. Synchronous file system transactions
(fsync, O_DSYNC, O_SYNC, etc.) are first written out to the intent log,
and then all devices written to are flushed to ensure
the data is stable (not cached by device controllers).

sync=always
For the ultra-cautious, every file system transaction is
written and flushed to stable storage before the system call returns.
This obviously carries a large performance penalty.

sync=disabled
Synchronous requests are disabled. File system transactions
only commit to stable storage on the next DMU transaction group
commit which can be many seconds (5-30sec). This option gives the
highest performance. However, it is very dangerous as ZFS
is ignoring the synchronous transaction demands of
applications such as databases or NFS.
Setting sync=disabled on the currently active root or /var
file system may result in out-of-spec behavior, application data
loss and increased vulnerability to replay attacks.
This option does *NOT* affect ZFS on-disk consistency.
Administrators should only use this when these risks are understood.

Example:

1. create a 50M file in /tmp
# dd if=/dev/zero of=/tmp/file bs=1048 count=50000

2. create three filesystems
# zfs create -o sync=disabled rpool/nosync
# zfs create -o sync=standard rpool/syncstand
# zfs create -o sync=always rpool/sync

3. watch performance differences
# time for i in 1 2 3 4 ; do cp /tmp/file /rpool/nosync/file$i; done
real 0m0.482s
user 0m0.003s
sys 0m0.137s

# time for i in 1 2 3 4 ; do cp /tmp/file /rpool/syncstand/file$i; done
real 0m1.035s
user 0m0.003s
sys 0m0.180s

# time for i in 1 2 3 4 ; do cp /tmp/file /rpool/sync/file$i; done
real 0m11.179s
user 0m0.005s
sys 0m0.250s
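
The gap is easier to see as throughput. A rough sketch, assuming the four copies move about 200 MB (4 x the ~50 MB file from step 1) in the real times shown:

```shell
# rough MB/s from total MB moved and elapsed milliseconds
# (integer math; the point is the order-of-magnitude gap)
mbps() {  # mbps <total-MB> <elapsed-ms>
  echo $(( $1 * 1000 / $2 ))
}
mbps 200 482     # sync=disabled: ~414 MB/s (mostly cached writes)
mbps 200 1035    # sync=standard: ~193 MB/s
mbps 200 11179   # sync=always:   ~17 MB/s
```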

Posted in Uncategorized | Leave a comment

solaris 11 exercise zfs (6) encryption

example with passphrase
zfs create -o encryption=on rpool/cryptfs
Enter passphrase for 'rpool/cryptfs':
Enter again:

zfs snapshot rpool/cryptfs@snap1
zfs clone rpool/cryptfs@snap1 rpool/cryptclone

Enter passphrase for 'rpool/cryptclone':
Enter again:

example with keyfile
# pktool genkey keystore=pkcs11 keytype=aes keylen=128 label=mykey
Enter PIN for Sun Software PKCS#11 softtoken:

# zfs create -o encryption=on -o keysource=raw,pkcs11:object=mykey tank/project/C
Enter PKCS#11 token PIN for 'tank/project/C':
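
A third variant keeps the raw key in a plain file instead of the PKCS#11 keystore, using the keysource=raw,file:// form; the paths below are examples:

```shell
# generate a 128-bit AES key into a file (e.g. on removable media)
pktool genkey keystore=file outkey=/media/stick/mykey keytype=aes keylen=128
# create the dataset, reading the wrapping key from that file
zfs create -o encryption=on -o keysource=raw,file:///media/stick/mykey tank/project/A
```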

Posted in Uncategorized | Leave a comment

solaris11 flowadm (1)

Simple flowadm example.

server1 - 192.168.4.142, nic-name - net0
client1 - 192.168.4.161
client2 - 192.168.4.6

On server1 that runs solaris 11 run the following commands:
# flowadm add-flow -l net0 -a remote_ip=192.168.4.161 ssh-1
# flowadm add-flow -l net0 -a remote_ip=192.168.4.6 ssh-2
# flowadm set-flowprop -p maxbw=3M ssh-1
# flowadm set-flowprop -p maxbw=30M ssh-2

On server1 create a 10MB file in /tmp
dd if=/dev/zero of=/tmp/file bs=1024 count=10000

On client1 copy the 10MB file from server:/tmp/ to local /tmp
# time scp root@192.168.4.142:/tmp/file /tmp/
file 100% 10MB 357.1KB/s 00:28
real 0m30.308s
user 0m0.142s
sys 0m0.144s

On client2 copy the 10MB file from server:/tmp/ to local /tmp
# time scp root@192.168.4.142:/tmp/file /tmp/
file 100% 10MB 4.9MB/s 00:02
real 0m3.740s
user 0m0.148s
sys 0m0.078s
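
The observed times line up with what the maxbw caps predict; a back-of-the-envelope sketch (ignoring ssh overhead, which accounts for the extra seconds):

```shell
# expected seconds ~ size-in-megabits / cap-in-megabits-per-second
xfer_secs() {  # xfer_secs <file-MB> <maxbw-Mbit/s>
  echo $(( $1 * 8 / $2 ))
}
xfer_secs 10 3    # ssh-1 at 3 Mbit/s: ~26 s (observed ~30 s)
xfer_secs 10 30   # ssh-2 at 30 Mbit/s: ~2 s (observed ~3.7 s)
```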

Posted in Uncategorized | Leave a comment

solaris 11 crossbow

crossbow

Posted in Uncategorized | Leave a comment

solaris11 integrated load balancer (3)

ILB Operation Modes

ILB supports stateless Direct Server Return (DSR) and Network Address Translator (NAT) modes of operation for IPv4 and IPv6, in single-legged and dual-legged topologies.

Stateless DSR topology

NAT mode (full-NAT and half-NAT) topology

Direct Server Return Topology
In DSR mode, ILB balances the incoming requests to the back-end servers, but lets the return traffic from the servers to the clients bypass it. However, you can also set up ILB to be used as a router for a back-end server. In this case, the response from the back-end server to the client is routed through the system that is running ILB. ILB's current implementation of DSR does not provide TCP connection tracking (meaning that it is stateless). With stateless DSR, ILB does not save any state information of the processed packets, except for basic statistics. Because ILB does not save any state in this mode, the performance is comparable to the normal IP forwarding performance. This mode is best suited for connectionless protocols.

Advantages:

Better performance than NAT because only the destination MAC address of packets is changed and servers respond directly to clients.

There is full transparency between the server and the client. The servers see a connection directly from the client IP address and reply to the client through the default gateway.

Disadvantages:

The back-end server must respond to both its own IP address (for health checks) and the virtual IP address (for load-balanced traffic).

Because the load balancer maintains no connection state (meaning that it is stateless), adding or removing servers will cause connection disruption.

The following figure shows the implementation of ILB using the DSR topology.

Half-NAT Load-Balancing Topology
In the half-NAT mode of ILB operation, ILB rewrites only the destination IP address in the header of the packets. If you are using the half-NAT implementation, you cannot connect to a virtual IP (VIP) address of the service from the same subnet on which the server resides. The following table shows the IP addresses of the packets flowing between client and ILB, and between ILB and back-end servers.

Full-NAT Load-Balancing Topology
In the full-NAT implementation, the source and destination IP addresses are rewritten to ensure that the traffic goes through the load balancer in both directions. The full-NAT topology makes it possible to connect to the VIP from the same subnet that the servers are on.

The following table depicts the IP addresses of the packets flowing between a client and ILB, and between ILB and a back-end server using the full-NAT topology. No special default route using the ILB box is required in the servers. But note that the full-NAT topology requires the administrator to set aside one or a range of IP addresses to be used by ILB as source addresses to communicate with the back-end servers. Assume that the addresses used belong to subnet C. In this scenario, the ILB behaves as a proxy.

Posted in solaris | Leave a comment

solaris11 integrated load balancer (2)

Configuring ILB

This section describes the steps for setting up ILB to use a half-NAT topology to load balance traffic among two servers. See the NAT topology implementation in ILB Operation Modes.

How to Configure ILB

Assume a role that includes the ILB Management rights profile, or become superuser.
You can assign the ILB Management rights profile to a role that you create. To create the role and assign the role to a user, see Initially Configuring RBAC (Task Map) in Oracle Solaris 11.1 Administration: Security Services.

Set up the back-end servers.
The back-end servers are set up to use ILB as the default router in this scenario. This can be done by running the following commands on both servers.

# route add -p default 192.168.1.21
After executing this command, start the server applications on both servers. Assume that it is a TCP application listening on port 5000.

Set up the server group in ILB.
There are two servers, 192.168.1.50 and 192.168.1.60. A server group, srvgrp1, consisting of these two servers can be created by typing the following command.

# ilbadm create-sg -s servers=192.168.1.50,192.168.1.60 srvgrp1
Set up a simple health check called hc-srvgrp1 by typing the following command.
A simple TCP-level health check is used to detect whether the server application is reachable. The check runs every 60 seconds, tries at most 3 times, and waits at most 3 seconds between tries. If all 3 tries fail, the server is marked dead.

# ilbadm create-hc -h hc-test=tcp,hc-timeout=3,hc-count=3,hc-interval=60 hc-srvgrp1
Set up an ILB rule by typing the following command.
Persistence (with a 32-bit mask) is used in this rule, and the load-balancing algorithm is round robin. The server group srvgrp1 is used and the health check mechanism is hc-srvgrp1. The rule can be created by typing the following command.

# ilbadm create-rule -e -p -i vip=10.0.2.20,port=5000 -m \
lbalg=rr,type=half-nat,pmask=32 \
-h hc-name=hc-srvgrp1 -o servergroup=srvgrp1 rule1_rr
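Once the rule is in place, the configuration can be read back with ilbadm's show subcommands. A quick sketch, using the rule, server-group, and health-check names created above:

```shell
# Inspect the rule, the servers in the group, and health-check status.
ilbadm show-rule -f rule1_rr   # -f prints the rule's full details
ilbadm show-server             # servers and their enabled/disabled state
ilbadm show-hc hc-srvgrp1      # the health-check definition
ilbadm show-hc-result          # most recent probe results per server
```

If a server fails its health check, show-hc-result is the quickest way to see when the last probe ran and why it failed.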

Posted in Uncategorized | Leave a comment

solaris11 integrated load balancer (1)

How to Enable ILB

Before You Begin

Make sure that the system's role-based access control (RBAC) attribute files have the following entries. If the entries are not present, add them manually.

File name: /etc/security/auth_attr

solaris.network.ilb.config:::Network ILB Configuration::help=NetworkILBconf.html

solaris.network.ilb.enable:::Network ILB Enable Configuration::help=NetworkILBenable.html

solaris.smf.manage.ilb:::Manage Integrated Load Balancer Service States::help=SmfILBStates.html

File name: /etc/security/prof_attr

Network ILB:::Manage ILB configuration via ilbadm:auths=solaris.network.ilb.config,solaris.network.ilb.enable;help=RtNetILB.html

The Network Management entry in the file must include solaris.smf.manage.ilb.

File name: /etc/user_attr

daemon::::auths=solaris.smf.manage.ilb,solaris.smf.modify.application

You must set up user authorization for the ILB configuration subcommands. Specifically, the solaris.network.ilb.config RBAC authorization is required to execute the ILB configuration subcommands listed in ILB Command and Subcommands.

To assign the authorization to an existing user, see Chapter 9, Using Role-Based Access Control (Tasks), in Oracle Solaris 11.1 Administration: Security Services.

You can also provide the authorization when creating a new user account on the system.

The following example creates a user ilbadmin with group ID 10, user ID 1210, and the authorization to administer ILB on the system.

# useradd -g 10 -u 1210 -A solaris.network.ilb.config ilbadmin
The useradd command adds a new user to the /etc/passwd, /etc/shadow, and /etc/user_attr files. The -A option assigns the authorization to the user.

Assume a role that includes the ILB Management rights profile, or become superuser.
You can assign the ILB Management rights profile to a role that you create. To create the role and assign the role to a user, see Initially Configuring RBAC (Task Map) in Oracle Solaris 11.1 Administration: Security Services.

Enable the appropriate forwarding service: IPv4, IPv6, or both.
This command produces no output when successful.

# ipadm set-prop -p forwarding=on ipv4
# ipadm set-prop -p forwarding=on ipv6
Enable the ILB service.
# svcadm enable ilb
Verify that the ILB service is enabled.
# svcs ilb
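The enablement can be sanity-checked by reading the forwarding properties back and looking at the service in detail. A small sketch:

```shell
# Confirm IP forwarding is on and the ILB service came online.
ipadm show-prop -p forwarding ipv4
ipadm show-prop -p forwarding ipv6
svcs -l ilb    # detailed state; the instance is typically svc:/network/loadbalancer/ilb:default
```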

Posted in Uncategorized | Leave a comment

solaris11 zone delegation

Delegation of Solaris Zone Administration
By darrenm on Jul 04, 2012

In Solaris 11 'Zone Delegation' is a built-in feature. The Zones system now uses fine-grained RBAC authorisations to allow delegation of management of distinct zones, rather than all zones, which is what the 'Zone Management' RBAC profile did in Solaris 10.

The data for this can be stored with the zone, or you can create RBAC profiles (which can even be stored in NIS or LDAP) that grant administrators access to specific lists of zones.

For example, let's say we have zones named zoneA through zoneF and three admins: alice, bob, and carl. We want to grant a subset of zone management to each of them.

We could do that either by adding the admin resource to the appropriate zones via zonecfg(1M) or we could do something like this with RBAC data directly:

First, let's look at an example of storing the data with the zone.

# zonecfg -z zoneA
zonecfg:zoneA> add admin
zonecfg:zoneA> set user=alice
zonecfg:zoneA> set auths=manage
zonecfg:zoneA> end
zonecfg:zoneA> commit
zonecfg:zoneA> exit
Now let's look at the alternative method of storing this directly in the RBAC database; here we show all our admins and zones for this example:

# usermod -P +'Zone Management' -A +solaris.zone.manage/zoneA alice

# usermod -A +solaris.zone.login/zoneB alice

# usermod -P +'Zone Management' -A +solaris.zone.manage/zoneB bob
# usermod -A +solaris.zone.manage/zoneC bob

# usermod -P +'Zone Management' -A +solaris.zone.manage/zoneC carl
# usermod -A +solaris.zone.manage/zoneD carl
# usermod -A +solaris.zone.manage/zoneE carl
# usermod -A +solaris.zone.manage/zoneF carl
In the above, alice can only manage zoneA, bob can manage zoneB and zoneC, and carl can manage zoneC through zoneF. The user alice can also log in on the console to zoneB, but she can't perform the operations that require the solaris.zone.manage authorisation on it.
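A granted user can check what they received and exercise it through profile-aware execution. A sketch, assuming the assignments above:

```shell
# As a privileged user: list what alice holds.
auths alice       # should include solaris.zone.manage/zoneA and solaris.zone.login/zoneB
profiles alice    # should include 'Zone Management'

# As alice: manage zoneA via pfexec.
pfexec zoneadm -z zoneA boot    # allowed: alice holds the manage auth for zoneA
pfexec zoneadm -z zoneB boot    # denied: alice only holds the login auth for zoneB
```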

Or, if you have a large number of zones and/or admins, or you just want to provide a layer of abstraction, you can collect the authorisation lists into an RBAC profile and grant that to the admins. For example, let's create an RBAC profile for the things that alice and carl can do.

# profiles -p 'Zone Group 1'
profiles:Zone Group 1> set desc="Zone Group 1"
profiles:Zone Group 1> add profile="Zone Management"
profiles:Zone Group 1> add auths=solaris.zone.manage/zoneA
profiles:Zone Group 1> add auths=solaris.zone.login/zoneB
profiles:Zone Group 1> commit
profiles:Zone Group 1> exit
# profiles -p 'Zone Group 3'
profiles:Zone Group 3> set desc="Zone Group 3"
profiles:Zone Group 3> add profile="Zone Management"
profiles:Zone Group 3> add auths=solaris.zone.manage/zoneD
profiles:Zone Group 3> add auths=solaris.zone.manage/zoneE
profiles:Zone Group 3> add auths=solaris.zone.manage/zoneF
profiles:Zone Group 3> commit
profiles:Zone Group 3> exit

Now, instead of granting carl and alice the 'Zone Management' profile and the authorisations directly, we can just give them the appropriate profile.

# usermod -P +'Zone Group 3' carl

# usermod -P +'Zone Group 1' alice

If we wanted to store the profile data, and the profiles granted to the users, in LDAP, we would just add '-S ldap' to the profiles and usermod commands.

Posted in Uncategorized | Leave a comment

solaris11 linkprop

From the global zone enable link protection on vnic0:

We can set different modes: ip-nospoof, dhcp-nospoof, mac-nospoof, and restricted.
ip-nospoof: any outgoing IP, ARP, or NDP packet must have an address field that matches either a DHCP-configured IP address or one of the addresses listed in the allowed-ips link property.
mac-nospoof: prevents the root user of the zone from changing its MAC address. An outbound packet's source MAC address must match the datalink's configured MAC address.
dhcp-nospoof: prevents Client ID/DUID spoofing for DHCP.
restricted: only allows IPv4, IPv6, and ARP protocols. Using this protection type prevents the link from generating potentially harmful L2 control frames.

# dladm set-linkprop -p protection=mac-nospoof,restricted,ip-nospoof vnic0

Specify the 10.0.0.1 IP address as values for the allowed-ips property for the vnic0 link:

# dladm set-linkprop -p allowed-ips=10.0.0.1 vnic0

Verify the link protection property values:
# dladm show-linkprop -p protection,allowed-ips vnic0

LINK PROPERTY PERM VALUE DEFAULT POSSIBLE
vnic0 protection rw mac-nospoof, -- mac-nospoof,
restricted, restricted,
ip-nospoof ip-nospoof,
dhcp-nospoof
vnic0 allowed-ips rw 10.0.0.1 -- --

We can see that 10.0.0.1 is set as the allowed IP address.

Posted in Uncategorized | Leave a comment

solaris 11 FMA event classes

(svccfg setnotify -g mailto:)

For convenience, the tags problem-
{diagnosed,updated,repaired,resolved} describe the lifecycle
of a problem diagnosed by the FMA subsystem - from initial
diagnosis to interim updates and finally problem closure.
These tags are aliases for underlying FMA protocol event
classes (all in the list.* hierarchy), but the latter should
not be used in configuring notification preferences.
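These tags are what you pass to svccfg setnotify. For example, to get mail on diagnosis and closure of FMA problems (the mail address is, of course, a placeholder):

```shell
# System-wide notification for FMA problem lifecycle events.
svccfg setnotify problem-diagnosed,problem-repaired,problem-resolved \
    mailto:admin@example.com
svccfg listnotify problem-diagnosed    # verify the notification parameters
```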

problem-diagnosed

A new problem has been diagnosed by the FMA subsystem.
The diagnosis includes a list of one or more suspects,
which (where appropriate) might have been automatically
isolated to prevent further errors occurring. The prob-
lem is identified by a UUID in the event payload, and
further events describing the resolution lifecycle of
this problem quote a matching UUID.

problem-updated

One or more of the suspect resources in a problem diag-
nosis has been repaired, replaced or acquitted (or has
been faulted again), but there remains at least one
faulted resource in the list. A repair could be the
result of an fmadm command line (fmadm repaired, fmadm
acquit, fmadm replaced) or might have been detected
automatically such as through detection of a part serial
number change.

(smf(5) man page, SunOS 5.11, Standards, Environments, and Macros; last change: 22 Jun 2011)

problem-repaired

All of the suspect resources in a problem diagnosis have
been repaired, resolved or acquitted. Some or all of the
resources might still be isolated at this stage.

problem-resolved

All of the suspect resources in a problem diagnosis have
been repaired, resolved or acquitted and are no longer
isolated (for example, a cpu that was a suspect and
offlined is now back online again; this un-isolate action
is usually automatic).

State Transition Sets are defined as:

to-<state>    Set of all transitions that have <state> as
              the final state of the transition.

from-<state>  Set of all transitions that have <state> as
              the initial state of the transition.

<state>       Set of all transitions that have <state> as
              the initial state of the transition.

all           Set of all transitions.

Valid values of <state> are maintenance, offline, disabled,
online and degraded. An example of a transition set
definition: maintenance, from-online, to-degraded.
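State-transition sets combine with svccfg setnotify in the same way as the FMA tags. A sketch of a system-wide and a per-instance subscription (the mail address is a placeholder):

```shell
# Mail when any service enters maintenance, system-wide (-g = global).
svccfg setnotify -g to-maintenance mailto:admin@example.com

# Mail when the ssh instance leaves the online state.
svccfg -s svc:/network/ssh:default setnotify from-online mailto:admin@example.com
```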

Posted in Uncategorized | Leave a comment

solaris 11 svccfg listcust -M

Deleting a service from the SMF repository.

1. svcadm disable newsvc
2. svccfg delete newsvc
(this will not really delete the service but 'MASKS' it.)
3. svcs newsvc
(no instances will be found)
4. svccfg listcust -M | grep newsvc
svc:/site/newsvc:default manifest MASKED
5. svccfg
svc:> select newsvc
svc:/site/newsvc> delcust
Deleting customizations for service: site/newsvc
svc:/site/newsvc> quit

The service is now back online.
To really delete the service from the repository:

1. delete the manifest xml file
rm /lib/svc/manifest/site/newsvc.xml
2. run the importer.
svcadm restart manifest-import
3. svcs newsvc
svcs: Pattern 'newsvc' doesn't match any instances
STATE STIME FMRI

Posted in Uncategorized | Leave a comment

solaris 11 zones and more

Best Way to Update Software in Zones
Part III of Software Management Best Practices for Oracle Solaris 11 Express
By Ginny Henningsen, August 2011

Part I - Best Way to Update Software with IPS
Part II - Best Way to Automate ZFS Snapshots and Track Software Updates
Part III - Best Way to Update Software in Zones

Introduction
For the Novice: Some Background on Zones
How Do Zones Differ in Oracle Solaris 11 Express?
Creating Zones in Oracle Solaris 11 Express
How Do I Configure a Non-Global Zone?
How Do I Install a Non-Global Zone?
How Do I Finalize Zone Installation?
How Do I Clone a Zone?
How Do I Install Packages on a Zone?
How Do I Upgrade the Global Zone?
How Do I Access the Support Repository?
Upgrading the Global Zone
Upgrading a Non-Global Zone
What If the Upgrade Causes a Problem?
Final Thoughts
Resources
Introduction
This is the third article in a series highlighting best practices for software updates in Oracle Solaris 11 Express. The first article introduced the IPS software packaging model and highlighted best practices for creating a new Boot Environment (BE) before performing an update. The second article discussed the Time Slider and auto-snapshot services, describing how to initialize and use these services to periodically snapshot BEs and other ZFS volumes.

This third article dives more deeply into the topic of software updates, exploring the process of updating an Oracle Solaris 11 Express system configured with zones. This topic is especially pertinent since zones in this release differ somewhat from those in Oracle Solaris 10, as does the software upgrade process for zoned systems.

Please note that when Oracle Solaris 11 is released, it will change and simplify the process for creating and upgrading zones. This article focuses strictly on how to perform zone upgrades currently under Oracle Solaris 11 Express, and will be updated when the process changes. For reference, refer to the full documentation set for Oracle Solaris 11 Express.

For the Novice: Some Background on Zones
First introduced in Oracle Solaris 10, zones are built-in, lightweight virtual machines that isolate workloads (see the System Administration Guide: Oracle Solaris Zones, Oracle Solaris 10 Containers, and Resource Management). Processes within a zone are restricted to accessing resources in that zone, and they can't interfere with processes or resources in other zones. The global zone contains the core operating system (OS), and administrators can define multiple non-global zones to isolate user-level workloads.

How Do Zones Differ in Oracle Solaris 11 Express?
From a functional standpoint, zones in Oracle Solaris 10 and Oracle Solaris 11 Express are similar, but there are a few noteworthy differences, summarized in Table 1.

Table 1: Zone Differences Between Oracle Solaris 10 and Oracle Solaris 11 Express

- Global zone brand: Oracle Solaris 10 brands it "native"; Oracle Solaris 11 Express brands it ipkg, based on the new IPS software packaging model.
- Non-global zone brands: Oracle Solaris 10 offers "native" zones or Linux, Solaris 8, or Solaris 9 brand zones; Oracle Solaris 11 Express offers ipkg zones or solaris10 zones (see the solaris10(5) man page in man pages section 5: Standards, Environments, and Macros).
- Non-global zone roots: Oracle Solaris 10 allows whole or sparse root (sparse root zones share text segments from executables and shared libraries from the global zone); Oracle Solaris 11 Express allows whole root only, residing on its own ZFS dataset.
- Non-global zone contents: in Oracle Solaris 10, packages must be the same as in the global zone; in Oracle Solaris 11 Express, packages in a non-global zone can differ from those in the global zone.
- Patch application: supported in Oracle Solaris 10 (patches can be applied to multiple zones in parallel); no patching in Oracle Solaris 11 Express (pkg updates instead).
- Does upgrading the global zone also update non-global zones? Yes in Oracle Solaris 10; no in Oracle Solaris 11 Express.
As Table 1 shows, ipkg zones in Oracle Solaris 11 Express are "whole root" only and reside on their own ZFS dataset. As Jeff Savit's blog ("Ours Goes to 11--Features of Oracle Solaris 11 Express") describes, creating non-global zones in Oracle Solaris 11 Express takes advantage of ZFS cloning, which inherently conserves space. (Jeff's blog goes on to explain how to install a solaris10 branded zone on Oracle Solaris 11 Express.)

Upgrading zones in Oracle Solaris 11 Express differs from upgrading zones in Oracle Solaris 10. Currently, ipkg brand zones in Oracle Solaris 11 Express are not updated when the global zone is updated. Work is underway to allow zones to be updated in parallel, but until the release of Oracle Solaris 11, non-global zones in Oracle Solaris 11 Express must be updated manually.

Remember the best practice in Oracle Solaris 11 Express:
Update non-global zones manually to keep them in sync with the global zone.

At this time, updating a non-global zone in Oracle Solaris 11 Express is similar to migrating a non-global zone to another server; in both cases, system software for non-global zones must be updated to the same version level as the global zone. Global zone contents can differ from non-global zones in Oracle Solaris 11 Express, but specific release levels must be in sync.

This article steps through a simple example of creating zones on Oracle Solaris 11 Express and current best practices for updating both global and non-global zones. Note that installing non-global zones currently requires a network connection and access to an Oracle Solaris 11 Express package repository, unless the zone is cloned from an existing non-global zone.

Creating Zones in Oracle Solaris 11 Express
To set the stage, let's start by creating a non-global zone in Oracle Solaris 11 Express. The process for creating a non-global zone in Oracle Solaris 11 Express is similar to defining one in Oracle Solaris 10. First, configure the non-global zone, install it, and then boot it. Oracle Solaris 11 Express offers some new configuration options (such as those to construct virtual networks; see Jeff Victor's blog articles on this topic), but for the most part, zone configuration is much the same. One significant difference is that an Oracle Solaris 11 Express zone must reside on its own ZFS dataset, which can be explicitly created before the zone is configured:

# zfs create rpool/zones

(All command examples in this article presume a privileged user. See "User Accounts, Roles, and Rights Profiles" in Getting Started With Oracle Solaris 11 Express.)

The following command defines a mount point for the ZFS dataset rpool/zones:

# zfs set mountpoint=/export/zfs rpool/zones

How Do I Configure a Non-Global Zone?
If you already know how to configure and install a zone, skip ahead to How Do I Upgrade the Global Zone? If you are new to zones, the next few paragraphs step through the configuration and installation process.

The following commands configure a new non-global zone called my-zone on the ZFS dataset created previously:

# zonecfg -z my-zone
my-zone: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:my-zone> create
zonecfg:my-zone> set zonepath=/export/zfs/my-zone
zonecfg:my-zone> add net
zonecfg:my-zone:net> set address=192.168.1.99
zonecfg:my-zone:net> set physical=e1000g0
zonecfg:my-zone:net> end
zonecfg:my-zone> verify
zonecfg:my-zone> commit
zonecfg:my-zone> exit

How Do I Install a Non-Global Zone?
For Oracle Solaris 11 Express, zone installation accesses IPS package repositories, pulling packages from referenced or default repositories. By default, the zone installation uses packages from the release repository at http://pkg.oracle.com/solaris/release:

# zoneadm -z my-zone install
A ZFS file system has been created for this zone.
Publisher: Using solaris (http://pkg.oracle.com/solaris/release/ ).
Image: Preparing at /zones/my-zone/root.
Cache: Using /var/pkg/download.
Sanity Check: Looking for 'entire' incorporation.
Installing: Core System (output follows)
------------------------------------------------------------
Package:
pkg://solaris/consolidation/osnet/osnet-incorporation@0.5.11,5.11-0.151.0.1:20101104T230646Z
License: usr/src/pkg/license_files/lic_OTN
.
.
.
Done: Installation completed in 371.635 seconds.

Next Steps: Boot the zone, then log into the zone console (zlogin -C)
to complete the configuration process.

How Do I Finalize Zone Installation?
Boot the zone and log into its console to complete the configuration:

# zoneadm -z my-zone boot
# zlogin -C my-zone
[Connected to zone 'my-zone' console]

At this point, specify final installation parameters (host name, name service, language, locale, time zone, root password, and so forth). When the install concludes, this message appears and zone login is enabled:

System identification is completed.
.
.
.
my-zone console login:

In the global zone, the following command shows the status for all zones:

# zoneadm list -iv
ID NAME STATUS PATH BRAND IP
0 global running / ipkg shared
1 my-zone running /export/zfs/my-zone ipkg shared

How Do I Clone a Zone?
As a precaution or to speed provisioning, you can optionally clone a zone while it's inactive. First, halt the non-global zone and then export its configuration:

# zoneadm -z my-zone halt
# zonecfg -z my-zone export -f /export/zfs/master

Edit the zone configuration, changing the zonepath, the network definition, and other parameters as needed:

# vi /export/zfs/master

Configure and clone the zone, and then boot the non-global zone and its clone:

# zonecfg -z my-zone2 -f /export/zfs/master
# zoneadm -z my-zone2 clone my-zone
# zoneadm -z my-zone boot
# zoneadm -z my-zone2 boot

List the zones:

# zoneadm list -iv
ID NAME STATUS PATH BRAND IP
0 global running / ipkg shared
- my-zone running /export/zfs/my-zone ipkg shared
- my-zone2 running /export/zfs/my-zone2 ipkg shared

How Do I Install Packages on a Zone?
First, let's make a distinction between a software upgrade and a software install. If you use the pkg install command in the global zone to add a package, the package is installed there and not propagated to non-global zones. To install a package in a non-global zone, an authorized zone administrator can log in to the non-global zone and execute the pkg install command there.

As an example, let's install Apache HTTP Server version 2.2 to build a Web server on the non-global zone my-zone (for brevity, command output is not shown):

root@my-zone:~# pkg install apache-22

Executing the pkg history command in my-zone shows the Apache installation. (Compare this output to the results of the pkg history command in the global zone.)

How Do I Upgrade the Global Zone?
Best practice in Oracle Solaris 11 Express is to generate a new Boot Environment (BE) prior to a software change (see the first article in this series). In some cases, as in a full update, a new BE is automatically created and activated on reboot. In other cases you must explicitly create one. There are several ways to initiate a system software update:

Via an "Update All" in the Package Manager or Update Manager GUI
Via the pkg(1) command, as in pkg update
Oracle plans three different types of updates for Oracle Solaris 11:

Support Repository Updates (SRUs). Customers with an active Oracle Solaris 11 Express support contract will be able to access the support repository containing periodically released software package updates. These updates include bug fixes and security updates.
Periodic Update Releases. Similar to update releases for Oracle Solaris 10, Oracle will issue periodic updates for Oracle Solaris 11. About every 6 to 12 months there will be an update release that contains all the SRUs to the previous release plus the potential for some new features (just as is the case with Oracle Solaris 10 updates today).
Full Upgrades. A full upgrade, like that of updating from Oracle Solaris 11 Express to Oracle Solaris 11 (when it's available) requires access to the release repository at pkg.oracle.com or to a mirror of the release repository.
How Do I Access the Support Repository?
To access SRUs and periodic update releases, you must have an Oracle Solaris 11 Express support contract and a CSI-registered account on My Oracle Support (see the article Support Repositories Explained [ID 1021281.1]). Log in to My Oracle Support to download the certificate and key files that enable support repository access. Before updating the global zone, define a directory for the certificate and key files:

# mkdir -m 0755 -p /var/pkg/ssl
# cp -i ./Oracle_Solaris_11_Express_Support.certificate.pem /var/pkg/ssl
# cp -i ./Oracle_Solaris_11_Express_Support.key.pem /var/pkg/ssl

Then, define the support repository location and publisher for pkg, specifying the certificate and key:

# pkg set-publisher -k /var/pkg/ssl/Oracle_Solaris_11_Express_Support.key.pem -c /var/pkg/ssl/Oracle_Solaris_11_Express_Support.certificate.pem -O https://pkg.oracle.com/solaris/support solaris

If you are using the packagemanager GUI, the updated package list will be visible after you restart the GUI. The last entry in the pkg history -l command reflects the change in publisher:

Operation: update-publisher
Outcome: Succeeded
Client: pkg
Version: 052adf36c3f4
User: ghenning (101)
Start Time: 2011-04-21T10:16:40
End Time: 2011-04-21T10:16:43
Command: /usr/bin/pkg set-publisher -k
/var/pkg/ssl/Oracle_Solaris_11_Express_Support.key.pem -c
/var/pkg/ssl/Oracle_Solaris_11_Express_Support.certificate.pem -O
https://pkg.oracle.com/solaris/support/ solaris
Start State:
None
End State:
None

Upgrading the Global Zone
Running the pkg update -nv command shows what will happen during an update, without actually changing anything. The first time, you might get a warning indicating that pkg is out of date:

# pkg update -nv
WARNING: pkg(5) appears to be out of date, and should be updated before
running update. Please update pkg(5) using 'pfexec pkg install
pkg:/package/pkg' and then retry the update.

After installing the new version of pkg, run the update command again:

# pkg install pkg:/package/pkg
Packages to update: 1
Create boot environment: No
DOWNLOAD PKGS FILES XFER (MB)
Completed 1/1 126/126 0.7/0.7

PHASE ACTIONS
Install Phase 1/1
Update Phase 242/242

PHASE ITEMS
Package State Update Phase 2/2
Package Cache Update Phase 1/1
Image State Update Phase 2/2

# pkg update -nv

Packages to update: 45
Create boot environment: Yes
Rebuild boot archive: Yes
Changed fmris:
pkg://solaris/entire@0.5.11,5.11-0.151.0.1:20101105T054056Z ->
pkg://solaris/entire@0.5.11,5.11-0.151.0.1.6:20110328T230730Z
.
.
.

As highlighted in the output above, the global zone's OS version (5.11-0.151.0.1) lags the version in the support repository (5.11-0.151.0.1.6). The update will also automatically create a new BE. Remember, if the update will not automatically create a new BE, best practice is to explicitly create one.

Without the -nv option, the pkg update command updates the global zone, creating a new BE with the default name of solaris-1. Best practice is to specify a BE name on the update command line explicitly, so that the BE is named something meaningful, for example:

# pkg update --require-new-be --be-name "S11E_SRU6"
Packages to update: 45
Create boot environment: Yes
DOWNLOAD PKGS FILES XFER (MB)
Completed 45/45 1235/1235 70.2/70.2

PHASE ACTIONS
Removal Phase 184/184
Install Phase 350/350
Update Phase 3349/3349

PHASE ITEMS
Package State Update Phase 90/90
Package Cache Update Phase 45/45
Image State Update Phase 2/2

A clone of solaris exists and has been updated and activated.
On the next boot the Boot Environment S11E_SRU6 will be mounted on '/'.
Reboot when ready to switch to this updated BE.

---------------------------------------------------------------------------
NOTE: Please review release notes posted at:

http://docs.sun.com/doc/821-1479

---------------------------------------------------------------------------

After updating the global zone, reboot the system to run the updated BE. Note that the update affects only currently installed packages. In a minimized system (such as one installed with the server_install package bundle), the upgrade won't install packages that aren't present.

Upgrading a Non-Global Zone
At this time, you must manually update Oracle Solaris 11 Express non-global zones to keep them in sync with the global zone. After updating the global zone, reboot the system, and halt the non-global zone:

# zoneadm -z my-zone halt

To upgrade the non-global zone my-zone, first detach it as if you were migrating it to another server:

# zoneadm -z my-zone detach

# zoneadm list -iv
ID NAME STATUS PATH BRAND IP
0 global running / ipkg shared
- my-zone2 installed /export/zfs/my-zone2 ipkg shared

Next, issue a zoneadm attach command with the -u option. The -u option upgrades the zone during the reattachment:

# zoneadm -z my-zone attach -u
Log File: /var/tmp/my-zone.attach_log.meay8c
Attaching...

preferred global publisher: solaris
Global zone version: entire@0.5.11,5.11-0.151.0.1.6:20110504T002250Z
Non-Global zone version: entire@0.5.11,5.11-0.151.0.1:20101105T054056Z

Cache: Using /var/pkg/download.
Updating non-global zone: Output follows
Packages to update: 17
Create boot environment: No
DOWNLOAD PKGS FILES XFER (MB)
Completed 17/17 447/447 14.7/14.7

PHASE ACTIONS
Removal Phase 106/106
Install Phase 115/115
Update Phase 1734/1734

PHASE ITEMS
Package State Update Phase 34/34
Package Cache Update Phase 17/17
Image State Update Phase 2/2
Updating non-global zone: Zone updated.
Result: Attach Succeeded.

The command compares the global zone's version (5.11-0.151.0.1.6) with the non-global zone's version (5.11-0.151.0.1) and performs the update accordingly. Once the ipkg non-global zone is attached and updated, it can be booted:

# zoneadm -z my-zone boot
# zoneadm list -iv
ID NAME STATUS PATH BRAND IP
0 global running / ipkg shared
1 my-zone running /export/zfs/my-zone ipkg shared
- my-zone2 installed /export/zfs/my-zone2 ipkg shared

Each non-global zone on the system must be detached, attached/upgraded, and booted in this manner to be in sync with the global zone. Future developments are planned to simplify zone updates, but for now, the process is manual. When Oracle Enterprise Manager Ops Center supports Oracle Solaris 11, it will greatly simplify system management, including tasks for managing operating systems, firmware updates, virtual machines, storage, and network fabrics.
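Until parallel zone updates arrive, the per-zone detach/attach cycle can be scripted. A rough sketch, with no error handling, assuming every non-global zone is ipkg brand and currently running:

```shell
# Halt, detach, attach-with-upgrade, and boot every non-global zone in turn.
for z in $(zoneadm list -cp | awk -F: '$2 != "global" {print $2}'); do
    zoneadm -z "$z" halt
    zoneadm -z "$z" detach
    zoneadm -z "$z" attach -u
    zoneadm -z "$z" boot
done
```

In practice you would check the exit status of each step and inspect the attach log before booting the zone.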

What If the Upgrade Causes a Problem?
How to recover, of course, depends on the nature of the problem. If the global zone upgrade is successful but the non-global zone upgrade exhibits a problem, check the log file produced during the attach -u operation. The log file is labeled with the zone name (for example, /var/tmp/my-zone.attach_log.meay8c). Based on the log file, try to troubleshoot the problem. If necessary, it is possible to get back to the software state that existed prior to the updates, since the non-global zone's clone and the initial BE still exist. Restoring the previous software state is also the approach to take if the global zone is problematic.

To revert to the software state that existed before the upgrades, first halt and detach all non-global zones:

# zoneadm -z my-zone halt
# zoneadm -z my-zone2 halt
# zoneadm list -iv
ID NAME STATUS PATH BRAND IP
0 global running / ipkg shared
- my-zone installed /export/zfs/my-zone ipkg shared
- my-zone2 installed /export/zfs/my-zone2 ipkg shared
# zoneadm -z my-zone detach
# zoneadm -z my-zone2 detach

The zoneadm list command then shows only the global zone as running:

# zoneadm list -iv
ID NAME STATUS PATH BRAND IP
0 global running / ipkg shared

Next, activate and boot the original BE, which was called solaris:

# beadm activate solaris
# beadm list
BE Active Mountpoint Space Policy Created
-- ------ ---------- ----- ------ -------
S11E_SRU6 N / 336.37M static 2011-06-02 11:28
solaris R - 2.35G static 2011-05-26 11:09
# reboot

The clone of the non-global zone (my-zone2, which hasn't yet been updated, unlike the non-global zone my-zone) can be attached and booted until the problem is resolved:

# zoneadm -z my-zone2 attach -u
Log File: /var/tmp/my-zone2.attach_log.mPaq6g
Attaching...

preferred global publisher: solaris
Global zone version: entire@0.5.11,5.11-0.151.0.1:20101105T054056Z
Non-Global zone version: entire@0.5.11,5.11-0.151.0.1:20101105T054056Z
Cache: Using /var/pkg/download.
Updating non-global zone: Output follows
No updates necessary for this image.
Updating non-global zone: Zone updated.
Result: Attach Succeeded.
# zoneadm -z my-zone2 boot

As shown in the output above, the global zone and the non-global zone my-zone2 are at the same version level, specifically, the version that existed prior to any updates.

Final Thoughts
BEs in Oracle Solaris 11 Express act as a safety net for upgrades, similar to Live Upgrade environments in Oracle Solaris 10. When updating an Oracle Solaris 11 Express global zone, always create a new BE so you can backtrack. Until Oracle Solaris 11 is released and the zone upgrade process changes, manually update all native non-global zones using the zoneadm -z zonename attach -u command to keep non-global zones in sync with the global zone.

Resources
Here are resources that were referenced earlier in this document:

Part 1 of this series, "Updating Software With IPS": http://www.oracle.com/technetwork/articles/servers-storage-dev/updatesoftwareips-367407.html
Part 2 of this series, "Automating ZFS Snapshots and Tracking Software Updates": http://www.oracle.com/technetwork/articles/servers-storage-dev/autosnapshots-397145.html
Full documentation set for Oracle Solaris 11 Express: http://download.oracle.com/docs/cd/E19963-01/index.html
System Administration Guide: Oracle Solaris Zones, Oracle Solaris 10 Containers, and Resource Management: http://download.oracle.com/docs/cd/E19963-01/index.html
solaris10(5) man page in man pages section 5: Standards, Environments, and Macros: http://download.oracle.com/docs/cd/E19963-01/index.html
Jeff Savit's blog, "Ours Goes To 11--Features of Oracle Solaris 11 Express": http://blogs.oracle.com/jsavit/entry/ours_goes_to_11_features
Jeff Victor's blog, "Virtual Network--Part 4": http://blogs.oracle.com/JeffV/entry/virtual_network_part_4
"User Accounts, Roles, and Rights Profiles" in Getting Started With Oracle Solaris 11 Express: http://download.oracle.com/docs/cd/E19963-01/index.html
Release repository at pkg.oracle.com: http://pkg.oracle.com/solaris/release/en/index.shtml
My Oracle Support (access requires support contract): https://support.oracle.com/
"Support Repositories Explained [ID 1021281.1]" (access requires support contract): https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=1021281.1
And here is an additional resource:

Oracle Solaris 11 Express Image Packaging System: http://download.oracle.com/docs/cd/E19963-01/index.html

Posted in Uncategorized | 1 Comment

solaris 11 pkg build-branch

Oracle Solaris Package Versioning

The section "Package Identifier: FMRI" described the pkg.fmri attribute and the different components of the version field, including how the version field can be used to support different models of software development. This section explains how the Oracle Solaris OS uses the version field, and provides insight into the reasons that a fine-grained versioning scheme can be useful. In your packages, you do not need to follow the same versioning scheme that the Oracle Solaris OS uses.

The meaning of each part of the version string in the following sample package FMRI is given below:

pkg://solaris/system/library@0.5.11,5.11-0.175.1.0.0.2.1:20120919T082311Z
0.5.11
Component version. For packages that are part of the Oracle Solaris OS, this is the OS major.minor version. For other packages, this is the upstream version. For example, the component version of the following Apache Web Server package is 2.2.22:

pkg:/web/server/apache-22@2.2.22,5.11-0.175.1.0.0.2.1:20120919T122323Z
5.11
Build version. This is used to define the OS release that this package was built for. The build version should always be 5.11 for packages created for Oracle Solaris 11.

0.175.1.0.0.2.1
Branch version. Oracle Solaris packages show the following information in the branch version portion of the version string of a package FMRI:

0.175
Major release number. The major or marketing development release build number. In this example, 0.175 indicates Oracle Solaris 11.

1
Update release number. The update release number for this Oracle Solaris release. The update value is 0 for the first customer shipment of an Oracle Solaris release, 1 for the first update of that release, 2 for the second update of that release, and so forth. In this example, 1 indicates Oracle Solaris 11.1.

0
SRU number. The Support Repository Update (SRU) number for this update release. SRUs include only bug fixes; they do not include new features. The Oracle Support Repository is available only to systems under a support contract.

0
Reserved. This field is not currently used for Oracle Solaris packages.

2
SRU build number. The build number of the SRU, or the respin number for the major release.

1
Nightly build number. The build number for the individual nightly builds.

20120919T082311Z
Time stamp. The date and time at which the package was published.
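The four parts of the version string can be split apart mechanically. The following is an illustrative sketch using plain shell parameter expansion on the sample FMRI from above (not an official pkg(5) tool):

```shell
# Split the sample FMRI's version string into its four parts.
fmri='pkg://solaris/system/library@0.5.11,5.11-0.175.1.0.0.2.1:20120919T082311Z'
version=${fmri#*@}          # strip everything up to and including "@"
component=${version%%,*}    # 0.5.11            (component version)
rest=${version#*,}
build=${rest%%-*}           # 5.11              (build version)
rest=${rest#*-}
branch=${rest%%:*}          # 0.175.1.0.0.2.1   (branch version)
timestamp=${rest#*:}        # 20120919T082311Z  (time stamp)
echo "component=$component build=$build branch=$branch timestamp=$timestamp"
```

The same splitting logic applies to any IPS FMRI: "@" separates the name from the version, "," separates component from build version, "-" introduces the branch version, and ":" introduces the time stamp.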

Posted in Uncategorized | Leave a comment

solaris 11 fat packages

Setting up Solaris IPS servers for multiple architecture (fat) packages

Monday, August 6, 2012
This is part 2 of a three-part series on building and packaging matplotlib as a multi-architecture, Solaris IPS package.

Compiling matplotlib 1.1.0 for Solaris on SPARC and x86
Setting up Solaris IPS servers to host packages for SPARC and x86
Packaging matplotlib 1.1.0 for Solaris on SPARC and x86
pkg logo
IPS is a huge step forward compared to SysV packaging. One can still create and use SysV packages in Solaris 11, but why would you? IPS provides easy package distribution, upgrades, and dependency resolution, and it's still open source!

Conceptually, IPS is a bit different from System V packages. An IPS package is not just a collection of scripts and files. Instead, IPS works by specifying "actions" (e.g. create directory, copy file, etc.). Similar to other packaging systems, it also includes metadata in a manifest file (e.g. pkg dependencies, descriptions).

Overview

These instructions walk through the steps necessary to set up the IPS servers needed to create and host multiple architecture (fat) IPS packages. Basically, we will create three repositories and three IPS servers to host them. To keep things as simple as possible, I'll assume they will all be hosted from the same physical machine (and the same zone).

For projects that need to be compiled for SPARC and x86 (IPS calls them the sparc and i386 variants, respectively), ideally one would create a "universal" package that can be installed on SPARC or x86 machines. The way this is done in Solaris is to create a separate package for each "variant" and then to "merge" them into one package. This means three IPS servers are required: one for each architecture, and one for the merged package.

Concepts

The last paragraph had quite a few terms in it that have special meaning in IPS-speak. Learning the following terms will help quite a bit in understanding what needs to be done to create and host IPS packages.
package
This is the "what" in IPS. What is it you would like to install?
publisher
This is the "who" in IPS. Who is it that is providing this package?
variant
More specifically, what kind of package is it? Examples include sparc/i386 and debug/non-debug.
repository
This is a collection of packages. It can include packages from multiple publishers, for example, if one wanted to mirror Oracle packages in addition to internally-created packages.
server
This is the software responsible for hosting a repository.
merge
This means combining two IPS package variants into one package.
Note that the publisher name is associated with a package, not the particular server. A server does have a default publisher, though. When you query for packages and send new packages to the server, the default publisher is used unless otherwise specified.
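To make "variant" and "merge" concrete, here is a hypothetical fragment of a merged manifest (the package name and file paths are invented for illustration). The variant.arch set action declares which architectures the package supports, and the per-file variant.arch tags let pkg install the right binary on each machine:

```
set name=pkg.fmri value=pkg://mycompany/library/python/matplotlib@1.1.0,5.11-1
set name=variant.arch value=sparc value=i386
file sparc/_png.so path=usr/lib/python2.6/vendor-packages/matplotlib/_png.so variant.arch=sparc
file i386/_png.so path=usr/lib/python2.6/vendor-packages/matplotlib/_png.so variant.arch=i386
```

Architecture-independent actions (Python sources, documentation, and so on) carry no variant.arch tag and are shared by both variants.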

Create repositories

In order to create a package which supports multiple architectures, we need three repositories. One for sparc variants, one for i386 variants, and one for the merged packages. This does not mean we need three machines, or even three zones. We will simply run multiple IPS servers, each on a different port.

To keep the instructions simple, we are going to use the same default publisher name for all three servers we create. Remember, the publisher is supposed to identify "who" publishes a package, not "where" they are published. It makes sense that each repository should have the same default publisher name. For this example we will use "mycompany" as the publisher.

Setup local ZFS filesystems

The first thing we will do is create ZFS filesystems for each repository.
zfs create rpool/export/ips
zfs create rpool/export/ips/default
zfs create rpool/export/ips/sparc
zfs create rpool/export/ips/x86
zfs set mountpoint=/export/ips rpool/export/ips
Next, we create a skeleton for each IPS repository.
pkgrepo create /export/ips/default
pkgrepo create /export/ips/sparc
pkgrepo create /export/ips/x86
Finally, we set the publisher name. As I said before, this is the "who" publishes the package, not "where" it is published. I use "mycompany" in this example, since I'm creating a package for my company's internal use.
pkgrepo set -s /export/ips/default publisher/prefix=mycompany
pkgrepo set -s /export/ips/sparc publisher/prefix=mycompany
pkgrepo set -s /export/ips/x86 publisher/prefix=mycompany
Setup SMF-based IPS servers

For this example, we create multiple instances of the SMF-based pkg server. There are other options, file-system based sharing for example, but it seems that creating multiple server instances is the best-supported method of hosting.
Setup IPS default (multi-architecture) server

The first thing we will do is set up the default server on the standard HTTP port (80).
svccfg -s application/pkg/server setprop pkg/port=80
Next, tell it which repository to use.
svccfg -s application/pkg/server setprop pkg/inst_root=/export/ips/default
Since we want to be able to publish to this repository, change the readonly property to false.
svccfg -s pkg/server setprop pkg/readonly=false
Refresh the pkg/server SMF service, to make sure the configuration changes get loaded.
svcadm refresh application/pkg/server
Start the repository service.
svcadm enable application/pkg/server
The IPS server can now be reached at http://myipsserver. Check that it is running with the svcs command.
% svcs pkg/server
STATE STIME FMRI
online May_08 svc:/application/pkg/server:default
Setup IPS sparc server

We need to add a new instance to SMF. Since we're already using the default instance, we give it the name "sparc". We then need to do some basic configuration.
svccfg -s pkg/server add sparc
svccfg -s pkg/server:sparc addpg pkg application
svccfg -s pkg/server:sparc addpg general framework
svccfg -s pkg/server:sparc setprop general/complete=astring:\"\"
svccfg -s pkg/server:sparc setprop general/enabled=boolean: true
Next, we set the port number. Since 80 is used already, we can use 8000.
svccfg -s pkg/server:sparc setprop pkg/port=8000
Set the root repository directory.
svccfg -s pkg/server:sparc setprop pkg/inst_root=/export/ips/sparc
Change the readonly property to false.
svccfg -s pkg/server:sparc setprop pkg/readonly=false
Refresh to get configuration changes.
svcadm disable pkg/server:sparc
svcadm refresh pkg/server:sparc
Start the server.
svcadm enable pkg/server:sparc
The IPS server can now be reached at http://myipsserver:8000
Setup IPS x86 server

Now we'll add the final server instance to SMF. As before, we're already using the default instance, so we give this one a name. Then we add the basic configuration.
svccfg -s pkg/server add x86
svccfg -s pkg/server:x86 addpg pkg application
svccfg -s pkg/server:x86 addpg general framework
svccfg -s pkg/server:x86 setprop general/complete=astring:\"\"
svccfg -s pkg/server:x86 setprop general/enabled=boolean: true
Next, we set the port number. Since 80 and 8000 are already in use, we can use 8001.
svccfg -s pkg/server:x86 setprop pkg/port=8001
Set the root repository directory.
svccfg -s pkg/server:x86 setprop pkg/inst_root=/export/ips/x86
Change the readonly property to false.
svccfg -s pkg/server:x86 setprop pkg/readonly=false
Refresh to get configuration changes.
svcadm disable pkg/server:x86
svcadm refresh pkg/server:x86
Start the server.
svcadm enable pkg/server:x86
The IPS server can now be reached at http://myipsserver:8001
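The sparc and x86 instance setups above differ only in instance name, port, and repository path, so the repetition can be wrapped in a small helper. This is a sketch of my own, not part of the original procedure; the function name and the dry-run argument are invented. Pass echo as the fourth argument to print the commands instead of executing them (useful on a non-Solaris host):

```shell
# Sketch: the repeated per-instance svccfg/svcadm steps as one function.
# name, port, and root vary; everything else is identical for sparc and x86.
# Pass "echo" as the 4th argument to dry-run (print commands, don't execute).
setup_pkg_instance() {
    name=$1; port=$2; root=$3; run=$4
    $run svccfg -s pkg/server add "$name"
    $run svccfg -s "pkg/server:$name" addpg pkg application
    $run svccfg -s "pkg/server:$name" addpg general framework
    $run svccfg -s "pkg/server:$name" setprop general/complete=astring:\"\"
    $run svccfg -s "pkg/server:$name" setprop general/enabled=boolean: true
    $run svccfg -s "pkg/server:$name" setprop pkg/port="$port"
    $run svccfg -s "pkg/server:$name" setprop pkg/inst_root="$root"
    $run svccfg -s "pkg/server:$name" setprop pkg/readonly=false
    $run svcadm refresh "pkg/server:$name"
    $run svcadm enable "pkg/server:$name"
}

# Dry-run both instances:
setup_pkg_instance sparc 8000 /export/ips/sparc echo
setup_pkg_instance x86   8001 /export/ips/x86   echo
```

Dropping the final echo argument on a Solaris 11 host runs the commands for real, reproducing the two instance sections above.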
Build and publish the package

Now that we have all the IPS servers setup, we can build and publish our package. In part 3, we will create a multi-architecture (fat) IPS package for matplotlib, which we compiled in part 1.
Resources

An Introduction to IPS (OTN tutorial)
Short tutorial from Bart Smaalders (an IPS developer)
IPS Concepts (from the IPS documentation)
PKG Command documentation
IPS Developer's Guide (pdf book)
Released under Creative Commons Attribution-ShareAlike License.
Copyright 2012, Tim Swast. All rights reserved.

Posted in Uncategorized | Leave a comment

solaris – cpu strands

http://sparcv9.blogspot.nl/2010/02/thread-performance-on-modern-sparc.html

Posted in oracle, solaris | Leave a comment