Category Archives: Technical How-to

Data Migration Using PowerPath Migration Enabler

One project I recently led was the migration of data from an old EMC CLARiiON to a new EMC VNX. There are two main strategies for migrating block data on a storage area network (SAN): storage-based migration (the data moves directly between the two storage arrays) or host-based migration (the migration is done on the host). EMC provides several tools for these tasks; SAN Copy, for instance, is an excellent storage-based migration tool.

There are many factors to consider when choosing a migration strategy: size of the data, cost, SAN bandwidth, complexity of the setup, and application downtime, among many others. One strategy that is relatively simple and requires no downtime is to use the host-based migration tool PowerPath Migration Enabler Hostcopy.

Hostcopy is included when you install the full PowerPath software. As of version 5.7 SP2, as long as PowerPath itself is licensed, there is no additional license needed for Hostcopy (unlike in older versions).

The migration process is non-disruptive. It does not require shutting down the application, and the host remains operational while the migration is going on. In general, the steps for migrating data are:

1. On the Windows or Linux host, make sure PowerPath 5.7 SP2 is installed and licensed.

powermt check_registration

2. Check the source disk and record its pseudo name.

powermt display dev=all

3. On the new storage array, present the target LUN to the host.

4. On the host, rescan for and initialize the target disk.

5. Check that the target disk is present and record its pseudo name.

powermt display dev=all

6. Set up the PowerPath Migration Enabler session.

powermig setup -src harddiskXX -tgt harddiskYY -techType hostcopy

7. Perform the initial synchronization.

powermig sync -handle 1

8. Monitor the status of the session.

powermig query -handle 1

9. The data transfer rate can also be throttled.

powermig throttle -throttleValue 0 -handle 1

10. When ready to switch over to the new storage, enter the following command:

powermig selectTarget -handle 1

11. Commit the changes.

powermig commit -handle 1

12. Clean up/delete the session.

powermig cleanup -handle 1

13. Remove the old storage by removing the LUN from the old storage group.

14. On the host, rescan the HBA for hardware changes, then remove the old LUNs from PowerPath.

powermt display dev=all
powermt remove dev=all
powermt display dev=all
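The powermig session flow above (steps 6 through 12) can be sketched as a small shell script. This is a hedged, dry-run sketch: harddiskXX and harddiskYY are placeholder pseudo names, the handle is assumed to be 1 as in the steps above, and RUN is set to echo so the script only prints the commands; clear it to actually execute them on a licensed PowerPath host.

```shell
#!/bin/sh
# Dry-run sketch of a PowerPath Migration Enabler Hostcopy session.
# harddiskXX/harddiskYY are placeholders; substitute the pseudo names
# recorded from "powermt display dev=all".
SRC=harddiskXX
TGT=harddiskYY
RUN=echo            # set RUN="" to actually run on a PowerPath host

$RUN powermig setup -src "$SRC" -tgt "$TGT" -techType hostcopy
$RUN powermig sync -handle 1          # start the initial synchronization
$RUN powermig query -handle 1         # monitor progress
$RUN powermig selectTarget -handle 1  # switch I/O over to the target
$RUN powermig commit -handle 1        # make the switch permanent
$RUN powermig cleanup -handle 1       # remove the session
```

In a real migration you would pause between sync and selectTarget until the query output shows the source and target are in sync.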

For more information about PowerPath Migration Enabler, visit the EMC website.

Migrating Files to EMC VNX

There are several methodologies for migrating files to an EMC VNX. One method I used recently to migrate Windows (CIFS) files was to copy them from the source CIFS server to the target VNX CIFS server using EMC's free emcopy suite of migration tools, which includes emcopy, lgdup, and sharedup. There are several steps you need to follow for a successful migration. In general, I used the following procedure:

1. Create the necessary VDM (Virtual Data Mover), File System, and CIFS server on the target VNX machine.
2. Create the CIFS share. Copy the share permissions (or ACLs) and the NTFS root folder ACLs from the old share to the new share. You can also use the sharedup.exe utility for this.
3. Use the lgdup.exe utility to copy local groups from the source CIFS server to the target CIFS server.
4. Run emcopy.exe to perform the baseline copy.
5. Create an emcopy script to sync the files every night. This will tremendously cut the time needed to update the files on the final day of migration.
6. Analyze the emcopy log to make sure files are being copied successfully. You may also spot-check the ACLs and/or run tools to compare files and directories between the source and target.
7. On the day of the cutover:
a. Disconnect users from the source CIFS server and make the file system read-only.
b. Run the final emcopy script.
c. Follow EMC156835 to rename the CIFS server so that the new CIFS server takes the name of the old one. This procedure entails unjoining the source and target CIFS servers from Active Directory (AD), renaming the NetBIOS name on the new CIFS server, rejoining the CIFS server to AD, and so on. Update the DNS record too if necessary.
8. Check the new CIFS shares and make sure the users are able to read/write on the share.

To migrate UNIX/Linux files, use the rsync utility to copy files from the source server to the target VNX.

Moving a Qtree SnapMirror Source in NetApp Protection Manager

A couple of weeks ago, one of the volumes in our NetApp Filer almost ran out of space. I could not expand the volume since its aggregate was also low on space, so I had to move it to a different volume in an aggregate with plenty of space on the same filer. The problem: this volume contained qtrees that were snapmirrored to our Disaster Recovery (DR) site, and the relationships were managed by NetApp Protection Manager. How do I move the qtree SnapMirror sources without re-baselining the SnapMirror relationships? Unfortunately, there is no way to do this using Protection Manager alone, and re-baselining is not an option: the volume holds terabytes of data that could take a couple of weeks to transfer.

Like any sane IT professional, I googled how to do this. I did not find a straightforward solution, but I found bits and pieces of information, consolidated them, and generated the steps below. Generally, they combine SnapMirror CLI commands with Protection Manager configuration tasks.

1. On the CLI of the original source filer, copy the original qtree to a new qtree on a new volume by using the following command:

sourcefiler> snapmirror initialize -S sourcefiler:/vol/oldsourcevol/qtree sourcefiler:/vol/newsourcevol/qtree

This took some time, and I also updated snapmirror.conf so that the new qtree received a snapmirror update daily.

2. On the day of the cutover, perform a final snapmirror update on the new volume. Before doing this, make sure that nobody is accessing the data by removing the share.

sourcefiler> snapmirror update sourcefiler:/vol/newsourcevol/qtree

3. Login to the Operations Manager server, and run the following on the command prompt:

c:\> dfm option set dpReaperCleanupMode=Never

This prevents Protection Manager’s reaper from cleaning up any relationships.

4. Issue the following command to relinquish the primary and secondary member:

c:\> dfpm dataset relinquish destination-qtree-name

This will mark the snapmirror relationship as external and Protection Manager will no longer manage the relationship.

5. Using the NetApp Management Console GUI, remove the primary member from the dataset. Then remove the secondary member.

6. On the CLI of the source filer, create a manual snapshot copy by using the following command:

sourcefiler> snap create oldsourcevol common_Snapshot

7. Update the destinations by using the following commands:

sourcefiler> snapmirror update -c common_Snapshot -s common_Snapshot -S sourcefiler:/vol/oldsourcevol/qtree sourcefiler:/vol/newsourcevol/qtree

destinationfiler> snapmirror update -c common_Snapshot -s common_Snapshot -S sourcefiler:/vol/oldsourcevol/qtree destinationfiler:/vol/destinationvol/qtree

8. Quiesce and break the SnapMirror relationships between the source and destination filers, and between the old source and new source volumes, using the following commands:

destinationfiler> snapmirror quiesce /vol/destinationvol/qtree
destinationfiler> snapmirror break /vol/destinationvol/qtree
sourcefiler> snapmirror quiesce /vol/newsourcevol/qtree
sourcefiler> snapmirror break /vol/newsourcevol/qtree

9. Establish the new SnapMirror relationship by using the following command on the destination system:

destinationfiler> snapmirror resync -S sourcefiler:/vol/newsourcevol/qtree destinationfiler:/vol/destinationvol/qtree

The new SnapMirror relationship automatically picks the newest common Snapshot copy for replication, which is the common_Snapshot copy created earlier.

10. Verify that the SnapMirror relationship is resynchronizing by using the following command:

destinationfiler> snapmirror status

11. Recreate the shares on the new source volume.

12. At this point, on the Protection Manager GUI console, you will see the SnapMirror relationship in the External Relationship tab.

13. Create a new dataset with the required policy and schedule. Use the import wizard to import the SnapMirror relationship into the new dataset.

14. On the Operations Manager server command prompt, set the reaper cleanup mode back to orphans:

c:\> dfm option set dpReaperCleanupMode=orphans

Please send me a note if you need more information.


Restoring NetApp LUNs

The other day, I was tasked with restoring a NetApp LUN from several years ago. Since the backup was so old, I had to restore it from an archived tape. After restoring the file, it did not show up as a LUN. It turns out there are a few rules to follow when backing up and restoring a LUN from tape so that it shows up as a LUN.

There are a couple of requirements when backing up LUNs to tape. Thankfully, when we backed up these LUNs, we had followed the rules:

1. The data residing on the LUN should be in a quiesced state prior to backup so the file system caches are committed to disk.

2. The LUN must be backed up using an NDMP-compliant backup application in order for it to retain the properties of a LUN. Symantec NetBackup is an NDMP-compliant backup application.

When restoring, I learned that:

1. It must be restored to the root of a volume or qtree. If it is restored anywhere else, it will simply show up as a file and lose the metadata allowing it to be recognized as a LUN.

2. The backup software should not add an additional directory above the LUN when it is restored.

For instance, in the Symantec NetBackup application, when restoring the LUN, you should select the option to “Restore everything to its original location.” If this is not possible, you can select the second option, “Restore everything to a different location, maintaining existing structure.” This means that you can restore it on a different volume.

For example, if the LUN resides in /vol/vol1/qtree1/lun1 and we chose to restore to /vol/vol2, the location where the LUN would be restored is /vol/vol2/qtree1/lun1 because it maintains the existing structure.

Do not select the third option, “Restore individual directories and files to different locations,” because NetBackup will add an extra directory above the LUN, and the LUN will not show up as a LUN.

When the restore is complete, a “lun show -v” on the NetApp CLI will show the restored LUN on the /vol/vol2 volume.

Backing Up NetApp Filer on Backup Exec 2012

The popularity of deduplicated disk-based backup, coupled with snapshots and other technologies, may render tape backup obsolete. For instance, if you have a NetApp Filer, you can use snapshot technology for backup and SnapMirror technology for disaster recovery. However, there may be requirements, such as regulatory mandates to keep files for several years, or infrastructure limitations, such as low bandwidth to a remote disaster recovery (DR) site that inhibits nightly replication. In these instances, tape backup is still the best option.

The proper way to back up a NetApp Filer to tape on Backup Exec 2012 is via NDMP. You can back up your Filer over the network using remote NDMP. If you can directly connect a tape device to the NetApp Filer, that is even better, because the backup no longer goes through the network and backup jobs run faster.

However, using NDMP requires a license on Backup Exec. The alternative way to back up the Filer, without buying the NDMP license, is via its CIFS shares. Configuring Backup Exec 2012 to do this can be a little tricky. These are the things you need to do to make it work:

1. Disable the NDMP service on the NetApp Filer. This is done by issuing the command “ndmpd off” at the command line.
2. Change the default NDMP port number on the Backup Exec 2012 server. The default port number is 10000; you may use port 9000 instead. This is done by editing the “services” file located at C:\Windows\system32\drivers\etc and adding the line “ndmp 9000/tcp”. Reboot the server after editing the file.
3. Make sure you have at least one Remote Agent for Windows license installed on your Backup Exec server.
4. Make sure that “Enable selection of user shares” is checked under “Configuration and Settings -> Backup Exec Settings -> Network and Security.”
5. When defining the backup job, select “File Server” as the type of server to back up.
6. When entering the NetApp Filer name, use the IP address or the fully qualified domain name (FQDN).

The backup status for a NetApp Filer backed up this way will always be “Completed with Exceptions,” since Backup Exec still looks for a remote agent on the client. This is fine, as long as all files are being backed up.
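The services-file edit in step 2 has the standard name/port/protocol shape. Here is the line shown on a scratch copy of the file; on the Backup Exec server, the real file is C:\Windows\system32\drivers\etc\services.

```shell
# Scratch copy standing in for C:\Windows\system32\drivers\etc\services.
F=/tmp/services
# services(5) format: service-name  port/protocol
printf 'ndmp            9000/tcp\n' >> "$F"
grep 'ndmp' "$F"
```

The entry overrides the NDMP default of 10000/tcp for Backup Exec, freeing port 10000 on the network path to the Filer.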

Easy and Cheap vCenter Server

If your VMware infrastructure contains no more than 5 hosts and 50 virtual machines, you can save some effort, and the Windows license fee, by using the VMware vCenter Server Appliance instead of vCenter Server on a Windows machine. The vCenter Server Appliance is a preconfigured SUSE Linux-based virtual machine with an embedded PostgreSQL database.

The vCenter appliance is easy to deploy and configure, and it will save you time and maintenance effort because, unlike Windows, you do not have to install anti-virus software or monthly patches. It can join Active Directory for user authentication. It saves you the Windows license fee, but you still need to purchase a vCenter license.

The vCenter appliance can be downloaded from the VMware site as an OVA file, or as an OVF file plus VMDK files. You do not need to download the OVF and VMDK files if you downloaded the OVA; an OVA is merely a single-file distribution of the OVF and VMDKs, stored in tar format.

To deploy the appliance, use the vSphere Client and deploy the downloaded OVA file as an OVF template. You can deploy it in thin-provisioned format if you do not want to commit 80GB of space right away. Once it is deployed and powered on, you can continue with the rest of the configuration using the browser-based GUI at https://vCenterserver:5480/. The vCenter Server Appliance has the default user name root and password vmware.

The wizard will guide you through the rest of the configuration. There are really very few configuration items; the common ones are a static IP address (if you don’t want DHCP) and the Active Directory settings. Best of all, you do not have to manage or configure the SUSE Linux-based appliance via the CLI. Everything can be managed via the browser-based GUI.

Hot Adding NetApp Shelf

One of the great features of the NetApp FAS 3200 series is the ability to add shelves without any downtime. As our need for storage space increases, we need our storage system to expand without any interruption to our business users. I recently added a DS4243 shelf to an existing stack, following the steps below:

1. Change the disk shelf ID, making sure the ID is unique within the stack. On the DS4243, the ID can be changed by pressing the U-shaped button located near the shelf LEDs. The shelf needs to be power-cycled for the new ID to take effect.

2. Cable the SAS connections. It is very important to unplug/connect the cables one at a time.

a. Unplug the cable from the I/O module A (IOM A) circle port on the last shelf in the stack.

b. Connect the cable from the new shelf IOM A square port to the IOM A circle port that was disconnected in step a.

c. Reconnect the cable that was disconnected in step a to the IOM A circle port of the new shelf.

d. Repeat the same procedure for IOM B.

3. Check connectivity by running the following commands on the console:

sasadmin expander_map
sasadmin shelf
storage show disk -p

4. Assign the disks to the filer. If auto-assign is turned on, the disks will be assigned to the filer automatically. I disabled disk auto-assign since, in a cluster, I want to control where the disks go. I usually go to the console of the filer where I want the disks assigned, check for unassigned disk drives using the command disk show -n, and finally issue the command disk assign all to assign the disks.

For complete step-by-step instructions, consult your NetApp manuals.

Upgrading NetBackup from Version 6.5 to 7.1

I recently upgraded a NetBackup infrastructure from version 6.5 to version 7.1. Here are some of my observations and advice:

1. Preparation took longer than the actual upgrade of the server. Pre-installation tasks included understanding the architecture of the backup infrastructure (the master and media servers, disk-based backup, and NDMP); checking that the hardware (processor, memory, disk space) and operating system version were compatible and up to snuff; checking the general health of the running NetBackup software, including the devices and policies; backing up the catalog database; obtaining updated NetBackup licenses from Symantec; downloading the base NetBackup software and the patches; joining, unzipping, and untarring software and patches; and other related tasks. Planning and preparation are really the key to a successful upgrade; these activities will save a lot of trouble during the upgrade process.

2. The upgrade process was seamless. On the Solaris server, I ran the “install” command to start the upgrade. The process asked several questions. Some packages, such as NDMP, were already integrated into the base package, so the program asked for the existing NetBackup NDMP package to be uninstalled. The part that took longest was the catalog database upgrade.

3. Upgrading the client agents was also easy. UNIX and Linux clients were upgraded using the push tool “update_clients,” and Windows clients were upgraded using the NetBackup Windows installation program. One good thing was that no reboot was necessary. I also found out that Windows 2000 and Solaris 8 clients are not supported on 7.1, although they will still back up using the old 6.5 agent.

4. For BMR (bare metal restore), there was no need for a separate boot server. All client installs included the boot server assistant software.

5. The GUI administration interface is almost the same, except for some new features such as VMware support.

6. The Java administration console is so much better in terms of responsiveness.

Creating LUN in NetApp Using CLI

If you want to create a LUN (Logical Unit Number) on a vfiler in NetApp, you will be forced to use CLI commands. There is no GUI wizard for vfilers, at least for now.

To carve up storage space in NetApp and present it to a SPARC Solaris machine using an iSCSI HBA, I used the following steps:

1. Configure the iSCSI HBA on Solaris (i.e., configure the IP address, netmask, gateway, VLAN tagging [if it is on a separate VLAN], etc.).

2. Log in through the NetApp console or a remote session.

3. Go to the vfiler context.

nas3240> vfiler context vfiler-iscsi

4. Determine which volume to create the LUN on, and make sure it has enough space.

nas3240@vfiler-iscsi> vol status

nas3240@vfiler-iscsi> df -h

5. Create a qtree. I usually create the LUN at the qtree level instead of the volume level; this makes the structure cleaner.

nas3240@vfiler-iscsi> qtree create /vol/iscsi_apps/solaris

6. Create the LUN using this syntax: lun create -s size -t ostype lun_path

nas3240@vfiler-iscsi> lun create -s 200g -t solaris /vol/iscsi_apps/solaris/lun0

Successful execution of this command will create the LUN “/vol/iscsi_apps/solaris/lun0” with a size of 200GB, space-reserved. For LUNs, the best practice is to thick provision (space-reserved), so you won’t have problems when the storage runs out of space.

7. Create the initiator group, or igroup, which contains the IQN of the Solaris host. Initiate an iSCSI login from the Solaris host, and NetApp will see the IQN; it will appear on the console, and you can cut and paste it. Use this syntax to create the igroup: igroup create -i -t ostype initiator_group iqn_from_host

nas3240@vfiler-iscsi> igroup create -i -t solaris solaris_group

8. Map the LUN to the host using the igroup you created. Use this syntax: lun map lun_path initiator_group [lun_id], where lun_path is the path name of the LUN you created, initiator_group is the name of the igroup you created, and lun_id is the identification number that the initiator uses when the LUN is mapped to it. If you do not enter a number, Data ONTAP generates the next available LUN ID.

nas3240@vfiler-iscsi> lun map /vol/iscsi_apps/solaris/lun0 solaris_group

9. Verify LUN list and their mapping.

nas3240@vfiler-iscsi> lun show -m

LUN path                          Mapped to        LUN ID    Protocol
/vol/iscsi_apps/solaris/lun0      solaris_group    2         iSCSI

10. Go to the Solaris box and do an iSCSI refresh. Check that it can see the LUN that has been provisioned.

Cloning Linux on VMware

When you clone, or “deploy from template,” a Linux virtual machine on VMware, specifically a Red Hat-based Linux such as CentOS, you need additional steps on the cloned machine to make the network work. The obvious settings you need to change are the IP address and hostname, but changing those is not enough. You also need to change other parameters.

When you clone a Linux machine, the hardware (MAC) address of the NIC changes, which is correct: the cloned machine should never have the same MAC address as the source. However, the new MAC address is assigned to eth1, not eth0. eth0 is still assigned the MAC address of the source, although its entry is commented out in udev's persistent network rules file, so it is not active.

If you cloned a Linux machine and noticed that the network does not work, it is probably because you assigned the new IP address to eth0 (which is not active). You can use eth1 and assign the new IP address to that interface. However, I usually want to use eth0 to keep things clean and simple. You can easily switch back to eth0 by editing the file /etc/udev/rules.d/70-persistent-net.rules: remove or comment out the line for eth1, uncomment the line for eth0, and replace the ATTR{address} value for eth0 with the MAC address from eth1. Here’s a sample edited file:

# This file was automatically generated by the /lib/udev/write_net_rules
# program, run by the persistent-net-generator.rules rules file.
# You can modify it, as long as you keep each rule on a single
# line, and change only the value of the NAME= key.

# PCI device 0x8086:0x100f (e1000)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:60:66:88:00:02", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

# PCI device 0x8086:0x100f (e1000)
#SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:60:66:88:00:02", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"
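The same edit can be scripted with grep and sed. This is a sketch run against a scratch copy of the rules file with made-up MAC addresses; on a real clone the file is /etc/udev/rules.d/70-persistent-net.rules.

```shell
# Scratch copy of the udev rules file; the MACs here are made up.
F=/tmp/70-persistent-net.rules
cat > "$F" <<'EOF'
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:60:66:88:00:01", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:60:66:88:00:02", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"
EOF

# Take the MAC address from the eth1 rule (the NIC the clone actually has)
NEWMAC=$(grep 'NAME="eth1"' "$F" | sed 's/.*ATTR{address}=="\([^"]*\)".*/\1/')

# Drop the eth1 rule and point the eth0 rule at the new MAC
sed -i '/NAME="eth1"/d' "$F"
sed -i "s/ATTR{address}==\"[^\"]*\"/ATTR{address}==\"$NEWMAC\"/" "$F"
```

After this, the file contains a single rule naming the clone's NIC eth0, which matches the manual edit shown above.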

Now edit the /etc/sysconfig/network-scripts/ifcfg-eth0 file to make sure that DEVICE is eth0, BOOTPROTO is static, and HWADDR matches the ATTR{address} value for eth0 in the 70-persistent-net.rules file.

Restart the network by issuing the command “service network restart”, or reboot the system.