Data Migration Using PowerPath Migration Enabler

One project I recently led was the migration of data from an old EMC CLARiiON to a new EMC VNX. There are a couple of strategies for migrating block data on a storage area network (SAN): storage-based migration, where the data is copied directly between the two arrays, or host-based migration, where the copy is done on the host. EMC provides several tools for these tasks; SAN Copy, for instance, is an excellent storage-based migration tool.

There are many factors to consider when choosing a migration strategy – the size of the data, cost, SAN bandwidth, complexity of the setup, and application downtime, among others. One strategy that is relatively simple and requires no downtime is to use the host-based migration tool PowerPath Migration Enabler Hostcopy.

This tool is included when you install the full PowerPath software. In version 5.7 SP2, as long as PowerPath itself is licensed, no additional license is needed for Hostcopy (unlike in older versions).
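
If the PowerPath license has not yet been added, it can typically be installed with the emcpreg utility that ships with PowerPath (the key below is just a placeholder):

emcpreg -add XXXX-XXXX-XXXX-XXXX-XXXX
emcpreg -list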

The migration process is non-disruptive and does not require shutting down the application; the host remains operational while the migration is running. In general, the steps for migrating data are:

1. On the Windows or Linux host, make sure PowerPath 5.7 SP2 is installed and licensed.

powermt check_registration

2. Check the source disk and record its pseudo name.

powermt display dev=all

3. On the new storage array, present the target LUN to the host.
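
On a CLARiiON or VNX, this is typically done by adding the LUN to the host's storage group with naviseccli (the SP address, storage group name, and HLU/ALU numbers below are placeholders):

naviseccli -h <SP_IP> storagegroup -addhlu -gname <host_storage_group> -hlu 10 -alu 25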

4. On the host, rescan for new devices and initialize the target disk.
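
The rescan procedure depends on the operating system. A minimal sketch for a Linux host is shown below (the SCSI host number varies); on Windows, run a rescan from diskpart or Disk Management instead.

echo "- - -" > /sys/class/scsi_host/host0/scan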

5. Check that the target disk is present and record its pseudo name.

powermt display dev=all

6. Set up the PowerPath Migration Enabler session. The setup command returns a session handle (handle 1 in the examples below), which is used in the subsequent powermig commands.

powermig setup -src harddiskXX -tgt harddiskYY -techType hostcopy

7. Perform the initial synchronization.

powermig sync -handle 1

8. Monitor the status of the session.

powermig query -handle 1

9. The data transfer rate can also be throttled, if needed.

powermig throttle -throttleValue 0 -handle 1

10. When ready to switch over to the new storage, enter the following command:

powermig selectTarget -handle 1

11. Commit the changes.

powermig commit -handle 1

12. Clean up and delete the session.

powermig cleanup -handle 1

13. Remove the old storage by removing the source LUN from the old storage group.
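
On a CLARiiON, this can again be done with naviseccli, this time removing the LUN from the storage group by its host LUN (HLU) number (placeholders shown):

naviseccli -h <SP_IP> storagegroup -removehlu -gname <host_storage_group> -hlu 10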

14. On the host, rescan the HBAs for hardware changes, then remove the old LUNs from PowerPath.

powermt display dev=all
powermt remove dev=all
powermt display dev=all

For more information about PowerPath Migration Enabler, visit the EMC website.

EMC VNX2 Storage Array Review

VNX is EMC’s unified enterprise storage solution for block and file. The latest release, called VNX2, uses the newer Intel Sandy Bridge processors with more cores, and it also has more memory (RAM).

Its FAST VP technology, which dynamically moves data between the SSD (flash), SAS, and NL-SAS tiers, has been improved by decreasing the data “chunk” size from 1 GB to 256 MB, allowing more efficient data placement. Also, using SSD as the top tier is new in VNX2.

Its FAST Cache technology has also been improved. Per EMC, “the warm up time has been improved by changing the behavior that when the capacity of FAST Cache is less than 80% utilized, any read or write will promote the data to FAST Cache.”

VNX2 also boasts an active/active LUN configuration. However, active/active LUNs only work when the LUN is provisioned from a RAID Group; it does not work with Storage Pools. Hopefully, active/active LUNs will be available for Storage Pools in the future, because more and more LUNs are being provisioned from Storage Pools instead of RAID Groups.

Another improvement is that in Unisphere, storage administrators no longer need to tune the storage processor (SP) cache settings – the read and write cache sizes and the high and low watermarks. The cache only needs to be turned on or off; the system now adjusts the settings automatically.

There are also no dedicated hot spare drives anymore. You simply leave some drives unprovisioned, and an unbound drive is used as a hot spare when needed. You can set the hot spare policy for each drive type; the recommended policy is 1 spare per 30 drives.
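
If I remember the VNX2 CLI correctly, the per-drive-type policy can also be inspected with the naviseccli hotsparepolicy command (the exact syntax may differ by OE release, so treat this as a sketch):

naviseccli -h <SP_IP> hotsparepolicy -list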

I noticed a couple of shortcomings in this release. I do not like the fact that when creating a LUN in a pool, the “thin” option is now checked by default. I believe thick LUNs should be the default because of performance considerations. In addition, if storage administrators are not careful, they may end up over-provisioning the pool with thin LUNs.
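
When creating pool LUNs from the CLI, the type can be specified explicitly so that a thick LUN is not left to chance; a sketch using naviseccli, where the SP address, pool name, and LUN name are placeholders:

naviseccli -h <SP_IP> lun -create -type nonThin -capacity 100 -sq gb -poolName "Pool 0" -name "data_lun_01"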

On the file side, there is really no major improvement. I believe there are no updates to the Data Movers, which still operate in active/passive mode. One change, though, is that you can now use a VDM (Virtual Data Mover) for NFS, although configuring this requires the CLI, as sketched below.
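
A minimal sketch of the CLI steps, assuming the VDM name (vdm01) and interface name (if1) are placeholders and that the nas_server syntax matches earlier VNX file OE releases – create the VDM, then attach a network interface to it for the NFS endpoint:

nas_server -name vdm01 -type vdm -create server_2 -setstate loaded
nas_server -vdm vdm01 -attach if1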

Overall, VNX2 is one of the best enterprise storage arrays in terms of performance and functionality.