
Migrating IT Workloads to the Cloud

As companies realize the benefits of the cloud, they are not only deploying new applications there but also migrating existing on-premises applications to it.

However, migration can be a daunting task, and if it is not planned and executed properly, it can end in catastrophe.

When migrating to the cloud, the first thing companies have to do is to define a strategy. There are several common migration strategies.

The first one is “lift and shift.” In this method, applications are re-hosted on the cloud provider’s infrastructure (such as AWS or Azure). Re-hosting can be done by performing a migration sync and then failing over, using tools available from the cloud provider or third-party vendors.

The second strategy is to re-platform. In this method, the core architecture of the application is unchanged, but some optimizations are done to take advantage of the cloud architecture.

The third strategy is to repurchase. In this method, the existing application is retired entirely and replaced with a new one that runs in the cloud.

The fourth strategy is to re-architect the application using cloud-native features, usually to take advantage of the scalability and higher performance the cloud offers.

The last strategy is to retain applications on-premises. Some applications (especially legacy ones) are very complicated to migrate, and keeping them on-premises may be the best option.

One important task to perform after migration is to validate and test the applications. Once they are running smoothly, look for opportunities for optimization, standardization, and future-proofing.
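
As a sketch of what post-migration validation might look like in practice, the Python script below performs a simple HTTP smoke test against the migrated application. The endpoint URLs are hypothetical placeholders; a real validation plan would also cover functional, performance, and integration testing.

    # Minimal post-migration smoke test (a sketch; the endpoints below are
    # hypothetical placeholders for the migrated application's URLs).
    import urllib.error
    import urllib.request

    ENDPOINTS = [
        "https://app.example.com/health",  # hypothetical health-check endpoint
        "https://app.example.com/login",   # hypothetical login page
    ]

    def check(url, timeout=10):
        """Return True if the endpoint responds with HTTP 200."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status == 200
        except urllib.error.URLError as exc:
            print(f"FAIL {url}: {exc}")
            return False

    if __name__ == "__main__":
        for url in ENDPOINTS:
            print(("OK  " if check(url) else "FAIL") + f" {url}")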

Common Pitfalls of Deploying Applications in the Cloud

Because spinning up servers in the cloud is relatively painless, business units of large companies are flocking to AWS and other cloud providers to deploy their applications instead of going through internal IT. This is expected and even advantageous, because the speed of deployment in the cloud is usually unmatched by internal IT. However, there are many things to consider and pitfalls to avoid in order to end up with a robust and secure application.

I recently performed an assessment of an application in the cloud implemented by a business unit with limited IT knowledge. Here are some of my findings:

  1. Business units have the impression that AWS takes care of the security of the application. While AWS takes care of security of the cloud (that is, security from the physical level up to the hypervisor level), the customer is still responsible for security in the cloud (including OS security, encryption, customer data protection, etc.). For instance, the customer is still responsible for OS hardening (implementing a secure password policy, turning off unneeded services, locking down SSH root access, enabling SELinux, etc.) and monthly security patching.
  2. These servers also lack integration with the enterprise’s internal tools for monitoring and administering servers. Enterprises have usually developed mature tools for these purposes, and without that integration, business units are largely blind to what’s going on with their servers, especially with regard to the very important task of monitoring their security.
  3. These servers are not audited periodically. For instance, although Security Groups may have been set up properly in the beginning, they have to be audited and revisited regularly so that ports that are no longer needed can be removed from the Security Groups (see the sketch after this list).
  4. There is no central allocation of IP addresses. IP address ranges may overlap once the business unit’s VPC is connected to other VPCs and to the enterprise’s internal network.
  5. One of the most commonly neglected tasks after spinning up servers is configuring their backup and retention. For regulated companies, it is extremely important to adhere to their backup and retention policies.
  6. Because of the business unit’s limited knowledge of IT infrastructure, fault tolerance and reliability may not be properly set up. For instance, they may deploy to only one availability zone instead of two or more.
  7. Business units may not be optimizing the cost of their cloud deployment. There are many ways to do so, such as using tiered storage (for instance, archiving data to Glacier instead of keeping it in S3), powering down servers when not in use, and bidding for Spot Instances for less time-sensitive tasks.
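
As an illustration of the periodic Security Group audit mentioned in item 3, here is a minimal boto3 sketch that flags ingress rules open to the entire Internet. It assumes AWS credentials and a region are already configured; deciding which flagged ports are still needed remains a human task.

    # Sketch: flag Security Group ingress rules open to 0.0.0.0/0.
    # Assumes AWS credentials and a default region are configured
    # (for example via environment variables or ~/.aws/config).
    import boto3

    def world_open_rules():
        """Yield (group id, protocol, from port, to port) for rules open to the world."""
        ec2 = boto3.client("ec2")
        for page in ec2.get_paginator("describe_security_groups").paginate():
            for sg in page["SecurityGroups"]:
                for perm in sg["IpPermissions"]:
                    for ip_range in perm.get("IpRanges", []):
                        if ip_range.get("CidrIp") == "0.0.0.0/0":
                            yield (sg["GroupId"], perm.get("IpProtocol"),
                                   perm.get("FromPort"), perm.get("ToPort"))

    if __name__ == "__main__":
        for group_id, proto, from_port, to_port in world_open_rules():
            # Each hit is a candidate for review: is this port still needed?
            print(f"{group_id}: {proto} {from_port}-{to_port} open to 0.0.0.0/0")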

Business units should be cautious and should consider consulting internal IT before deploying in the cloud, to ensure reliable, secure, and cost-effective applications.

Annual New England VTUG Winter Conference

I have been attending the annual New England Virtualization Technology Users Group (VTUG) Winter Warmer Conference for the past couple of years. This year, it was held on January 19, 2017 at Gillette Stadium.

Gillette Stadium is where the New England Patriots football team plays. The stadium has nice conference areas and the event usually features meeting and getting autographs from some famous Patriots alumni. This year we got the chance to meet running back Kevin Faulk and Patrick Pass.

Although the event is sponsored by technology vendors, most of the keynotes and breakout sessions are not sales pitches. They are usually very informative sessions delivered by excellent speakers.

The key takeaways for me from the conference are the following:

  1. Cloud adoption remains a hot topic, but containerization of applications, led by Docker, enables companies to build and deliver microservices applications at lightning speed. Coupled with DevOps practices and support from major software vendors and providers (Windows, Red Hat, Azure, AWS, etc.), containers will be the next big thing in virtualization.
  2. VMware is getting serious about infrastructure security. Security has become the front-and-center focus of the vSphere 6.5 release, with the objective of making security easy to manage. Significant security features include VM encryption at scale, enhanced logging from vCenter, secure boot support for VMs, and secure boot support for ESXi.
  3. As more and more companies move to a hybrid cloud model (a combination of private and public cloud), vendors are getting more innovative in creating products and services that help companies manage and secure the hybrid cloud more easily.
  4. Hyper-converged infrastructure is now being broadly adopted, with Dell EMC VxRail and Nutanix leading the pack. The quest for consolidation, simplification, and software-defined infrastructure is in full swing.
  5. Innovative new companies were present at the event as well. One in particular, Igneous, offers “true cloud for local data.”

Replicating Massive NAS Data to a Disaster Recovery Site

Replicating Network Attached Storage (NAS) data to a Disaster Recovery (DR) site is quite easy when using big-name NAS appliances such as NetApp or Isilon. Replication software is already built into these appliances (SnapMirror for NetApp and SyncIQ for Isilon); it just needs to be licensed to work.

But how do you replicate terabytes, or even petabytes, of data to a DR site when Wide Area Network (WAN) bandwidth is a limiting factor? Replicating a petabyte of data may take weeks, if not months, to complete even on a 622 Mbps WAN link, leaving the company’s DR plan vulnerable in the meantime.
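
To put that in concrete numbers, the back-of-the-envelope calculation below estimates the transfer time for one petabyte over a 622 Mbps link, assuming the full link rate is available to replication and ignoring protocol overhead.

    # Back-of-the-envelope WAN transfer time estimate.
    # Assumes the full 622 Mbps is available and ignores protocol overhead.
    DATA_PETABYTES = 1
    LINK_MBPS = 622

    data_bits = DATA_PETABYTES * 10**15 * 8   # one petabyte expressed in bits
    link_bps = LINK_MBPS * 10**6              # link rate in bits per second

    days = data_bits / link_bps / 86400
    print(f"~{days:.0f} days (~{days / 30:.1f} months) to replicate {DATA_PETABYTES} PB")
    # Roughly 149 days, i.e. about five months, which is why seeding the DR
    # copy locally and shipping it is often the only practical option.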

One way to accomplish this is to use a temporary swing array: (1) replicate data from the source array to the swing array locally, (2) ship the swing array to the DR site, (3) copy the data to the destination array, and finally (4) resync the source array with the destination array.

On NetApp, this is accomplished by using the SnapMirror resync command. On Isilon, this is accomplished by turning on the “target-compare-initial” option in SyncIQ, which compares the files between the source and destination arrays and sends only the data that differ over the wire.
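
For intuition only, the sketch below shows the compare-then-copy idea in plain Python: hash files on both sides and re-send only what differs. SnapMirror and SyncIQ perform this comparison natively on the arrays (and far more efficiently, typically at the block level); the paths in the example are hypothetical.

    # Conceptual compare-then-copy sketch: re-send only files whose checksums
    # differ between source and destination. The real arrays do this natively.
    import hashlib
    import shutil
    from pathlib import Path

    def file_digest(path: Path) -> str:
        """Return the SHA-256 digest of a file, reading it in 1 MiB chunks."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1024 * 1024), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def resync(source: Path, destination: Path) -> None:
        """Copy only files that are missing or different on the destination."""
        for src_file in source.rglob("*"):
            if not src_file.is_file():
                continue
            dst_file = destination / src_file.relative_to(source)
            if dst_file.exists() and file_digest(src_file) == file_digest(dst_file):
                continue  # already identical, nothing to send over the wire
            dst_file.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src_file, dst_file)

    # Example (hypothetical mount points):
    # resync(Path("/mnt/source_export"), Path("/mnt/dr_export"))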

When this technique is used, huge amounts of company data sitting on NAS devices can be well protected at the DR site right away.

Book Review: The Industries of the Future

I came across this book while browsing the New Arrivals section at a local bookstore. As a technology enthusiast, I found that the title piqued my interest. However, the other reason I wanted to read this book was to find an answer to the question, “How do we prepare our children for the future?” As the father of a teenage daughter, I would like to provide her with all the opportunities and exposure she needs to make the right career choices and be better prepared for the future.

The author Alec Ross states in the introduction, “This book is about the next economy. It is written for everyone who wants to know how the next wave of innovation and globalization will affect our countries, our societies, and ourselves.”

The industries of the future are:

1. Robotics. Robots have been around for many years, but ubiquitous network connectivity, the availability of big data, and faster processors are driving significant progress in robotics.

2. Genomics. If the last century was the age of Physics, the coming century will be the age of Biology. The sequencing of the genome has opened the door to many opportunities in the life sciences.

3. Blockchains. The financial industry and the way we handle commerce will be transformed by this technology.

4. Cybersecurity. The Internet will be the next place where war between nations will be waged.

5. Big Data. Use of predictive analytics or other advanced methods to extract value from data will allow us to “perform predictions of outcomes and behaviors” and alter the way we live.

There is nothing new about these technologies. However, what makes the book really worth reading are the examples, anecdotes, and interesting stories told by Ross. The author has traveled extensively around the world and has first-hand experience with these technologies.

Back to the question, “How do we prepare our children for the future?” The best thing we can do is to encourage them to pursue careers in science and technology and to let them travel so they will be comfortable in a multicultural world.

Backup Replication Best Practices

Backup infrastructures that use disk to back up data on-premises, and do not use tapes to store copies offsite, must replicate their data to a disaster recovery (DR) or secondary site in order to mitigate the risk of losing data when the primary site is lost in a disaster.

Popular backup solutions such as Avamar usually include a replication feature that logically copies data from one or more source backup servers to a destination or target backup server. In addition, Avamar deduplicates data at the source server, transfers only unique data to the target server, and encrypts the data during transmission. Avamar replication is accomplished via asynchronous IP transfer and can be configured to run on a schedule.

Some of the best practices for Avamar replication are:

1. Replicate during periods of low backup activity and outside of routine server maintenance windows.
2. Replicate all backup clients.
3. Avoid filtering backup data, because filters may inadvertently exclude backups that should be protected.
4. Ensure available bandwidth is adequate to replicate all daily changed data within a 4-hour period (see the sketch below).
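
As a rough sanity check for item 4, the sketch below estimates the throughput needed to move a day’s worth of changed data within a 4-hour window; the daily change figure is a hypothetical example and should be replaced with numbers from the backup server’s own reports.

    # Rough bandwidth sanity check for the 4-hour replication window (item 4).
    # DAILY_CHANGED_GB is a hypothetical example value.
    DAILY_CHANGED_GB = 500
    WINDOW_HOURS = 4

    required_bps = DAILY_CHANGED_GB * 10**9 * 8 / (WINDOW_HOURS * 3600)
    print(f"Required throughput: ~{required_bps / 10**6:.0f} Mbps")
    # ~278 Mbps in this example; compare against the WAN bandwidth actually
    # available for replication after other traffic is accounted for.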