Category Archives: IT Management

Getting Promoted in IT

One of the perks of serving in a Harvard alumni club (I am currently the Secretary of the Harvard-Radcliffe Club of Worcester) was attending a two-day Alumni Leadership Conference in Cambridge, MA. It was a nice break from work. I met alumni leaders from all over the world, talked to accomplished people (including the writer of one of my daughter’s favorite movies, Kung Fu Panda), caught up on what’s new in the Harvard world, and learned leadership skills from great speakers.

One of those speakers was David Ager, a faculty member at the Harvard Business School. He thoroughly engaged the audience while delivering his opening address, “Leadership of High Performing Talent: A Case Study.” We discussed a case study about Rob Parson, a superstar performer in the financial industry. In a nutshell, Rob Parson delivered significant revenue to the company, but his abrasive character and lack of teamwork didn’t fit the company’s culture. He was due for a performance review, and the question was: should Rob be promoted?

The case study was set in the financial industry, but the lesson holds true in the Information Technology (IT) industry as well. There are a lot of Rob Parsons in IT – software developers, architects, analysts, programmers – who are high performers but rub other people the wrong way. They are intelligent and they develop very sophisticated software — the bread and butter of IT companies. Some of these IT superstars aspire to managerial roles. Should they be promoted? Too often we hear stories about a great software architect who moved into management, only to falter.

IT professionals who would really like to manage people should be carefully evaluated for their potential. They need to learn people and business skills in order to succeed. Before being given any managerial position, they should undergo a development program and spend at least a year under the guidance of a mentor (or a coach). Most IT professionals should not take on a managerial role. They should remain in their technical roles, where they are productive, but be given other incentives that motivate them and make them happy – such as complete authority over their work, flex time, an environment that fosters creativity, and so on.

Upgrading NetBackup from Version 6.5 to 7.1

I recently upgraded a NetBackup infrastructure from version 6.5 to version 7.1. Here are some of my observations and advice:

1. Preparation took longer than the actual upgrade of the server. Pre-installation tasks included understanding the architecture of the backup infrastructure, including the master and media servers, disk-based backup, and NDMP; checking that the hardware (processor, memory, disk space) and operating system version were compatible and up to snuff; checking the general health of the running NetBackup software, including the devices and policies; backing up the catalog database; obtaining updated NetBackup licenses from Symantec; downloading the base NetBackup software and the patches; joining, unzipping, and untarring the software and patches; and other related tasks. Planning and preparation are really the key to a successful upgrade, and these activities will save a lot of trouble during the upgrade process.
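The “joining, unzipping, and untarring” step can be sketched as a small shell function. The part-file naming here is hypothetical – adjust the glob to match how your NetBackup download is actually split:

```shell
# Join split download parts, then gunzip and untar the result.
# The part naming (${prefix}.*) is an assumption; the actual Symantec
# download may use a different scheme -- adjust the glob accordingly.
join_and_extract() {
  prefix="$1"   # e.g. /stage/NB_7.1_Solaris.tar.gz (parts: ${prefix}.aa, .ab, ...)
  dest="$2"     # directory to extract into
  mkdir -p "$dest"
  # Shell globs sort lexicographically, so the parts are joined in order
  cat "${prefix}".* > "$dest/joined.tar.gz"
  ( cd "$dest" && gunzip -c joined.tar.gz | tar xf - && rm joined.tar.gz )
}
```

Running it against a staged download is then just `join_and_extract /stage/NB_7.1_Solaris.tar.gz /var/tmp/nbu71`.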

2. The upgrade process itself was seamless. On the Solaris server, I ran the “install” command to start the upgrade. The process asked several questions. Some components, such as NDMP, are now integrated into the base package, so the program asked for the existing NetBackup NDMP package to be uninstalled. The part that took longest was the catalog database upgrade.

3. Upgrading the client agents was also easy. UNIX and Linux clients were upgraded using the push tool “update_clients,” while Windows clients were upgraded using the NetBackup Windows installation program. One good thing was that no reboot was necessary. I also found out that Windows 2000 and Solaris 8 clients are not supported on 7.1, although they can still be backed up using the old 6.5 agent.
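For the push upgrades, a thin wrapper around update_clients helps catch an empty or missing client list before anything is pushed. The -ClientList flag and the /usr/openv path follow the usual NetBackup conventions; verify them against your installed version:

```shell
# Push the new agent to every UNIX/Linux client named in a list file.
# NBU_BIN can be overridden for testing; the default is the standard
# NetBackup binary path (an assumption -- check your install).
push_client_upgrades() {
  list="$1"   # one client entry per line, in update_clients format
  [ -s "$list" ] || { echo "client list missing or empty: $list" >&2; return 1; }
  "${NBU_BIN:-/usr/openv/netbackup/bin}/update_clients" -ClientList "$list"
}
```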

4. For BMR (Bare Metal Restore), there was no longer a need for a separate boot server. Every client install now includes the boot server assistant software.

5. The GUI administration interface is almost the same, except for some new features such as VMware support.

6. The Java administration console is much better in terms of responsiveness.

CISSP

A couple of days ago, I received the official renewal of my CISSP (Certified Information Systems Security Professional) certification from ISC2. My certification is valid for another three years, until October 2015.

The CISSP is one of the certifications I make sure to maintain because of its usefulness. Without question, every IT professional should be aware of the security implications of any system he or she develops, builds, or maintains. Security breaches are becoming the norm, and IT professionals should be prepared to face these challenges. The CISSP certification greatly helps IT professionals like me in creating and enforcing security policies and procedures, and in designing and maintaining secure systems.

When I first obtained the certification six years ago, in October 2006, I remember it was one of the toughest exams I had ever taken. And passing the exam is just one of the requirements. One should have at least five years of information security experience and be endorsed by another CISSP professional. In addition, one should abide by the ISC2 code of ethics.

To maintain the certification, one should earn 120 Continuing Professional Education (CPE) credits within three years and pay the annual maintenance fee. The requirement to earn CPE credits keeps my security skills current. There are many ways to obtain them. My favorites are security seminars and conferences such as Secure Boston, Source Boston, and IANS. One can also earn credits by reviewing security books, reading and writing security articles, and speaking about security at seminars and conferences, among other activities.

To learn more about CISSP and how to get certified, go to the ISC2 website.

BYOD

Recently, I attended a security seminar on the newest buzzword in the IT industry – BYOD, or Bring Your Own Device – to complete my CISSP CPE (Continuing Professional Education) requirement for the year. The seminar was sponsored by ISC2, and the speaker, Brandon Dunlap, was seasoned, insightful, and very entertaining. I highly recommend the seminar.

BYOD came about because of the popularity of mobile devices – iPhone, iPad, Android, BlackBerry, etc. – the consumerization of IT, and employees’ increasingly flexible schedules. Companies are starting to allow employees to use their own devices to improve productivity and mobility, and supposedly to save the company money. Millennials, in particular, are more apt to use their own devices; for them, owning these devices is a status symbol or a fashion statement.

However,  does it make sense to allow these devices into the company’s network?  What are the security implications of the BYOD phenomenon?

From a technology standpoint, there is a lot of innovation in securing both the mobile devices and the company’s applications and data – for instance, using containers to separate personal apps from company apps. Security companies are creating products and services that will improve the security of BYOD. But from a policy and legal standpoint, very little is being done. Companies that jumped into the BYOD buzz are getting stung by its pitfalls, as exemplified by one of the greatest IT companies in the world – IBM. In addition, recent studies have shown that BYOD does not really save companies money.

Companies need to thoroughly understand BYOD before adopting it.  It is a totally new way of working.

The seminar highlighted the many problems of BYOD, and the immense work that needs to be done to make it successful. No wonder the organizer titled it “Bring Your Own Disaster” instead of “Bring Your Own Device.”


Networking Lessons

I’m not talking about computer networking. I’m talking about networking with people at events (such as social events, seminars, and conferences) to increase your contacts and build meaningful relationships. You never know – these people could turn out to be your future employer, your business partner, or simply your friend.

I’m not saying I’m an expert in networking. Far from it. However, these are the lessons I’ve learned from attending numerous networking events.

First and foremost, I make sure it’s an event I really want to attend. I get invited to a lot of networking events, since I belong to different clubs – Toastmasters clubs, Harvard alumni clubs, etc. In addition, I get invited to a lot of IT-related events such as security conferences, trade shows, and vendor seminars. I ask myself the following questions before I sign up:

1. Will it add value to me?
2. Will I make new / meaningful connections?
3. Is it worth my time and money?

Once I’ve decided to attend an event, I prepare the night before. I polish my elevator speech, make sure I have enough business cards, and, if I have access to the list of attendees, plan which people I’d like to meet. I also prepare questions I’d like to ask. Some of the questions I use to break the ice are the following:

1. How do you know the host?
2. What do you do for fun?
3. Where are you from? What do you do?
4. Compliment anything – appearance, health, clothing (e.g., “Wow, that’s a nice…? Where did you get it?”)

During the event, I make sure to talk to people and to be the first to say hello. I admit this takes a lot of effort for me, since I am an introvert, but if I don’t initiate the conversation, nobody will. I ask a lot of questions and offer help within my capacity. Remember, networking is a two-way street: it’s not only about what you can get, but what you can do to help the other person.

If the event has a speaker, I try to ask questions and participate in the sessions.

I also make sure to meet at least three new people I can connect with. I usually ask to connect on LinkedIn, since it’s the best way to keep in touch.

Finally, I try to have fun and enjoy the event.

Internal Web Analytics

There are a lot of tools out there that can analyze web traffic for your site, and leading the pack is Google Analytics. But what if you want statistics for an internal website, and you don’t necessarily want to send that information to an external provider such as Google? Enter Piwik. Piwik is very much like Google Analytics but can be installed on your internal network. The best part is that it’s free.

Since Piwik is a downloadable tool, you need a machine running a web server and MySQL. You can install it on your existing web server or on a separate one; I installed it on a separate CentOS machine. I found the installation very easy. In fact, you just unzip the archive and put the files in a web directory; the rest of the installation is done via the browser. If a required component is missing on your server (in my case, the PHP PDO extension), the installer tells you how to install it. Pretty neat.
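Before unpacking Piwik, it’s worth checking the prerequisites the installer will complain about – PHP, a MySQL client, and the PDO extension that tripped me up. A rough sketch (the command names are the common ones; package names vary by distro):

```shell
# Quick sanity check for a Piwik host: PHP and MySQL on the PATH,
# and the PDO extension loaded into PHP.
check_piwik_prereqs() {
  for cmd in php mysql; do
    command -v "$cmd" >/dev/null 2>&1 || { echo "missing: $cmd"; return 1; }
  done
  # "php -m" lists loaded extensions; the browser installer checks the same thing
  php -m | grep -qi '^pdo' || { echo "missing: PDO extension"; return 1; }
  echo "prerequisites OK"
}
```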

After installing the server, you just need to put a small JavaScript snippet on the pages you want to track. That’s it – Piwik will start gathering statistics for your site.

I also evaluated Splunk and its companion app, Splunk App for Web Intelligence, but I found it is not ready for prime time. There are still bugs – no wonder it is still in beta. When I was evaluating it, it wasn’t even able to extract usable information from Apache logs.

I had been using AWStats to extract statistics for internal websites for years. It was mostly reliable, but it sometimes produced inaccurate results. The open source Piwik web analytics tool provides accurate statistics and is the best tool I’ve used so far.

Focus on Existing Clients

I’ve been working as a part-time consultant for small and start-up companies in Cambridge, MA. These clients ask me to design and build their IT infrastructure. Most of the time the infrastructure is built in-house, and sometimes it is put in the cloud; it largely depends on which architecture makes sense for the client. For instance, some clients generate huge amounts of data in-house, so it makes sense to build the storage infrastructure on their premises.

Once the infrastructure is built, though, most of the work is in operations mode. This mode does not require a huge amount of time – especially in small companies – and you only get called when there are problems. Should you then look for new clients to generate more revenue? I believe it is easier to focus on existing customers and generate more work (and revenue) from them. In fact, if you focus on looking for new clients, your relationships with existing ones erode, your service becomes stagnant, and in some cases you end up losing their business.

To focus more on existing clients, here are three proven methods to generate more revenue from them:

1. Provide timely responses. When something breaks, fix it right away. If you cannot do it within the next hour, tell the client when you can work on it and give an estimated completion time. Improve your customer service skills and communicate often.

2. Address unmet needs. There will always be unmet needs in the Information Technology space. For instance, the client may not know that, due to regulation, data containing personal information of employees and customers – credit card numbers, social security numbers, etc. – should be encrypted. Offer to create a project to address this unmet need.

3. Offer value added services. For instance, offer a comprehensive Disaster Recovery Plan. Tell the client that a simple backup infrastructure is not enough for the business to continue to operate after a major disaster.

It’s hard and expensive to find new clients. Your existing clients will be happier (and will pay you more money) if you focus on them.

Security Strategy

Amidst the highly publicized security breaches – the LinkedIn password hack, hacktivists defacing high-profile websites, online thieves stealing credit card information – one of the most under-reported kinds of breach is nation states or unknown groups stealing intellectual property from companies: building designs, secret manufacturing formulas, business processes, financial information, and so on. This could be the most damaging kind of breach in terms of its effect on the economy.

Companies often do not even know they are being hacked, or are reluctant to report such breaches. And the sad truth is that many companies do not bother beefing up their security until they become victims.

In this day and age, every company should have a comprehensive security program to protect its assets. It starts with an excellent security strategy, a user awareness program (a lot of breaches are accomplished via social engineering), and a sound technical solution. A multi-layered defense is always best – a firewall that monitors traffic, blocks IP addresses that launch attacks, and limits the network points of entry; an IDS/IPS that identifies attacks and raises alerts; a good Security Information and Event Management (SIEM) system; and a good patch management system to patch servers and applications as soon as vulnerabilities are identified, to name a few.
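The “blocks IP addresses that launch attacks” layer can be illustrated with a toy log-mining sketch – the kind of list a firewall or a fail2ban-style tool would act on. It assumes the standard OpenSSH “Failed password … from <ip>” log format, and the threshold is arbitrary:

```shell
# Print source IPs that appear in "Failed password" lines at least
# $threshold times -- candidates for a firewall block list.
find_attackers() {
  log="$1"
  threshold="${2:-5}"
  grep 'Failed password' "$log" |
    awk '{ for (i = 1; i <= NF; i++) if ($i == "from") print $(i + 1) }' |
    sort | uniq -c |
    awk -v t="$threshold" '$1 >= t { print $2 }'
}
```

A real deployment would feed this list into firewall rules automatically; the sketch only shows the detection half.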

Cost is always the deciding factor in implementing these technologies. Due diligence is needed in creating a cost analysis and a threat model. As with any security implementation, you do not buy a security solution that costs more than the system you are protecting.

Disaster Recovery using NetApp Protection Manager

In our effort to reduce tape media for backup, we have relied on disks for our backup and disaster recovery solution. Disks are getting cheaper and de-duplication technology keeps on improving. We still use tapes for archiving purposes.

One very useful tool for managing our backup and disaster recovery infrastructure is NetApp Protection Manager. It has replaced the management of local snapshots, snapmirror to Disaster Recovery (DR) site, and snapvault. In fact, it doesn’t use these terms anymore. Instead of “snapshot,” it uses “backup.” Instead of “snapmirror,” it uses the phrase “backup to disaster recovery secondary.” Instead of “snapvault,” it uses “DR backup or secondary backup.”

NetApp Protection Manager is policy-based (e.g., back up primary data every day at 6pm and retain backups for 12 weeks; back up primary data to the DR site every day at 12am; back up the secondary data every day at 8am and retain it for 1 year). As an administrator, one does not have to deal with the nitty-gritty technical details of snapshots, snapmirror, and snapvault.
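That schedule maps onto a protection policy roughly like this (the layout and names are illustrative, not actual Protection Manager syntax):

```
Policy: daily-backup-with-DR
  Primary data -> Backup (local "snapshots")
      schedule:  daily @ 6:00 PM
      retention: 12 weeks
  Primary data -> DR secondary ("snapmirror")
      schedule:  daily @ 12:00 AM
  DR secondary -> Secondary backup ("snapvault")
      schedule:  daily @ 8:00 AM
      retention: 1 year
```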

There is a learning curve in understanding and using Protection Manager. I have been managing NetApp storage for several years, and I am more familiar with snapshots, snapmirror, and snapvault. But as soon as I understood the philosophy behind the tool, it got easier to use. NetApp is positioning it for the cloud, and the tool also has dashboards intended for managers and executives.

Backup Infrastructure

I have been designing, installing, and operating backup systems for the past several years. I have mostly implemented and managed Symantec NetBackup (formerly Veritas NetBackup) for larger infrastructures and Symantec Backup Exec for smaller ones.

This software has worked very well, although some features are not very robust. I’m very impressed, for instance, with the NDMP implementation in NetBackup: backing up terabytes of NetApp data via NDMP works very well. However, I do not like the NetBackup admin user interface, since it’s not very intuitive. Their Bare Metal Restore (BMR) implementation is also a pain, and some of its bugs took years to fix – maybe because not many companies use BMR.

Backup Exec works very well with small to medium systems. It has a very intuitive interface, it is relatively easy to set up, and it has very good troubleshooting tools. Lately, though, Symantec has been playing catch-up in its support for newer technologies such as VMware; it is so much easier to use Veeam to manage backup and restore of virtual machines. In addition, Backup Exec has been breaking lately – recent Microsoft patches have caused backups of the System_State to hang.

But I think the biggest threat to this backup software is the online backup providers. CrashPlan, for instance, was initially developed for desktop backup, but it will not take long before companies use it to back up their servers. Once these providers properly address security concerns, companies will be even more compelled to back up their data online. It’s just cheaper and easier.