Category Archives: IT Strategy

Network and Server Monitoring Using Open Source Tools

I am a fan of open source tools. The Internet as we know it today would not exist were it not for the open source movement. We owe this to the countless architects and developers who have dedicated their time and effort to writing open source software.

Enterprise IT departments can also take advantage of open source software; numerous companies have been using it for years. One area where it particularly shines is network and server monitoring.

There are a lot of open source network monitoring tools out there. Leading the pack are Nagios, Zabbix, and Cacti. My favorite tool, though, is OpenNMS. I particularly like it because it is very easy to set up and administer. It can automatically discover the nodes on your network; it needed very few tweaks when I first set it up. It provides simple event management and notification via email or pager. In addition, its web-based management interface is very easy to use.
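To give a sense of how easy it is to work with, here is a minimal sketch in Python that pulls the list of discovered nodes from the OpenNMS ReST interface, assuming a default install on localhost (port 8980) and the default admin credentials:

    import requests  # third-party HTTP client (pip install requests)

    # Default OpenNMS ReST endpoint and credentials; adjust for your install.
    URL = "http://localhost:8980/opennms/rest/nodes"
    AUTH = ("admin", "admin")

    # Ask for up to 50 nodes; OpenNMS returns XML by default.
    resp = requests.get(URL, auth=AUTH, params={"limit": 50})
    resp.raise_for_status()
    print(resp.text)  # the discovered nodes, with their IDs and labels

Events and outages are exposed through similar endpoints, which makes it easy to feed monitoring data into other tools.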

I have been using OpenNMS for several years now and it has been rock solid. I definitely recommend OpenNMS for IT departments that do not want to pay a hefty price to monitor their network and servers.

End User Experience on Enterprise IT

Enterprise IT departments have put a lot of focus on adopting BYOD (Bring Your Own Device), driven by the popularity of mobile phones and tablets and the cost savings these devices promise. However, I believe equal focus should be given to enterprise applications to enhance the end user experience. Numerous enterprise applications are still antiquated, difficult to use, and not even suitable for mobile devices.

One of the goals of enterprise IT is to provide an excellent user experience and thus increase end user productivity. If the hardware consists of state-of-the-art mobile phones and tablets but the apps are very hard to use, the purpose is defeated.

For instance, searching for information inside the enterprise is still very difficult. Information is scattered across different file servers and applications. Very few companies have a Google-like enterprise search capability. People are frustrated that they can search for just about anything on the Internet, yet find it very difficult to locate simple information inside the enterprise.

Enterprise applications should be like consumer applications, such as those provided by innovative companies like Amazon, Google, and Facebook. These web-based or mobile enterprise apps should be user friendly and intuitive. In addition, no training should be required to use them; Google does not ask us to take a class whenever it deploys a new consumer app.

Enterprise apps should also be secure, just like those provided by online banking sites. Data should be encrypted and users properly authenticated.

End users should have the same experience at work using enterprise applications as they do at home shopping, banking, and searching online.

IT Converged Infrastructure

Is converged infrastructure the future? Major technology companies are now offering integrated compute, storage, and network in a box. Leading the pack is the Vblock system by VCE. Vblock consists of hardware and software from Cisco, EMC, and VMware.

Similarly, server, storage, and network vendors are offering their own integrated systems. NetApp, a storage vendor, sells FlexPod, which combines NetApp storage systems, Cisco Unified Computing System servers, and Cisco Nexus fabric into a single, flexible architecture.

Cisco, a networking company, has been selling its x86 Unified Computing System servers for years and recently bought Whiptail, a high-performance storage company, to enhance its unified infrastructure offering. HP, a server company, offers the POD solution.

These converged infrastructure solutions are not only suited to small and medium-sized data centers; they are engineered for large-scale, high-performance, highly reliable data centers. In addition, security, automation, and monitoring are built into the package.

With these solutions, companies do not need to spend time and money architecting and integrating servers, storage, and networks. Most importantly, operations and vendor support will be simplified: there will be only one point of contact for support, and finger pointing between vendors will be minimized.

The Evolving Role of IT Professionals

I started my career in software development. I wrote code, performed systems analysis, and did software quality assurance. Then I switched to system and network administration, and later to infrastructure architecture. While the role of software developers may not change that much (software still needs to be written), the roles of IT administrators, architects, analysts, and IT departments in general are changing. This is due to cheap hardware, smarter software and appliances, and the availability of the cloud.

I still remember spending a lot of time troubleshooting a single system. Today, thanks to redundant systems, off-the-shelf and online applications, and the use of appliances, troubleshooting time has been reduced to a minimum. When a component breaks, it is easy to replace.

IT companies now sell converged network, server, and storage systems in a box, which eliminates the need for elaborate architecture and implementation and simplifies IT operations.

With virtualization and the “cloud”, more and more applications and IT services (infrastructure as a service, software as a service, etc.) are becoming available online.

When it comes to IT, companies now have several choices: host their IT services externally via the public cloud, build IT systems in house, or use a combination of the two.

Thus, the future role of IT professionals will be that of a broker. When the business comes to them with a need, they should be able to deliver quickly and provide the best IT solution. They should be able to determine when to use the public cloud and when to use internal IT systems. The key is to understand the business. For instance, it may not make sense to put data in the cloud if you are concerned about security or if your company is regulated by the government. If your company is small, it may not make sense to build a costly IT infrastructure in house.

Successful IT professionals are not only technically savvy but also business savvy.

The Value of IT Certifications

I recently passed the VMware Certified Professional 5 – Data Center Virtualization exam. The last VMware certification I took was in 2007, when I passed the VMware Certified Professional 3 exam. It’s nice to have the latest VMware certification under my belt.

VMware certification is a bit unique because it requires a week of training as well as hands-on experience. You will find it difficult to pass the test without that experience: most of the questions are real-life scenarios, and you can only understand them if you have encountered them in practice.

Some people dispute the value of certifications. They say certifications are useless because many of the people who hold them are inexperienced. I agree that experience is the best way to learn in the IT field; I can attest to this after almost 20 years in it. But IT certifications are valuable for the following reasons:

1. Not all IT certifications are created equal. While some can be passed just by reading books, most, such as VCP (VMware Certified Professional), CISSP (Certified Information Systems Security Professional), and RHCE (Red Hat Certified Engineer), require a high degree of experience to pass.

2. Not everyone is lucky enough to have access to expensive hardware to gain hands-on experience, or to be assigned to IT projects that provide maximum exposure. Some people take the certification route to gain knowledge and experience.

3. Not all IT knowledge is learned through experience, since not every scenario can be encountered in real life. Some of it is learned by reading books and magazines, taking training, and passing certification tests. For instance, if your company’s standard is Fibre Channel for VMware datastores, the only way to learn about iSCSI datastores is to read up or get trained on them.

4. IT certifications are solid evidence of your professional accomplishments. They are very useful, for instance, when looking for a job: prospective employers have no concrete proof of what you have done, but a solid and trusted IT certification can prove your worth.

5. And finally, seasoned IT professionals like me take certification tests to validate our knowledge.

The Importance of Disaster Recovery (DR) Testing

Recently, we conducted disaster recovery (DR) testing on one of our crucial applications. The server was running Windows Server 2008 on a physical HP box. We performed a bare metal restore (BMR) using Symantec NetBackup 7.1. However, after the BMR completed, the server would not boot up. We troubleshot the problem and tried several configurations, and it took a couple of days before we figured out the issue: the boot sector had become misaligned during the restore, and we had to use the Windows installation disk to repair it.

What if it had been a real server disaster? The business cannot wait a couple of days for a server to be restored. We had defined an RTO (Recovery Time Objective) of 8 hours for that server, and we did not meet it during testing. This is why DR testing is so important.

During DR testing, we have to test both the restore technology and the restore procedures. In addition, we need to verify that we can restore systems within the required time (RTO) and that we can restore data to a given point in time (RPO, or Recovery Point Objective), e.g., from a day before or from a week ago.
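Part of this verification can even be automated between tests. Below is a minimal sketch in Python that checks whether the newest backup still falls within the RPO window; the backup directory, file pattern, and 24-hour policy are hypothetical placeholders:

    import glob
    import os
    import time

    RPO_HOURS = 24                       # assumed policy: a backup at least every 24 hours
    BACKUP_DIR = "/backups/crucial-app"  # hypothetical backup landing directory

    # Find the newest backup image and check its age against the RPO.
    backups = glob.glob(os.path.join(BACKUP_DIR, "*.img"))
    if not backups:
        raise SystemExit("No backups found in " + BACKUP_DIR)
    newest = max(backups, key=os.path.getmtime)
    age_hours = (time.time() - os.path.getmtime(newest)) / 3600

    if age_hours > RPO_HOURS:
        print("RPO VIOLATION: newest backup is %.1f hours old" % age_hours)
    else:
        print("OK: newest backup is %.1f hours old" % age_hours)

The RTO, on the other hand, can only be measured honestly by timing an actual end-to-end restore, which is exactly what a DR test does.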

With many companies outsourcing their DR to third parties or to the cloud, DR testing becomes even more important. How do you know the restore works? How do you know their DR solution meets your RPO and RTO? Too many companies assume that because backups are being done, restores will automatically work.

We perform DR testing once a year, but for crucial applications and data, I recommend testing twice a year. Also, perform a test every time you make significant changes to your backup infrastructure, such as software updates.

Security Done Right

During a business trip to Israel a couple of months ago, I was subjected to a thorough security check at the airport. I learned later that everybody goes through the same process. It was a little inconvenient, but in the end, I felt safe.

With all the advanced security technologies available, nothing beats the old way of conducting security: thorough checks on individuals. I also noticed the defense-in-depth strategy at the Israeli airport, with several layers of security people have to pass through to reach their destinations. No wonder some of the greatest IT security companies, such as Check Point, come from Israel.

As an IT security professional (I am CISSP certified), I can totally relate to the security measures Israel has to implement, and companies need to learn from them. Not a day goes by without news of companies being hacked, shamed, or extorted by hackers around the world.

Sadly, some companies only take security seriously when it’s too late: when their data has been stolen, their systems have been compromised, or their Twitter account has been taken over. It will be a never-ending battle with hackers, but it’s a great idea to start securing your systems now.

Getting Promoted in IT

One of the perks of serving at a Harvard alumni club (I am currently the Secretary of the Harvard-Radcliffe Club of Worcester) was attending a 2-day Alumni Leadership Conference in Cambridge, MA. It was a nice break from work. I met alumni leaders from all over the world, talked to accomplished people (including the writer of one of my daughter’s favorite movies, Kung Fu Panda), learned what’s new in the Harvard world, and picked up leadership skills from great speakers.

One of those speakers was David Ager, a faculty member at the Harvard Business School. He totally engaged the audience while delivering his opening address, “Leadership of High Performing Talent: A Case Study.” We discussed a case study about Rob Parson, a superstar performer in the financial industry. In a nutshell, Rob Parson delivered significant revenue to the company, but his abrasive character and lack of teamwork did not fit the company’s culture. He was due for a performance review, and the question was: should Rob be promoted?

The case study was set in the financial industry, but the lesson holds true in the Information Technology (IT) industry as well. There are a lot of Rob Parsons in IT: software developers, architects, analysts, and programmers who are high performers but rub other people the wrong way. They are intelligent and smart, and they build very sophisticated software, the bread and butter of IT companies. Some of these IT superstars aspire to be promoted into managerial roles. Should they be? Too often we hear stories of a great software architect who moved into managing people and faltered as a result.

IT professionals who would really like to manage people should be carefully evaluated for their potential, and they should learn people and business skills in order to succeed. Before being given any managerial position, they should undergo a development program and work under the guidance of a mentor (or a coach) for at least a year. Most IT professionals, however, should not take on a managerial role. They should remain in their technical roles, where they are productive, and be given other incentives that motivate them and make them happy, such as complete authority over their work, flex time, an environment that fosters creativity, and so on.

BYOD

Recently, I attended a security seminar on the newest buzzword in the IT industry, BYOD (Bring Your Own Device), to complete my CISSP CPE (Continuing Professional Education) requirement for the year. The seminar was sponsored by ISC2, and the speaker, Brandon Dunlap, was seasoned, insightful, and very entertaining. I highly recommend the seminar.

BYOD came about because of the popularity of mobile devices (iPhone, iPad, Android, BlackBerry, etc.), the consumerization of IT, and increasingly flexible employee schedules. Companies are starting to allow employees to use their own devices to improve productivity and mobility, and supposedly to save the company money. Millennials, in particular, are more apt to use their own devices; for them, owning these devices is a status symbol or a fashion statement.

However, does it make sense to allow these devices onto the company’s network? What are the security implications of the BYOD phenomenon?

From a technology standpoint, there is a lot of innovation in securing both the mobile devices and the company’s applications and data, for instance, using containers to separate personal apps from company apps. Security companies are creating products and services to improve BYOD security. But from a policy and legal standpoint, very little is being done. Companies that jumped into the BYOD buzz are getting stung by its pitfalls, as exemplified by IBM, one of the greatest IT companies in the world. In addition, recent studies have shown that BYOD does not really save companies money.

Companies need to thoroughly understand BYOD before adopting it. It is a totally new way of working.

The seminar highlighted the many problems of BYOD and the immense work that needs to be done to make it successful. No wonder the organizers titled it “Bring Your Own Disaster” instead of “Bring Your Own Device.”


Internal Web Analytics

There are a lot of tools out there that can analyze traffic to your website. Leading the pack is Google Analytics. But what if you want statistics for an internal website and don’t want to send that information to an external provider such as Google? Enter Piwik. Piwik is very much like Google Analytics, but it can be installed on your internal network. The best part is that it’s free.

Since Piwik is a downloadable tool, you need a machine running a web server and MySQL. You can install it on your existing web server or on a separate one; I installed it on a separate CentOS machine. I found the installation very easy: you just unzip a file and put the files in a web directory, and the rest of the installation is done via the browser. If a required component is missing from your server (in my case, the PDO extension), it tells you how to install it. Pretty neat.

After installing the server, you just need to put a small JavaScript snippet on the pages you want to track. That’s it. Piwik will start gathering statistics for your site.
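Once data is flowing, you can also pull the numbers programmatically through Piwik’s HTTP reporting API. Here is a minimal sketch in Python; the hostname, site ID, and API token are all placeholders for your own installation:

    import json
    import urllib.parse
    import urllib.request

    # Hypothetical Piwik instance and credentials; replace with your own.
    params = urllib.parse.urlencode({
        "module": "API",
        "method": "VisitsSummary.get",  # summary report: visits, actions, bounce rate
        "idSite": 1,
        "period": "day",
        "date": "yesterday",
        "format": "JSON",
        "token_auth": "YOUR_API_TOKEN",
    })
    url = "http://piwik.example.com/index.php?" + params

    with urllib.request.urlopen(url) as resp:
        stats = json.loads(resp.read().decode("utf-8"))

    print(stats)  # yesterday's visit summary as a Python dict

This is handy if you want to feed visit counts into a dashboard or a weekly report instead of logging into the Piwik UI.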

I also evaluated Splunk and its companion app, the Splunk App for Web Intelligence, but I found it is not ready for prime time. There are still bugs; no wonder it is still in beta. When I evaluated it, it was not even able to extract usable information from Apache logs.

I’ve been using AWStats to extract statistics for internal websites for years. It has been very reliable, but it sometimes provides inaccurate results. The open source Piwik web analytics tool provides accurate statistics and is the best tool I’ve used so far.