
New Book: Organization and Management

Organization and Management

DepEd K-12 Curriculum Compliant
Outcomes Based Education (OBE) Designed
Grade Level: Grade 11
Semester: 1st Semester
Strands: ABM, GAS
Authors: Palencia J, Palencia F, Palencia S.
ISBN: 978-621-436-005-5
Edition: First Edition
Year Published: 2019
Language: English
No. of pages: 
Size: 7 x 10 inches

About the book:
This book covers the basic concepts, principles, and processes of business organization and the functional areas of management. Emphasis is given to the management functions of planning, organizing, staffing, leading, and controlling, and to the roles these functions play in entrepreneurship.

Contents:
Chapter 1 – Nature and Concept of Management
Chapter 2 – The Firm and Its Environment
Chapter 3 – Planning
Chapter 4 – Organizing
Chapter 5 – Staffing
Chapter 6 – Leading
Chapter 7 – Controlling
Chapter 8 – Introduction to the Different Functional Areas of Management
Chapter 9 – Special Topics in Management

Please contact me if your school is interested in reviewing this textbook for possible adoption.

Securing Your Apps on Amazon AWS

One thing to keep in mind when putting your company’s applications in the cloud, specifically on Amazon AWS, is that you are still largely responsible for securing them. Amazon AWS has solid security in place, but you should not entrust security entirely to Amazon and assume your applications are safe simply because they are hosted there. In fact, Amazon AWS follows a shared security responsibility model:

[Figure: AWS Shared Responsibility Model. Source: Amazon AWS]

Amazon AWS is responsible for physical and infrastructure security, including the hypervisor, compute, storage, and network; the customer is responsible for application security, data security, Operating System (OS) patching and hardening, network and firewall configuration, identity and access management, and client- and server-side data encryption.

However, Amazon AWS provides a slew of security services to make your applications more secure: AWS IAM for identity and access management, Security Groups to shield EC2 instances (or servers), Network ACLs that act as firewalls for your subnets, SSL encryption for data in transit, and user activity logging for auditing. As a customer, you need to understand, design, and configure these security settings to make your applications secure.
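
As a small illustration, here is a minimal boto3 sketch of the Security Group piece, assuming AWS credentials are already configured; the VPC ID and CIDR range below are placeholders:

```python
# A minimal sketch (assuming boto3 is installed and AWS credentials are
# configured); the VPC ID and CIDR range below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a security group that acts as a virtual firewall for EC2 instances.
sg = ec2.create_security_group(
    GroupName="web-tier-sg",
    Description="Allow HTTPS only",
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC ID
)

# Allow inbound HTTPS (port 443) from a specific CIDR block only.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "203.0.113.0/24"}],  # e.g., office network
    }],
)
```

Network ACLs, IAM policies, and activity logging are configured through similar API calls or the AWS console.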

In addition, Amazon AWS provides advanced security services so that you don’t have to build them yourself, including AWS Directory Service for authentication, AWS KMS for encryption key management, AWS WAF (Web Application Firewall) for deep packet inspection, and DDoS mitigation.

There is really no perfect security, but securing your infrastructure at every layer tremendously improves the security of your data and applications in the cloud.

Building an Enterprise Private Cloud

Businesses are using public clouds such as Amazon AWS, VMware vCloud, or Microsoft Azure because they are relatively easy to use, fast to deploy, available on demand, and, most importantly, relatively cheap (there is no operational overhead in building, managing, and refreshing an on-premise infrastructure). But there are downsides to the public cloud, such as security and compliance concerns, diminished control of data, data locality issues, and network latency and bandwidth constraints. On-premise infrastructure is still the most cost-effective option for regulated data and for applications with predictable workloads, such as ERP, local databases, and end-user productivity tools.

However, businesses and end users now expect and demand cloud-like services from their IT departments, even for these applications that are best suited to run on-premise. IT departments should therefore build and deliver an infrastructure that has the characteristics of a public cloud (fast, easy, on-demand, elastic) and the reliability and security of on-premise infrastructure – an enterprise private cloud.

An enterprise cloud is now feasible to build because of the following technology advancements:

  1. hyper-converged infrastructure
  2. orchestration and automation tools
  3. flash storage

When building an enterprise cloud, keep in mind the following:

  1. It should be 100% virtualized.
  2. There should be a mechanism for self-service provisioning, monitoring, billing, and chargeback (see the sketch after this list).
  3. Most operational functions should be automated.
  4. Compute and storage should scale out.
  5. It should be resilient, with no single point of failure.
  6. Security should be integrated into the infrastructure.
  7. There should be a single management platform.
  8. Data protection and disaster recovery should be integrated into the infrastructure.
  9. It should be application-centric instead of infrastructure-centric.
  10. Finally, it should support legacy applications as well as modern apps.
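
To make item 2 concrete, here is a toy Python sketch of a self-service provisioning handler with built-in limits and chargeback recording; the orchestration object and its methods are hypothetical stand-ins, not a real product SDK:

```python
# Toy sketch of self-service provisioning (list item 2); the orchestration
# API below is a hypothetical stand-in, not a real SDK.
from dataclasses import dataclass
import uuid

@dataclass
class VMRequest:
    owner: str        # requesting business unit, used for chargeback
    cpus: int
    memory_gb: int

class OrchestratorStub:
    """Placeholder for a real orchestration layer."""
    def create_vm(self, cpus: int, memory_gb: int) -> str:
        return f"vm-{uuid.uuid4().hex[:8]}"          # pretend a VM was created
    def record_usage(self, owner: str, vm_id: str) -> None:
        print(f"chargeback: {vm_id} billed to {owner}")

def provision(api, req: VMRequest) -> str:
    """Enforce self-service limits, create the VM, record the charge."""
    if req.cpus > 16 or req.memory_gb > 128:
        raise ValueError("Request exceeds self-service limits; needs approval")
    vm_id = api.create_vm(req.cpus, req.memory_gb)
    api.record_usage(req.owner, vm_id)
    return vm_id

print(provision(OrchestratorStub(), VMRequest("finance", 4, 16)))
```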

Important Responsibilities of IT Infrastructure Operations

The main function of IT operations is to keep IT services running smoothly and efficiently. It would be nirvana if IT infrastructure services just worked perfectly throughout their lifespan after initial installation and configuration. The reality, however, is that hardware fails, bugs are found, features need to be added, security flaws are discovered, patches need to be applied, usage fluctuates, data needs to be protected, upgrades need to be done, and demand increases. The following are the important jobs of IT operations:

Monitoring

Monitoring is the only way to keep track of the health and availability of your systems. It can be done by watching the system’s health via a dashboard or console, or via specialized monitoring appliances. One major component of monitoring is alerting via email or pager when there is a major issue. Monitoring input can also come from incident tickets generated by machines or end users, which may surface problems that machine monitoring misses. System logs can also be used for trending and monitoring, as they can bring to light flaws in the system.
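
As a minimal illustration of the alerting loop (real shops would use a dedicated tool such as Nagios or Zabbix), here is a Python sketch that polls a health endpoint and emails on-call when a check fails; the URLs and mail settings are placeholders, and the requests package is assumed to be installed:

```python
# Minimal health-check-and-alert sketch; all hostnames are placeholders.
import smtplib
from email.message import EmailMessage
import requests

SERVICES = ["https://intranet.example.com/health"]  # endpoints to watch

def alert(service: str, reason: str) -> None:
    msg = EmailMessage()
    msg["Subject"] = f"ALERT: {service} is down ({reason})"
    msg["From"] = "monitor@example.com"
    msg["To"] = "oncall@example.com"
    msg.set_content(f"Health check failed for {service}: {reason}")
    with smtplib.SMTP("mail.example.com") as smtp:  # placeholder mail server
        smtp.send_message(msg)

for service in SERVICES:
    try:
        resp = requests.get(service, timeout=10)
        if resp.status_code != 200:
            alert(service, f"HTTP {resp.status_code}")
    except requests.RequestException as exc:
        alert(service, str(exc))
```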

Troubleshooting

Once issues are detected, IT operations should troubleshoot and fix them as soon as possible to bring the service back online. Issues that are more complicated should be escalated to vendors, higher-level support, or engineers and developers.

Change Control

IT operations should not make any changes (such as configuration changes, hardware replacements, or upgrades) without following the proper change control procedure. More than 50% of outages are caused by changes to a system. IT services are often tightly integrated with other systems, and a change in one system may affect the others. Subject matter experts for the various systems should verify that a change will not affect their systems. Planning and testing are vital steps in performing changes.

Capacity planning

IT operations should monitor and trend the utilization of resources (compute, storage, network) to ensure there is enough capacity to serve demand. They should be able to predict growth and allocate resources so that capacity is available when it is needed.
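
As a simple illustration of trending, here is a Python sketch that fits a linear trend to monthly storage utilization (the sample numbers are made up) and projects when capacity runs out:

```python
# Capacity-forecasting sketch using a linear trend; data points are invented.
import numpy as np

months = np.arange(6)                       # last six months of observations
used_tb = np.array([40, 44, 47, 52, 55, 60])  # storage utilization in TB
capacity_tb = 80

# Fit a straight line and project when utilization hits total capacity.
slope, intercept = np.polyfit(months, used_tb, 1)
month_full = (capacity_tb - intercept) / slope
months_until_full = month_full - months[-1]
print(f"Growing ~{slope:.1f} TB/month; full in ~{months_until_full:.1f} months")
```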

Performance optimization

IT operations should optimize IT services and ensure efficient use of resources. The goal is to provide an excellent user experience for these services. Mechanisms such as redundancy, local load balancing, global load balancing, and caching improve utilization, efficiency, and the end-user experience.

Backups

In addition to keeping IT services running smoothly, IT operations should also protect these systems and their data by backing them up and replicating them to a remote site. The goal is to bring the systems back online in as little time as possible after a catastrophic failure.

Security

IT operations should also be responsible for securing the systems. Because this is such an enormous task, a lot of companies employ a dedicated Security Operations Center (SOC) to watch for security breaches.

Automation

One of the goals of IT operations is to automate most manual activities via scripting and self-healing mechanisms. This enables the team to focus on higher-value tasks and not get bogged down in repetitive work.
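
As a small example of a self-healing mechanism, here is a sketch for a Linux host running systemd that restarts a service if it is found down; the service name is a placeholder, and the script would typically run from cron or a systemd timer:

```python
# Minimal self-healing sketch for Linux hosts with systemd; the service
# name is a placeholder.
import subprocess

SERVICE = "httpd"  # placeholder service name

# `systemctl is-active` exits non-zero when the unit is not running.
status = subprocess.run(["systemctl", "is-active", "--quiet", SERVICE])
if status.returncode != 0:
    print(f"{SERVICE} is down; attempting restart")
    subprocess.run(["systemctl", "restart", SERVICE], check=True)
```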

Mitigating Insider Threats

With all the news about security breaches, we often hear about external cyber attacks, but internal attacks are widely underreported. Studies show that between 45% and 60% of all attacks are carried out by insiders. Insider attacks are also harder to detect and prevent because the access and activity come from trusted systems.

Why is this so common and why is this so hard to mitigate? The following reasons have been cited to explain why there are more incidents of internal security breaches:

1. Companies don’t employ data protection, don’t apply patches on time, or don’t enforce security policies and standards (such as requiring complex passwords). Some companies wrongly assume that installing a firewall protects them from inside intruders.

2. Data is outside the control of IT security, such as when it resides in the cloud.

3. The greatest source of security breaches is also the weakest link in the security chain – the people. There are two types of people in this weak security chain:

a. People who are vulnerable, such as careless users who use USB drives, send sensitive data through public email services, or sacrifice security for convenience. Most of the time, these users are not even aware that their accounts have been compromised via malware, phishing attacks, or stolen credentials gleaned from social networks.

b. People who have their own agenda – malicious users. These individuals want to steal and sell competitive data or intellectual property for money, or they hold a personal vendetta against the organization.

There are, however, proven measures to lessen the impact of insider threats:

1. Monitor the users, especially those who hold the potential for greatest damage – top executives, contractors, vendors, at-risk employees, and IT administrators.

2. Learn the way they access the data, create a baseline, and detect any anomalous behavior.

3. When divergent behavior is detected, such as an unauthorized download or server log-in, take action, such as blocking or quarantining the user.
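
As a toy illustration of steps 2 and 3, here is a Python sketch that baselines one user’s daily download counts and flags a large deviation; the numbers are invented:

```python
# Toy baseline-and-anomaly sketch; the counts are made-up daily totals.
import statistics

history = [12, 15, 11, 14, 13, 16, 12]   # baseline: files downloaded per day
today = 85                               # today's count

mean = statistics.mean(history)
stdev = statistics.stdev(history)

# Flag anything more than three standard deviations above the baseline.
if today > mean + 3 * stdev:
    print(f"Anomaly: {today} downloads vs baseline {mean:.0f} +/- {stdev:.1f}"
          " - consider blocking or quarantining the user")
```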

It should be noted that when an individual is caught compromising security, more often than not the damage has already been done. The challenge is to be proactive so that the breach does not happen in the first place.

An article in Harvard Business Review has argued that psychology is the key to detecting internal cyber threats.

In essence, companies should focus on understanding and anticipating human behavior, for example by analyzing employee language (in email, chat, and text) continuously and in real time. The author contends that “certain negative emotions, stressors, and conflicts have long been associated with incidents of workplace aggression, employee turnover, absenteeism, accidents, fraud, sabotage, and espionage.”

Applying big data analytics and artificial intelligence to employees’ language in email, chat, voice, and text logs and other digital communication may uncover worrisome content, meaning, language patterns, and deviations in behavior, making it easier to spot signs that a user is a security risk or may perform malicious activity in the future.
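
As a deliberately simplistic illustration (production systems would use trained language models, not keyword lists), here is a Python sketch that scans messages for worrisome phrases; the watch list and messages are invented:

```python
# Toy sketch of scanning communication logs for worrisome language; the
# word list and messages are invented for illustration only.
WATCH_WORDS = {"resign", "lawsuit", "revenge", "delete the backups"}

messages = [
    ("jdoe", "I will get my revenge before I resign next week"),
    ("asmith", "Lunch at noon?"),
]

for user, text in messages:
    hits = [w for w in WATCH_WORDS if w in text.lower()]
    if hits:
        print(f"Review {user}: matched {hits}")
```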

Translating Business Problems into Technology Solutions

One of the most important jobs of IT Consultants/Architects is to translate business problems into technology solutions. Companies must continually solve business problems to remain competitive, and exponential advances in information technology give them powerful new tools to do so.

But translating business problems into technology solutions is often hard. Most of the time there is a disconnect between business people and technology people. Business people speak of vision, strategy, processes, and functional requirements, whereas technology folks speak of programming, infrastructure, big data, and technical requirements. People who understand the business typically are not fluent in technology, and vice versa – technology folks often do not understand business challenges. The two have very different perspectives: business folks are concerned with business opportunities, business climate, and business objectives, while technology folks are concerned with technology challenges, technical resources, and technical skills.

To be successful, IT Consultants/Architects should bridge this gap and provide businesses the services and solutions they need. They must translate business objectives into actions. To do this, they should be able to identify business problems, determine the requirements to solve them, determine the technologies available to help, and architect the best solution. In addition, they should be able to identify strategic partners who can help move the project forward and determine likely barriers.

Most importantly though, IT Consultants/Architects should be able to manage expectations. It’s always better to under promise and over deliver.

Enterprise File Sync and Share

Due to the increased use of mobile devices (iPhones, iPads, Android phones, tablets, etc.) in the enterprise, a platform where employees can synchronize files between their various devices is becoming a necessity. Employees also need a platform where they can easily share files both inside and outside of the organization. Some employees have been using this technology unbeknownst to the IT department; the cloud-based sync-and-share app Dropbox has been especially popular in this area. The issue is that putting sensitive and regulated corporate data in these cloud-based apps can pose a serious problem for the company.

Enterprises need a solution in their own internal data center where the IT department can control, secure, protect, back up, and manage the data. IT vendors have been offering such products for the last several years. Some examples of enterprise file sync and share products are EMC Syncplicity, Egnyte Enterprise File Sharing, Citrix ShareFile, and Accellion Kiteworks.

A good enterprise file sync and share application must have the following characteristics:

1. Security. Data must be protected from malware, and it must be encrypted in transit and at rest (see the sketch after this list). The application must integrate with Active Directory for authentication, and there must be a mechanism to remotely lock and/or wipe devices.
2. The application and data must be supported via WAN acceleration, so users do not perceive slowness.
3. Interoperability with Microsoft Office, SharePoint, and other document management systems.
4. Support for major endpoint devices (Android, Apple, Windows).
5. Ability to house data internally and in the cloud.
6. Finally, the app should be easy to use. Users’ files should be easy to access, edit, share, and restore; otherwise, people will revert to the cloud-based apps they find so easy to use.
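
To illustrate the encryption requirement in item 1, here is a minimal Python sketch using the cryptography package to encrypt a file client-side before it leaves the device; key storage and distribution, which a real EFSS platform would handle, are out of scope, and the file name is a placeholder:

```python
# Minimal client-side encryption sketch (list item 1) using the
# `cryptography` package; key management is out of scope here.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, managed by the EFSS platform
cipher = Fernet(key)

with open("report.xlsx", "rb") as f:          # placeholder file name
    encrypted = cipher.encrypt(f.read())

with open("report.xlsx.enc", "wb") as f:      # sync this, not the original
    f.write(encrypted)
```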

The Battle Between External Cloud Providers and Internal IT Departments

Nowadays, when business units require computing resources for a new software application, they have a choice between an external provider and the company’s internal IT department. Gone are the days when they relied solely on the IT department for compute and storage resources. Business units are now empowered by the growing reliability and ubiquity of external cloud providers such as Amazon Web Services (AWS).

Services provided by external providers are generally easy to use and fast to provision. As long as you have a credit card, a Windows or Linux server can be running within a few hours, if not minutes. Compare that to internal IT departments, which usually take days, if not weeks, to spin one up. Large companies especially tend to follow a bureaucratic procedure that takes weeks to complete.
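
To see how little friction is involved, here is a minimal boto3 sketch that launches a Linux server on AWS in a single API call, assuming credentials are already configured; the AMI ID is a placeholder:

```python
# Minimal provisioning sketch (assuming boto3 and AWS credentials are set
# up); the AMI ID is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

result = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder Linux AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
print("Launched:", result["Instances"][0]["InstanceId"])
```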

Because of this, business units under pressure to deliver an application or service to end users end up using external providers. This is the fast-growing “shadow IT.” More often than not, IT departments do not know about it until they are called in to troubleshoot issues, such as a slow network connection, or to restore data after a security breach or data loss.

Using external providers can be good for the company. They have their merits, such as fast provisioning and the ability to scale up quickly, but they also have their limitations. Security, vendor lock-in, and integration with on-premise applications and databases are some of the concerns. Some business units do not understand the implications for the company’s network, which may impact users during normal business hours. Some do not consider backup and disaster recovery. For regulated companies, compliance and data protection are important: they should be able to tell auditors where the data resides and where it replicates. Also, as the use of compute and storage scales up, costs grow accordingly.

External cloud providers are here to stay, and their innovation and services will get better and better. The future as I foresee it will be a hybrid model – a combination of external providers and internal IT. The key for companies is to provide guidelines and policies on when to use an external provider versus internal IT. For instance, a proof-of-concept application may be well suited to an external cloud because it is fast to provision; so is an application used by only a few users that needs no integration with existing applications. An application that integrates with the company’s internal SAP system, on the other hand, is well suited to the internal cloud. These policies must be clearly communicated to business units.

IT departments, for their part, must provide a good level of service to the business, streamline the provisioning process, adopt technologies that respond to the business quickly, and offer internal cloud services that match what external providers offer. That way, business units will choose internal IT over external providers.

Data-centric Security

Data is one of the most important assets of an organization; hence, it must be secured and protected. Data typically flows in and out of an organization’s internal network in the course of doing business and valuable work. These days, data resides in the cloud and travels to employees’ mobile devices and business partners’ networks. Laptops and USB drives containing sensitive information sometimes get lost or stolen.

To protect the data, security must travel with the data. For a long time, the focus of security has been on the network and on the devices where the data resides. Infrastructure security measures such as firewalls and intrusion prevention systems are no longer enough. The focus should now shift to protecting the data itself.

Data-centric security is very useful in dealing with data breaches, especially for data containing sensitive information such as personally identifiable information, financial information and credit card numbers, health information, and intellectual property.

The key to data-centric security is strong encryption: if the public or hackers get hold of sensitive data, it shows up as garbled information that is useless to them. To implement robust data-centric security, the following should be considered:

1. Strong at-rest encryption on the server/storage side and in applications and databases (see the sketch after this list).
2. Strong in-transit encryption using public key infrastructure (PKI).
3. Effective management of encryption keys.
4. Centralized control of security policy that enforces standards and protection for data stored on endpoint devices or on central servers and storage.
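
As a sketch of how items 1 and 3 can work together, here is a Python example of envelope encryption using AWS KMS via boto3, where KMS holds the centrally managed master key and the application encrypts data locally with a generated data key; the key alias and sample data are placeholders:

```python
# Envelope-encryption sketch (items 1 and 3): KMS manages the master key,
# the app encrypts locally with a generated data key. Key alias is a
# placeholder; assumes boto3, cryptography, and AWS credentials.
import base64

import boto3
from cryptography.fernet import Fernet

kms = boto3.client("kms", region_name="us-east-1")

# Ask KMS for a fresh data key under a centrally managed master key.
data_key = kms.generate_data_key(
    KeyId="alias/app-data-key",   # placeholder key alias
    KeySpec="AES_256",
)

# Encrypt locally with the plaintext key; store only the encrypted copy
# of the key alongside the ciphertext.
fernet_key = base64.urlsafe_b64encode(data_key["Plaintext"])
ciphertext = Fernet(fernet_key).encrypt(b"4111-1111-1111-1111")  # sample data
encrypted_key = data_key["CiphertextBlob"]    # decryptable only via KMS
```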

Enterprise Search Using Google Search Appliance

One of the pain points for companies these days is how difficult it is to find relevant information inside the corporate network. I often hear people complain that it is easier to find information on the Internet using Google or Bing than inside the enterprise.

Well, Google has been selling its Google Search Appliance (GSA) for many years. The GSA brings Google’s superior search technology to a corporate network. It even has the familiar look and feel that people are accustomed to when searching the Internet.

The GSA can index and serve content located on internal websites, documents located on file servers, and Microsoft SharePoint repositories.

I recently replaced an old GSA and was quickly reminded how easy and fast it is to deploy. The GSA hardware is a souped-up Dell server in a bright yellow casing. Racking the hardware is a snap, and it comes with instructions on where to plug in the network interfaces. The initial setup is done via a back-to-back network connection to a laptop, where network settings such as the IP address, netmask, gateway, time server, and mail server are configured.

Once the GSA is accessible on the network, the only remaining task is to configure the initial crawl of the web servers and/or file systems, which may take a couple of hours. Once the documents are indexed, the appliance is ready to answer user search requests.
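
Search requests can also be issued programmatically over the GSA’s HTTP search protocol. In this sketch the hostname, collection, and front-end names are placeholders for your deployment’s values, and the requests package is assumed:

```python
# Minimal sketch of querying the GSA over its HTTP search protocol; the
# hostname, collection, and front-end names are deployment-specific.
import requests

params = {
    "q": "expense report policy",
    "site": "default_collection",     # collection to search
    "client": "default_frontend",     # front end controlling look and feel
    "output": "xml_no_dtd",           # machine-readable XML results
}
resp = requests.get("http://gsa.example.com/search", params=params, timeout=10)
print(resp.text[:500])  # raw XML results
```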

The search appliance has many advanced features and can be customized to your needs. For instance, you can customize the behavior and appearance of the search page, turn the auto-completion feature on or off, and configure security settings so that content is only available to properly authenticated users, among many other options.

Internal search engines such as the Google Search Appliance increase the productivity of corporate employees by saving them time looking for information.