Category Archives: Cloud

How to Permanently Delete Data in the Cloud

In the pre-cloud era, permanently deleting data meant overwriting the sectors on the physical disk multiple times with zeros and ones to make sure the data was unrecoverable. If the device would not be re-used, it had to be degaussed. The Department of Defense standard, DoD 5220.22-M, goes so far as to destroy the physical disk through melting, crushing, incineration, or shredding to completely get rid of the data.

But these techniques do not work for data in the cloud. First, cloud customers generally do not have access to the provider’s data centers, let alone the physical disks. Second, cloud customers do not know where their data is written, i.e., on which specific sectors of the disk, or which physical disks for that matter. In addition, drives may reside on different arrays located in multiple availability zones, and data might even be replicated across different regions.

The only way to permanently erase data in the cloud is via crypto-shredding. It works by deleting the encryption keys used to encrypt the data. Once the encryption keys are gone, the data cannot be recovered. So it is imperative that data be encrypted even before it is put in the cloud; unencrypted data in the cloud is impossible to permanently delete. As a cloud customer, it is also important that you, and not the cloud provider, own and manage the encryption keys.
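
As a rough illustration, here is a minimal crypto-shredding sketch in Python using the cryptography package (my own choice for illustration; the key handling and data are placeholders, and a real deployment would keep the key in a key management service rather than in memory):

```python
# Minimal crypto-shredding sketch (assumes: pip install cryptography).
# Key handling and data below are illustrative placeholders.
from cryptography.fernet import Fernet

# Encrypt the data BEFORE it ever reaches the cloud, keeping the key under your control.
key = Fernet.generate_key()                      # customer-managed key
ciphertext = Fernet(key).encrypt(b"sensitive record")

# ... upload 'ciphertext' to cloud storage; the provider never sees the key ...

# Crypto-shredding: destroy every copy of the key and the ciphertext becomes
# unrecoverable, no matter how many replicas the provider still holds.
del key   # in practice: delete (or schedule deletion of) the key in your KMS/HSM
```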

Characteristics of a True Private Cloud

A lot of companies like to claim that their internal IT infrastructure is a “private cloud.”  But what really qualifies as a “cloud”?  According to ISC2 (International Information System Security Certification Consortium), ISO/IEC 17788, and NIST, a true private cloud must have the following characteristics, just like a public cloud such as AWS or Azure.

1. On-demand self-service.  This characteristic enables the provisioning of cloud resources, including compute, storage, and network, whenever and wherever they are required.  It allows self-provisioning, where the user can set up, manage, or operate the cloud services without assistance from the cloud provider or IT personnel.

2. Broad network access. The cloud should always be available and accessible anytime and anywhere.  Users should have widespread access to their compute resources as well as their data at home, at the office, or on the road, using any device such as a laptop, desktop, smartphone, or tablet.

3. Resource pooling. A cloud typically has a large number of compute, storage, and network devices as well as sophisticated applications which can be pooled to address various user needs. These resources can be scaled and adjusted to meet user workloads or requirements.

4. Rapid elasticity.  This allows the user to obtain additional compute, storage, network and other resources as their workload requires.  This is often automated and transparent to the user.

5. Measured service.  This is a critical component of a cloud service because it is the only way users can be charged back for their use of resources.  A cloud should be able to measure, control, and report each user’s usage of resources.

Most companies probably meet one or two of the above criteria.  Resource pooling, for instance, is one of them because of the widespread use of virtualization technology.  However, they usually struggle to provide measured service, as they tend to overprovision resources and are unable to quantify usage.

For the most part, these companies are still traditional IT.  Without all of the cloud computing characteristics, it is simply not possible to deliver and maintain a reliable service that keeps pace with the rapidly changing requirements of the business.

Cloud Security Best Practices

Two of the most common security issues in AWS are platform misconfigurations and credential mismanagement.  Although AWS offers many security features, if they are not used or not configured correctly, your applications and data will be vulnerable.  Fortunately, these common security issues can be easily mitigated using the following best practices:

1.  Use VPCs (virtual private clouds). Amazon Virtual Private Cloud (Amazon VPC) enables you to launch AWS resources into a virtual network that you’ve defined. This virtual network closely resembles a traditional network that you’d operate in your own data center. It is logically isolated from other virtual networks in the AWS Cloud. You can launch your AWS resources, such as Amazon EC2 instances, into your VPC.  You can apply security groups and access control lists to the VPC to secure it.
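
As a rough sketch (not from the AWS documentation quoted above), the following boto3 snippet creates an isolated VPC and a subnet; the region and CIDR blocks are assumptions for illustration:

```python
# Illustrative sketch: create a logically isolated VPC with boto3.
# Region and CIDR ranges are placeholder assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")            # the isolated virtual network
vpc_id = vpc["Vpc"]["VpcId"]

# A subnet for EC2 instances; security groups and network ACLs can then be
# applied within this VPC to lock it down.
ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")
```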

2. Limit administrative access with AWS Security Groups. A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. When you launch an instance in a VPC, you can assign up to five security groups to the instance. For each security group, you add rules that control the inbound traffic to instances, and a separate set of rules that control the outbound traffic.  Security groups help block attackers who may try to probe your AWS environment.
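
A hedged example of what such a rule might look like with boto3; the VPC ID, group name, and trusted CIDR range are placeholders:

```python
# Illustrative sketch: a security group that only allows SSH from a trusted admin network.
import boto3

ec2 = boto3.client("ec2")

sg = ec2.create_security_group(
    GroupName="admin-only-ssh",
    Description="Allow SSH from the corporate network only",
    VpcId="vpc-0123456789abcdef0",        # placeholder VPC ID
)

ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.0/24"}],   # trusted range, not 0.0.0.0/0
    }],
)
# Outbound traffic is allowed by default; add egress rules if you need to restrict it.
```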

3. Lock down your root, domain, and administrator-level account credentials. For day-to-day operations, use your own account and only use these privileged accounts when absolutely necessary.  Don’t share passwords, and only a handful of administrators should have possession of them.

4.  Use IAM Roles. An IAM role is an IAM identity that you can create in your account that has specific permissions. An IAM role is similar to an IAM user, in that it is an AWS identity with permission policies that determine what the identity can and cannot do in AWS. However, instead of being uniquely associated with one person, a role is intended to be assumable by anyone who needs it. Also, a role does not have standard long-term credentials such as a password or access keys associated with it. Instead, when you assume a role, it provides you with temporary security credentials for your role session. IAM roles can be used to define permission levels for different resources and applications that run on EC2 instances. When you launch an EC2 instance, you can assign an IAM role to it, eliminating the need for your applications to use AWS credentials to make API requests. 
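
The sketch below shows one way this could be set up with boto3; the role name, attached policy, and instance profile name are illustrative assumptions, not values from the AWS documentation quoted above:

```python
# Illustrative sketch: an IAM role that EC2 instances can assume, so applications on
# the instance get temporary credentials instead of hard-coded access keys.
import json
import boto3

iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(RoleName="app-server-role",
                AssumeRolePolicyDocument=json.dumps(trust_policy))

# Grant only the permissions the application needs (read-only S3 here, as an example).
iam.attach_role_policy(RoleName="app-server-role",
                       PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess")

# An instance profile wraps the role so it can be attached to an EC2 instance at launch.
iam.create_instance_profile(InstanceProfileName="app-server-profile")
iam.add_role_to_instance_profile(InstanceProfileName="app-server-profile",
                                 RoleName="app-server-role")
```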

5. Enable Multi Factor Authentication (MFA). MFA is a simple best practice that adds an extra layer of protection on top of your user name and password. With MFA enabled, when a user signs in to an AWS website, they will be prompted for their user name and password (the first factor—what they know), as well as for an authentication response from their AWS MFA device (the second factor—what they have). Taken together, these multiple factors provide increased security for your AWS account settings and resources.
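
As an illustration of MFA in practice, the following sketch requests temporary credentials that are only issued when a valid MFA code is supplied; the MFA device ARN and token code are placeholders:

```python
# Illustrative sketch: temporary credentials gated on an MFA code.
import boto3

sts = boto3.client("sts")

creds = sts.get_session_token(
    DurationSeconds=3600,
    SerialNumber="arn:aws:iam::123456789012:mfa/alice",   # the user's MFA device
    TokenCode="123456",                                    # code from the device
)["Credentials"]

# Use the short-lived credentials for subsequent calls.
s3 = boto3.client("s3",
                  aws_access_key_id=creds["AccessKeyId"],
                  aws_secret_access_key=creds["SecretAccessKey"],
                  aws_session_token=creds["SessionToken"])
```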

6. Mitigate distributed denial of service (DDoS) attacks by using elastic load balancing, auto scaling, Amazon CloudFront, AWS WAF, or AWS Shield. AWS provides flexible infrastructure and services that help customers implement strong DDoS mitigations and create highly available application architectures.

7. Monitor your environment by using AWS tools including CloudTrail, CloudWatch, and VPC Flow Logs.  They provide information about how data flows in and out of your AWS environment. They also provide data that you can mine and analyze to check for intrusions, security breaches, and data leaks. You can also integrate these tools with third-party applications that can perform thorough log analysis and event correlation.
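
For example, VPC Flow Logs could be enabled with a call like the following (a sketch; the VPC ID, log group, and IAM role ARN are placeholders):

```python
# Illustrative sketch: record traffic in and out of a VPC into CloudWatch Logs.
import boto3

ec2 = boto3.client("ec2")

ec2.create_flow_logs(
    ResourceType="VPC",
    ResourceIds=["vpc-0123456789abcdef0"],
    TrafficType="ALL",                                   # ACCEPT, REJECT, or ALL
    LogGroupName="vpc-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",
)
```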

Source: https://docs.aws.amazon.com/

Using the Cloud for Disaster Recovery

One of the common use cases for the cloud, especially for companies with large on-prem data centers, is Disaster Recovery (DR).  Instead of building or continuing to maintain an expensive on-prem DR site, the cloud can provide a cheaper alternative for replicating and protecting your data.

There are many products and services out there for DR in the cloud.  If your company is using EMC devices – specifically Avamar and Data Domain (DD) – for data protection, you can replicate your virtual machine (VM) backups to AWS and perform disaster recovery of your servers in AWS.  This solution is called Data Domain Cloud DR (DDCDR), and it enables DD to back up to AWS S3 object storage. Data is sent securely and efficiently, requiring minimal compute cycles and footprint within AWS. In the event of a disaster, VM images can be restored and run from within AWS. Since neither Data Protection Suite nor DD is required in the cloud, compute cycles are only needed in the event of a restore.

Backup Process

  • DDCDR requires that a customer with Avamar backup and Data Domain (DD) storage install an OVA which deploys an “add-on” to their on-prem Avamar/DD system and install a lightweight VM (Cloud DR server) utility in their AWS domain.
  • Once the OVA is installed, it will read the changed data and will segment, encrypt, and compress the backup data and then send this and the backup metadata to AWS S3 object storage.
  • Avamar/DD policies can be established to control how many daily backup copies are to be saved to S3 object storage. There’s no need for Data Domain or Avamar to run in AWS.

Restore Process

  • When there’s a problem at the primary data center, an admin can click a button in the Avamar GUI and have the Cloud DR server uncompress, decrypt, rehydrate, and restore the backup data into EBS volumes, translate the VMware VM image to an AMI, and then restart the AMI on an AWS virtual server (EC2) with its data on EBS volume storage.
  • The Cloud DR server uses the backup metadata to select an AWS EC2 instance with the proper CPU and RAM needed to run the application. Once this completes, the VM is running standalone in an AWS EC2 instance. Presumably, you have to have EC2 and EBS storage volume resources available under your AWS domain to be able to install the application and restore its data.

Source: https://www.dellemc.com/

Guiding Principles for Cloud Security

To create solid security for your servers, data, and applications hosted in the cloud, you must adhere to the following security guiding principles:

Perimeter Security

The first line of defense against attacks is perimeter security.  Creating private networks to restrict visibility into the computing environment is one approach.  Micro-segmentation, which isolates applications and data with a hardened configuration, is another.  Creating a strong abstraction layer from the hardware and virtualization environment will also strengthen perimeter security.

Continuous Encryption

There is no longer any reason why data traversing the network (public or private) and data stored on storage arrays should not be encrypted.  Even the popular Google Chrome browser now flags unencrypted websites to alert users.  Leverage cheap computing power, secure key management, and the Public Key Infrastructure to achieve data-in-transit and data-at-rest encryption.
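
As a small illustration of data-at-rest and data-in-transit encryption on AWS (my own example, with placeholder bucket, object, and key names):

```python
# Illustrative sketch: data-in-transit encryption via TLS and data-at-rest
# encryption with a customer-managed KMS key.
import boto3

s3 = boto3.client("s3", use_ssl=True)          # TLS for data in transit

s3.put_object(
    Bucket="example-secure-bucket",
    Key="reports/q3.csv",
    Body=b"confidential,data",
    ServerSideEncryption="aws:kms",            # encrypt at rest with KMS
    SSEKMSKeyId="alias/customer-managed-key",  # a key you own and manage
)
```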

Effective Incident Response

Attacks on your servers, data, and applications in the cloud will definitely occur; it’s just a question of when.  An effective incident response program – using automated and manual responses – that is ready to be invoked once an attack occurs will lessen the pain of a breach.

Continuous Monitoring

Continuous and robust monitoring of your data, applications, and security tools, along with timely alerting when a security breach happens, is a must.  In addition, easy integration of third-party monitoring capabilities will also help in achieving a sound monitoring system.

Resilient Operations

The infrastructure should be capable of withstanding attacks.  For instance, you should maintain data and application availability by mitigating DDoS attacks. The applications should continue to function in the presence of an ongoing attack.  In addition, there should be minimal degradation of performance as a result of environmental failures. Employing high availability, redundancy, and a disaster recovery strategy will help achieve resilient operations.

Highly Granular Access Control

Organizations need to make sure that their employees and customers can access the resources and data they need, at the right time, from wherever they are. Conversely, they need to make sure that bad actors are denied access.  They should have strong, cryptographically backed Identity and Access Management (IAM).  They should leverage a managed Public Key Infrastructure service to authenticate users, restrict access to confidential information, and verify the ownership of sensitive documents.
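
To make “highly granular” concrete, here is a hypothetical least-privilege policy created with boto3; the bucket, prefix, and policy name are invented for illustration:

```python
# Illustrative sketch: a policy granting read access to a single S3 prefix only.
import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::hr-bucket/payroll/*",   # one prefix, nothing more
    }],
}

iam.create_policy(PolicyName="payroll-read-only",
                  PolicyDocument=json.dumps(policy))
```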

Secure Applications Development

Integrate security automation into DevOps practices (or DevSecOps), ensuring security is baked in, not bolted on.

Governance, Risk Management, Compliance

Finally, a great cloud security program should be properly governed, for instance, by having visibility into configurations. Risks should be managed by readily identifying gaps and other weaknesses.  Lastly, your security program should have broad regulatory and compliance certifications.

Cloud Security vs On-Prem Security

One of the big differences between cloud security and on-prem security is that the former is built in from the ground up while the latter is often bolted on after the fact. AWS, for instance, has made its infrastructure secure ever since it was first built. It realized early on that companies would not put their data in the cloud if it were not inherently secure.

However, security is still a shared responsibility between the cloud provider and the consumer. By now, everybody should be aware of the AWS Shared Responsibility Model. Companies who are used to the traditional security model will find that cloud security entails a different mindset. In the cloud, the focus shifts from network, operating system, and perimeter security to security governance, access control, and secure development. Since the underlying infrastructure of the cloud is secured by the provider, companies utilizing it can now focus on the information security that really matters to them, such as data, user, and workflow security.

Security governance is important in the cloud. Security folks should spend more time planning and less time fighting fires. They should be crafting and implementing policies that truly secure the company’s assets, such as data-centric security policies and secure software development. There should also be solid access control; for example, users should be granted access only if they really need it.

There are a couple of challenges with cloud security. First is the obvious disconnect between the shared responsibility model and the traditional security model: companies used to on-prem security will still want to spend resources on perimeter security. Second is compliance. For instance, how can traditional auditors learn to audit new technologies in the cloud, like Lambda, where there is no server to verify?

Companies using the cloud should realize that security is still their responsibility, but they should focus more on data and application security.

Cloud Security Challenges and Opportunities

I recently attended the ISC2 Security Congress held on Oct 8 to 10, 2018 at the Marriott Hotel in New Orleans, Louisiana.  Based on the keynotes, workshops, and sessions at the conference, these are the challenges and opportunities facing cloud security:

  1. Container and serverless (e.g. AWS Lambda) security.  For instance, how will you ensure isolation of various applications?
  2. Internet of Things (IoT) and endpoint security.  As more and more sensors, smart appliances, and devices with powerful CPUs and bigger memories are connected to the cloud, more computation will happen on the edge, thus increasing security risks.
  3. Machine learning and artificial intelligence (AI).  How can AI help guard against cyber-attacks, predict an impending security breach, or improve investigation and forensics?
  4. Blockchain technology. Blockchain will be transforming how audits will be performed in the future.
  5. Quantum computing, if and when it comes to fruition, will break cryptography.  Cryptography is the reason why commerce happens on the Internet.  New encryption algorithms will be needed when quantum computing becomes a reality.
  6. How will the implementation of GDPR (General Data Protection Regulation) in the European Union affect data sovereignty (“a concept that information which is stored in digital form is subject to the laws of the country in which it is located”), data privacy, and the alignment of privacy and security?
  7. DevSecOps (having a mindset about application and infrastructure security from the start) will continue to gain momentum.

We are likely to be seeing continuing innovations in these areas within the next few years.

Defining the Different Types of Cloud Services

There are several kinds of cloud services, depending on which tier of the technology stack the service resides:

Software-as-a-Service (SaaS) delivers entire functioning applications through the cloud. SaaS frees companies from building their own data centers, buying hardware and software licenses, and developing their own programs. Salesforce is an example of a SaaS provider.

Infrastructure-as-a-Service (IaaS) delivers the underlying resources – compute, storage and networking – in a virtual fashion to organizations who purchase service “instances” of varying sizes. In addition, IaaS vendors provide security, monitoring, load balancing, log access, redundancy, backup and replication. Amazon Web Services, Microsoft Azure and Google Compute Platform are all examples of IaaS providers.

Platform-as-a-Service (PaaS) lies between SaaS and IaaS. It delivers hardware, software tools, and middleware – usually for application development – to users over the Internet. Google App Engine, Red Hat OpenShift, and Microsoft Azure are examples of PaaS providers.

Containers-as-a-Service (CaaS) is the newest cloud service that focuses on managing container-based workloads. A CaaS offers a framework for deploying and managing application and container clusters by delivering container engines, orchestration, and the underlying resources to users. Google Container Engine, Amazon EC2 Container Service, and Azure Container Services are the leading CaaS providers.

Data Protection in AWS

Data protection, along with security, used to be an afterthought in many in-house IT projects. In the cloud, data protection has come to the forefront of many IT implementations. Business users spinning up servers or EC2 instances in AWS clamor for the best protection for their servers and data.

Luckily, AWS provides a highly effective snapshot mechanism for EBS volumes, with snapshots stored on highly durable S3 storage. Snapshots are storage efficient and use copy-on-write and restore-before-read, which allow for both consistency and immediate recovery. Storing snapshots in S3, which is a separate infrastructure from EBS, has the added benefit of data resiliency: a failure in the production data will not affect the snapshot data.
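
A minimal sketch of this mechanism with boto3 (the volume ID, Availability Zone, and description are placeholders):

```python
# Illustrative sketch: snapshot an EBS volume and restore it to a new volume.
import boto3

ec2 = boto3.client("ec2")

snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0",
                           Description="nightly backup")

# Snapshots are incremental: only blocks changed since the last snapshot are stored.
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# Recovery: create a fresh volume from the snapshot in the desired Availability Zone.
ec2.create_volume(SnapshotId=snap["SnapshotId"], AvailabilityZone="us-east-1a")
```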

However, this backup and restore mechanism provided by AWS lacks many of the features found in traditional backup solutions, such as cataloging, ease of management, automation, and replication. In response, third-party vendors are now offering products and services that make backup and recovery easy and efficient in AWS. Some vendors provide services to manage and automate this. Other vendors provide products that mimic the ease of management of traditional backup. For instance, Dell EMC provides Avamar and Data Domain virtual editions that you can use on AWS.

Optimizing AWS Cost

One of the advantages of using the cloud is cost savings since you only pay for what you use. However, many companies still waste resources in the cloud, and end up paying for services that they don’t use. A lot of people are stuck in the old ways of implementing IT infrastructure such as overprovisioning and keeping the servers on 24×7 even when they are idle most of the time.

There are several ways you can optimize AWS in order to save money.

1. Right sizing

With AWS you can right size your services to meet exactly the capacity requirements you need without having to overprovision. On the compute side, you should select the EC2 instance type appropriate for the application and provision only enough instances to meet the need. When the need for more compute increases, you can scale up or scale out compute resources. For instance, during low demand, use only a couple of EC2 instances; during high demand, automatically provision additional EC2 instances to meet the load.

On the storage side, AWS offers multiple tiers to fit your storage needs. For instance, you can store frequently used files/objects on the S3 Standard tier, store less frequently used files/objects on the S3 Infrequent Access (IA) tier, and store archive data on Glacier. Finally, delete data that you no longer need; a lifecycle policy such as the sketch below can automate both the tiering and the cleanup.
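
Here is a hypothetical lifecycle configuration that implements this tiering; the bucket name and day thresholds are assumptions for illustration:

```python
# Illustrative sketch: move objects to cheaper tiers as they age and expire them
# when no longer needed.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-data-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-and-expire",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},                        # apply to all objects
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"}, # infrequently accessed
                {"Days": 90, "StorageClass": "GLACIER"},     # archive
            ],
            "Expiration": {"Days": 365},                     # delete what you no longer need
        }],
    },
)
```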

2. Reserve capacity

If you know that you will be using AWS for a long period of time, you can commit to reserved capacity from AWS and save a lot of money compared with equivalent on-demand capacity.

Reserved Instances are available in three options – all up-front (AURI), partial up-front (PURI), or no up-front payment (NURI). When you buy Reserved Instances, the larger the upfront payment, the greater the discount. To maximize your savings, you can pay all up-front and receive the largest discount. Partial up-front RIs offer lower discounts but give you the option to spend less up front. Lastly, you can choose to spend nothing up-front and receive a smaller discount, but this option allows you to free up capital to spend on other projects.

3. Use spot market

If you have applications that are not time sensitive such as non-critical batch workloads, you may be able to save a lot of money by leveraging Amazon EC2 Spot Instances. This works like an auction where you bid on spare Amazon EC2 computing capacity.

Since Spot Instances are often available at a discount compared to On-Demand pricing, you can significantly reduce the cost of running your applications.
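
A minimal sketch of a Spot request with boto3 (the AMI ID, instance type, and bid price are placeholders; the workload must tolerate interruption, since AWS can reclaim Spot capacity):

```python
# Illustrative sketch: request spare EC2 capacity for a non-critical batch workload.
import boto3

ec2 = boto3.client("ec2")

ec2.request_spot_instances(
    SpotPrice="0.05",                      # maximum price you are willing to pay per hour
    InstanceCount=2,
    LaunchSpecification={
        "ImageId": "ami-0123456789abcdef0",
        "InstanceType": "m5.large",
    },
)
```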

4. Cleanup unused services

One of the best ways to save money is to turn off unused and idle resources. These include EC2 instances with no network or CPU activity for the past few days, load balancers with no traffic, unused block storage (EBS), piles of snapshots, and detached Elastic IPs. For instance, one company analyzed its usage pattern and found that during certain periods it could power off a number of EC2 instances, thereby minimizing its costs.
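
As a starting point, a script like the following (illustrative, not exhaustive) can surface some of the obvious waste:

```python
# Illustrative sketch: find unattached EBS volumes and unassociated Elastic IPs
# so they can be reviewed and removed.
import boto3

ec2 = boto3.client("ec2")

# EBS volumes in the 'available' state are not attached to any instance.
unattached = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]
)["Volumes"]
print("Unattached volumes:", [v["VolumeId"] for v in unattached])

# Elastic IPs without an association still incur charges.
idle_eips = [a["PublicIp"] for a in ec2.describe_addresses()["Addresses"]
             if "AssociationId" not in a]
print("Idle Elastic IPs:", idle_eips)
```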

One thing you really need to do on a regular basis is monitor and analyze your usage. AWS provides several tools to track your costs, such as Amazon CloudWatch (which collects and tracks metrics, monitors log files, and sets alarms), AWS Trusted Advisor (which looks for opportunities to save you money, such as turning off non-production instances), and AWS Cost Explorer (which gives you the ability to analyze your costs and usage).

Reference: https://aws.amazon.com/pricing/cost-optimization/