What’s New(s) in AWS – Part 1

A very Happy New Year 2020 to all of you!

As you all know, most new AWS announcements are made at the re:Invent conference, held around the first week of December. Last year was no different. It makes you dizzy trying to catch up with all the new things that have been introduced. Here is my attempt to give you a gist of the new products/features for some of the generic services. (I am not listing developments in specialized services like game development, AI, graph databases and so on.) What I am going to talk about will probably impact a lot of people and may become part of future certification exams.

Amazon Builders’ Library

If you are an architect, you will be very interested in this. The Amazon Builders’ Library is basically Amazon telling us how it builds and operates the cloud. In Amazon’s own words, “The Amazon Builders’ Library is a collection of living articles that describe how Amazon develops, architects, releases, and operates technology”. The articles talk about technology, how releases are planned and how operations are performed. If you want an idea of how the cloud is actually operated, this is the place for you.

Amazon Builders’ Library link

AWS Local Zones

We all know about AWS Regions and Availability Zones. In some cases you may want a much faster response than you can get from the Region closest to you. For example, assume most of your users are in Bangalore. Currently you can have your resources only in the Mumbai region. You feel that the latency of connecting to Mumbai is not acceptable for your end users. If your resources were in Bangalore instead, latency would improve.

AWS Local Zones try to address this problem. AWS is now going to create Local Zones (or maybe we can call them mini-Regions) closer to large concentrations of users. These Local Zones will not have the full gamut of AWS services; they will offer services like EC2, EBS, Load Balancers and VPC. Each Local Zone is connected to its parent Region via a dedicated link, so you can establish connections between your resources in the Local Zone and your resources in the Region. Currently only the Los Angeles Local Zone is available (by invitation).
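
To make this concrete, here is a minimal sketch (Python/boto3) of how you might opt in to the Los Angeles Local Zone and place a subnet there. The group and zone names follow the LA naming (`us-west-2-lax-1`, `us-west-2-lax-1a`); the VPC ID and CIDR block are placeholders.

```python
import boto3

# Local Zones hang off a parent Region; the LA Local Zone is parented to us-west-2
ec2 = boto3.client("ec2", region_name="us-west-2")

# Local Zones are opt-in, unlike regular Availability Zones
ec2.modify_availability_zone_group(
    GroupName="us-west-2-lax-1",
    OptInStatus="opted-in",
)

# Once opted in, the Local Zone appears as an extra zone of the Region,
# so creating a subnet there is an ordinary create_subnet call
subnet = ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",        # placeholder VPC in us-west-2
    CidrBlock="10.0.8.0/24",              # placeholder CIDR
    AvailabilityZone="us-west-2-lax-1a",  # the Local Zone's zone name
)
print(subnet["Subnet"]["SubnetId"])
```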

This is an important development. I am sure there will be more Local Zones in the near future, and this will have an impact on how we architect our solutions.

AWS Local Zones link

S3 Access Points and RTC

Access Points

The growth in data, and consequently the need to store large amounts of common data in S3, has given rise to security issues. You now have scenarios where multiple users/applications access common data in S3. Assume you want to control access for these users/applications in a granular fashion. We can do this using bucket policies, but that can soon turn into a nightmare, since one misstep affects every user/application.

AWS has now introduced S3 Access Points to address this issue. We can create multiple access points for the same bucket and attach permissions at the access point level. Each access point can then be given to a different user/application. This way, any problem in a security configuration affects only a small subset of users/applications, allowing us to manage S3 permissions more effectively.
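
As a sketch of how this looks in practice (Python/boto3; the account ID, bucket and access point names are placeholders, and a boto3 version recent enough to accept access point ARNs in the `Bucket` parameter is assumed):

```python
import boto3

account_id = "111122223333"          # placeholder account ID
region = "us-east-1"                 # placeholder region

# Access points are created through the S3 Control API; a scoped policy
# can then be attached with s3control.put_access_point_policy(...)
s3control = boto3.client("s3control", region_name=region)
s3control.create_access_point(
    AccountId=account_id,
    Name="analytics-ap",             # one access point per application
    Bucket="my-shared-data-bucket",  # the common bucket
)

# The application addresses the bucket through the access point ARN;
# the regular S3 calls stay exactly the same
s3 = boto3.client("s3", region_name=region)
ap_arn = f"arn:aws:s3:{region}:{account_id}:accesspoint/analytics-ap"
obj = s3.get_object(Bucket=ap_arn, Key="reports/summary.csv")
print(obj["Body"].read()[:100])
```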

Read more details at this link: S3 Access Points

Replication Time Control (RTC)

You must be aware that we can set up replication for an S3 bucket. The destination can be a bucket in the same region or in a different region. With Replication Time Control (RTC), Amazon will try to complete the replication within a specified time period, and backs this up with an SLA. Here is what AWS says about the feature: “S3 Replication Time Control is designed to replicate 99.99% of objects within 15 minutes after upload, with the majority of those new objects replicated in seconds. S3 RTC is backed by an SLA with a commitment to replicate 99.9% of objects within 15 minutes during any billing month”
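
Here is a minimal sketch of what enabling RTC on a replication rule looks like with boto3. The bucket names and IAM role ARN are placeholders, and the destination bucket (with versioning enabled on both sides) is assumed to already exist.

```python
import boto3

s3 = boto3.client("s3")

# Replication rule with RTC: the ReplicationTime block turns on the
# 15-minute replication target, and Metrics (required alongside RTC)
# emits replication-latency metrics to CloudWatch
s3.put_bucket_replication(
    Bucket="source-bucket",  # placeholder source (versioning enabled)
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
        "Rules": [{
            "ID": "rtc-rule",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},  # empty filter = replicate the whole bucket
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {
                "Bucket": "arn:aws:s3:::destination-bucket",
                "ReplicationTime": {"Status": "Enabled", "Time": {"Minutes": 15}},
                "Metrics": {"Status": "Enabled", "EventThreshold": {"Minutes": 15}},
            },
        }],
    },
)
```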

More details here: S3 RTC

VPC Sharing

VPC sharing allows subnets to be shared with other AWS accounts within the same AWS Organization. So you can now have a VPC that spans two or more accounts of the same AWS Organization. This gives you more control by centralizing VPC management. (Sharing is done through AWS Resource Access Manager; see the sketch in the RAM section below.)

Check out the link to understand the benefits and how to share your subnets.

VPC Sharing

RDS Proxy

Let’s take a use case in the serverless domain. Assume that when your Lambda function is triggered, it establishes a database connection to your RDS instance, and that you trigger a huge number of Lambda functions in parallel. Each of these functions has to establish a database connection with your RDS instance, and when the function completes, the connection is torn down. Establishing connections takes a toll on the RDS instance, since it consumes CPU/memory resources, so the performance of RDS decreases if it is constantly opening and closing connections. The other kind of problem is when a huge number of connections are opened and many are kept idle so that when a request comes, the response is fast. You basically overprovision the number of connections in this case.

RDS Proxy has been introduced to solve problems like these. The RDS Proxy sits between the application and RDS and opens a pool of connections to the RDS instance. Your application connects to the RDS Proxy, and the proxy allocates a connection from the pool. Infrequently used connections are shared across applications. RDS Proxy thus removes the open/close connection burden from the RDS instance, improving the efficiency of the instance and hence of your application.
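
To illustrate, here is a minimal sketch of a Lambda handler talking to a MySQL-compatible RDS instance through a proxy (Python, boto3 and PyMySQL). The proxy endpoint, user, database and CA-bundle path are placeholders, and IAM authentication is assumed to be enabled on the proxy.

```python
import boto3
import pymysql  # assumes a MySQL-compatible database behind the proxy

PROXY_HOST = "my-proxy.proxy-abcdefghijkl.us-east-1.rds.amazonaws.com"  # placeholder
DB_USER = "app_user"   # placeholder
DB_NAME = "appdb"      # placeholder

def handler(event, context):
    # With IAM authentication the function fetches a short-lived token
    # instead of shipping a database password around
    token = boto3.client("rds").generate_db_auth_token(
        DBHostname=PROXY_HOST, Port=3306, DBUsername=DB_USER
    )
    # The application connects to the proxy endpoint exactly as it would
    # to the database itself; connection pooling happens behind this endpoint
    conn = pymysql.connect(
        host=PROXY_HOST, user=DB_USER, password=token, database=DB_NAME,
        ssl={"ca": "/opt/rds-ca-bundle.pem"},  # placeholder CA bundle path
    )
    with conn.cursor() as cur:
        cur.execute("SELECT 1")
        return cur.fetchone()
```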

RDS Proxy Link

AWS Resource Access Manager (RAM)

Let us take a use case like this: assume you have multiple accounts in your organization. Each account builds its own VPC and wants to manage its VPN connections, so each account may end up asking for a Transit Gateway, and your organization has to pay for multiple Transit Gateways. Amazon has now introduced Resource Access Manager (RAM), which allows you to share resources amongst AWS accounts within the same organization, reducing both management effort and cost.

Currently you can share Transit Gateways, subnets, AWS License Manager configurations, and Amazon Route 53 Resolver rules with RAM.
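
A minimal sketch of sharing a Transit Gateway with another account in the organization (Python/boto3; the ARN and account IDs are placeholders). Sharing a subnet, which is how the VPC sharing described earlier is set up, uses exactly the same call with a subnet ARN instead:

```python
import boto3

ram = boto3.client("ram", region_name="us-east-1")

share = ram.create_resource_share(
    name="org-transit-gateway",
    resourceArns=[
        # placeholder Transit Gateway ARN; a subnet ARN works the same way
        "arn:aws:ec2:us-east-1:111122223333:transit-gateway/tgw-0123456789abcdef0"
    ],
    principals=["444455556666"],    # placeholder account to share with
    allowExternalPrincipals=False,  # keep the share within the Organization
)
print(share["resourceShare"]["resourceShareArn"])
```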

Resource Access Manager Link

Amazon Detective

AWS provides a lot of tools for security. For example, Amazon GuardDuty looks at your logs (like VPC Flow Logs) and alerts you to possible security issues, pointing out where the problem lies. This is a very useful tool and in many cases it may be sufficient. In some cases, though, you will need to dig deeper to find the root cause of a security flaw. Amazon Detective helps you find the root cause of potential security issues. It uses machine learning, graph theory and statistical analysis to build linkages that help you get to the root cause faster. For this, Detective uses data sources such as VPC Flow Logs, AWS CloudTrail and GuardDuty findings.

Amazon Detective link

IAM Access Analyzer

While services like GuardDuty and Detective tell you about security vulnerabilities, a challenge all organizations face is inadvertently granting permissions to external principals. IAM Access Analyzer is a tool that tells you which of your resources grant permissions to external principals. Access Analyzer treats your account as the zone of trust. It analyzes all your policies, and if it finds any policy granting permission to an external principal, it records a finding. Similarly, if a policy change grants access to an external principal, you will be notified.
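
As a sketch (Python/boto3; the analyzer name is a placeholder), creating an account-level analyzer and listing its findings looks roughly like this:

```python
import boto3

aa = boto3.client("accessanalyzer", region_name="us-east-1")

# Create an analyzer whose zone of trust is the current account
analyzer = aa.create_analyzer(
    analyzerName="account-analyzer",  # placeholder name
    type="ACCOUNT",
)

# Each finding is a resource whose policy grants access to a principal
# outside the zone of trust
findings = aa.list_findings(analyzerArn=analyzer["arn"])
for finding in findings["findings"]:
    print(finding.get("resource"), finding["status"])
```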

IAM Access Analyzer link

I will continue this in Part 2 tomorrow.

AWS Latest Announcements

As usual, at re:Invent 2017, AWS announced a spate of new services and added new features to existing services. I will summarize some of these, concentrating on the more generic rather than the very specialized services, though I will mention a few of the latter.

Compute:

  1. AWS Fargate: This new service from AWS allows you to run your Docker containers without worrying about the systems that run them. In other words, AWS takes care of setting up and maintaining a cluster of instances to run your containers, leaving you free to worry about your application. Azure already has the ability to run container instances, so in this case I think AWS is catching up with Azure. (See the sketch after this list.)
  2. Bare Metal: IBM and a few others had Bare Metal offerings earlier. As the name indicates, you get complete control of a server and can load the hypervisor or OS of your choice onto the system. This helps you in many ways: better performance, achieving compliance, tackling licensing issues, and you can even build a cloud of your choice within AWS!! Bare Metal is still in the preview stage, but I am sure you will see it become generally available soon.
  3. Hibernation of Spot Instances: Earlier, whenever your spot instance was running and the spot price rose above your bid price, AWS terminated your spot instance. So spot instances were suitable only for applications that could withstand sudden termination. Later, AWS began stopping spot instances instead of terminating them. Now spot instances can go into hibernation: the state of your memory is stored on disk, and when capacity becomes available again your instance resumes from where it left off. The private IP and the Elastic IP are also preserved. This makes spot instances even more attractive to use. (See the sketch after this list.)
  4. Elastic Container Service for Kubernetes (EKS): Many of you will know that Kubernetes is a container orchestration service for Docker. AWS earlier had only ECS (Elastic Container Service) for container orchestration. They have now given us the option of using Kubernetes as well. Here, AWS takes care of all the infrastructure required to run Kubernetes, so we need not worry about setting up servers or installing Kubernetes ourselves. Given the traction Kubernetes has, this is a good move from Amazon. EKS is currently in the preview stage.
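
Here is a minimal sketch of running a container on Fargate with boto3, as promised in the list above. The cluster, task definition, subnet and security group are placeholders, and the task definition is assumed to already exist.

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Run a task on Fargate: no EC2 instances to manage, just a task definition
# plus the VPC networking it should use (all identifiers are placeholders)
ecs.run_task(
    cluster="demo-cluster",
    launchType="FARGATE",
    taskDefinition="web-app:1",
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)
```

And a sketch of requesting a spot instance that hibernates instead of terminating on interruption (the AMI ID and instance type are placeholders; the AMI must be configured for hibernation, with an encrypted root volume):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 'hibernate' saves the memory state to disk on interruption and resumes
# the instance when spot capacity returns; it requires a persistent request
ec2.request_spot_instances(
    InstanceCount=1,
    Type="persistent",
    InstanceInterruptionBehavior="hibernate",
    LaunchSpecification={
        "ImageId": "ami-0123456789abcdef0",  # placeholder hibernation-ready AMI
        "InstanceType": "m5.large",
    },
)
```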

Databases

  1. Amazon Aurora Multi-Master: Now you can create more than one read/write master database in an Aurora cluster. Applications can use these multiple masters to read and write. As you can guess, the availability of the database increases, since you can place each master in a different Availability Zone.
  2. DynamoDB Global Tables: Your DynamoDB tables are automatically replicated across the regions of your choice. Earlier, if you wanted a replica of your DynamoDB table in another region, you had to set up the replication on your own. With Global Tables you no longer need to worry about it. You can immediately see how effective this will be in a DR scenario. (See the sketch after this list.)
  3. DynamoDB Backup and Restore: AWS now allows you to back up and restore your DynamoDB tables. This helps enterprises meet regulatory requirements. AWS promises that backups happen very fast, irrespective of the size of the table. (See the sketch after this list.)
  4. Amazon Neptune: Amazon has launched a graph database, which it has named Amazon Neptune. If you have seen my webinar on NoSQL databases, you will know that a graph database is a type of NoSQL database. I will cover graph databases and Neptune’s features in a separate post.
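
The two DynamoDB features above can be sketched in a few boto3 calls. Table, backup and region names are placeholders, and the 2017 version of Global Tables requires the table to already exist, with streams enabled, under the same name in every listed region.

```python
import boto3

ddb = boto3.client("dynamodb", region_name="us-east-1")

# Global Table: ties together identical tables (with streams enabled)
# that already exist in each listed region
ddb.create_global_table(
    GlobalTableName="orders",
    ReplicationGroup=[
        {"RegionName": "us-east-1"},
        {"RegionName": "eu-west-1"},
    ],
)

# On-demand backup: AWS promises this is fast regardless of table size
ddb.create_backup(TableName="orders", BackupName="orders-2017-12-01")
```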

Networking

  1. Inter-region VPC Peering: Earlier you could peer two VPCs only if they were in the same region. Now Amazon allows you to peer two VPCs even if they belong to different regions, so an EC2 instance can access another EC2 instance in a peered VPC of another region using only private IPs. (See the sketch below.)
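
A sketch of the inter-region case (VPC IDs and the peer account ID are placeholders); the new `PeerRegion` parameter is what makes the peering cross-region:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request a peering connection from a VPC in us-east-1 to one in eu-west-1
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-0123456789abcdef0",      # requester VPC (us-east-1)
    PeerVpcId="vpc-0fedcba9876543210",  # accepter VPC (eu-west-1)
    PeerOwnerId="111122223333",         # peer account (omit if same account)
    PeerRegion="eu-west-1",             # this makes it inter-region
)
print(peering["VpcPeeringConnection"]["VpcPeeringConnectionId"])
# The peer side must still accept the request, and each VPC needs routes
# to the other's CIDR block.
```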

Messaging

  1. Amazon MQ: This is a managed message broker service for Apache ActiveMQ: Amazon sets up the ActiveMQ broker and maintains it. I don’t have much of an idea about ActiveMQ, as I haven’t worked with it. From what I can gather, Amazon now has two messaging solutions: its own SQS (Simple Queue Service) and Amazon MQ. Maybe Amazon MQ has more features than SQS? I will find out and let you know.

There are tons more announcements that were made. I have just touched on the ones that affect the AWS Solutions Architect and AWS SysOps exams. I will write more about other new services and features in another post.


CloudSploit and Security in the Cloud : An Interview


Security in the cloud is beyond a doubt the most important criterion for enterprises migrating to the cloud. Security in the cloud is a shared responsibility: while cloud providers like Amazon have certain responsibilities towards securing the infrastructure, users need to be vigilant and secure their data.

There are companies which help users ensure that their cloud environments are secure. One such company is CloudSploit. The founder of CloudSploit, Matthew Fuller, was kind enough to answer my questions regarding cloud security over email.


Matthew Fuller, Inventor and Co-Founder of CloudSploit

Matt is a DevOps Security Engineer with a wide array of security experience, ranging from web application pentesting to securing complex networks in the cloud. He began his security career, and love for open source, while working as a Web Application Security Engineer for Mozilla. He enjoys sharing his passion for technology with others and is the author of a best-selling eBook on AWS’s new service, Lambda. He lives in Brooklyn, NY, where he enjoys the fast-paced and growing tech scene and abundant food options.

Here is our conversation:

CloudSiksha: In your experience, what are the major security concerns of enterprises wanting to migrate to the cloud?

Matt: The biggest concern Enterprises should have with moving to the cloud is simply not understanding or having the in-house expertise to manage the available configuration options. Cloud providers like AWS do a tremendous job of securing their infrastructure and providing their users with the tools to secure their environments. However, without the proper knowledge and configuration of those tools, the settings can be mis-applied, or disabled entirely. Oftentimes, the experience that the various engineering teams may have with traditional infrastructure does not translate to the cloud equivalent, resulting in mismanaged environments. Multiply this across the hundreds of accounts and engineers a large organization may have, and the security risk becomes very concerning.

CloudSiksha: You are a security company which helps people who migrate to AWS stay secure. What do you bring over and above what Amazon provides to users?

Matt: AWS does an excellent job of allowing users to tune their environments. However, while they provide comprehensive security options for every product they offer, they do not enforce best practice usage of those options. CloudSploit helps teams quickly detect which options have not been configured properly, and provides meaningful steps to resolve the potential security risk. We do not compete with any of AWS’s tools; instead, we help ensure that AWS users are using them correctly with the most secure settings.

CloudSiksha: AWS itself has services like Inspector, CloudTrail and so on. Can users not use these services for their needs? How does CloudSploit differ from these? Or do you supplement/complement these services?

Matt: AWS currently provides several security-related services including CloudTrail, Config, Inspector, and Trusted Advisor. The CloudTrail service is essentially an audit log of every API call made within the AWS account, along with metadata of those calls. From a security perspective, CloudTrail is a must-have, especially in accounts with multiple users. If there is ever a security incident, CloudTrail provides a historical log that can be analyzed to determine exactly what led to the intrusion, what actions the malicious user took, and what resources were affected.

AWS Config is slightly different in that it records historical states of every enabled resource within the account, allowing AWS users to see how a specific piece of the infrastructure changed over time and how future updates or changes might affect that piece.

Finally, Inspector is an agent that runs on EC2 instances, tracking potential compliance violations and security risks at the server level. These are aggregated to show whether a project as a whole is compliant or not.

While these services certainly aid in auditing the infrastructure, they only scratch the surface of potential risks. Like many of AWS’s services, they cover the basics, while leaving a large opening for third party providers. CloudSploit is one such service that aims to make security and compliance incredibly simple with as little configuration as possible. It uses the AWS APIs (so it is agentless, unlike Inspector) to check the configuration of the account and its resources for potential security risks. CloudSploit is most similar to AWS Config, but provides many advantages over it. For example, it does not require any manual configuration, continually updates with new rule sets, does not charge on a per-resource-managed basis, and covers every AWS region.

CloudSploit is designed to operate alongside these AWS services as part of a complete security toolset, and helps ensure that when you do enable services like CloudTrail, you do so in a secure fashion (by enabling log encryption and file validation, for example).

See more at https://cloudsploit.com/compare

CloudSiksha: How does CloudSploit work in securing infrastructure?

Matt: CloudSploit has two main components. First, it connects to your account via a cross-account IAM role and queries the AWS APIs to obtain metadata about the configuration of resources in your account. It uses that data to detect potential security risks based on best practices, industry standards, and in-house and community-provided standards. For example, CloudSploit can tell you if your account lacks a secure password policy, if your RDS databases are not encrypted, or your ELBs are using insecure cipher suites (plus over 80 other checks). These results are compiled into scan reports at predefined intervals and sent to your email or any of our third-party integrations.

The second component of CloudSploit is called Events. Events is a relatively new service that we introduced to continually monitor all administrative API calls made in your AWS account for potentially malicious activity. Within 5 seconds of an event occurring, CloudSploit can make a security threat prediction and trigger an alert. The Events service is monitoring for unknown IP addresses accessing your account, activity in unused regions, high-risk API calls, modifications to security settings and over 100 other data points.

All of this information is delivered to your account to help you take action and improve the security of your AWS environment.

CloudSiksha: What are the dangers of providing you with a user account in AWS?

Matt: There is very little danger. CloudSploit uses a secure, third-party, cross-account IAM role to obtain temporary, read-only access to your AWS account. Even if this role information were compromised, an attacker would still not be able to gain access without also compromising CloudSploit’s AWS account resources. The information we obtain and store is also very limited in nature – metadata about the resources but never the contents of those resources.

CloudSiksha: Can you tell me something about how your software has been used by companies and what value they are seeing?

Matt: Companies using our product have integrated it in a number of unique ways. For example, using our APIs, a number of our users have built integrations into their Jenkins-based pipelines, allowing them to scan for security risks when making changes to their accounts, shortening the feedback loop between changes being made and security issues being detected. Other companies have made CloudSploit the central dashboard for all of their engineering teams across every business unit to ensure that security practices are being implemented across the entire company.

Individual developers and pre-revenue projects tend to use our Free option, and are happy with the value it provides. 20% of these users move on to a paid plan in order to have the scans and remediation advice occur automatically.

Medium-sized teams prefer the Plus account in order to connect CloudSploit with third-party plug-ins such as email, SNS, Slack, and OpsGenie.

Advanced users, those who like to automate everything in their CI/CD workflow, as well as larger enterprises prefer the Premium plan for its access to APIs and all of our various features and maximum retention limits.

CloudSiksha: I see you have multiple options with varying payments. Have any of your clients shifted from one tier to another? What was their reason for upgrading to a higher tier?

Matt: Absolutely. Individual developers give the Free account a try and love the results. For many, it’s a “no brainer” to pay $8/month for automated scanning and alerts containing remediation advice. The biggest drivers of clients moving to higher-tier plans are a need for custom plugins, increased scan intervals, and longer data retention times.

CloudSiksha: What more can we expect to see from CloudSploit?

Matt: Expect to see a stronger focus on compliance. Besides the 80+ plugins and tests that we currently have, we are working to expand our footprint for more compliance-based best practices. In addition, we are launching a new strategy to get information sooner and react to it faster than any competing AWS security and compliance monitoring tool. Amazon released CloudWatch Events in January and a month later we had already taken advantage of those features. We plan to continue to enhance this Events integration, delivering ever more useful results to our users.

You can check out CloudSploit here

Disclosure: The links given here are affiliate links.

AWS Partner Summit: Customer comes first


I attended the AWS Partner Summit at Leela Palace, Bangalore on 29th April 2015. It was a well-organized meet and the hall was full. There were stalls by sponsors outside the hall, and we could interact with Amazon’s Solution Architects, which was a nice touch. I was able to talk to them and get some of my doubts clarified.

There were quite a few talks, but the most impressive ones for me were the panel discussion in the morning and the keynote address by Terry Wise (did I get the name right?). The panel had AWS customers and consulting partners as panelists. It gave a good idea of how AWS is being used in India and what the opportunities for consultants in this field are. Each panelist spoke about how they came to AWS, what benefits they are deriving, and also addressed important issues like security.

The keynote by Terry Wise was very well done. It was concise and at the same time covered a lot. The two important takeaways for me personally were about customer success and not being afraid to fail. When I joined the industry quite some time back, the buzzword was customer satisfaction. It later changed to customer delight. Customer delight was not enough either, and the buzzword transformed into customer success. I remember when this buzzword hit us at my previous company: it led to me preparing a deck for my CEO to deliver on this topic to project managers and other mid-level managers. In Amazon’s case, the statement which made a great impact was not ‘We succeed when the customer succeeds’ but ‘We succeed only when the customer succeeds’, with the ‘only’ underlined. That does change everything, doesn’t it? To be fair to Amazon, they have been following this diktat: improving their services, introducing new services and cutting costs. (From my personal experience I can say that their team in India is also very hungry; they helped us land our first consulting deal.) I believe that in the space we are in (competency development and consulting) we cannot afford not to help our customers succeed.

The other message, about not being afraid to fail, is also an important one. The key point here is that this culture comes top down. When the leadership is scared to fail, the people below become risk averse, or if they are willing to take a risk, they get punished for failing. I have seen this sense of insecurity and unwillingness to take initiative amongst mid-level managers in more than one company, and you can trace it directly to the leadership of the company.

I do hope, for the sake of other companies, that Amazon succeeds in its endeavor. Going by their recent $5B announcement, they are on their way. More power to them.

I will conclude with a trivial request: next time we have such a summit, can we have coffee without sugar, please!!