What’s New(s) in AWS: Part 2

I had written about some of the new AWS services in Part 1. We will continue from there.

AWS Outposts

AWS Outposts can be seen as a limited version of the AWS Cloud in your own data center. The way it works is as follows: you first order an AWS Outpost, which is a rack containing many servers. Amazon delivers this to your data center and sets it up. This infrastructure can then be managed using the same user interface, CLI or APIs that you would use on the AWS Cloud. You need good network connectivity so that the Outpost can connect to an AWS Region; you can connect it using Direct Connect or a VPN. Once this is set up, you can run EC2 instances or RDS instances on these local servers. You also have the option of using local storage with your Outposts instances.

AWS Outposts link. Watch the video in the link for a better understanding.

AWS Outposts FAQ for better understanding

AWS Image Builder

Keeping the images in your organization up to date is a very important and, at the same time, a time-consuming task. Someone has to either update the images manually or there needs to be some automation script which produces updated images.

AWS Image Builder now allows you to update your images without performing any manual steps or writing automation scripts. Image Builder provides a GUI with which an automated pipeline can be built. Once that is done, Image Builder takes care of building and testing the images. Once all tests pass, the images can be distributed to all Regions.
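
The flow Image Builder automates can be sketched as a tiny pipeline: build an image, run the tests, and distribute only when every test passes. Everything here (the build, tests and distribute callables, the AMI id) is hypothetical, just to illustrate the control flow:

```python
def run_pipeline(build, tests, distribute):
    """Minimal pipeline sketch: build an image, run every test against it,
    and distribute only when all tests pass."""
    image = build()
    if all(test(image) for test in tests):
        return distribute(image)
    return None                     # a failed test: nothing is distributed

result = run_pipeline(
    build=lambda: {"ami": "ami-0123", "patched": True},     # hypothetical AMI
    tests=[lambda img: img["patched"]],
    distribute=lambda img: f"{img['ami']} copied to all regions",
)
```

The point of the gate is that an image which fails its tests never reaches the distribution step.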

AWS Image Builder link

ALB supports Least Outstanding Requests Algorithm

AWS’s Application Load Balancer used to support only the round robin algorithm to distribute load. Now a new algorithm, Least Outstanding Requests, can also be used. As the name implies, a new request is sent to the instance which has the fewest outstanding requests. The user now has a choice between these two algorithms and can use the one which suits their use case.
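
The idea behind the algorithm fits in a few lines of Python (target names and request counts below are made up):

```python
def pick_target(outstanding):
    """Return the target with the fewest in-flight (outstanding) requests."""
    return min(outstanding, key=outstanding.get)

# Hypothetical targets -> current in-flight request counts.
outstanding = {"i-aaa": 5, "i-bbb": 2, "i-ccc": 7}
chosen = pick_target(outstanding)   # the least-loaded instance
outstanding[chosen] += 1            # the new request is now in flight there
```

Round robin would have rotated through the targets regardless of load; this variant steers traffic away from busy instances.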

Application Load Balancer Link

AWS License Manager

When you are a large corporation with license agreements with software vendors, you need to ensure that you stick to the license terms. Say you have a license to use a particular piece of software for 100 users; you cannot overshoot this limit without buying more licenses. AWS License Manager helps in managing such licenses. Using License Manager, an administrator can create license rules which mirror the terms of your agreement, and License Manager will ensure that these rules are enforced. For example, if you have already exhausted the number of users allowed for a particular piece of software, then when another user tries to start an EC2 instance with that software, the instance may be prevented from starting, or the administrator may be notified immediately of the violation. AWS License Manager helps ensure there are no non-compliances as far as licenses are concerned.
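
A rough sketch of this kind of rule enforcement, assuming a rule with a usage limit that is either "hard" (block the launch) or "soft" (allow it but notify the administrator). The rule structure here is illustrative, not the License Manager API:

```python
def check_launch(rule, in_use, requested=1):
    """Decide what happens when a launch would consume more license seats.

    rule: {"limit": int, "hard": bool} -- a hypothetical representation of
    a license rule. Hard rules block the launch; soft rules allow it but
    flag the violation.
    Returns (allowed, message).
    """
    if in_use + requested <= rule["limit"]:
        return True, "within license limit"
    if rule["hard"]:
        return False, "launch blocked: license limit exhausted"
    return True, "launch allowed, but administrator notified of violation"
```

With a 100-user rule, the 100th user launches fine, while the 101st is either blocked or flagged depending on the rule.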

AWS License Manager Link

Auto Scaling supports Instance Weighting

Until now, whenever you used Auto Scaling, it was assumed that every new instance added would contribute the same capacity as the other instances in the Auto Scaling group. With support for instance weighting, we can now define how many capacity units each instance type contributes. This gives us more flexibility in choosing various instance types and helps us optimize costs, especially when we use Spot Instances.
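
For example, with weights of one capacity unit per vCPU (an illustrative convention, not an AWS default), you can compute how much of a desired capacity a mixed fleet fulfills:

```python
def total_capacity(instances, weights):
    """Capacity units contributed by the running instances."""
    return sum(weights[t] for t in instances)

# Hypothetical weights: one capacity unit per vCPU.
weights = {"m5.xlarge": 4, "m5.2xlarge": 8, "m5.4xlarge": 16}
running = ["m5.xlarge", "m5.xlarge", "m5.2xlarge"]

desired = 20
shortfall = desired - total_capacity(running, weights)  # units still needed
```

The scaler can then fill the shortfall with whichever instance types are cheapest at the moment, which is exactly why this pairs well with Spot Instances.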

Read this article to get more insight into how to optimize costs using this feature.

EBS Direct API

The EBS direct APIs let you read the data in an EBS snapshot directly, and identify the differences between two snapshots, without first creating a volume from the snapshot. This blog post by Jeff Barr explains it very well

EBS Fast Snapshot Restore

The way EBS volumes are built from snapshots is like this: when a volume is built from a snapshot, not all the data is copied from the snapshot to the EBS volume up front. Instead, when a block is first accessed, its data is ‘lazy loaded’ from the snapshot to the disk. This means there will be latency when a block is accessed for the first time.

AWS now allows you to enable the Fast Snapshot Restore (FSR) option on snapshots. If this option is used, the volumes created from such snapshots get their full provisioned performance instantly and you will not see any first-access latency.
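
The difference between lazy loading and FSR can be modeled with a toy volume whose first read of each block pays a restore penalty (latency values are arbitrary units, not real measurements):

```python
class Volume:
    """Toy model of an EBS volume restored from a snapshot.

    With lazy loading, the first read of each block pays a penalty while
    the data is pulled from the snapshot; with fsr=True every read runs
    at full provisioned performance from the start.
    """
    def __init__(self, fsr=False):
        self.fsr = fsr
        self.loaded = set()

    def read(self, block):
        if self.fsr or block in self.loaded:
            return 1            # already on disk: fast path
        self.loaded.add(block)
        return 50               # first access: lazy loaded from the snapshot

v = Volume()
first, second = v.read(0), v.read(0)   # slow, then fast
fsr_first = Volume(fsr=True).read(0)   # fast right away
```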

AWS Fast Snapshot Restore link

AWS Tag Policies

Tagging is very important, especially if you have a large number of resources in AWS. A lot of services depend on tagging. For example, when you want to implement a snapshot lifecycle, you group volumes using tags. With the newly introduced AWS Tag Policies feature, you can define how tags can be used in your AWS account. AWS describes the benefits of Tag Policies thus: “Using Tag Policies, you can define tag keys, including how they should be capitalized, and their allowed values. For example, you can define the tags CostCenter and SecurityGroup where CostCenter must be ‘123’ and SecurityGroup can be ‘red-team’ or ‘blue-team’. Standardized tags enable you to confidently leverage tags for critical use cases such as cost allocation and attribute-based access control because you can ensure your resources are tagged with the right attributes.”
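
A minimal sketch of the kind of check a tag policy enables, using the CostCenter/SecurityGroup example from the quote. This is a local illustration of the idea, not the Tag Policies API:

```python
def validate_tags(tags, policy):
    """Check a resource's tags against a tag-policy-like spec.

    policy maps each exact (correctly capitalized) tag key to its set of
    allowed values.
    """
    problems = []
    for key, allowed in policy.items():
        if key in tags and tags[key] not in allowed:
            problems.append(f"{key}={tags[key]} not in {sorted(allowed)}")
    # keys that differ from the policy only in capitalization are flagged
    expected = {k.lower(): k for k in policy}
    for k in tags:
        want = expected.get(k.lower())
        if want and want != k:
            problems.append(f"key '{k}' should be written '{want}'")
    return problems

policy = {"CostCenter": {"123"}, "SecurityGroup": {"red-team", "blue-team"}}
```

A compliant resource yields no problems; a wrong value or a miscapitalized key is reported.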

AWS Tag Policy Link

As I had said, there were a lot more announcements made. I have just chosen a small subset of services which I think will impact a large user base.

Hope you found the post useful.

What’s New(s) in AWS – Part 1

A very Happy New Year 2020 to all of you

As you all know, most of the new AWS announcements are made at the re:Invent conference, held around the first week of December. Last year was no different. It makes you dizzy trying to catch up with all the new things that have been introduced. Here is an attempt by me to give you the gist of the new products/features for some of the generic services. (I am not listing the developments in specialized services like game development, AI, graph databases and so on.) What I am going to talk about will probably impact a lot of people and may become part of future certification exams.

Amazon’s Builders Library

If you are an architect, you will be very interested in this. Amazon’s Builders’ Library is basically Amazon telling us how they build and operate the cloud. In Amazon’s own words, “The Amazon Builders’ Library is a collection of living articles that describe how Amazon develops, architects, releases, and operates technology”. The articles talk about technology, about how releases are planned and about how operations are performed. If you want to get an idea of how the cloud is actually operated, this is the place for you.

Amazon’s Builders Library link

AWS Local Zones

We all know about AWS Regions and Availability Zones. In some cases you may want a much faster response than what you can get from the Region closest to you. For example, assume most of your users are in Bangalore. Currently you can have your resources only in the Mumbai Region, and you feel the latency of connecting to Mumbai is not acceptable for your end users. In this case, having your resources in Bangalore would help improve the latency.

AWS Local Zones try to address this problem. AWS is now going to create Local Zones (or maybe we can call them mini-Regions) closer to large concentrations of users. These Local Zones will not have the full gamut of AWS services; they will offer services like EC2, EBS, Load Balancers and VPC. Each Local Zone is connected to its Region via a dedicated link, so you can establish connections between your resources in the Local Zone and the resources in the Region. Currently only the Los Angeles Local Zone is available (on invitation).

This is an important development. I am sure that there will be more Local Zones in the near future and this will have an impact on how we architect our solutions

AWS Local Zones link

S3 Access Points and RTC

Access Points

The growth in data, and with it the need to store large amounts of common data in S3, has given rise to security issues. You now have the scenario of multiple users/applications accessing common data in S3. Assume you want to control access for these users/applications in a granular fashion. We can do this using bucket policies, but it can soon turn into a nightmare, since one misstep would affect multiple users/applications.

AWS has now introduced S3 Access Points to address this issue. We can now create multiple access points for the same bucket and grant permissions at the access point level. Each access point can then be given to a different user/application. This way, any problem in a security configuration will only affect a small subset of users/applications, allowing us to manage our S3 permissions more effectively.
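
The idea can be illustrated with a toy model: each access point for a shared bucket carries its own scope, so a mistake in one access point’s policy affects only its users. The names and scopes below are hypothetical:

```python
# Hypothetical access points for one shared bucket, each with its own
# scope (mirroring a per-access-point policy).
ACCESS_POINTS = {
    "finance-ap":   {"prefix": "finance/",   "ops": {"GetObject"}},
    "analytics-ap": {"prefix": "analytics/", "ops": {"GetObject", "PutObject"}},
}

def allowed(access_point, op, key):
    """Is this operation on this key permitted through this access point?"""
    ap = ACCESS_POINTS[access_point]
    return op in ap["ops"] and key.startswith(ap["prefix"])
```

Misconfiguring "finance-ap" cannot widen what "analytics-ap" users can do, which is the whole point of splitting one bucket policy into many access points.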

Read more details at this link: S3 Access Points

Replication Time Control (RTC)

You must be aware that we can set up replication for a bucket; the destination can be a bucket in the same Region or in a different Region. With Replication Time Control, Amazon will try to complete the replication within a specified time period, and backs this up with an SLA. Here is what AWS says about the feature: “S3 Replication Time Control is designed to replicate 99.99% of objects within 15 minutes after upload, with the majority of those new objects replicated in seconds. S3 RTC is backed by an SLA with a commitment to replicate 99.9% of objects within 15 minutes during any billing month”
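
RTC is switched on inside the bucket’s replication configuration. The fragment below sketches the shape of a replication rule with RTC enabled, as used with the put_bucket_replication API; the field names are from memory and should be verified against the S3 documentation, and the bucket ARN is a placeholder:

```python
# Sketch of one S3 replication rule with Replication Time Control enabled.
replication_rule = {
    "Status": "Enabled",
    "Filter": {"Prefix": ""},          # replicate the whole bucket
    "Destination": {
        "Bucket": "arn:aws:s3:::my-replica-bucket",     # placeholder ARN
        # RTC itself: the 15-minute target quoted above
        "ReplicationTime": {"Status": "Enabled", "Time": {"Minutes": 15}},
        # replication metrics are enabled alongside RTC
        "Metrics": {"Status": "Enabled", "EventThreshold": {"Minutes": 15}},
    },
}
```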

More details here: S3 RTC

VPC Sharing

VPC sharing allows subnets to be shared with other AWS accounts within the same AWS Organization. So you can now have a VPC spread across two or more accounts of the same AWS Organization. This gives you more control by centralizing VPC management.

Check out the link to understand the benefits and how to share your subnets.

VPC Sharing

RDS Proxy

Let’s take a use case in the serverless domain. Assume that when your Lambda function is triggered, it establishes a database connection to your RDS instance, and that you trigger a huge number of Lambda functions in parallel. Each of these functions has to establish a database connection with your RDS instance, and when a function completes, its connection is torn down. Establishing connections takes a toll on the RDS instance, as it consumes CPU/memory resources, so the performance of RDS decreases if it is constantly opening and closing connections. The other kind of problem arises when a huge number of connections are opened, with many kept idle so that responses can be fast when requests come in. In this case you are basically overprovisioning the number of connections.

RDS Proxy has been introduced to solve problems like these. The RDS Proxy sits between the application and RDS and opens a pool of connections with the RDS instance. Your application connects to the RDS Proxy, and the proxy allocates a connection from the pool. For connections which are used infrequently, RDS Proxy will share them across applications. RDS Proxy ensures that the RDS instance no longer carries the burden of constantly opening and closing connections, improving the efficiency of the RDS instance and thus of your application.
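
The core idea, connection pooling, can be sketched in a few lines. This is a toy pool to illustrate the concept, not how RDS Proxy is actually implemented:

```python
import queue

class ConnectionPool:
    """Toy connection pool: connections are opened once up front and
    handed out to callers, instead of every caller opening (and tearing
    down) its own connection to the database."""
    def __init__(self, open_conn, size):
        self._pool = queue.Queue()
        self.opened = 0                 # how many real connections exist
        for _ in range(size):
            self._pool.put(open_conn())
            self.opened += 1

    def acquire(self):
        return self._pool.get()         # blocks until a connection is free

    def release(self, conn):
        self._pool.put(conn)            # returned for the next caller

pool = ConnectionPool(open_conn=lambda: object(), size=2)
conn = pool.acquire()
pool.release(conn)
```

However many callers come and go, only `size` connections are ever opened against the database, which is the burden RDS Proxy takes off the RDS instance.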

RDS Proxy Link

AWS Resource Access Manager (RAM)

Let us take a use case like this: assume you have multiple accounts for your organization. Each account builds its own VPC and wants to manage its VPN connections, so each account may end up asking for its own Transit Gateway, and your organization has to pay for multiple Transit Gateways. Amazon has now introduced Resource Access Manager (RAM). This allows you to share resources amongst AWS accounts within the same organization, reducing both the management effort and the cost.

Currently you can share Transit Gateways, subnets, AWS License Manager configurations, and Amazon Route 53 Resolver rules with RAM.

Resource Access Manager Link

AWS Detective

AWS provides a lot of tools for security. For example, Amazon GuardDuty looks at your logs (like VPC Flow Logs) and alerts you to possible security issues, pointing out where the problem lies. This is a very useful tool, and in many cases it may be sufficient. In some cases, though, you will need to dig deeper to find the root cause of why a security flaw came into existence. AWS Detective helps you find the root cause of potential security issues. It uses machine learning, graph theory and statistical analysis to build linkages which help you get to the root cause faster. For this, AWS Detective uses various data sources such as VPC Flow Logs, AWS CloudTrail and GuardDuty.

AWS Detective Link

IAM Access Analyzer

While services like GuardDuty and Detective tell you about security vulnerabilities, a challenge all organizations face is inadvertently granting permissions to external principals. IAM Access Analyzer is a tool which tells you which of your resources grant permissions to external principals. Access Analyzer considers your account to be the zone of trust. It analyzes all your policies, and if it finds any policy giving permission to an external principal, it records a finding. Similarly, if any policy changes in a way that provides access to an external principal, you will be notified.
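
The core question Access Analyzer asks of each resource policy can be sketched like this. It is a drastically simplified illustration: the real service also reasons about wildcards inside ARNs, conditions, service principals and organization IDs. The account IDs are documentation-style placeholders:

```python
ZONE_OF_TRUST = "111122223333"   # your account id (placeholder)

def external_principals(policy):
    """Principals in a resource policy that fall outside the zone of trust."""
    findings = []
    for stmt in policy.get("Statement", []):
        principal = stmt.get("Principal", {}).get("AWS", "")
        if principal == "*" or ZONE_OF_TRUST not in principal:
            findings.append(principal)     # record a finding
    return findings

policy = {"Statement": [
    {"Effect": "Allow", "Principal": {"AWS": "arn:aws:iam::111122223333:root"}},
    {"Effect": "Allow", "Principal": {"AWS": "arn:aws:iam::999988887777:root"}},
]}
```

Only the second statement, which grants access to a different account, would surface as a finding.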

IAM Access Analyzer link

Will continue this in Part 2 tomorrow

CloudSiksha Completes 5 yrs


I am happy to let you know that CloudSiksha completed 5 years of existence on 28th Oct 2019, and we are in the 6th year now. I couldn’t post immediately as I was tied up with work.

It has been a great journey till now and, as with any journey, it comes with its own share of hopes, disappointments, effort and discovery. One thing I was sure of when I started this journey was that the Cloud would take over the world and that I should be prepared. I bet on AWS from day one and my belief has paid off.

As anyone who has quit a job and ventured into doing something of their own would know, it is not easy. It is not easy in the beginning, nor is it easy later. It is a constant struggle filled with doubts about how the business will grow, which way the market will move and how relevant we will be to the industry. This means we have to keep learning, keep ourselves updated and communicate constantly with clients, both to let them know about our capabilities and to get a pulse on the market.

This journey has been successful because of some excellent partners I have had, many of them now good friends of mine. They have helped me throughout and have kept faith in CloudSiksha. By God’s grace we have been able to deliver to their satisfaction and to the satisfaction of the clients.

A lot of people keep asking me how it is to go it alone, to quit a high-paying job, and whether I make as much money as I was making in my job. My advice to them has always been the same. Moving out of a job and starting on your own is not just about money. It is about planning. Do you have a plan in mind? Do you know why you want to quit and what you want to start? The amount of money you make will depend on your plan and how you execute it. Of course, a lot depends on market conditions, your efforts and some luck. The important thing about going it alone is that you decide how much you want to earn; once that is decided, you need to plan accordingly. So a question like “will I make the same amount as my corporate salary” doesn’t make sense. You can make half or less of what you made earlier as a freelancer and be happy about it, or you can plan to build an enterprise where you will eventually earn double what you were earning. Everything depends on your dreams, your plans and your efforts.

We started as a company teaching AWS Cloud. Later we expanded to teach tools like Chef and Puppet, and we were also involved in teaching IBM Cloud for a year. Later we expanded our courses to include Docker and Kubernetes. In the coming year we plan to enter the very big world of Big Data and, if possible, Machine Learning. We want to be at the cutting edge of technology always. It takes enormous effort to get there but it is worth the effort.

Once again thanks to all my partners, my clients and especially to all my students. Wish you all the best and hope that our journey continues for many more years.