What’s New(s) in AWS : Part 2

I wrote about some of the new AWS services in Part 1. We will continue from there.

AWS Outposts

AWS Outposts can be seen as a limited version of the AWS Cloud running in your data center. The way this works is as follows: you first order an AWS Outpost, which is a rack containing many servers. Amazon delivers it to your data center and sets it up. This infrastructure can then be managed using the same user interface, CLI or APIs that you would use on the AWS Cloud. You need good network connectivity so that the Outpost can connect to its AWS Region; you can connect it to the Region using Direct Connect or a VPN. Once this is set up, you can run EC2 instances or RDS instances on these local servers. You also have the option of using local storage with your Outposts instances.
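To make the “same APIs” point concrete, here is a minimal boto3 sketch (the AMI and subnet IDs are placeholders) that launches an EC2 instance onto an Outpost. Nothing Outpost-specific is needed in the call itself; placement is decided by the subnet, which you would have created on the Outpost.

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")  # the parent Region of the Outpost

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder AMI ID
    InstanceType="m5.large",              # must be a type provisioned on your Outpost
    SubnetId="subnet-0123456789abcdef0",  # a subnet you created on the Outpost
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```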

AWS Outposts link. Watch the video at the link for a better understanding.

AWS Outposts FAQ for better understanding

EC2 Image Builder

Keeping the machine images in your organization up to date is very important and, at the same time, time consuming. Someone has to either update the images manually or write automation scripts that produce updated images.

EC2 Image Builder now allows you to update your images without performing manual steps or writing automation scripts. Image Builder provides a GUI with which an automated pipeline can be built. Once that is done, Image Builder takes care of building and testing the images. Once all tests pass, the images can be distributed to all regions.
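If you prefer the API to the console, here is a rough boto3 sketch of listing the pipelines in an account and triggering a build outside the regular schedule. The pipeline ARN is a placeholder and the camelCase parameter names reflect my reading of the imagebuilder client; treat this as a sketch, not a reference.

```python
import uuid
import boto3

ib = boto3.client("imagebuilder")

# List the image pipelines defined in this account/Region
for pipeline in ib.list_image_pipelines()["imagePipelineList"]:
    print(pipeline["name"], pipeline["arn"])

# Kick off a build run outside the regular schedule (ARN is a placeholder)
ib.start_image_pipeline_execution(
    imagePipelineArn="arn:aws:imagebuilder:us-east-1:123456789012:image-pipeline/my-pipeline",
    clientToken=str(uuid.uuid4()),  # idempotency token
)
```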

EC2 Image Builder link

ALB supports Least Outstanding Requests Algorithm

The Application Load Balancer used to support only the round robin algorithm to distribute load. Now a new algorithm, Least Outstanding Requests, can also be used. As the name implies, a new request is sent to the instance with the fewest outstanding requests. You now have a choice between the two algorithms and can use the one that suits your use case.
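Switching algorithms is a single target group attribute. A minimal boto3 sketch (the target group ARN is a placeholder):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Switch the target group from round robin to least outstanding requests
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-tg/abc123",
    Attributes=[
        {"Key": "load_balancing.algorithm.type", "Value": "least_outstanding_requests"},
    ],
)
```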

Application Load Balancer Link

AWS License Manager

When you are a large corporation with license agreements with software vendors, you need to ensure that you stick to the license terms. Say you have a license to use a particular piece of software for 100 users; you cannot overshoot this limit without buying more licenses. AWS License Manager helps in managing such licenses. Using License Manager, an administrator can create license rules which mirror the terms of your agreement, and License Manager will ensure that these rules are enforced. For example, if you have already exhausted the number of users allowed for a particular piece of software, then when another user tries to start an EC2 instance with that software, the instance may be prevented from starting, or the administrator is notified immediately of the infringement. AWS License Manager helps ensure that there are no compliance violations as far as licenses are concerned.
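As a rough sketch of what such a rule looks like in code, here is a boto3 call that creates a license configuration for a “100 vCPUs of product X” agreement and hard-enforces it (the name and counts are made up for illustration):

```python
import boto3

lm = boto3.client("license-manager")

config = lm.create_license_configuration(
    Name="product-x-vcpu-license",                 # illustrative name
    Description="100 vCPUs purchased for Product X",
    LicenseCountingType="vCPU",                    # could also be Instance, Core or Socket
    LicenseCount=100,
    LicenseCountHardLimit=True,                    # block launches that would exceed the count
)
print(config["LicenseConfigurationArn"])
```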

AWS License Manager Link

Auto Scaling Supports Instance Weighting

Until now, whenever you used Auto Scaling, it was assumed that every new instance added would contribute the same capacity as the other instances in the Auto Scaling group. With support for instance weighting, we can now define how many capacity units each instance type contributes. This gives us more flexibility in choosing instance types and helps us optimize costs, especially when we use Spot Instances.
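Here is a hedged boto3 sketch of an Auto Scaling group where a c5.xlarge counts as 4 capacity units and a c5.2xlarge as 8 (the launch template name and subnet IDs are placeholders):

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="weighted-asg",
    MinSize=0,
    MaxSize=40,
    DesiredCapacity=20,  # expressed in capacity units, not instance count
    VPCZoneIdentifier="subnet-0123456789abcdef0,subnet-0fedcba9876543210",
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "my-template",  # placeholder launch template
                "Version": "$Latest",
            },
            "Overrides": [
                {"InstanceType": "c5.xlarge", "WeightedCapacity": "4"},
                {"InstanceType": "c5.2xlarge", "WeightedCapacity": "8"},
            ],
        },
        "InstancesDistribution": {
            "OnDemandPercentageAboveBaseCapacity": 25,  # the rest comes from Spot
        },
    },
)
```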

Read this article to get more insight into how to optimize costs using this feature

EBS Direct API

The EBS direct APIs let you read the contents of an EBS snapshot, and identify the blocks that changed between two snapshots, directly over an API instead of creating a volume first. This blog post by Jeff Barr explains it very well.
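For a flavour of the API, here is a small boto3 sketch that lists the blocks of a snapshot and reads one of them directly (the snapshot ID is a placeholder):

```python
import boto3

ebs = boto3.client("ebs")
snapshot_id = "snap-0123456789abcdef0"  # placeholder snapshot ID

# List the blocks that actually hold data in the snapshot...
blocks = ebs.list_snapshot_blocks(SnapshotId=snapshot_id)

# ...and read the first one directly, without creating a volume
first = blocks["Blocks"][0]
block = ebs.get_snapshot_block(
    SnapshotId=snapshot_id,
    BlockIndex=first["BlockIndex"],
    BlockToken=first["BlockToken"],
)
print(len(block["BlockData"].read()), "bytes read from block", first["BlockIndex"])
```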

EBS Fast Snapshot Restore

The way EBS volumes are built from snapshots is as follows: when a volume is created from a snapshot, not all the data is copied from the snapshot to the EBS volume up front. Instead, when a block is first accessed, its data is ‘lazy loaded’ from the snapshot to the disk. This means there is extra latency the first time each block is accessed.

AWS now allows you to enable Fast Snapshot Restore (FSR) on a snapshot for selected Availability Zones. Volumes created from such snapshots deliver their full provisioned performance instantly, and you will not see the first-access latency.
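Enabling FSR is a per-snapshot, per-Availability-Zone setting. A minimal boto3 sketch (the snapshot ID and AZs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

ec2.enable_fast_snapshot_restores(
    AvailabilityZones=["ap-south-1a", "ap-south-1b"],
    SourceSnapshotIds=["snap-0123456789abcdef0"],  # placeholder snapshot ID
)
```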

AWS Fast Snapshot Restore link

AWS Tag Policies

Tagging is very important, especially if you have a large number of resources in AWS, and a lot of services depend on it. For example, when you want to implement a snapshot lifecycle, you group volumes using tags. With the newly introduced Tag Policies feature, you can define how tags can be used across your AWS accounts. AWS describes the benefits of tag policies thus: “Using Tag Policies, you can define tag keys, including how they should be capitalized, and their allowed values. For example, you can define the tags CostCenter and SecurityGroup where CostCenter must be ‘123’ and SecurityGroup can be ‘red-team’ or ‘blue-team’. Standardized tags enable you to confidently leverage tags for critical use cases such as cost allocation and attribute-based access control because you can ensure your resources are tagged with the right attributes.”
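Tag policies are created through AWS Organizations. Here is a rough sketch matching the CostCenter example above; the policy document structure reflects my reading of the tag policy syntax, so verify it against the documentation before using it.

```python
import json
import boto3

org = boto3.client("organizations")

# Assumed tag policy syntax: enforce the capitalization "CostCenter"
# and allow only the value "123"
tag_policy = {
    "tags": {
        "costcenter": {
            "tag_key": {"@@assign": "CostCenter"},
            "tag_value": {"@@assign": ["123"]},
        }
    }
}

org.create_policy(
    Name="standard-tags",
    Description="Standardized CostCenter tag",
    Type="TAG_POLICY",
    Content=json.dumps(tag_policy),
)
```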

AWS Tag Policy Link

As I said, there were a lot more announcements. I have chosen a small subset of services which I think will impact a large user base.

Hope you found the post useful.

What’s New(s) in AWS – Part 1

A very Happy New Year 2020 to all of you

As you all know, the largest number of new AWS announcements is made at the re:Invent conference around the first week of December, and last year was no different. It makes you dizzy trying to catch up with everything that has been introduced. Here is an attempt to give you a gist of the new products/features for some of the generic services. (I am not listing the developments in specialized services like game development, AI, graph databases and so on.) What I am going to talk about will probably impact a lot of people and may become part of future certification exams.

Amazon’s Builders Library

If you are an architect, you will be very interested in this. The Amazon Builders’ Library is basically Amazon telling us how they build and operate the cloud. In Amazon’s own words, “The Amazon Builders’ Library is a collection of living articles that describe how Amazon develops, architects, releases, and operates technology”. The articles talk about technology, how releases are planned and how operations are performed. If you want to get an idea of how the cloud is actually operated, this is the place for you.

Amazon’s Builders Library link

AWS Local Zones

We all know about AWS Regions and Availability Zones. In some cases you may want a much faster response than you can get from the Region closest to you. For example, assume most of your users are in Bangalore. Currently, you can have your resources only in the Mumbai region, and you feel the latency of connecting to Mumbai is not acceptable for your end users. In this case, having your resources in Bangalore would improve latency.

AWS Local Zones try to address this problem. AWS is now going to create Local Zones (or maybe we can call them mini-Regions) closer to large concentrations of users. These Local Zones will not have the full gamut of AWS services; they will have services like EC2, EBS, Load Balancers and VPC available to users. Each Local Zone is connected to its Region via a dedicated link, so you can establish connections between your resources in the Local Zone and the resources in the Region. Currently only the Los Angeles Local Zone is available (by invitation).

This is an important development. I am sure that there will be more Local Zones in the near future and this will have an impact on how we architect our solutions

AWS Local Zones link

S3 Access Points and RTC

Access Points

The growth in data, and consequently the need to store large amounts of common data in S3, has given rise to security issues. You now have the scenario of multiple users and applications accessing common data in S3, and you may want to control their access in a granular fashion. We can do this using bucket policies, but that can soon turn into a nightmare, since one misstep affects multiple users/applications.

AWS has now introduced S3 Access Points to address this issue. We can create multiple access points for the same bucket and grant permissions at the access point level. Each access point can then be given to a different user/application. This way, any problem in a security configuration affects only a small subset of users/applications, and we can manage our S3 permissions more effectively.
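A minimal boto3 sketch: create an access point for one application, then have that application address objects through the access point ARN instead of the bucket name (the account ID, bucket and key are placeholders; addressing by access point ARN needs a reasonably recent SDK):

```python
import boto3

s3control = boto3.client("s3control")
s3 = boto3.client("s3")

# One access point per application, each with its own policy scope
s3control.create_access_point(
    AccountId="123456789012",            # placeholder account ID
    Name="analytics-app",
    Bucket="my-shared-data-bucket",
)

# The application then reads through the access point ARN
obj = s3.get_object(
    Bucket="arn:aws:s3:ap-south-1:123456789012:accesspoint/analytics-app",
    Key="reports/2019/summary.csv",
)
```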

Read more details at this link: S3 Access Points

Replication Time Control (RTC)

You must be aware that we can set up replication for a bucket; the destination can be a bucket in the same region or in a different region. With Replication Time Control, Amazon will try to complete the replication within a specified time period, and backs it with an SLA. Here is what AWS says about this feature: “S3 Replication Time Control is designed to replicate 99.99% of objects within 15 minutes after upload, with the majority of those new objects replicated in seconds. S3 RTC is backed by an SLA with a commitment to replicate 99.9% of objects within 15 minutes during any billing month”
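RTC is switched on per replication rule. The sketch below shows roughly where it sits in a put_bucket_replication call; the bucket names and IAM role are placeholders, both buckets must already have versioning enabled, and the exact rule structure is my reading of the API, so verify it against the S3 docs.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/replication-role",  # placeholder role
        "Rules": [
            {
                "ID": "rtc-rule",
                "Priority": 1,
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    "Bucket": "arn:aws:s3:::destination-bucket",
                    # Replication Time Control plus the metrics that back the SLA
                    "ReplicationTime": {"Status": "Enabled", "Time": {"Minutes": 15}},
                    "Metrics": {"Status": "Enabled", "EventThreshold": {"Minutes": 15}},
                },
            }
        ],
    },
)
```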

More details here: S3 RTC

VPC Sharing

VPC sharing allows subnets to be shared with other AWS accounts within the same AWS Organization. So you can now have a VPC that spans two or more accounts of the same AWS Organization, which gives you more control by centralizing VPC management.
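Subnets are shared through AWS Resource Access Manager (covered below). A minimal sketch, with placeholder account IDs and subnet ARN:

```python
import boto3

ram = boto3.client("ram")

# Owner account shares one subnet of the central VPC with a participant account
ram.create_resource_share(
    name="shared-app-subnet",
    resourceArns=[
        "arn:aws:ec2:ap-south-1:111111111111:subnet/subnet-0123456789abcdef0",  # placeholder
    ],
    principals=["222222222222"],    # participant account in the same Organization
    allowExternalPrincipals=False,  # keep the share inside the Organization
)
```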

Check out the link to understand the benefits and how to share your subnets.

VPC Sharing

RDS Proxy

Let’s take a use case in the serverless domain. Assume that when your Lambda function is triggered, it establishes a database connection to your RDS instance, and that you trigger a huge number of Lambda functions in parallel. Each of these functions has to establish a database connection with your RDS instance, and when the function completes, the connection is torn down. Establishing connections takes a toll on the RDS instance, as it consumes CPU and memory, so the performance of RDS decreases if it is constantly opening and closing connections. The other kind of problem is when a huge number of connections are opened and many are kept idle so that responses can be fast when requests come in; you basically overprovision the number of connections in this case.

RDS Proxy has been introduced to solve problems like these. The RDS Proxy sits between the application and RDS and opens a pool of connections to the RDS instance. Your application now connects to the RDS Proxy, and the proxy allocates a connection from the pool. Infrequently used connections are shared across applications. RDS Proxy removes the connection open/close burden from the RDS instance, improving the efficiency of the RDS instance and thus of your application.
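From the application’s point of view nothing changes except the endpoint. A sketch using pymysql (assuming a MySQL-compatible database; the proxy endpoint and credentials are placeholders, and in practice the credentials would come from Secrets Manager):

```python
import pymysql  # third-party driver, assuming a MySQL-compatible database

# Connect to the proxy endpoint instead of the RDS instance endpoint;
# the proxy hands out a connection from its warm pool.
connection = pymysql.connect(
    host="my-proxy.proxy-abcdefghijkl.ap-south-1.rds.amazonaws.com",  # placeholder
    user="app_user",
    password="fetched-from-secrets-manager",  # placeholder credential
    database="orders",
    connect_timeout=5,
)

with connection.cursor() as cursor:
    cursor.execute("SELECT COUNT(*) FROM orders")
    print(cursor.fetchone())

connection.close()
```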

RDS Proxy Link

AWS Resource Access Manager (RAM)

Let us take a use case like this: assume you have multiple accounts in your organization. Each account builds its own VPC and wants to manage its VPN connections, so each account may end up asking for its own Transit Gateway, and your organization has to pay for multiple Transit Gateways. Amazon has now introduced the Resource Access Manager (RAM), which allows you to share resources among AWS accounts within the same organization, reducing both management effort and cost.

Currently you can share Transit Gateways, subnets, AWS License Manager configurations, and Amazon Route 53 Resolver rules with RAM.

Resource Access Manager Link

Amazon Detective

AWS provides a lot of tools for security. For example, Amazon GuardDuty looks at your logs (like VPC Flow Logs) and alerts you to possible security issues, pointing out where the problem lies. It is a very useful tool and in many cases may be sufficient. In some cases, though, you will need to dig deeper to find out how a security flaw came into existence. Amazon Detective helps you find the root cause of potential security issues. It uses machine learning, graph theory and statistical analysis to build linkages which help you get to the root cause faster. For this, Detective uses data sources such as VPC Flow Logs, AWS CloudTrail and GuardDuty.

Amazon Detective Link

IAM Access Analyzer

While services like GuardDuty and Detective tell you about security issues, a challenge all organizations face is inadvertently granting permissions to external principals. IAM Access Analyzer is a tool which tells you which of your resources grant permissions to external principals. Access Analyzer treats your account as the zone of trust. It analyzes all your policies, and if it finds any policy granting access to an external principal, it records a finding. Similarly, if a policy change grants access to an external principal, you will be notified.
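A minimal boto3 sketch: create an analyzer with the account as the zone of trust, then list the findings it has recorded (the analyzer name is arbitrary):

```python
import boto3

aa = boto3.client("accessanalyzer")

# Create an analyzer whose zone of trust is this account
analyzer = aa.create_analyzer(
    analyzerName="account-analyzer",
    type="ACCOUNT",
)

# Each finding is a resource that grants access to a principal outside the account
findings = aa.list_findings(analyzerArn=analyzer["arn"])
for finding in findings["findings"]:
    print(finding["resource"], finding["status"])
```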

IAM Access Analyzer link

Will continue this in Part 2 tomorrow

CloudSiksha Completes 5 yrs

 

I am happy to let you know that CloudSiksha completed 5 years of existence on 28th Oct 2019 and we are now in our 6th year. I couldn’t post immediately as I was tied up with work.

It has been a great journey till now and, as with any journey, it comes with its own share of hopes, disappointments, effort and discovery. One thing I was sure of when I started this journey was that the Cloud would take over the world and that I should be prepared. I bet on AWS from day one and my belief has paid off.

As anyone who has quit a job to venture out on their own knows, it is not easy. It is not easy in the beginning, nor is it easy later. It is a constant struggle filled with doubt about how the business will grow, which way the market will move and how relevant we will be to the industry. This means we have to keep learning, keep ourselves updated and communicate constantly with clients, both to let them know about our capabilities and to keep a pulse on the market.

This journey has been successful because of some excellent partners, many of them now good friends of mine. They have helped me throughout and have kept faith in CloudSiksha. By God’s grace we have been able to deliver to their satisfaction and to the satisfaction of the clients.

A lot of people keep asking me what it is like to go it alone, to quit a high-paying job, and whether I make as much money as I did in my job. My advice to them has always been the same: moving out of a job and starting on your own is not just about money. It is about planning. Do you have a plan in mind? Do you know why you want to quit and what you want to start? The amount of money you make will depend on your plan and how you execute it. Of course, a lot depends on market conditions, your efforts and some luck. The important thing about going it alone is that you decide how much you want to earn; once that is decided, you plan accordingly. So a question like “will I make the same amount as my corporate salary” doesn’t make sense. You can make half or less of what you made earlier as a freelancer and be happy about it, or you can plan to build an enterprise where you will eventually earn double what you were earning. Everything depends on your dreams, your plan and your efforts.

We started as a company teaching the AWS Cloud. Later we expanded to teach tools like Chef and Puppet, and we were also involved in teaching IBM Cloud for a year. We then expanded our courses to include Docker and Kubernetes. In the coming year we plan to enter the very big world of Big Data and, if possible, Machine Learning. We want to be at the cutting edge of technology always. It takes enormous effort to get there but it is worth it.

Once again thanks to all my partners, my clients and especially to all my students. Wish you all the best and hope that our journey continues for many more years.

Multi-Cloud for Architects: A book that I have co-authored

I am happy to announce that I am a co-author of the book titled, “Multi-Cloud for Architects: Grow your IT business by means of a multi-cloud strategy” by Packt Publications. Florian Klaffenbach and Markus Klein are the other co-authors of this book.

I was talking to my friend Bala, who works for a firm which deals with multi-cloud, and told him about this book. His immediate reaction was, “Multi cloud is how things are going. Our company is fully into multi cloud now. We help people to migrate to AWS as well as Azure and we maintain infrastructure on both these cloud platforms”. Another close friend of mine, Ramesh N R, who works for a cloud security firm, told me how they consume infrastructure from both Google Cloud and AWS.

If you look at it carefully, you will see that this trend will continue and is here to stay, due to various factors. One of them is that the Cloud is now a highly competitive area, and each provider wants to outdo the others and gain more market share. Towards this end, each of these providers gives you a lot of things for free. A very cost-conscious company would like to use such offers and reduce its costs. This leads to multi-cloud in many companies.

Secondly, the number of offerings from each vendor is growing almost daily. You may find that a particular cloud provider offers a better service in, say, Big Data or AI than another vendor, whereas a service like PaaS may be better with a different vendor. For the sake of efficiency and better productivity, you may end up choosing different services from different vendors. This again leads to a multi-cloud strategy.

Another reason we will see a multi-cloud strategy is that many companies are wary of lock-in. They may believe in cloud technology, and they may believe that one particular vendor is the best of the lot. Yet throwing millions of dollars at one particular provider, without de-risking, is something many senior managers would hate to do. So, as part of a risk mitigation strategy, you will see multi-cloud in many enterprises.

Sometimes multi-cloud may happen due to current relationships as well. Let us say you are an IBM shop and IBM wants you to migrate to IBM Cloud. At the same time your team has found that another Cloud service provider offers certain services which suit your team better. In such cases, companies would end up using the Cloud of their long time partners as well as other Cloud providers.

Whatever the case, multi-cloud as a strategy is gaining importance. So it is essential for architects, and others like system admins and DevOps engineers, to have a holistic idea of cloud service providers and not stick to only one.

This book, which I have co-authored, addresses multi-cloud and how it can be leveraged. The contents will show you that the focus is mainly on AWS and Azure; you will also learn about OpenStack offerings. The book covers one very important aspect of dealing with multi-cloud, i.e., interconnects, and has a chapter on how to interconnect different cloud solutions. I am sure this book will be of great use to architects and others who want to learn about multi-cloud.

The book is available both in physical as well as in e-book (Kindle) form. The links to buy the book:

In India: Multi-Cloud for Architects 

In US and other countries: Multi-Cloud for Architects 

Once you buy and read the book, I would request you to leave a review on Amazon. That would be of great help.

What’s new in AWS : Part 1 – Compute & Storage

Since this is the first post of this year, let me wish you a very Happy New Year 2019 even though more than half a month has already passed. Time flies, especially when you have work to do.

There have been a lot of developments on the AWS front, and given that re:Invent is held at the end of the year, the number of new services and features introduced is huge. In this post I will concentrate only on the new happenings in the Compute, Storage, Networking and Database services. This will be a two-part post; in the first part I will talk about Compute and Storage.

  1. Hibernating EC2 instances: Till now we were only able to stop or terminate EC2 instances. AWS now gives us the ability to hibernate an instance. As you know, hibernation saves the system state, and when you start the instance again it resumes from where you left off, very similar to closing your laptop lid without switching off the system. Billing for the instance stops once it is in the hibernated state (of course you will pay for the attached EBS disks and any attached EIP); in other words, billing-wise it is similar to a stopped instance (see the API sketch after this list).
  2. Increase in Max IOPS for Provisioned IOPS EBS Disks: The Max IOPS that can be requested of a Provisioned IOPS disk is now doubled. Earlier the limit was 32,000 IOPS. Now the limit is 64,000 IOPS. While this is definitely good for the performance, you also have to keep in mind that this is about the performance of a single disk. The Max IOPS that an instance can support is still 80,000 IOPS. If you want to use multiple disks to build a RAID, keep the instance limit in mind. The throughput of these disks is now at 1,000 MB/s maximum with the instance maximum throughput being 1,750 MB/s
  3. FSx: AWS already has EFS, the Elastic File System, which is a shared file system. EFS, though, has a limitation: you can mount it only from a Linux instance, since EFS uses NFS v4.1; you cannot mount it from a Windows instance. This limitation is now overcome by the FSx family. FSx lets you create file shares for Windows as well as Lustre, and FSx for Windows integrates with Microsoft AD as well.
  4. AWS Transfer for SFTP: In many cases you store files in order to share them with your clients via SFTP. Generally you have to set up the SFTP server, store your files on it and maintain the server. With AWS Transfer for SFTP, AWS sets up the SFTP server for you and you store your files durably in S3. Additionally, you can point your current FTP domain name to the AWS endpoint using Route 53.
  5. S3 Intelligent-Tiering: One of the major issues for enterprises is managing data and ensuring that the cost of storing it is minimized. Many a time we resort to lifecycle rules to ensure cost optimization. Major enterprise storage arrays have intelligent tiering, wherein data is moved between various classes of storage, and this is now available in AWS. S3 Intelligent-Tiering moves your objects between a frequent access tier and an infrequent access tier. If an object is not accessed for 30 days, it is automatically moved to the infrequent access tier, saving you costs.
  6. Amazon S3 Object Lock: Till now we had the Vault Lock feature in Glacier, which would convert a vault to WORM (Write Once Read Many). AWS has now extended this to objects in S3, so we have a lock at the object level. Once an object is locked for a certain period of time, you can neither overwrite it nor delete it.
  7. S3 Batch Operations: This is mainly aimed at developers and automation engineers. Earlier, we needed to make changes to each object separately. Now you can apply certain actions to a whole set of objects at the same time, so changes that would earlier have taken days or even a month can happen in hours.
  8. S3 Glacier Deep Archive: This is what AWS says about S3 Glacier Deep Archive, and it is self-explanatory: “This new storage class for Amazon Simple Storage Service (S3) is designed for long-term data archival and is the lowest cost storage from any cloud provider. Priced from just $0.00099/GB-mo (less than one-tenth of one cent, or $1.01 per TB-mo), the cost is comparable to tape archival services. Data can be retrieved in 12 hours or less, and there will also be a bulk retrieval option that will allow you to inexpensively retrieve even petabytes of data within 48 hours.”
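As promised in item 1, here is a minimal boto3 sketch of hibernation: it has to be enabled at launch (with an encrypted root volume large enough to hold RAM), after which a stop call can hibernate instead of doing a plain stop. The AMI ID is a placeholder.

```python
import boto3

ec2 = boto3.client("ec2")

# Hibernation must be enabled at launch time
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI with an encrypted root volume
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    HibernationOptions={"Configured": True},
)
instance_id = response["Instances"][0]["InstanceId"]

# Later, hibernate the instance instead of a plain stop
ec2.stop_instances(InstanceIds=[instance_id], Hibernate=True)
```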

In the next part, we will discuss new items in Networking, Database and Security.

How do you keep up with technological advances?

This is a question I get asked almost every day. This post tries to answer this question and give you an idea of what tools/sites I use to keep myself updated.

One of the main challenges all of us face today is the pace at which technology changes: new features get added, new software suddenly picks up speed and new architecture patterns come up. How do you deal with this? The single answer I have is: with a lot of hard work and discipline. There are no shortcuts here.

I am going to give you an idea of the sites I follow and the tools I use to keep myself updated. First, I am more of a reading person than a watching/listening person, in the sense that I would rather read articles to update my knowledge than listen to podcasts or watch videos. So what I am going to suggest are links to blogs and articles, which means you need to read. If you are a podcast kind of person, then I don’t have much for you as of now except for a couple of links. Perhaps some time later.

The tools that I use to get information are Feedly and Twitter. I used to be a great fan of Google Reader and felt bad when Google shut it down. After that I shifted to Feedly, which I find quite nice. Below are some of the sites I follow using Feedly:

First AWS based blogs

Azure

Google Cloud Platform

General

Another way to keep yourself updated is to subscribe to some newsletters. The ones I subscribe to are the following:

  • DZone: They have newsletters for various subjects: Java, DevOps, Containers and so on. They give good information and you also get some nice free e-books as well. I would recommend signing up for their newsletter. The website is here: https://dzone.com/
  • Thorn Tech: If you are really looking for articles which talk about actual AWS implementations, Thorn Tech is a good site to follow. Subscribe to their newsletter. I have read some very interesting and informative articles on this site: https://www.thorntech.com
  • RightRelevance: This is another site whose newsletter gives you a lot of consolidated information. You can check them out here: https://www.rightrelevance.com/

Twitter is another platform which I use to keep myself updated. You should follow all the AWS, Azure and Google handles, as well as people like @jeffbarr, the AWS evangelist. Once you get onto Twitter, you will get to know the handles to follow. Alternatively, you can follow me on Twitter at @suresh_csiksha and then follow whomever I follow.

As I said, merely knowing these sites, following them or subscribing to the newsletters will not keep you updated. You have to make time to read the articles.

Disclosure: I have no connection with any of the websites I have quoted here. I don’t know who runs them. I have used them and benefited from them and hence I am putting them here.

Should I learn Puppet / Chef?

Last month I put up a notice about my Puppet course on LinkedIn and there was a good response to it. (I ran two batches of the course.) While most people knew what Puppet was, there were also a significant number of people who wanted to learn DevOps, had read that Puppet is DevOps, and hence wanted to learn it. When I asked them if they had managed Linux systems or worked as Linux system admins, the answer was negative. I advised them that Puppet might not prove very useful to their careers.

We can understand that people want to ‘learn’ the latest technology to keep themselves relevant, ensure progress within their company and increase their chances of landing a desirable new job. While keeping oneself relevant must be everyone’s goal, we also need to choose technologies that flow naturally from what we are already doing. I have been advising people that they should never give up the experience they have gained; rather, they should see how they can leverage that experience when it comes to new technologies. I have had more than one conversation with folks with 10+ years of good experience in Storage / Database / Network Ops who want to give everything up and take up a Cloud job by clearing one of the certifications.

Similar things are happening on the DevOps front. Those who want to move to DevOps have to understand DevOps first. It has two parts: the Dev part and the Ops part. The Dev part is where we look at version control and its linkage to continuous integration and deployment; here you will hear about Git, Jenkins and CI/CD, and the job could be to set up the CI/CD pipeline. The Ops part is about maintaining the configurations of all servers in the infrastructure. In order to do this, you must have been a system administrator: you must know how to install packages on various flavors of Linux, how to start and stop services, and how configuration files affect the installed software. Here is where you will hear about software like Puppet, Chef, Ansible and Salt. Remember, companies go for this software mostly when they have a large number of systems and want to automate; essentially they want someone with good system administration knowledge to take care of the automation. There will be some amount of code to write in these tools, so you should also have some shell scripting experience.

So when you are thinking of learning Puppet, Chef or Ansible, keep the above in mind. If you are already in the system administration domain and are not afraid of programming a bit, then jump into Puppet or Chef. If you are a senior person and have never been involved in system administration, you need to be clear with yourself about why you want to take up DevOps; you must have a plan in place, since you will be spending your time and money on this. If you are relatively junior, say with one or two years of experience, learn system administration and jump into DevOps. It is probably the right time to do so.

We do have Puppet and Chef courses at CloudSiksha, in case you want to progress in the area of DevOps.

Tools for AWS

The demand to learn AWS is quite high and the number of people getting certified in AWS is going up steadily. Slowly having an AWS certification will become mandatory if you want to work in AWS or if you are searching for an AWS job.

Those who work on AWS projects know that knowing AWS is just one part of the job. There is a lot more involved: they need a deep understanding of the data center, the network and the application, awareness of compliance and security requirements, and much more. The job of a data center architect is not easy, and the lack of tools in AWS for certain things makes it more difficult.

Take billing, for example. I recently ran an architect course for an MNC and a lot of senior architects attended it. One of them asked me, “How do we make sense of the AWS bill? The bill runs into hundreds of pages. Yes, there is total transparency, but understanding what has been spent and why is a nightmare.” I had heard similar comments from architects of the now non-existent CSC. They were also talking about the huge bills they had to contend with: one of their customers had more than 700 EC2 instances running, not to mention storage and databases, so the bill was humongous. How do you read such a bill and make sense of it? It definitely needs a tool. What are the tools available?

Here is a post by AWS on how to analyze cost using Looker and Amazon Athena.

Here is a post which talks about various free and paid tools for the purpose of cost analysis

If you would rather develop a solution of your own, here is a post on how you can do it using Google BigQuery

When I teach participants about VPCs, there is a question that is almost always asked: ‘Can we get a network diagram of the VPC we created?’ It is not possible in the console, and to be honest, if you have more than one VPC with subnets in each, it is a bit of a pain to see things on the console even if you have named the VPCs and subnets in a very logical fashion. What would be helpful is a diagram of each VPC detailing the subnets within it, the route tables and the NACLs.

There are multiple tools which help both in creating a design diagram and in generating a diagram from existing infrastructure. What would be really cool is a feature which can generate a CloudFormation template from our existing infrastructure in a few clicks. I browsed through a few tools, which I am listing here. Please note that I have NOT analyzed any of them and have no idea how effective they are; I will do some testing in the coming days to find out how useful they could be. For the time being, here are some tools that you can explore.

Hava is a tool which can import your existing infrastructure as a diagram. They also claim that you can get a CloudFormation template from the infrastructure in a matter of a few steps.

CloudCraft allows you to do drawings of the infrastructure. You can also import your existing infrastructure into this tool.

LucidChart AWS Architecture Import: As the name indicates, this tool too allows you to import and visualize your AWS infrastructure.

I am sure many of you may be using various tools. Which is your preferred tool and why? Do let me know in the comment section

AWS Latest Announcements

As usual, at re:Invent 2017 AWS announced a spate of new services and added new features to existing ones. I will summarize some of these. I am going to concentrate on the more generic rather than the very specialized services, though I will mention a few of them.

Compute:

  1. AWS Fargate: This new service from AWS allows you to run your Docker containers without worrying about the systems that will run them. In other words, AWS takes care of setting up a cluster of instances and runs your containers on it. The cluster is maintained by AWS, leaving you free to worry about your application. Azure already has the ability to run container instances; in this case, I think AWS is catching up with Azure.
  2. Bare Metal: IBM and a few others had Bare Metal offerings earlier. As the name indicates, you get complete control of a server and can load the hypervisor or OS of your choice on it. This helps you in many ways, especially in getting better performance, achieving compliance and tackling licensing issues, and you can also build a cloud of your choice within AWS! Bare Metal is still in the preview stage, but I am sure it will be generally available soon.
  3. Hibernation of Spot Instances: Earlier, whenever the spot price rose above your bid price, AWS terminated your Spot Instances, so Spot Instances were suitable only for applications which could withstand sudden termination. Later, AWS stopped the Spot Instances instead of terminating them. Now Spot Instances can go into hibernation: the state of your memory is stored on disk, and when capacity becomes available again your instance resumes from where it left off. The private IP and the Elastic IP are also preserved. This makes Spot Instances even more attractive to use.
  4. Elastic Container Service for Kubernetes (EKS): Many of you will know that Kubernetes is a Docker orchestration service. AWS earlier had only ECS (Elastic Container Service) for Docker orchestration; it now gives us the option of using Kubernetes as well. AWS takes care of all the infrastructure required for running Kubernetes, so that we need not worry about setting up servers and installing Kubernetes. Given the traction Kubernetes has, this is a good move from Amazon. It is now in the preview stage.

Databases

  1. Amazon Aurora multi master: Now you can create more than one read/write master database. The applications can use these multiple databases in the cluster to read and write. As you can guess, the high availability of the database will increase as you can have each of the masters in a different Availability Zone
  2. DynamoDB Global Tables: In this case your DynamoDB tables are automatically replicated across regions of your choice. Earlier, if you wanted a replica of your DynamoDB table in another region, you had to set up the replication on your own; with Global Tables you no longer need to worry about it. You can immediately see how effective this will be in a DR scenario.
  3. DynamoDB Backup and Restore: AWS now allows you to back up and restore your DynamoDB tables. This is to help enterprises meet regulatory requirements. AWS promises that the backup will happen very fast irrespective of the size of the table (a minimal API sketch covering backups and global tables follows this list).
  4. AWS Neptune: Amazon has launched a graph database which it has named Neptune. If you have seen my webinar on NoSQL databases, you will know that a graph database is a type of NoSQL database. I will write a separate post on graph databases and Neptune’s features.

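Here is the minimal boto3 sketch referred to in item 3, covering an on-demand backup and the 2017-style global table setup (table and backup names are placeholders; for a global table the same table must already exist, empty and with streams enabled, in each region):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# On-demand backup of a table
dynamodb.create_backup(TableName="orders", BackupName="orders-2017-12-01")

# Replicate the table across two regions as a global table
dynamodb.create_global_table(
    GlobalTableName="orders",
    ReplicationGroup=[
        {"RegionName": "us-east-1"},
        {"RegionName": "eu-west-1"},
    ],
)
```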
Networking

  1. Inter-region VPC Peering: Earlier you could peer two VPCs only if they were in the same region. Now Amazon allows you to peer two VPCs even if they belong to different regions, so an EC2 instance can access another EC2 instance in a peered VPC of another region using only private IPs (see the sketch below).
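A minimal boto3 sketch of requesting an inter-region peering connection (the VPC IDs and account ID are placeholders); the owner of the accepter VPC still has to accept the request in its own region:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request a peering connection from a VPC in us-east-1 to a VPC in eu-west-1
ec2.create_vpc_peering_connection(
    VpcId="vpc-0123456789abcdef0",      # requester VPC (placeholder)
    PeerVpcId="vpc-0fedcba9876543210",  # accepter VPC in the other region (placeholder)
    PeerOwnerId="123456789012",         # account that owns the accepter VPC
    PeerRegion="eu-west-1",
)
# The accepter then calls accept_vpc_peering_connection in eu-west-1.
```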

Messaging

  1. Amazon MQ: This is a managed broker service for Apache ActiveMQ. Amazon will setup the ActiveMQ and maintain it. I don’t have much of an idea about ActiveMQ. I haven’t worked on it. From what I can gather, Amazon now has two messaging solutions, its own SQS (Simple Queue Service) and Amazon MQ. Maybe Amazon MQ has more features than SQS? I will find out and let you know

There are tons more announcements that were made. I have just touched on the ones that affect the AWS Solutions Architect and AWS SysOps exams. I will write more about other new services and features in another post.

 

Storage Gateways and FUSE

Image source: aws.amazon.com

The cloud brought with it cheap storage, and it also brought durability. This means that storing your data in the cloud is cost effective and you don’t need to worry about losing it. (Amazon’s S3, for example, gives us 11 nines of durability, which means that for all practical purposes you will never lose your data.) It is only natural that people use cloud storage services, and the cloud storage services of different vendors now store trillions and trillions of objects.

Cloud storage is based on object storage, which is different from the standard block and file storage we are used to. With an object store, we fetch objects from the cloud using REST APIs, which is quite different from reading and writing a file on your disk. There are other differences as well between object stores and block/file based storage devices.

While cloud storage is cheap, users are more comfortable with a filesystem interface. Is there a way to treat cloud storage as if it were a filesystem? That would mean the user just reads and writes files and does not need to use REST APIs to fetch or store objects. If this can be done, users will find it easier to use cloud storage, thereby reducing storage costs. This is possible using Storage Gateways.

Storage Gateways have cloud storage as their backend but expose a filesystem to users. The users deal with files, whereas the Storage Gateway stores these files as objects in cloud storage; this background processing is transparent to the user. A Storage Gateway can expose either a block device, a filesystem or a virtual tape library to the user. AWS, for example, has Storage Gateways which expose a filesystem, Storage Gateways which expose a block device (an iSCSI device) and a Storage Gateway which exposes a Virtual Tape Library (VTL). All of these use S3 as their backend to store the data.

The question that will be uppermost in your mind: if the storage is in the cloud and you are using it as primary storage, will there be no impact on performance? It is a very pertinent question. Accessing the cloud is definitely not as fast as accessing a disk drive in your data center. To address this, Storage Gateways have local disks on which they cache recently accessed files, which helps bolster the performance of the gateways. Other than AWS, Avere Systems is another company which builds cloud-backed NAS filers. ( http://www.averesystems.com/ )

Azure has now come out with a FUSE adapter for Blob storage (Blob storage is Azure’s object store). Once you install this FUSE adapter on a Linux system, you can mount a Blob container onto the system and access the files as if they are part of your filesystem; you don’t need to use the REST APIs. The advantage of this over Storage Gateways is that Storage Gateways are generally virtual appliances. For example, the AWS Storage Gateway requires VMware on premises because it is a virtual appliance which runs on VMware ESXi. With FUSE, you don’t need any additional appliance: once the driver is installed, you can start accessing the object storage as normal files.

Of course, Azure’s FUSE adapter is at an early stage and hence has limitations. Not all filesystem calls have been implemented, so you need to be careful when using it.

You can check up this Azure article for more details on the FUSE adapter:  https://azure.microsoft.com/en-us/blog/linux-fuse-adapter-for-blob-storage/