Multi-Cloud for Architects: A book that I have co-authored

I am happy to announce that I am a co-author of the book “Multi-Cloud for Architects: Grow your IT business by means of a multi-cloud strategy”, published by Packt. Florian Klaffenbach and Markus Klein are the other co-authors of this book.

I was talking to my friend Bala, who works for a firm which deals with multi-cloud, and told him about this book. His immediate reaction was, “Multi-cloud is how things are going. Our company is fully into multi-cloud now. We help people migrate to AWS as well as Azure, and we maintain infrastructure on both these cloud platforms.” Another close friend of mine, Ramesh N R, who works for a cloud security firm, told me how they take infrastructure from both Google Cloud and AWS.

If you look at it carefully, you will see that this trend is here to stay. This is due to various factors. One of them is that the cloud is now a highly competitive area, and each of the providers wants to outdo the others and gain more market share. Towards this end, each of these providers gives you a lot of things for free. A cost-conscious company would like to use such offers and reduce its costs. This leads to multi-cloud in many companies.

Secondly, the number of offerings from each vendor is growing almost daily. You may find that a particular cloud provider offers a better service on, say, Big Data or AI, than another vendor, whereas a service like PaaS may be better with a different vendor. For the sake of efficiency and better productivity, you may end up choosing different services from different vendors. This again leads to a multi-cloud strategy.

Another reason we will see a multi-cloud strategy is that many companies are wary of lock-in. They may believe in cloud technology, and they may believe that one particular vendor is the best of the lot. Yet committing millions of dollars to one particular provider, without de-risking, is something many senior managers would hate to do. So, as part of a risk mitigation strategy, you will see multi-cloud in many enterprises.

Sometimes multi-cloud may happen due to existing relationships as well. Let us say you are an IBM shop and IBM wants you to migrate to IBM Cloud. At the same time, your team has found that another cloud service provider offers certain services which suit your team better. In such cases, companies end up using the cloud of their long-time partner as well as other cloud providers.

Whatever the case, multi-cloud as a strategy is gaining importance. So it is essential for architects, as well as system admins and DevOps engineers, to have a holistic idea of cloud service providers and not stick to only one.

This book, which I have co-authored, addresses multi-cloud and how it can be leveraged. The focus is mainly on AWS and Azure, and you will also learn about OpenStack offerings. The book covers one very important aspect of dealing with multi-cloud, i.e., interconnects, and has a chapter on how to interconnect different cloud solutions. I am sure this book will be of great use to architects and others who want to learn about multi-cloud.

The book is available both in physical as well as in e-book (Kindle) form. The links to buy the book:

In India: Multi-Cloud for Architects 

In US and other countries: Multi-Cloud for Architects 

Once you buy and read the book, I would request you to leave a review on Amazon. That would be a great help.

What’s new in AWS: Part 1 – Compute & Storage

Since this is the first post of this year, let me wish you a very Happy New Year 2019 even though more than half a month has already passed. Time flies, especially when you have work to do.

There have been a lot of developments on the AWS front, and given that re:Invent is held at the end of the year, the number of new services and features introduced is huge. In this post I will concentrate only on the new happenings in the Compute, Storage, Networking and Database services. This will be a two-part post; in the first part I will talk about Compute and Storage.

  1. Hibernating EC2 instances: Until now, we were only able to stop or terminate EC2 instances. AWS now gives us the ability to hibernate an instance. As you know, hibernation saves the system state, and when you start the instance again it runs from where you left off, very similar to closing your laptop without switching off the system. Billing for the instance stops once it is in the hibernated state (of course, you will pay for the attached EBS disks and any attached EIP). In other words, billing-wise it is similar to a stopped instance.
  2. Increase in max IOPS for Provisioned IOPS EBS disks: The maximum IOPS that can be requested of a Provisioned IOPS disk has now doubled, from 32,000 IOPS to 64,000 IOPS. While this is definitely good for performance, keep in mind that this is the performance of a single disk. The maximum IOPS an instance can support is still 80,000. If you want to use multiple disks to build a RAID, keep the instance limit in mind. The throughput of these disks is now a maximum of 1,000 MB/s, with the instance maximum throughput being 1,750 MB/s.
  3. FSx: AWS already has EFS, the Elastic File System, which is a shared file system. EFS, though, has a limitation: you can mount it only from a Linux instance, since EFS uses NFS v4.1. You cannot mount it from a Windows instance. This limitation is now overcome by the FSx file system. FSx allows you to create file shares for Windows as well as Lustre. FSx for Windows integrates with Microsoft AD as well.
  4. AWS Transfer for SFTP: In many cases you store files in order to share them with your clients via SFTP. Generally, you have to set up the SFTP server, store your files on it and maintain the server. With AWS Transfer for SFTP, AWS sets up the SFTP server for you and you store your files durably in S3. Additionally, you can point your current FTP domain name to the AWS endpoint using Route 53.
  5. S3 Intelligent-Tiering: One of the major issues for enterprises is managing data and ensuring that the cost of storing it is minimized. Many times we resort to lifecycle rules to ensure cost optimization. Major enterprise storage arrays have intelligent tiering, wherein data is moved between various classes of storage. Now this is available in AWS. S3 Intelligent-Tiering moves your objects between the Standard and Standard – Infrequent Access tiers. So if an object is not accessed for 30 days, it is automatically moved to the Infrequent Access tier, saving you costs.
  6. Amazon S3 Object Lock: Until now, we had the Vault Lock feature in Glacier, which would convert a vault to WORM (Write Once Read Many). AWS has now extended this to objects in S3, so we have a lock at the object level. Once an object is locked for a certain period of time, you can neither overwrite it nor delete it.
  7. S3 Batch Operations: This is mainly aimed at developers and automation engineers. Earlier, we had to apply changes to each object separately. Now you can apply certain actions to a whole set of objects at the same time. Changes that would earlier have taken days or even a month can now happen in hours.
  8. S3 Glacier Deep Archive: This is what AWS says about S3 Glacier Deep Archive, and it is self-explanatory: “This new storage class for Amazon Simple Storage Service (S3) is designed for long-term data archival and is the lowest cost storage from any cloud provider. Priced from just $0.00099/GB-mo (less than one-tenth of one cent, or $1.01 per TB-mo), the cost is comparable to tape archival services. Data can be retrieved in 12 hours or less, and there will also be a bulk retrieval option that will allow you to inexpensively retrieve even petabytes of data within 48 hours.”
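The volume and instance limits in point 2 interact in a way that is easy to get wrong when planning a RAID set. Here is a minimal Python sketch of the arithmetic, using the limits quoted above (the function name is my own):

```python
INSTANCE_MAX_IOPS = 80_000   # per-instance ceiling mentioned in point 2
VOLUME_MAX_IOPS = 64_000     # new per-volume Provisioned IOPS limit

def usable_raid_iops(num_volumes, iops_per_volume):
    """Aggregate IOPS of a RAID-0 set, capped by the instance limit."""
    if iops_per_volume > VOLUME_MAX_IOPS:
        raise ValueError("exceeds the 64,000 IOPS per-volume limit")
    return min(num_volumes * iops_per_volume, INSTANCE_MAX_IOPS)

# Two fully provisioned volumes already hit the instance ceiling:
print(usable_raid_iops(2, 64_000))  # → 80000, not 128000
```

In other words, provisioning more than 80,000 aggregate IOPS on one instance buys you nothing; the instance limit is the binding constraint.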

In the next part, we will discuss new items in Networking, Database and Security.

How do you keep up with technological advances?

This is a question I get asked almost every day. This post tries to answer this question and give you an idea of what tools/sites I use to keep myself updated.

One of the main challenges all of us face today is the pace at which technology is changing: new features get added, new software suddenly picks up speed and new architecture patterns come up. How do you deal with this? The single answer I have is: with a lot of hard work and discipline. There are no shortcuts here.

I am going to give you an idea of the sites I follow and the tools I use to keep myself updated. First, I am more of a reading person than a watching/listening person, in the sense that I would rather read articles to update my knowledge than listen to podcasts or watch videos. So what I am going to suggest are links to blogs and articles, which means you need to read. If you are a podcast kind of person, then I don’t have much for you as of now, except for a couple of links. Maybe some time later.

The tools that I use to get information are Feedly and Twitter. I used to be a great fan of Google Reader and felt bad when Google shut it down. After that I shifted to Feedly, which I find quite nice. Below are some of the sites I follow using Feedly:

First, the AWS-based blogs:


Google Cloud Platform


Another way to keep yourself updated is to subscribe to some newsletters. The ones I subscribe to are the following:

  • DZone: They have newsletters for various subjects: Java, DevOps, Containers and so on. They give good information, and you also get some nice free e-books. I would recommend signing up for their newsletter. The website is here:
  • Thorn Tech: If you are really looking for articles which talk about actual implementations on AWS, ThornTech is a good site to follow. Subscribe to their newsletter. I have read some very interesting and informative articles on this site.
  • RightRelevance: This is another site whose newsletter gives you a lot of consolidated information. You can check them out here:

Twitter is another platform I use to keep myself updated. You should follow all the AWS, Azure and Google handles, as well as people like @jeffbarr, the AWS evangelist. Once you get onto Twitter, you will get to know the handles to follow. Alternatively, you can follow me on Twitter @suresh_csiksha and then follow whomever I follow.

As I said, it is not merely knowing these sites, following them or subscribing to the newsletters that will keep you updated. You have to make time to read the articles.

Disclosure: I have no connection with any of the websites I have quoted here. I don’t know who runs them. I have used them and benefited from them and hence I am putting them here.

Should I learn Puppet / Chef?

Last month I had put up a notice about my Puppet course on LinkedIn and there was a good response to it. (I ran two batches of the course.) While most people knew what Puppet was, there were also a significant number of people who wanted to learn DevOps, had read that Puppet is DevOps and hence wanted to learn it. When I asked them if they had managed Linux systems or worked as Linux system admins, the answer was in the negative. I advised them that Puppet may not prove very useful to their careers.

We can understand that people want to ‘learn’ the latest in technology so as to keep themselves relevant, ensure progress within their company and increase the chances of landing a desirable new job. While keeping oneself relevant must be everyone’s goal, we also need to understand which technologies to choose so that they flow naturally from what we are already doing. I have been advising people that they should never give up the experience they have gained till now. Rather, they should see how they can leverage that experience when it comes to new technologies. I have had more than one conversation with folks with 10+ years of good experience in Storage / Database / Network Ops who want to give everything up and take up a Cloud job by clearing one of the certifications.

Similar things are happening on the DevOps front. Those who want to move to DevOps have to understand DevOps first. It has two parts: the Dev part and the Ops part. The Dev part is where we look at version control and its linkage to continuous integration and deployment. Here is where you will hear about Git, Jenkins and CI/CD. The job here could be to set up the CI/CD pipeline. The Ops part is about maintaining the configurations of all servers in the infrastructure. To do this well, you must have been a system administrator earlier. You must understand how to install packages on various flavors of Linux, how to start and stop services, and how configuration files affect the installed software. Here is where you will hear about software like Puppet, Chef, Ansible and Salt. Remember, companies go for these tools mostly when they have a large number of systems and want to automate. Essentially, they want someone with good system administration knowledge to take care of this automation process. There will be some minor amount of code that needs to be written in these tools, so you should also have some shell scripting experience.
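The Ops part described above rests on one core idea: declare the desired state, and apply a change only if the system is not already in that state (idempotency). Here is a toy Python sketch of that idea, far simpler than what Puppet or Chef actually do (the function is my own invention, not part of any tool):

```python
def ensure_line(path, line):
    """Ensure a configuration line exists in a file, Puppet-style:
    describe the desired state; change the file only if needed."""
    try:
        with open(path) as f:
            if line in f.read().splitlines():
                return False          # already in desired state, no change
    except FileNotFoundError:
        pass                          # file absent: we will create it
    with open(path, "a") as f:
        f.write(line + "\n")
    return True                       # a change was applied
```

Running it twice with the same arguments changes the file only once, which is exactly the behavior a configuration management tool guarantees across hundreds of servers.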

So when you are thinking of learning Puppet or Chef or Ansible, make a note of what I have stated above. If you are already in the system administration domain and are not afraid of programming a bit, then jump into Puppet or Chef. If you are a senior person and have never been involved in system administration, then you need to make it clear to yourself why you want to take up DevOps. You must have a plan in place, since you will be spending your time and money on this. If you are relatively junior, say with one or two years’ experience, learn system administration and jump into DevOps. It is probably the right time to do so.

We do have Puppet and Chef courses at CloudSiksha, in case you want to progress in the area of DevOps.

Tools for AWS

The demand to learn AWS is quite high, and the number of people getting certified in AWS is going up steadily. Slowly, an AWS certification is becoming mandatory if you want to work in AWS or are searching for an AWS job.

Those who work on AWS projects know that knowing AWS is just one part of the job. There is a lot more involved. They have to know more than AWS in order to execute their role to perfection: a deep understanding of the data center, of the network, of the application, and an awareness of compliance and security requirements, and much more. The job of a data center architect is not easy, and the lack of tools in AWS for certain things makes it more difficult.

Take billing, for example. I recently delivered an architect course for an MNC, attended by a lot of senior architects. One of them asked me, “How do we make sense of the AWS bill? The bill runs into hundreds of pages. Yes, there is total transparency, but understanding what has been spent and why is a nightmare.” I had heard similar comments from architects of the now-defunct CSC. They also talked about the huge bills they had to contend with. One of their customers had more than 700 EC2 instances running, not to mention storage and databases, so the bill was humongous. How do you read such a bill and make sense of it? It definitely needs some tool. What are the tools available?

Here is a post by AWS on how to analyze cost using Looker and Amazon Athena.

Here is a post which talks about various free and paid tools for cost analysis.

If you would rather develop a solution of your own, here is a post on how you can do it using Google BigQuery.
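At their core, all these cost tools start with the same step: rolling the thousands of bill line items up into totals you can reason about. Here is a toy Python sketch of that step (the line items are invented; a real AWS bill has far more dimensions):

```python
from collections import defaultdict

def costs_by_service(line_items):
    """Roll individual bill line items up into a per-service total."""
    totals = defaultdict(float)
    for item in line_items:
        totals[item["service"]] += item["cost"]
    return dict(totals)

bill = [  # hypothetical line items for illustration
    {"service": "EC2", "cost": 120.50},
    {"service": "S3", "cost": 14.20},
    {"service": "EC2", "cost": 80.00},
]
print(costs_by_service(bill))  # → {'EC2': 200.5, 'S3': 14.2}
```

The real tools add time granularity, tags and accounts as extra grouping keys, but the aggregation idea is the same.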

When I teach participants about VPCs, there is a question that is almost always asked: ‘Can we get a network diagram of the VPC we created?’ It is not possible in the console and, to be honest, if you have more than one VPC with subnets in each, it is a bit of a pain to see things on the console even if you have named the VPCs and subnets in a very logical fashion. What would be helpful is a diagram of each VPC detailing the subnets within it, the route tables and the NACLs.
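Short of a real diagram, even a text outline of VPCs and their subnets helps. Here is a rough Python sketch: the rendering function is plain Python, while `fetch_topology` assumes you have boto3 installed and AWS credentials configured (it is not called here):

```python
def vpc_outline(vpcs, subnets):
    """Render VPCs and their subnets as an indented text outline."""
    lines = []
    for vpc in vpcs:
        lines.append(f"VPC {vpc['VpcId']} ({vpc['CidrBlock']})")
        for sn in subnets:
            if sn["VpcId"] == vpc["VpcId"]:
                lines.append(f"  subnet {sn['SubnetId']} ({sn['CidrBlock']})")
    return "\n".join(lines)

def fetch_topology():
    """Pull live data; requires boto3 and configured AWS credentials."""
    import boto3
    ec2 = boto3.client("ec2")
    return ec2.describe_vpcs()["Vpcs"], ec2.describe_subnets()["Subnets"]

# Against a live account you would run: print(vpc_outline(*fetch_topology()))
```

Extending the outline to route tables and NACLs is a matter of two more describe calls, but even this much beats clicking through the console.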

There are multiple tools which help both in generating a design diagram and in generating a diagram from existing infrastructure. What would be cool is a feature which can generate a CloudFormation template from your existing infrastructure in a few clicks. I browsed through a few tools, which I am listing here. Please note that I have NOT analyzed any of them and I have no idea how effective they are. I will do some testing in the coming days to find out how useful they could be. For the time being, here are some tools that you can explore.

Hava is a tool which can import your existing infrastructure as a diagram. They also claim that you can get a CloudFormation template from the infrastructure in a matter of a few steps.

CloudCraft allows you to do drawings of the infrastructure. You can also import your existing infrastructure into this tool.

LucidChart AWS Architecture Import: As the name indicates, this tool too allows you to import your AWS infrastructure and visualize it.

I am sure many of you may be using various tools. Which is your preferred tool, and why? Do let me know in the comments section.

AWS Latest Announcements

As usual, at re:Invent 2017, AWS announced a spate of new services and added new features to existing services. I will summarize some of these, concentrating on the more generic rather than the very specialized services, though I will mention a few of them.

Compute:
  1. AWS Fargate: This new service from AWS allows you to run your Docker containers without worrying about the systems that will run them. In other words, AWS will take care of setting up a cluster of instances and will run your containers on it. The cluster is maintained by AWS, leaving you free to worry about your application. Azure already has the ability to run container instances, so in this case I think AWS is catching up with Azure.
  2. Bare Metal: IBM and a few others had a bare metal offering earlier. As the name indicates, you get complete control of a server and can load the hypervisor or OS of your choice on the system. This helps you in many ways, especially in getting better performance, achieving compliance and tackling licensing issues, and you can even build a cloud of your choice within AWS! Bare Metal is still in the preview stage, but I am sure you will see it generally available soon.
  3. Hibernation of Spot Instances: Earlier, whenever the spot price rose above your bid price, AWS terminated your spot instances. So spot instances were suitable only for applications which could withstand sudden termination. Later, AWS stopped the spot instances instead of terminating them. Now spot instances go into hibernation: the state of your memory is stored on disk, and when capacity becomes available again your instance starts running from where you left off. The private IP and the Elastic IP are also preserved. This makes spot instances even more attractive to use.
  4. Elastic Container Service for Kubernetes (EKS): Many of you would know that Kubernetes is a container orchestration system. AWS earlier had only ECS (Elastic Container Service) for container orchestration; they have now given us the option of using Kubernetes as well. AWS will take care of all the infrastructure required for running Kubernetes, so that we need not worry about setting up servers and installing Kubernetes. Given that Kubernetes has a lot of traction, this is a good move from Amazon. This is now in the preview stage.

Database:
  1. Amazon Aurora Multi-Master: Now you can create more than one read/write master database. Applications can use these multiple masters in the cluster to read and write. As you can guess, the high availability of the database increases, as you can have each of the masters in a different Availability Zone.
  2. DynamoDB Global Tables: Your DynamoDB tables are automatically replicated across the regions of your choice. Earlier, if you wanted a replica of a DynamoDB table in another region, you had to set up the replication on your own. With Global Tables you no longer need to worry about it. You can immediately see how effective this will be in a DR scenario.
  3. DynamoDB Backup and Restore: AWS now allows you to back up and restore your DynamoDB tables. This helps enterprises meet regulatory requirements. AWS promises that the backup will happen very fast, irrespective of the size of the table.
  4. Amazon Neptune: Amazon has launched a graph database which it has named Amazon Neptune. If you have seen my webinar on NoSQL databases, you would know that a graph database is a type of NoSQL database. I will write a separate post on graph databases and Amazon Neptune’s features.

Networking:
  1. Inter-region VPC Peering: Earlier, you could peer two VPCs only if they were in the same region. Now Amazon allows you to peer two VPCs even if they belong to different regions. So an EC2 instance can access another EC2 instance in a peered VPC of another region using only the private IP.

Messaging:
  1. Amazon MQ: This is a managed broker service for Apache ActiveMQ. Amazon will set up ActiveMQ and maintain it. I don’t have much of an idea about ActiveMQ, as I haven’t worked on it. From what I can gather, Amazon now has two messaging solutions: its own SQS (Simple Queue Service) and Amazon MQ. Maybe Amazon MQ has more features than SQS? I will find out and let you know.

Tons more announcements were made. I have just touched on the ones that affect the AWS Solutions Architect and AWS SysOps exams. I will write about other new services and features in another post.


Storage Gateways and FUSE


The cloud brought with it cheap storage, and it also brought durability. This means that storing your data in the cloud is cost effective and you don’t need to worry about losing data. (Amazon S3, for example, gives us 11 9s of durability, which means that for all practical purposes you will never lose your data.) It is but natural that people use cloud storage services, and the cloud storage services of different vendors now store trillions and trillions of objects.

Cloud storage is based on object storage. This is different from the standard block and file storage we are used to. In the case of an object store, we fetch objects from the cloud using REST APIs. This is quite different from reading and writing a file on your disk. There are other differences as well between object stores and block/file based storage devices.

While cloud storage is cheap, users are more comfortable with a filesystem interface. Is there a way in which we can deal with cloud storage as if it were a filesystem? This would mean that the user just reads and writes files and does not need to use REST APIs to fetch or store objects. If this can be done, users will find it easier to use cloud storage, thus reducing storage costs. This is possible by using Storage Gateways.

Storage Gateways have cloud storage as their backend but expose a filesystem to the users. The users deal with files, whereas the Storage Gateway stores these files as objects in cloud storage. This background processing is transparent to the user. A Storage Gateway could expose a block device, a filesystem or a virtual tape library to the user. AWS, for example, has Storage Gateways which expose a filesystem, Storage Gateways which expose a block device (an iSCSI device) and a Storage Gateway which exposes a Virtual Tape Library (VTL). All of these use S3 as their backend to store the data.

The question that will be uppermost in your mind is: if the storage is in the cloud and you are using it as primary storage, will there not be an impact on performance? It is a very pertinent question. Accessing the cloud is definitely not as fast as accessing a disk drive in your data center. To address this, Storage Gateways have disks in them wherein they cache recently accessed files. This helps bolster the performance of the gateways. Other than AWS, Avere Systems is another company which does cloud-based NAS filers.
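The caching a gateway does can be pictured as a small LRU cache sitting in front of the slow cloud fetch. Here is a minimal Python sketch of that idea (the class name and sizes are invented; real gateways cache on local disks, not in memory):

```python
from collections import OrderedDict

class GatewayCache:
    """Tiny LRU cache standing in for a gateway's local disk cache."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
        self.cloud_fetches = 0

    def read(self, key, fetch_from_cloud):
        if key in self.store:                 # cache hit: fast local read
            self.store.move_to_end(key)
            return self.store[key]
        self.cloud_fetches += 1               # cache miss: slow cloud round trip
        data = fetch_from_cloud(key)
        self.store[key] = data
        if len(self.store) > self.capacity:   # evict least recently used file
            self.store.popitem(last=False)
        return data

cache = GatewayCache(capacity=2)
for name in ["a.txt", "b.txt", "a.txt", "a.txt"]:
    cache.read(name, lambda k: f"contents of {k}")
print(cache.cloud_fetches)  # → 2: only the first read of each file goes to the cloud
```

Repeated reads of hot files never leave the gateway, which is exactly why the cache disks bolster performance.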

Azure has now come out with a FUSE adapter for Blob storage (Blob storage is the object store of Azure). Once you install this FUSE adapter on a Linux system, you can mount a Blob container onto the system and access the files as if they are part of your filesystem. You don’t need to use the REST APIs. The advantage of this over Storage Gateways is that Storage Gateways are generally virtual appliances. For example, in the case of the AWS Storage Gateway, you need VMware on premises, because the gateway is a virtual appliance which runs on VMware ESXi. In the case of FUSE, you don’t need any additional device. Once you have the driver installed, you can start accessing object storage as normal files.

Of course, the FUSE adapter for Azure is at an early stage and hence has limitations. Not all filesystem calls have been implemented, so you need to be careful when using it.

You can check out this Azure article for more details on the FUSE adapter:

Announcing CloudSiksha Academy

I am excited to announce that we will be launching CloudSiksha Academy on 30th September 2017.

We have been listening to the needs of a myriad set of engineers: sysadmins, network admins, developers, engineering managers, senior executives and so on. Based on the feedback we received, we perceived a need for an academy which focuses on role-based courses in technical areas. Hence this initiative, CloudSiksha Academy.

Our idea is to help you perform better in your role or, if you so desire, shift to a new role. We understand that requirements differ from individual to individual. Some want to upgrade their skills to perform their jobs better and move up the ladder within the organization. Some find they are stuck in obsolete technology and want to shift to a place where exciting things are happening. Managers want to update themselves on new technologies which will have an impact on their jobs; they want a holistic view and not hands-on training. Senior executives would be looking at how the newer technologies challenge them in terms of cost, people management, process change and so on. Of course, there are people who are looking for jobs, be it college freshers or experienced folks. It is important that we address the needs of each of these constituents in a unique way. Hence you will find modules in CloudSiksha Academy tailored towards various roles, and not just towards certification.

What we also realized while talking to people is that each person has their own pace of learning and is more comfortable with a certain methodology. For example, some engineers who are already doing their job well don’t need personal instruction; they are comfortable watching videos and learning. Some are not so comfortable with that and would love to interact with an instructor and ‘attend’ a course. Others would watch the videos and then talk to an expert to clarify their doubts. Keeping this in mind, for each of the roles we will have video-based at-your-own-pace learning, blended learning and hand-holding online classes. This will allow you to choose a course based on your role, as well as the methodology you are most comfortable with.

What courses will CloudSiksha Academy offer? What roles are we envisaging? What will be the duration of each course? What will be the fee?

Wait till 30th September 2017 to get all your questions answered. I can promise you that you will have some excellent deals when CloudSiksha Academy is inaugurated. Looking forward to your kind support to make this venture a success.

Watch this space for more details.

Which Cloud is better?

In this post, I will address another question that I get asked often: “Which cloud is better?” As with many things in life, there is no single or simple answer to this question.

When you are looking to use a public cloud, you are looking at various aspects of the cloud. Some of them would include:

  • What services does the Cloud provide?
  • What will be the performance of my VMs?
  • What is the cost that I will incur?
  • How easy is it to migrate to this Cloud?
  • Will I be locked in with this vendor?

These are the basic minimum questions that arise when you are choosing a cloud provider. Against all of these, you will find that it is very difficult to do an apples-to-apples comparison between cloud providers.

Let us take performance, for example. Assuming you have the same configuration (say a 2 vCPU system with 4 GB RAM and a 500 GB hard disk) from two vendors, will the performance of your VM be the same in both places? We cannot answer this with any assurance, because the performance of a VM depends on how over-provisioned the bare metal is and also on noisy neighbors. Noisy neighbors are the other VMs running on the same bare metal as your VM; if any of them start consuming more of the resources, it can impact the performance of your VM. We do not know how a cloud provider places VMs on bare metal, and hence you cannot speak with confidence about the performance of a VM. As you would have guessed, depending on the neighbors and when they consume resources, your performance will vary.

A recent blog from Google talks about this aspect and claims that Google has the best price performance. You can read the Google blog here.

One way of guaranteeing performance would be to take a dedicated VM. This means only your VMs will run on the bare metal and no other VM will be placed on it. AWS has Dedicated Instances and SoftLayer has Virtual Private instances. As you can expect, these options give you more reliable performance, but at a higher cost.

This brings us to cost comparison. The standard question I hear is, ‘Which cloud is cheaper?’ Once again, this is not an easy question to answer; it depends on the workload you have and the services you use. Let us take a very simple case in AWS. If you are using, say, a t2.micro instance with a 100 GB disk as a web server, you cannot immediately calculate your monthly outflow. You need to have an idea of the network traffic which goes out from your instance: AWS doesn’t charge for incoming traffic, but outgoing traffic is charged, so the cost you incur will depend on the traffic. Or take the case of S3. It is not just about the cost of storing objects; the number of GET, PUT, POST, etc. requests is also charged. Hence computing the cost is not an exact science. You need to get some data before you can compute cost with any confidence.
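To make the point concrete, here is a back-of-the-envelope Python sketch. The unit prices are hypothetical placeholders, not current AWS rates; the point is only that egress traffic can dwarf the instance cost:

```python
# Hypothetical unit prices for illustration only; check current AWS pricing.
INSTANCE_PER_HOUR = 0.0116   # assumed t2.micro-like on-demand rate, $/hour
DISK_PER_GB_MONTH = 0.10     # assumed EBS rate, $/GB-month
EGRESS_PER_GB = 0.09         # assumed data-transfer-out rate, $/GB

def monthly_cost(hours, disk_gb, egress_gb):
    """Rough monthly estimate: instance + disk + outbound traffic."""
    return (hours * INSTANCE_PER_HOUR
            + disk_gb * DISK_PER_GB_MONTH
            + egress_gb * EGRESS_PER_GB)

# The same t2.micro with a 100 GB disk, at two very different traffic levels:
print(round(monthly_cost(730, 100, 50), 2))     # light traffic (50 GB out)
print(round(monthly_cost(730, 100, 2000), 2))   # heavy traffic (2 TB out)
```

With the assumed rates, the heavy-traffic bill is dominated by egress, roughly ten times the instance-plus-disk cost, which is why you cannot quote a monthly figure without traffic data.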

Most people tend to compare instance costs and decide which is cheaper. We need to understand that the instance/VM is just a small part of the larger equation. There are storage costs, I/O costs, networking costs, support costs and so on. Comparing only the VM costs does not give the big picture at all.

Migration is a topic which requires a post of its own. I will write about it in the near future.

Will I lose my job to the Cloud? : Concern of the mid level managers

In my last blog post, I had spoken about the concern of administrators about losing their jobs to the cloud. In this post I want to examine the concerns of mid-level managers with respect to their job security in the era of the cloud.

I have had many conversations with mid and senior level managers who have 10 to 20 years of experience in the industry and are now feeling insecure about their jobs because of projects slowly moving to the cloud. Their major concern is twofold: one, the IT industry itself has been harsh on mid-level managers, laying off a lot of them; second, they fear that their skills, or lack thereof, will not fetch them another job at the same level in the industry.

Many of them want to learn about the cloud in order to keep themselves relevant, but are faced with the question: what should I learn? The biggest challenge for mid-level managers is not that they cannot learn new technologies, but what the next step should be after learning them. The dilemma arises because these managers have a lot of experience, the industry hires them for that experience, and that experience is not in the new technology (the cloud, in our case).

Many ask me if they should take up a course and get certified as an AWS Architect – Associate. This can take you only so far. It will demonstrate that you are willing to learn new technologies, that you are willing to adapt to new situations and that you are aware of how the environment is changing. Along with it, you need to figure out how you can work on the cloud and bring in a perspective which a person with 3 to 4 years of experience cannot bring to the table. It is very important that you think about this carefully, because companies will not hire you for your certification; they can hire a person with 3 to 4 years of experience for that. What they will hire you for is your knowledge of development processes, your knowledge of migrations and your ability to understand the cloud in a wider enterprise context.

So what should managers do in this situation? One, choose one of the public cloud providers and try to understand the working of that cloud; get a certification if you can. Two, understand the challenges of migrating workloads to the cloud (there is a lot of literature out there) and how you would meet those challenges as a manager. Three, understand why moving to the cloud would benefit your organization and what the limitations could be. Finally, try to write articles (on your own blog, on LinkedIn and so on) to display your passion for the cloud. It will also let the world know that you are interested in the cloud and have expertise in it. The best way, of course, is to lead a project within your company (either a full-fledged project or at least a proof of concept) which is based on the cloud. Nothing gives you more leverage than working on a project.

Times are tough for mid-level managers in many organizations, but you can definitely tide over them if you consistently work hard at learning and disseminating your knowledge.