CloudSiksha completes two years


It gives me great pleasure to share with you that CloudSiksha completes two years as of today. It has been a fascinating journey so far. We have trained close to 900 people in these two years for various companies, ranging from small startups all the way to huge IT giants. We have trained people in AWS, SoftLayer, DevOps, Chef, Puppet, Storage and MongoDB. I take this opportunity to thank all the participants, friends and partners who have made this journey fruitful.

As you know, any company can survive only if it keeps running harder and takes calculated risks. Going forward, we wish to offer more training via the video training mode, which will enable you to study at your own pace, and we will also offer blended learning. We also plan to be a platform for various SMEs to deliver their training. We will remain focused on Cloud, Big Data and DevOps and want to make CloudSiksha’s online training courses best in class. I know the plan is ambitious, but then we can’t go far without ambition, can we?

Today, as a gift to those who are very new to Cloud, we have uploaded three videos on YouTube. These tell you how to start an EC2 instance, how to log in to a Linux EC2 instance and how to log in to a Windows EC2 instance. Here are the links:

Starting EC2 Instances:

Logging in to a Windows Instance:

Logging in to a Linux Instance:

I must confess that I have been a bit lax in updating the blog due to professional commitments but that is no excuse. I will ensure that this blog gets updated once every fortnight from now on. Do follow this blog in your favorite reader to get updated regularly.

We will also be starting a newsletter soon. We used to send out a newsletter, but we stopped it because we wanted a ‘No Spam’ policy in place and we realized we were spamming people. We will now take people’s approval before we send the newsletter. This will be a monthly newsletter. If you want to subscribe to it, kindly send a mail with ‘Subscribe’ as the subject.

Looking forward to interacting with all of you on a regular basis.

Watching your AWS Bill


I have heard more than one story about how people were using AWS and suddenly one day they got a hefty bill. I too have had this experience. The bill was not hefty, but for a startup like mine even 20 or 30 dollars is an unwanted expenditure.

Why does this happen? Is this due to a lack of knowledge of how AWS works and its billing schemes? No, a lot of it happens due to lethargy. So the first battle to be fought is with lethargy. Being active is not enough; you must also know how you may lose money unintentionally. I am listing some of my experiences here.

I am assuming you are a small or medium company and you do not want to purchase any additional software to manage your AWS infrastructure; you are managing it from the console. Here are the things you must do / look out for in order not to overspend.

1. Check the projected bill regularly: The best and simplest way to avoid unwanted charges is to check the projected bill on a regular basis. In your billing section you will find a projection for the month. Check if it is within the limit that you expect. If not, dig down to see where the problem lies.

2. Set an alarm: You can set a billing alarm to alert you when the bill crosses a certain value. I think Amazon allows you to set this limit only once, so use it wisely, and you will be alerted in case the bill crosses that amount.

3. Stopping EC2 instances isn’t enough: Remember that when you stop your instance, only the billing for the instance stops. Your EBS is still billed. That means if you have EBS volumes whose total size is more than what your free tier permits, you will be billed for those EBS volumes even though the instances to which they are connected are stopped.

4. Check the regions: A couple of times I had terminated my instances and, seeing no running instances, I was satisfied. Yet I got a bill. This was because instances were running in other regions and I hadn’t shut them off. Remember that what you see on the dashboard pertains to one single region. So religiously check all regions regularly, else you will end up paying a decent sum to Amazon.

5. Release Elastic IPs: Elastic IPs cost nothing when they are attached to a running instance. In case you terminate an instance which has an Elastic IP attached, ensure you also release the Elastic IP. Else you will be charged for an Elastic IP which is not in use.

6. Delete Auto Scaling groups: This happened to me once. I had terminated all instances and then logged off. What I had not realized is that I had an Auto Scaling group running with the minimum number of instances set to 2. So after I logged off, Auto Scaling did its job: it started two instances and I ended up paying for them. So always check if you have any Auto Scaling groups and whether the instances you are terminating belong to one.

7. Delete the Elastic Load Balancer: Ensure you delete the ELB as well when you delete the instances attached to it. Else you will be charged for the ELB.

8. Understand what is free and what is not: I should probably have put this up as the first point. It is very important that we understand which services are free and which are paid. You must also understand the limits of the free tier. This will go a long way in ensuring that you do not pay anything in excess.

Large corporates will have IT teams which monitor for wastage. People like me, who run small companies, may not have the bandwidth to keep a continuous eye on the status of the infrastructure. So ensure that you scan for all the things I mentioned above whenever you log in to your AWS console. Only by discarding your lethargy will you be able to ensure you don’t waste money.
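The checks above lend themselves to a small script you could run on each login. Below is a minimal sketch of the idea in Python. In real use you would fetch the data per region with boto3 (`describe_instances`, `describe_addresses` and so on); here the data is passed in as plain dictionaries so the flagging logic is easy to follow, and all the field names are simplified assumptions rather than actual AWS API shapes.

```python
def find_waste(regions):
    """Flag likely sources of unwanted AWS charges.

    regions maps a region name to simplified resource lists, e.g.
    {"us-east-1": {"instances": [...], "volumes": [...],
                   "addresses": [...], "asg_min_sizes": [...]}}.
    Returns a list of human-readable warnings.
    """
    warnings = []
    for region, res in regions.items():
        for inst in res.get("instances", []):
            # Running instances bill continuously.
            if inst["state"] == "running":
                warnings.append(f"{region}: instance {inst['id']} still running")
        for vol in res.get("volumes", []):
            # Stopped instances no longer bill, but their EBS volumes do,
            # as do detached ("available") volumes.
            if vol["state"] == "available" or vol.get("attached_to_stopped"):
                warnings.append(f"{region}: EBS volume {vol['id']} still billed")
        for addr in res.get("addresses", []):
            # An Elastic IP not associated with a running instance is charged.
            if addr.get("instance_id") is None:
                warnings.append(f"{region}: unassociated Elastic IP {addr['ip']}")
        for min_size in res.get("asg_min_sizes", []):
            # An Auto Scaling group with min size > 0 will relaunch instances
            # even after you terminate them by hand.
            if min_size > 0:
                warnings.append(
                    f"{region}: Auto Scaling group keeps {min_size} instance(s) up")
    return warnings
```

Note that the loop is per region on purpose: as point 4 says, the console dashboard shows only one region at a time, so any automated scan has to iterate over all of them.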

If you have any such experience, let me know. I will add the learning to this post.

Dedicated Hosts from Amazon


Amazon AWS recently announced the availability of Dedicated Hosts for users. This means that you can order a dedicated host for yourself and run your VMs on this host. Amazon says, “Dedicated Hosts provide you with visibility into the number of sockets and physical cores that are available so that you can obtain and use software licenses that are a good match for the actual hardware.” You can read all the technical details of how to order a dedicated host and how to place your instance on it at this blog:

In the case of a Dedicated Host, billing starts as soon as you are provisioned a Dedicated Host. The billing doesn’t depend on how many instances you are running on it. You can check the pricing of dedicated hosts here:

This is a good move by Amazon, and I am sure this will slowly lead to bare metal provisioning being available as well in due course. The reason I say this is that IBM SoftLayer has bare metal and it is their USP. Bare metal offers enterprises a lot of control and also ensures that compliance and performance requirements are taken care of. So if you want to build your own data center within the public cloud infrastructure, using bare metal could be preferable for certain use cases. IBM SoftLayer gives that flexibility to its users: it has both dedicated hosts and bare metal. Amazon has caught up on the dedicated hosts path. I think in future they may get to the bare metal part.

In the case of bare metal you ‘own’ the server, in the sense that you can control the server fully by using, say, IPMI. You get a KVM for your use, and then you decide whether to use it as a standalone machine or to load any hypervisor that you want. Since you completely own the server, many of your compliance headaches are solved.

In due course of time I think every cloud provider will be pushed to offer bare metal servers. Of course, the main value proposition of the cloud is that it takes the management headache away from you and leaves you free to concentrate on your product or service. Slowly everyone is realizing that this dream of No-IT is not a possibility, since there are multiple reasons why an Enterprise, especially a large one, needs to have control over its infrastructure. For such large Enterprises, it is not No-IT but rather the flexibility and elasticity of the cloud which will be the main value proposition. The Dedicated Hosts offering from AWS tells us that this is indeed true.

CloudSiksha’s First Anniversary


I am delighted to share with you that CloudSiksha completed one year of its existence on Oct 28th, 2015. It has been a great year so far, with a lot of initiatives taken which will yield results in the coming years.

We started our first course in the month of December 2014. The first two courses we did were related to Storage. N R Ramesh, Saravanakumar, Ratnasagar, as well as my former colleague Sarath, were the initial participants. My sincere thanks to them. We started the Cloud related courses in Jan 2015. Since then it has been majorly Cloud related work that we have been doing.

We did our first online course with participants from London. This was the MongoDB course, which was conducted by Maniappan. Later I did an AWS Solution Architect course for participants from the US. From then on we have steadily been doing online courses for participants from the US, Australia, Mumbai, Delhi, Hyderabad etc.

We also expanded into the corporate world. We did courses for Accenture, TechMahindra, HCL, Sonata, RazorThink and Adobe. We also had participants from companies like HP, IBM etc. in our courses.

From Storage and AWS, we expanded to Puppet, Chef, Python and IBM’s SoftLayer. Within AWS itself, we have been conducting courses on Solution Architect, SysOps and Development. We will be venturing into DevOps soon.

I got myself certified as an AWS Solution Architect – Associate. I am also glad to report that quite a few of the participants of our classes passed this exam.

Additionally we have now partnered with other companies to develop online content. Python based content is under development. Similarly AWS based content is also under development. We will publish all details of this once the development is complete. In the coming year we will be focusing a lot on high quality online content.

At this time I would like to thank Maniappan and Sarath, who took the initial classes. I would also like to thank Ramesh Murthy and his team at StridesIT, who have supported us both in terms of customer acquisition and in fulfilling our infrastructure requirements. Kavirajan designed my website and I have to thank him for the great job he did. I also wish to thank all my other partners on this occasion.

My sincere thanks to all participants of our courses. We would not have grown this much were it not for your active participation and support.

Hoping that we will scale greater heights in the coming years.


You succeed if your Eco-System succeeds


In my former company I prepared a deck for my CEO for a talk which went ‘you succeed only if your client succeeds’. The basic idea was that the days of customer satisfaction, customer delight, customer ecstasy and similar synonyms were no longer applicable. It was clear that as a service provider we would win only if our client wins. If the clients are satisfied with your work, but that work doesn’t lead to the clients succeeding in their business, eventually you will go out of business. Achieving this sort of synergy requires a high level of trust and a deep understanding of the client’s business.

That was from a service provider angle, where the service we were providing was more about offshoring the client’s operations and ensuring cost reductions. Providers like Amazon face a different type of challenge. The first step is to ensure that those who are direct customers succeed by using Amazon’s cloud services. The second step is to accept that Amazon alone cannot provide all the services that a client needs, and to ensure an ecosystem is built around Amazon so that clients have a lot more choice. And all those choices have Amazon as their underlying layer.

Very often when I am taking an Amazon AWS class I get asked about features that participants would love to see in Amazon. While some of them will eventually be developed by Amazon and provided to the users, we must understand that even a company like Amazon will have a limit on resources when it comes to development. This is where building an eco-system helps, as it allows other startups with agility to build tools and software which can be of use to the Enterprise. Amazon does a good job with their documentation (probably the most extensive and the best documentation I have seen on the web) and it becomes easy for startups to develop applications / software based on Amazon APIs. Once such tools and software proliferate, more and more customers will jump on to the Amazon bandwagon.

Recently I was talking to the CTO of Kumolus, Michael Salleo, about their product. I had seen their product during the AWS Conference and it was impressive. You can have a look at it here: It does quite a few things which many Amazon users want. You can see that you get a very good view of how your finances are spent, and a lot more management and deployment capabilities are present in the software.

That was just one example. Here is a link to a set of slides which talks about various tools / products which use AWS in one way or another.

Hot Products from Amazon re:Invent 2014

They range from management software, security software, big data analytics, backup & DR and more. This gives an idea of the sort of products being built using cloud technologies. This is what will ensure that Amazon, Azure and other cloud providers win in the longer run. In other words, you have to put in effort and money to help those who depend on you and ensure they succeed, so that in the long run you succeed.


AWS Enterprise Summit 2015 Bangalore


I attended the AWS Enterprise Summit held at Ritz Carlton in Bangalore. Due to prior engagements I was not able to attend the second half of the summit. Here are some observations about the summit.

1. Amazon will be coming to India soon. They are building infrastructure in India and they will have multiple Availability Zones in India by 2016. I think there was a news article on this a few days back. This news was confirmed in this summit.

2. Hybrid Cloud, or shall I say On Premise Data Center + Cloud, will be the way Enterprises go in the future. This is what many in the panel discussion said. It is difficult, if not impossible, for large enterprises to ignore their legacy systems and hardware, which are very difficult to move to the cloud. Hence the legacy hardware and software will exist on-premise and newer applications will run from the cloud. This co-existence will remain a reality for quite some time.

3. A Gartner quadrant diagram was displayed. It was stated that the compute power of Amazon was 10X more than that of all the other companies in the same quadrant put together. That is quite impressive and tells us about the scale of Amazon.

4. The partner showcase was impressive, with some innovative products on display. The partner exhibition gave an idea of how the eco-system around cloud is developing and how Amazon’s growth is spawning a lot of innovation from other companies, thus ensuring more companies grow in the Cloud services area.

5. There were companies which are into consulting, companies which are into disaster recovery, companies which are into products, and more, all of them partnering with Amazon. I was particularly impressed with the product from ‘Kumolus’. The interface was good and very easy to operate. You can check out their product here:

6. I was talking to UmaShankar, VP, Delivery for Cloud Kinetics. We were discussing how Cloud has now ushered in a need for multi-faceted personalities in the Admin area. In order to be a good Solution Architect for Cloud based services, it is not enough if the person is only a Server Admin or Network Admin or Storage Admin. She / he has to be all of these. So to all the admins out there: if you want to move to cloud, expand your skills.

7. Some general observations:

a) The crowd was impressive. It gives an idea of how much interest exists for Cloud; its march in India is inevitable.

b) The hotel was small for this crowd size

c) The percentage of women overall was very low. I have no idea why this is so.

d) It was good to see my friend and former colleague KKV present a case study on behalf of Azim Premji Foundation. Couldn’t get to meet him though.

e) Met another colleague, Varada, who is now with ‘ifruid labs’.

SNIA Storage Developer Conference 2015

This will be a small note.

I know I am a bit late here but thought I would anyway let you know in case you have not seen this.

SNIA is holding its Annual Storage Developer Conference India at Royal Orchid Road, HAL Airport Road, Bangalore on May 29th. Unfortunately I have a prior commitment on that day and hence may miss attending the conference.

The home page of the conference:

The Agenda of the conference:

The agenda is interesting, and if I were to attend the conference I would be in two minds about which tracks to attend, since all of them are interesting. Scale-out filesystems, OpenStack and Cloud are areas of my interest. As could be expected, the talks promise to be a mix of theory and practice. I am happy to note that Sundara Nagarajan (whom we fondly call SN), who is like a mentor to many of us, is also giving a talk at this conference.

The Storage-Cloud relationship seems to be covered quite extensively here. I would have loved to see more of the Storage-Hypervisor relationship being covered as well. A lot of innovation is happening in this area, with many APIs coming out which allow for tighter integration between the hypervisor and the storage array. (You would know this if you are following the developments at VMware and the storage APIs on offer there.) A company like Nutanix or VMware (with its VSAN) presenting would have been great. But then, like the Top 10 Movies of all time list, each of us will have our own preferences.

If you are working on Storage this is definitely a conference you should attend. I hope it is not too late for registration.

AWS Partner Summit : Customer comes first


I attended the AWS Partner Summit at Leela Palace, Bangalore on 29th April 2015. It was a well organized meet and the hall was full. There were stalls by sponsors outside the hall, and we could interact with Solution Architects from Amazon, which was a nice thing. I was able to talk to them and get some of my doubts clarified.

There were quite a few talks, but the most impressive ones for me were the panel discussion in the morning and the keynote address by Terry Wise (did I get the name right?). The panel discussion had AWS customers and consulting partners as the panelists. This gave people a good idea of how AWS is being used in India and what the opportunities are for consultants in this field. Each one spoke about how they came to AWS, what benefits they are deriving, and also addressed important issues like security.

The keynote by Terry Wise was very well done. It was concise and at the same time covered a lot. The two important takeaways for me personally were about customer success and not being afraid to fail. When I joined the industry quite some time back, the buzzword was customer satisfaction. It later changed to customer delight. Customer delight was just not enough, and the buzzword transformed into Customer Success. I remember the time in my previous company when this buzzword hit us; it led to me preparing a deck for my CEO to deliver on this topic to Project Managers and other mid-level managers. In the case of Amazon, the statement which made a great impact was not ‘We succeed when the customer succeeds’ but ‘We succeed only when the customer succeeds’. The ‘only’ was underlined. This does change everything, doesn’t it? To be fair to Amazon, they have been following this diktat, improving their services, introducing new services and cutting costs. (From my personal experience I can say that their team in India is also very hungry, and they helped us land our first consulting deal.) I believe that in the space we are in (Competency Development & Consulting) we cannot afford not to help our customers succeed.

The other part, not being afraid to fail, is also an important message. The importance here lies in the fact that this is a culture which comes top down. When the leadership is scared to fail, the people below become risk averse, or if they are willing to take a risk, they get punished for failing. I have seen the sense of insecurity and the unwillingness to take initiative amongst mid-level managers in more than one company, and you can directly trace it to the leadership of the company.

I do hope, for the sake of other companies, that Amazon succeeds in its endeavor. Going by their recent $5B announcement, they are on their way. More power to them.

I will conclude with a trivial request: next time we have such a summit, can we have coffee without sugar, please!


Proof of the pudding: AWS Solution Architect – Associate certification


As the saying goes, ‘The proof of the pudding is in the eating’. So I decided to check how much of the training material we have at CloudSiksha would help people in their quest to get certified as an AWS Solution Architect – Associate. I am glad to say that we do cover quite a lot of ground, but there are a few areas we still need to touch upon so that we cover almost everything you can expect in the exam.

Amazon states that this exam is split up as follows:

– Design : 60%

– Deployment / Implementation : 10%

– Data Security : 20%

– Troubleshooting: 10%

The major issue many would face with this exam is that the scope is wide. It covers lots of services that AWS provides. It is possible that you will not be using all these services, yet you must know about them, as questions on these services appear in the exam.

For example, I had questions on Simple Notification Service (SNS), Simple Queue Service (SQS), DynamoDB and Route 53. Not all of us would have used these services (I had not), and yet you need to know about them, so we need to read up on them. The best option is to read the FAQ for each service. The questions on these services, though, were not very complex; many of them you can sort out logically.

Security is a key issue for cloud, and not surprisingly AWS draws 20% of the questions from this area. I also got security related questions in troubleshooting. So the overall emphasis on security is much more than 20%.

The questions are not segregated into different areas; you just get a stream of 60 questions which you need to answer in 80 minutes. Honestly, the time is more than enough. The questions are all multiple choice. While most questions have one correct answer, there are questions which have multiple right answers. Amazon does you a favor by stating how many right answers exist, and you cannot submit until you have selected that many answers. These questions are generally the tricky ones.

Who should take the exam? I personally feel that you should have a decent understanding of what a data center is, what a 3-tier application means and what networking is all about. Not in-depth, but at least a bit more than mere awareness. If you really have no hands-on experience in managing systems or networks, you will find it tough to clear the exam. Additionally, this certification adds a lot of value for experienced people, but may not add much value if you are new to data center and server/network management.

Let me now come to the training part. I feel that it is not enough to read the documents; you need to work on the AWS infrastructure. As I mentioned earlier, you may not be able to work on some of the services, but you can definitely work on the most important services using the free tier. It will cost you nothing and it will give you a good grip on managing the infrastructure. I personally feel that you must not attempt this exam without having done some hands-on work. I saw quite a few questions which I could immediately answer because I had gone through that experience when setting up infrastructure. That is one of the reasons I was able to score 100% in troubleshooting.

Finally, the workshop we conduct at CloudSiksha is a complete hands-on workshop. It will probably cover around 70 to 75% of the questions asked and will give you a good grip on managing the infrastructure. Since this is a complete hands-on training, we do not teach services like Route 53, SQS, SNS and DynamoDB. We plan to add a one-day module which will cover these theoretical subjects and also prepare you for the exam by giving tips on how to take it. Please await the announcement, which will happen soon.

Cloud Pricing and Lock-in


Price (that is, low price) is one of the aspects of Cloud which is highlighted and seen as a major selling point. The fact that the infrastructure is managed by someone else and costs less are the main reasons why many go to the Cloud. Some do realize that as they grow, and as their resources are used all the time, things aren’t as cheap as they initially thought. My friend Ramesh, who runs StridesIT, once told me, “One startup started working on the cloud and initially they were quite happy with a small bill. As more people joined the startup and the usage of resources increased, they started feeling the pinch. You must do a thorough cost analysis initially, else you may end up paying a lot.”

Each Cloud provider has its own pricing policy, and they provide cost calculators which can be used to get a fair idea of how much you may shell out every month. They also have different costs if you are willing to commit upfront for one year or more. There was an interesting article published recently by Google which, through some calculations, showed that Google was cheaper compared to Amazon. You can read the paper here: Understanding Cloud Pricing. If you click on the ‘Estimate’ link in the article, you will be taken to the cost calculators of Amazon and Google. You will find that you need to take into consideration multiple aspects like disk usage, network usage and usage of any of their services.

You also need to understand the granularity of pricing. For example, Amazon bills you per hour for some resources. This means that even if you use a resource for 5 minutes, you will be charged for an hour. The time measured is always between when you started the system and when you stopped it. Assume a case where an instance is up for ten minutes and then you shut it down. You come back after some time and run the instance for ten minutes again. You have used the instance for 20 minutes, but you will be charged for 2 hours! Each start/stop cycle is charged, and you had two of them; each is billed a minimum of one hour.
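To make that arithmetic concrete, here is a tiny sketch of the rounding rule described above. It reflects the per-hour granularity as explained in this post; actual AWS granularity can differ by service and changes over time, so treat this as an illustration of the idea rather than a billing tool.

```python
import math

def billed_hours(session_minutes):
    """Per-hour billing: each start/stop session is rounded up
    to a whole hour, then the sessions are summed."""
    return sum(math.ceil(m / 60) for m in session_minutes)

# Two separate sessions of 10 minutes each: 20 minutes of actual use,
# but each session is rounded up to a full hour -> billed 2 hours.
print(billed_hours([10, 10]))   # 2

# One continuous 20-minute session is billed as a single hour.
print(billed_hours([20]))       # 1
```

The comparison between the two calls is the whole point: the same 20 minutes of compute costs twice as much when split across two start/stop cycles.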

An important point to note in this article is the potential lock-in that occurs if you sign up for a long term deal. This is always something which customers worry about: will the supplier not exploit us once he has locked us in? I believe there are two sides to this coin and lock-in is not as bad as it seems. There are so many companies which are, say, an EMC shop, a NetApp shop or an HP shop. It is not as if these companies have burnt their fingers because they decided to go with a single vendor.

From my experience in my earlier company, I can say with a fair degree of confidence that clients who are big and committed to the company get very good treatment. There are times when even a small escalation from such companies reaches the ears of the topmost person, and the pressure to solve their problem is enormous. Additionally, their input is also sought for newer releases. Since a committed customer is something every company wants, you can be sure they will do their best to keep the customer happy. So lock-in is not that bad, provided you have done your background check on the vendor and the future direction they would take. If they are dependable, lock-in need not be a very major factor in your choice.