2014-06-18

Back from the OpenStack Cloud Summit

This is a re-post of my article, originally published on The Ravello Blog.

Last month I attended the OpenStack Cloud Summit in Atlanta. It was a very interesting experience. I learned a lot of things from the people there, from the organization, and how the OpenStack cloud community works.

Without a doubt, the people who are writing the code are extremely talented. They are doing unbelievable work, making something out of nothing: with just a few lines of code they can transform your whole system into a cloud that allows you to do a huge number of things. It is absolutely amazing.


Not just for developers

Unfortunately, they don’t always recognize that clouds are not built only for developers to write the code or for users to consume the cloud resources. There is a third party in the equation that needs to be recognized – the people who operate and manage the cloud, including its infrastructure. As a result, there are issues that have not yet been addressed but need to be considered and integrated into the system.

An operator is not a developer, and a developer is not an operator. No matter how much people talk about DevOps, there is still some kind of disconnect between the two groups. I could see a huge difference between the two groups at the conference. I had expected to see a more integrated community and was surprised by the extent of the divide.

Enterprises are definitely looking to OpenStack to deploy cloud infrastructure in their organizations. There are a number of very large providers looking to deploy OpenStack as a service that they will resell to customers, but the majority are looking to use it to build their own internal clouds.


Obstacles to adoption

There are a number of obstacles that are slowing down the adoption of OpenStack.

The upgrade and migration process between versions is problematic, and any kind of change to the infrastructure affects the underlying workloads. One of the most frequently heard requests from the enterprise point of view is to make upgrades possible with zero downtime for those workloads.
Yes, the project has taken huge steps in that direction, but it is not there yet, and it will take time. Until then, people need to understand this and make their decisions accordingly. Enterprises can either build infrastructure and processes around these limitations and the need for upgrade downtime, or wait until these features are available in the regular code.

Another thing that I noticed is that there are a number of “distributions” of OpenStack. I don’t think the use of the term “distribution” is apt here. There is only one OpenStack software. Yes, you have Red Hat, which deploys Red Hat OpenStack one way, and Ubuntu, Rackspace, and others that deploy OpenStack in other ways, but the software in essence is the same.

They all take the same code from the OpenStack Foundation. These are not different distributions; they are different deployment methods. Even in those cases where additional pieces are added to the system package, in essence the software is the same. The problem is that these packaged solutions are not interchangeable. A solution can be good for one thing and not so great for another, and you cannot easily mix and match solutions, particularly across different operating systems.

There are currently no simple solutions for backing up your OpenStack cloud. Some maintain that such a solution is not needed, but this is the kind of thing that an enterprise may want, or may offer as an added benefit to its customers.


OpenStack as a disruptive technology

OpenStack is a disruptive technology. It is making people think differently and work differently; it is making IT work differently. One of the contentions raised at the Summit addressed the need to fix existing features without disrupting things. For example, there are features with problems from four versions ago that have still not been completely resolved, while in the interim some 17 other products have been introduced into OpenStack and the surrounding infrastructure. People want to see OpenStack fix what’s there before investing effort in making the product even brighter and shinier.


Emergence of an OpenStack ecosystem

There is an ecosystem evolving around OpenStack. For now it is a very small one. For example, there are people who provide monitoring solutions or deployment solutions based on OpenStack. This ecosystem is limited, though I expect it to continue to grow. (The ecosystem around VMware, for example, is much larger; it includes security, compliance, replication, backup, disaster recovery, and automation solutions.)

I praise the OpenStack Foundation for trying to commit to providing better solutions for operators. A half day of sessions was devoted to topics of interest to operators, and the ensuing discussions were enlightening. Not everybody agreed about everything, but in my opinion this was definitely a step in the right direction. The Summit was a great learning experience for me, and I definitely plan to attend future OpenStack events.

2014-06-16

OpenStack Design Guide Book Sprint

It is said that once you get a bug in you – it is hard to get rid of it. I have been asked (and I have accepted) to participate in a book sprint commissioned by the OpenStack Foundation.

What is a book sprint you may ask? I am sure this will explain it better than I can – but in short…

A Book Sprint brings together a group to produce a book in 3-5 days. There is no pre-production and the group is guided by a facilitator from zero to published book. The books produced are high quality content and are made available immediately at the end of the sprint via print-on-demand services and e-book formats.

Book Sprint

A full book in 5 days? Is that even possible? Well, yes it is. A group of Subject Matter Experts is coming together in the week of July 7th in Palo Alto (VMware is being so kind as to host us), where we will sit and bash out a Design Guide/Book that will be used as the unofficial “bible” for OpenStack Architects wherever they may be.

Here is some more information about the two previous book sprints that were completed for the OpenStack project.

OpenStack Security Guide: One Week, 38,000 Words, A Lot Of Security

OpenStack Operations Guide: One Week, One Book

The participants of this project will include:

And yours truly….

There is good participation from all parts of the globe and the OpenStack community – a diverse crowd – with different skills, experience and backgrounds.

I am really looking forward to this project – meeting such a group of interesting people, working on a new book project, but most of all – contributing back to the community – because that is what it is all about!

You can follow us all with the #openstackdesign hashtag for more information about this project.

2014-06-10

Cloud APIs and Programmatic Interfaces

This is a re-post of my article, originally published on The Ravello Blog.

When talking about functionality and how a product works, people don’t always address the question of providing interfaces into their software. As a product evolves and grows, the implementation of programmatic interfaces becomes more and more important. APIs need to take the place of users sitting at keyboards, especially for large products.

Not everything can or should be done manually. For example, you don’t want to power a thousand machines on or off from your keyboard and mouse – it would take too long. When an environment exceeds a certain scale, you need to find a more efficient way to do things. For this reason, when a product is designed and built, you want it to include programmable interfaces that allow people to access its functionality programmatically.
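
To make that concrete, here is a minimal sketch (not from the original article) of stopping every active instance in a project through the OpenStack Compute API with python-novaclient; the credentials, the controller address, and the 2014-era client constructor are assumptions for illustration.

    # Minimal sketch: stop every ACTIVE instance in a project through the
    # OpenStack Compute (Nova) API instead of clicking through a GUI.
    # Credentials and the auth URL below are placeholders.
    from novaclient import client

    nova = client.Client("2",                 # Compute API version
                         "demo",              # username (placeholder)
                         "secret",            # password (placeholder)
                         "demo-project",      # tenant/project (placeholder)
                         "http://controller:5000/v2.0")  # Keystone URL (placeholder)

    for server in nova.servers.list():
        if server.status == "ACTIVE":
            print("Stopping %s (%s)" % (server.name, server.id))
            nova.servers.stop(server)

Looping over a thousand servers this way is a few lines of Python; doing the same through a console would keep someone busy for days.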

Almost all the cloud providers understand this and have been including such APIs from Day 1. This is true of VMware, Azure, Google, or even Amazon Cloud, with its very rich API, which can be used for almost everything, through API calls or Java components, and so on.

Different APIs for Different Products

But you have different APIs for different products; this is both good and bad. It is good for the vendors, in that it locks you into their products. Migration from one API/vendor to another is problematic. At the same time, the API makes it easier for the end users to handle what would otherwise be labor-intensive tasks in a more efficient, automated fashion. You can do this automatically with a program or a script, or with some kind of automation tool.

Vendors are beginning to be receptive to the idea of opening their cloud APIs to the external community, and even to external vendors. For example, the Cisco UCS Manager API is an interface that allows you to interact with the underlying hardware, so that anything you can do through the GUI, you can also do through the API.
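
As a rough illustration of that point, here is a sketch of driving UCS Manager through its XML API with plain HTTP; the hostname and credentials are placeholders, and the element and attribute names are given from memory, so treat it as a sketch rather than a reference.

    # Sketch: log in to UCS Manager over its XML API and pull the blade
    # inventory, the same data the GUI shows. Host, credentials, and the
    # exact element/attribute names are assumptions for illustration.
    import requests
    import xml.etree.ElementTree as ET

    UCSM = "https://ucsm.example.com/nuova"   # placeholder UCS Manager endpoint

    # aaaLogin returns a session cookie used by every subsequent request
    login = requests.post(
        UCSM, data='<aaaLogin inName="admin" inPassword="secret"/>', verify=False)
    cookie = ET.fromstring(login.text).get("outCookie")

    # Query all blade objects, just as the equipment view in the GUI would
    query = ('<configResolveClass cookie="%s" classId="computeBlade" '
             'inHierarchical="false"/>' % cookie)
    blades = requests.post(UCSM, data=query, verify=False)
    print(blades.text)

    # Close the session
    requests.post(UCSM, data='<aaaLogout inCookie="%s"/>' % cookie, verify=False)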

You can use APIs to embed your solution inside another solution. For example, exposing an interface into the API enables you to embed some kind of management portal inside another solution. This allows you to expand on the functionality that was originally there and differentiate between the default functionality and the added benefit that you would like to provide.

The idea of having an API, of course, is to expose it to the customer from Day 1 and to allow the customer to use it. Nonetheless, some vendors have APIs that are not exposed to the end user to prevent people from developing or using functionality within the product in a way that is not generally available or desirable.

As time goes on, the APIs are leading to better ecosystems. For the end user, the ideal solution would be to have some abstraction layer of APIs, or a single API that is suitable for multiple vendors. This can make the end user’s life easier, eliminating the need to repeatedly recode everything. With regard to virtualization, such APIs enable you to interact with multiple vendors using the same API calls, with an abstraction layer in between. It is not easy and it does have its challenges, but there is growing interest in the concept of an abstraction layer for APIs and its implementation.
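
Apache Libcloud is one existing attempt at such an abstraction layer (the article above does not name one; this sketch, its placeholder credentials, and the keyword arguments shown are my own assumptions and may vary between libcloud versions): the calling code stays the same whether the driver underneath talks to OpenStack or to EC2.

    # Sketch of an API abstraction layer using Apache Libcloud: list_nodes()
    # works the same way no matter which provider sits behind the driver.
    # Credentials, endpoints, and some keyword names are placeholders.
    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver

    def build_driver(vendor):
        if vendor == "openstack":
            cls = get_driver(Provider.OPENSTACK)
            return cls("demo", "secret",
                       ex_force_auth_url="http://controller:5000/v2.0/tokens",
                       ex_force_auth_version="2.0_password",
                       ex_tenant_name="demo-project")
        if vendor == "ec2":
            cls = get_driver(Provider.EC2)
            return cls("ACCESS_KEY", "SECRET_KEY", region="us-east-1")
        raise ValueError("unknown vendor: %s" % vendor)

    # The automation below never changes when the vendor does.
    for vendor in ("openstack", "ec2"):
        driver = build_driver(vendor)
        for node in driver.list_nodes():
            print(vendor, node.name, node.state)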

2014-06-09

Recap - OpenStack Israel 2014 #OpenStackIL

It was a great event. The organizers were expecting 300+ people, and almost 500 were there - the venue was really full, but not crowded.

Slide Decks from the whole day can be found here.

The slides from my session are embedded below, and I would like to add a few words to expand on the content.

OpenStack is an amazing community - very different from the VMware one I am so acquainted with - and yet very similar in many ways.

But I will reiterate what I said in my session. If you are looking for a one-to-one comparison with your current virtual infrastructure - and for argument’s sake, let’s say it is a VMware one - you will not find a compatible solution, or even an easy migration from one product to the other.

OpenStack is a cloud platform. It can be deployed on-premises (did you notice I did not fall into the semantics trap there), but it is a cloud platform. That means there will be a part of your current application infrastructure that will not run well on OpenStack, due to the nature of those applications - they were never designed for the cloud. They rely heavily on a sound and robust infrastructure below them, which is something you cannot assume (or maybe should not assume) you will have with OpenStack.

A few nuggets from the day - and quotes that I think are worth sharing.