March 2009 Archives

In sum, a busy, talkative workshop, with reasonable attendance (I counted upwards of 50 at peak). A good mix of government and some industry (the only real missing speaker was HP, though they attended). I'm also continually thankful that Elastra gets invited to these sorts of events; it sort of validates that our approach is interesting to the larger players.

Many shapes & sizes of Cloud

A clear indicator of the maturing of the cloud community was that private clouds and hybrid clouds were on everyone's lips, and seem to have been essential to generating "enterprise interest" in the concept of cloud computing.

I recommend reading NIST's presentation on this topic; it's well thought out (I'll link to it when the proceedings are published).

It's not that private clouds are what people are exclusively interested in. It's that:

a) in the short run, they're the only game that's acceptable for Federal agencies or for conservative enterprises, due to very real security and compliance fears;
b) even in the long run, the reality will be a hybrid cloud world, not a Big Switch; and
c) the benefit of clouds cannot be just about outsourcing, or else it's just an over-hyped sales pitch and we're screwed.

At this point, I think those that say "Private Clouds are a distraction" are full of shit.

The benefit of clouds

In my talk (slides should be available soon), I discussed how Elastra views the benefit of clouds to business, in order of difficulty to achieve. First, it's about resetting cost structure -- moving ongoing IT demand to a variable cost structure, and freeing IT from viewing everything as a massive fixed cost. Second, it's about drastically reducing the lead time to change one's IT infrastructure in response to demand. Third, it's about increasing visibility into the IT infrastructure and how it ties to business results. Fourth, it enables a more precise, commoditized approach to outsourcing. These benefits have their roots in the overall move towards services-oriented computing; they're just being applied inwardly.

Some might say that clouds "TODAY" provide a commoditized, precise approach to outsourcing. And I would say "sort of" -- the caveats are that everything is still proprietary, the SLAs are mostly crap, the price structures don't work well except for very elastic uses of scale, and there aren't many large-scale clouds that are viable alternatives to Amazon EC2 (though Windows Azure is getting there, and IBM is likely coming).

I don't really know if the attendees bought into my view, but I still think we don't talk about this stuff enough -- we're too busy futzing with technology.

What about PaaS?

These sorts of interoperability discussions have a hard time reconciling with Platform as a Service. The Salesforce talk was good at explaining their view of interoperability: they allow you to share your data with other clouds & services, and that's their nirvana. But your implementation and custom logic is 100% proprietary. You might be able to get portable logic if you use a Model Driven Architecture approach that generates APEX code for you, but that's about the extent of today's possibilities.

Reuven was notably unhappy with this state of affairs. I chimed in and claimed that PaaS works well for departmental or vertical apps where "you just want it done and don't really care about customization lock-in". There are many, many examples of business applications in that category: witness every Microsoft Excel macro or Access database in the small, or any large-scale ERP, HR, or CRM system. This is why Salesforce is a billion-dollar company.

I know this will come as a blow to the "open uber alles" crowd, but sorry -- enterprises care about reducing lock-in in their infrastructure, not really in their applications. If enterprises start looking at IT more in application terms, and PaaS becomes "the way forward", we had better start dusting off our SOA, ETL, and EAI hats, because that's where the problem will always lie.

In fact, the biggest problem with Salesforce's stuff is not that it's locked in to their software; it's that you can't choose to run it inside a private cloud. They're one of the stubborn ones against "private clouds", understandably. For this reason, I think Microsoft's approach to PaaS is probably going to be very successful: .NET is your platform. It's very popular, and will continue to be so because of Azure.

Standards ... If not today, when?

On the interoperability front, there was discussion about whether it was premature for cloud standards. Generally, the feeling was yes, it is -- BUT -- history has shown it's important to start the discussion early, and get people networking early, lest they go off and do their own interpretations of what people need and we wind up with a mess despite our best intentions. One of the analogies Bob Marcus kept alluding to was the emergence of the Enterprise Service Bus in the SOA world: even though we had the WS-* stack, things still weren't all that interoperable, and capabilities varied wildly, so a mediator became necessary. And the service buses themselves were all very different in their operation, requiring specialized knowledge to install, use, and develop against.

A lot of the sessions were "here's what we're doing to get people talking about standards", which is fine, but indicative of how "early days" all of this interoperability work really is. The general feeling can be summed up as:

a) Cloud Computing is still fuzzy, but has the potential to be great;
b) Clouds are mostly closed today, and that's OK, but not great;
c) A modicum of provider-level openness will be essential for the Federal community.

IOW, it's a huge mistake to assume that the EC2 API is a de facto open standard; I don't think anyone in the room had this illusion. Here's why, IMO. For one, it's a stretch to call it "open" -- it's under the control of a single company, and licensed by them. If they decide they dislike the software that implements that API, they can change the license for future versions and shut down old ones, making software built against them basically useless. Secondly, there are a number of core cases it doesn't cover. The biggest is that it doesn't give you the ability to express the "desired state" of a cloud as a document; it's just an API. Enterprises, meanwhile, seem to want to be able to reuse their configurations -- store them, verify, certify, sign, and version control them. Hence the interest in document standards like OVF or hypermedia formats like EDML and ECML.
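To make the "desired state as a document" point concrete, here's a small hypothetical sketch -- the resource names and fields are invented, and this is not any real provider's API. The document itself is inert data you can store, diff, sign, and version; a separate reconciliation step turns it into imperative API calls:

```python
# Hypothetical illustration of declarative "desired state" vs. an
# imperative provisioning API. None of these names are real products.

# Desired state as a plain document: storable, signable, version-controllable.
desired = {
    "web-1": {"type": "vm", "cpu": 2, "ram_gb": 4},
    "web-2": {"type": "vm", "cpu": 2, "ram_gb": 4},
    "db-1":  {"type": "vm", "cpu": 4, "ram_gb": 16},
}

def reconcile(current, desired):
    """Diff current infrastructure against the desired document and
    return the imperative actions needed to converge on it."""
    actions = []
    for name, spec in desired.items():
        if name not in current:
            actions.append(("create", name, spec))
        elif current[name] != spec:
            actions.append(("update", name, spec))
    for name in current:
        if name not in desired:
            actions.append(("destroy", name, None))
    return actions

# Suppose only web-1 exists today; the plan is derived, not hand-written.
current = {"web-1": {"type": "vm", "cpu": 2, "ram_gb": 4}}
plan = reconcile(current, desired)
```

With a bare API, the "plan" lives only in whatever scripts each team wrote; with a document, it's an artifact the enterprise can certify and reuse.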

The problem is that if these standards take this long to build, we're going to have to invest in "cloud service buses" to enable portability and interoperability in the meantime. In a prior post, I mentioned that this is what I believe will probably happen. There are too many cooks in the kitchen.
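For illustration, a "cloud service bus" is essentially the mediator/adapter pattern applied to provider APIs. This toy sketch uses invented provider interfaces -- not any real vendor's API -- just to show the shape of the mediation:

```python
# Toy sketch of a "cloud service bus": two hypothetical providers with
# incompatible provisioning interfaces, normalized behind one facade.

class ProviderA:
    """Hypothetical provider with EC2-style verbs."""
    def run_instance(self, image):
        return f"A:{image}"

class ProviderB:
    """Hypothetical provider with a different verb and argument shape."""
    def create_server(self, spec):
        return f"B:{spec['image']}"

class CloudServiceBus:
    """Mediates one portable call onto provider-specific APIs."""
    def __init__(self):
        self.adapters = {
            "a": lambda image: ProviderA().run_instance(image),
            "b": lambda image: ProviderB().create_server({"image": image}),
        }

    def provision(self, provider, image):
        # One portable verb, dispatched to whichever adapter applies.
        return self.adapters[provider](image)

bus = CloudServiceBus()
```

The catch, as with the SOA-era ESBs, is that each adapter encodes specialized knowledge of its provider -- the bus hides the incompatibility, it doesn't remove it.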

The substantive discussion on potential standards included:

Winston Bumpus (DMTF President)'s talk on OVF
Mark Carlson's talk on SNIA's XAM initiative

And from the vendor side, there was:

Enomaly's UCI
Sun's Cloud API
Elastra's Markup Languages

All of which deserve a separate post.

My short takeaway was this: OVF is likely going to be very popular. We're going to regret its scope decisions eventually (i.e., a focus on install & deploy, and little else), and I think proprietary extensions will be needed to enable its use in a "cloud" context, but as Winston called it, "it's the MP3 of the data centre".

Get enough people repeating that to themselves, and I think they'll have a marketing winner. If Woody Allen is right that "80% of success is showing up", I'd say DMTF is currently the leading candidate for becoming one of the premier cloud standards bodies.

I tend to think of interoperability as a gradient.

The old industry stalwart from the 1990s is what I'd call "runtime interoperability": you could write a Java EE application, deploy it on any Java EE application server, and (with a questionable amount of tweaking) get it to operate. SQL was another attempt at this, with mixed success. The later CORBA standards tried too, with the Portable Object Adapter (POA). And clearly the ISO/ANSI C runtime libraries have been successful, as have many other programming libraries.

The other angle of interoperability, which grew up in the 2000s, is what I'd call "protocol interoperability" -- an approach that, at first anyway, only a network engineer could love. Most of the *TPs on the Internet take this approach: the "network" comes first and dictates the pattern of interaction, while the "developer" and their desires or productivity are secondary.

With cloud computing, we're currently going through the age-old discovery of "what form of interoperability makes sense?" -- especially given that we're dealing with networked applications (indicating a need for "protocol interoperability") but also with complex configuration & resource dependencies for security, scalability, etc. (an area where "runtime interoperability" usually plays).

Starting Observation: Microsoft Has A Clue.

Windows Azure is trying to balance these approaches to interoperability. For example, .NET Access Control Services let you federate identity between your own Active Directory and Azure. This is all just Active Directory Federation Services (ADFS) and the WS-Federation "standard" -- something you've been able to do with OpenSSO, for example, for over a year. But they'll probably make it easier if you stick within the .NET / Windows world.
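As a rough illustration of the shape of claims-based federation -- this is emphatically not ADFS or WS-Federation, which use certificates, signed metadata, and SAML/WS-Trust tokens; here a shared secret stands in for the pre-established trust relationship:

```python
import base64
import hashlib
import hmac
import json

# Toy sketch of claims-based identity federation: an on-premises identity
# provider issues a signed claims token, and the cloud-side service
# validates it against an out-of-band trust. (Real federation uses
# certificates and signed tokens, not a shared HMAC key.)

TRUST_KEY = b"out-of-band-established-trust"  # hypothetical shared secret

def issue_token(user, roles):
    """On-premises IdP: sign a claims document about the user."""
    claims = json.dumps({"sub": user, "roles": roles}).encode()
    sig = hmac.new(TRUST_KEY, claims, hashlib.sha256).hexdigest()
    return base64.b64encode(claims).decode() + "." + sig

def validate_token(token):
    """Cloud-side service: accept the claims only if the signature
    matches the established trust."""
    body, sig = token.rsplit(".", 1)
    claims = base64.b64decode(body)
    expected = hmac.new(TRUST_KEY, claims, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("untrusted token")
    return json.loads(claims)

tok = issue_token("alice", ["admin"])
```

The point is the division of labor: the cloud service never sees the corporate directory; it only sees claims it has agreed to trust.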

A similar case could be made for their .NET Service Bus, as a way of enabling Windows Communication Foundation and BizTalk applications to span Windows Azure and private deployment(s). This isn't just a pipe dream, either; they're actively doing this with the early Azure releases.

The Scope of Interoperability

What makes this work is that .NET is already a widely used platform in private data centers, and that .NET is a single-source runtime. Now, an astute observer may exclaim, "but that's not interoperable! Where is the multi-vendor ecosystem!?" At which point we have to ask ourselves: what's the scope of desired interoperability?

Is it:

- A vendor ecosystem of interoperable runtimes? Ponder the success and market results of SQL, Java EE, etc. before wishing for this. Where did they make a difference? (They did make a difference, but perhaps not where one would intuitively think.)


- The ability to enable multiple providers to host a single runtime and enable interoperable "services" (e.g. identity, data, process, etc.) across these runtimes?

I suspect the latter is more readily attainable, and likely higher value, than the former. And note it doesn't preclude the existence of an ecosystem. It just suggests that enterprises are going to care more about cloud-spanning functionality in their "chosen car trunk" than about waiting for a common runtime to emerge.

What are the alternatives to .NET and Azure for a "hybrid cloud" platform?

  1. APEX might work, if Salesforce invested in private deployments -- not likely.
  2. There's Java, though Sun, IBM, and Oracle haven't been doing much there yet.
  3. There's EngineYard starting down the Enterprise Ruby on Rails path.
  4. Google, perhaps, heading down the Enterprise Python path (also not likely).
And of course, everyone's favorite...
  5. Infrastructure as a Service, where you could write your infrastructure in Erlang and OCaml for all your cloud provider cares (so long as you don't use multicast ;-)

In this last case, runtime interoperability would require a lot of "roll your own" configuration management, integration, and interoperability work. Or you could rely on...

  6. So-called "Cloud Servers" (e.g. CloudSwitch, 3Tera, Elastra, etc.)

These give you ways to craft models, designs, and orchestrations that help with configuration management, integration, policy, interoperability, and governance. In essence this is just like what the hybrid PaaS guys above are doing: constraining the problem space to gain some level of deployment flexibility. The difference is that cloud servers boil the problem down to a (hard) configuration management problem, instead of building "a standard runtime to rule them all".

Naturally, because I work at one of those "cloud server" vendors, you'd think this is my preferred model. But honestly, I'd be pretty happy for the industry if they agreed on either model. Time will tell.

My Predictions

a) I have serious doubts about a "new" cloud runtime portability standard. The battle lines were drawn long ago, and while they'll blur, it will likely continue to look like ".NET vs. Java vs. everything else" for at least 2+ years.

b) One could argue it would be nice to build a standard protocol (using an architecture that fits how early adopters think) for Infrastructure as a Service to provision "Obvious Stuff" like storage, CPU, and network. The DMTF CIM work is great, but probably too low-level, and too WS-*-focused, to be palatable to early adopters. The DMTF OVF work is likewise great, but isn't focused on "lifecycle" -- i.e., what the heck happens to this deployment over time? It's (thus far) focused on creating virtual appliance bundles.

Something RESTful would be nice to enhance our serendipity, but frankly the EC2 API isn't all that great a starting point (for several reasons; different discussion though).
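To sketch what a resource-oriented provisioning protocol might look like -- the URIs, representations, and status codes here are invented for illustration, not the EC2 API or any proposed standard -- consider compute resources created, inspected, and destroyed as addressable documents:

```python
import json

# Hypothetical sketch of a RESTful provisioning protocol: resources are
# addressable by URI, state is transferred as documents, and lifecycle is
# driven by plain HTTP verbs. FakeCloud is an in-memory stand-in for a
# provider endpoint; none of these paths are a real API.

class FakeCloud:
    def __init__(self):
        self.resources = {}
        self.next_id = 1

    def handle(self, method, path, body=None):
        if method == "POST" and path == "/compute":
            # Create: the server mints a new addressable resource.
            rid = f"/compute/{self.next_id}"
            self.next_id += 1
            self.resources[rid] = json.loads(body)
            return 201, rid  # 201 Created, plus the new resource's URI
        if method == "GET" and path in self.resources:
            # Inspect: current state comes back as a document.
            return 200, json.dumps(self.resources[path])
        if method == "DELETE" and path in self.resources:
            # Destroy: lifecycle via the uniform interface.
            del self.resources[path]
            return 204, None
        return 404, None

cloud = FakeCloud()
status, location = cloud.handle("POST", "/compute", '{"cpu": 2, "ram_gb": 4}')
```

Note how this dovetails with the "desired state as a document" point earlier: the representation you GET is exactly the kind of artifact you'd want to store and version.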

Regardless of the architecture, the big win would be reducing the need for a "Cloud Service Bus" that mediates among different APIs. I think this kind of standard will happen, but it will take 2+ years, and will thus be ratified just as early adopters have bought their shiny new Cloud Service Bus... ;-)

c) A wild card is where the "massively parallel processing uber alles" crowd will flock. From what I can tell, there are four visible options:
(i) Hadoop (i.e., Java; though I bet there's a .NET port coming), 
(ii) Open Grid Forum / Globus,
(iii) Parallel SQL Database (e.g. Vertica, ParAccel, Greenplum, etc.),
(iv) Proprietary Platform (e.g. Google)

And those cases are very clearly NOT likely candidates for hybrid-cloud interoperability below the service-API level, given the latency requirements and tight coupling inside those services.

d) Even in a world with a handful of interoperable cloud platforms, I suspect there's still going to be a big configuration management and governance problem.

Where does Elastra's work fit into this?

Well, first, let's be clear: I have modest expectations for our work on EDML, ECML, etc. I don't expect them to become standards. I do hope to contribute the work we've put into this stuff to a standards effort, and I hope the industry really does adopt a RESTful linked data approach to describing IT. On the other hand, we've been at this for over a year, and I doubt there will be industry consensus on even 20% of the topics we're modeling. EDML, with its emphasis on resource reservation, allocation, packages, and settings -- sure, I could see value there. ECML, however, is an architecture and policy description language; I suspect a lot more acrimony awaits there.

Secondly, I have no illusions about the ability of a startup, or even a "community" of individual contributors, to influence or fund a standard with this much industry attention. Large companies will get their way, invariably. Good ideas may survive through a combination of luck, serendipity, and maybe small doses of charisma and chutzpah among the evangelizers.

So, I will continue to show up at the occasional interoperability or standards meeting, post on mailing lists, etc., but otherwise I'm focused on our product suite. We built these languages to get our jobs done, and are happy to open them up when we have the time to complete the documentation for a wider audience. For now, I'll be presenting highlights at the OMG cloud interoperability meeting in March.


About Me
(C) 2003-2011 Stuart Charlton


Disclaimer: All opinions expressed in this blog are my own, and are not necessarily shared by my employer or any other organization I am affiliated with.