September 2008 Archives

On Twitter


For those who use it, I've recently started posting more often to Twitter as "svrc" (my initials)... I also joined identi.ca, but I don't really use it that often yet.

For those in the SF Bay Area next week, I will be speaking on a panel at the SDForum's Cloud Computing and Beyond: The Web Grows Up 1-Day conference in Santa Clara.

Cloud Computing Tech forum


If you're interested in the technology of Cloud Computing, and are tired of the antics on the Google Group, join us on the Cloud Computing Technology forum.

Black Swan: 5, Investment Banks: 0


No more investment banks.

This is f'd up, f'd up, indeed. I used to work as an architect & developer on Lehman Brothers' Whole Loan Trading system around 7 years ago. This is all a bit close to home.

(video courtesy of my buddy Rob @ Openwave)

Given the news lately, there's a lot of speculation of what went wrong with the economy.

"Greed" is often cited as the reason for our economic failures. I disagree. It might have been a motivation, sure, but firstly, most motivations are mixed. Secondly, since the market system arguably is designed to channel greed and self interest to fulfill needs, it doesn't tell us as to how the system failed in this case, nor does it give us guidance on how to fix things, as people aren't just going to "stop being greedy". Greed isn't illegal, and not everyone believes it is a sin.

I think the failure basically comes down to "Hubris". The market system certainly IS designed to, in the long run, destroy those that exhibit it.

It's really painful, unfortunately, when almost an entire society has bought into bad ideas pushed by hubris, to the point that the government has to halt portions of the market system to ensure some level of stability.

Corporate welfare is a dangerous path, one that sacrifices the long run for the short term. I don't doubt we need it for the current crisis, but I fear this is not a short-term aberration but a pattern that has recurred for decades, going back to the Chrysler bailout in the 1980s, numerous airline bailouts, etc. It may be that the U.S. government is simply not willing to let a major corporation fail -- that it let Lehman Brothers fall is almost a token gesture that we haven't completely abandoned the market's ability to correct hubris.

There are at least five levels at which hubris should be considered:

a) belief in managerial and economic principles that are obsolete
b) the primacy of shareholder value over true productivity
c) risk management / faith in statistics
d) technocracy
e) education, particularly in what it means to teach leaders

None of these authors are whack jobs; if anything, they're mainstream iconoclasts: Henry Mintzberg is a frequent Harvard Business Review contributor (even though he is a critic of the school), Peter Drucker was the management guru of the 20th century, John Ralston Saul is the husband of Canada's former Governor-General Adrienne Clarkson, and Nassim Nicholas Taleb is the author of the New York Times bestseller The Black Swan.

"I would hope that American managers--indeed, managers worldwide--continue to appreciate what I have been saying almost from day one: that management is so much more than exercising rank and privilege, that it is about so much more than "making deals." Management affects people and their lives."
-Peter Drucker

My critique of both a recent CACM article and one of the authors' blog entries, regarding Cloud vs. Grid interfaces, was quite long and, I've been told, meandering.

Here is a pithy summary of my position:

1. The greatest leverage in system architecting is at the interfaces. The greatest dangers are also at the interfaces. [1]

2. When the components of a system are highly independent, operationally and managerially, the architecture of the system __is__ the interfaces. [1]

3. For networked applications, there are many different styles of interface, ranging from APIs, to message exchange contracts, to hypermedia. These drive very different emergent properties in the resulting system.

4. Loosely coupled systems are assisted by the projection of both object state AND available state transitions. Hypermedia is a suitably general model to enable this; a brief sketch follows below.

[1] This is just a paraphrase of Maier & Rechtin regarding key heuristics for systems architecture. For a summary, see this paper.
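
To make point 4 concrete, here is a minimal, hypothetical sketch (the media type, URIs, and link relations are all invented for illustration) of a representation that projects both the current state of a resource and the transitions currently available from it:

    GET /clusters/42 HTTP/1.1
    Host: example.org
    Accept: application/vnd.example.cluster+xml

    HTTP/1.1 200 OK
    Content-Type: application/vnd.example.cluster+xml

    <cluster xmlns="http://example.org/ns/cluster">
      <status>running</status>
      <!-- transitions currently available, advertised as links -->
      <link rel="stop"   href="http://example.org/clusters/42/stop"/>
      <link rel="resize" href="http://example.org/clusters/42/size"/>
    </cluster>

A client that understands only the media type and its link relations can discover at run time which transitions are currently legal; when the cluster is stopped, the server simply omits the "stop" link, rather than requiring every client to hard-code the resource's state machine.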

VMWorld and Interop


For anyone attending, I'll be at VMWorld '08 in Vegas today through Wednesday (visit us at the Elastra booth), and on a Cloud Computing panel at Interop '08 in New York.

Modeling State, Ignoring Transitions


Updated: Instead of reading my specific nits here, you can see a brief summary of my suggested guidelines for designing a cloud interface.

Ian Foster, Savas Parastatidis, Paul Watson, and Mark McKeown have an article in the latest Communications of the ACM comparing WS-ResourceFramework (WS-RF), WS-Transfer, RESTful HTTP, and "no convention" as approaches to modeling state.


This article was particularly timely, as it is very similar in approach to my "Managing Data in an SOA" talk at Jazoon 2008. (A better formatted PDF copy of that presentation is available here.) Jazoon is primarily a Java conference, so I got the impression that less than half of the room had even heard of the Web Services specifications I was talking about, let alone the tension among them due to how much they overlap. Roy was in the crowd, offering the one audience comment at the end: "Given all of this, which approach would you recommend, or are you using, for your own work?". Much of Elastra's work in managing and provisioning clouds is based around a hypermedia architecture, with RESTful HTTP being our chosen framework. I'll be talking more about this at QCon San Francisco in November.

With regards to this article, I first want to commend the authors for trying (noticeably) hard to keep the discussion technical and respectful, and I hope my comments here will also be taken in that vein, even if I'm being negative. I've long respected their various blogs. The article is certainly a step up from entries such as this one, on "Web Fundamentalism".

My first comment is that the article might be misnamed. In a paper on "modeling state", I didn't see the word "transition" mentioned once. This seems to be a major limitation of this discussion. The authors clarify:

First, a few observations about what we mean by modeling state. The systems with which we want to interact may have simple or complex internal state. Various aspects of this state may be exposed so that external clients can engage in "management" operations.... We are not suggesting these mechanisms provide direct access to the underlying state in its entirety. Rather, we are assuming the principles of encapsulation and data integrity/ownership are maintained.... It is unwieldy to keep talking about "modeling a projection of underlying system state," so we use the shorthand "modeling state." It is important to bear in mind, however, the reality of what could be going on behind the boundaries of a system with which an interaction takes place.

That helps to some degree, but it still seems to miss the major point of why it's useful to project state independently from the underlying system: to loosen the coupling of interactions in a distributed system. Interactions lead to state transitions. Besides the state data itself, the available state transitions also can be projected. REST's approach to this is "hypermedia". Object-oriented designs use the state pattern to project such interactions without requiring client coupling. As far as I can tell, the other approaches don't have a clear answer to this. The "No Convention" approach is the worst culprit: potential transitions are tightly intertwined in whatever domain-specific data identifiers, operations or model is being accessed. On the other hand, both WS-RF and WS-Transfer could enable a form of hypermedia with embedded WS-Addressing Endpoint References (EPRs), but this has a variety of problems that are discussed below (some have called this approach "REST on Crack").
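
As a purely illustrative contrast (the operations, namespaces, and URIs below are made up, not drawn from the article), consider how a client learns that a running job can be cancelled. In the "No Convention" style, that knowledge lives only in the client:

    <!-- the client must already know that a Cancel operation exists,
         and must guess whether it is currently legal to invoke it -->
    <soap:Body>
      <job:Cancel xmlns:job="http://example.org/ns/job">
        <job:JobId>1234</job:JobId>
      </job:Cancel>
    </soap:Body>

In a hypermedia projection, the transition is advertised by the representation itself, and only while it is actually available:

    <job xmlns="http://example.org/ns/job">
      <state>Running</state>
      <link rel="cancel" href="http://example.org/jobs/1234/cancel"/>
    </job>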

I note that if all you want (or all you see) when contrasting these approaches is a system with consistent CRUD operations over an independent, interoperable data format, a lot of the debate between these approaches seems rather obtuse. But complex systems are better modeled as "state machines" than as a state space that can be "managed" with CRUD. This seems to be the major missing piece of the debate - a misunderstanding, underappreciation, or disinterest in modeling state transitions or lifecycles when accessing or manipulating state. Ian Foster's most recent blog entry claims that (in the context of grid vs. cloud):

No evidence is provided for this assertion that complex interfaces are the reason for the difficulties people have with grids. I argue that the issues are more complex. "We can argue whether we prefer REST or Web Services, or say Nimbus (a grid virtualization interface) or EC2 (a cloud virtualization interface), but the differences among these alternatives are not great."

This blog entry seems to demonstrate that Mr. Foster really doesn't understand the differences between these interface types, and seems to be referencing this CACM article as a way of brushing that observation under the rug.

Speaking as someone who's designing and building cloud servers, I've made a snide comment or two on this topic before (e.g. in one of Greg Pfister's presentations, see slide 2). I think there's a lot of valuable thinking in the Globus Grid specs, but the totality has a scientific-computing feel that most IT people generally can't wrap their heads around. The WS-RF interfaces encourage early-adoption developers to give up because they can't easily call the APIs. Amazon EC2's success is largely due to its "retail experience", but also due to the ecosystem of tools and providers that has developed around it. This ecosystem would not have happened had its interfaces not been very, very simple. I'll admit that EC2's REST API doesn't quite use hypermedia the way that I am discussing here -- a future post (around the time of QConSF) will explain a more hypermedia-oriented approach to representing cloud resources.

The rest of this post gets into the gritty details of my issues with the article.

A couple of areas of agreement that I have with the article:


  1. The "no conventions" approach to state management has major drawbacks.


    While the article doesn't say this outright, it certainly hints at this position. In my opinion, if you MUST use WS-*, and you need a consistent CRUD model, then you probably want to look at WS-Management.

  2. The value of lifetime features, such as WS-ResourceLifetime, or subscription features.


    While I don't think lifetime management necessarily warrants new methods, I do think there's value in having a consistent way of specifying a resource's lifetime. This could be mapped onto existing methods & response codes if a media type were created for it. An expired HTTP resource could return 410 Gone, for example. HTTP-based subscriptions happen all the time, though I admit a more precise hypermedia model would be useful.
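
    A sketch of such an exchange (the URI and timestamp are invented) against a resource whose scheduled lifetime has elapsed:

        GET /jobs/1234 HTTP/1.1
        Host: example.org

        HTTP/1.1 410 Gone
        Content-Type: text/plain

        Job 1234 expired at 2008-09-30T00:00:00Z and has been destroyed.

    The scheduled termination time itself could simply be another element of the resource's representation, defined by such a media type, rather than a new protocol operation.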

Unfortunately, there are many points of confusion or contention with the article:


  1. Misunderstanding what a uniform interface means.

    "According to REST, a small set of verbs/ operations with uniform semantics should be used to build hypermedia applications, with the Web being an example of such an application."

    "REST = verbs" is a red herring. We cleared this up recently in another episode of minor blog drama. Uniformity has to do with URIs, Resources vs. Representations, self-description, the general applicability of all methods to potentially all resources, and the big one, hypermedia. Or, in implementation terms, one has to look at more than HTTP to analyze REST -- there's URI, MIME, and media types like HTML and Atom.

  2. Conflating POST with "Create".

    POST means whatever the hypermedia specification that calls for its use says it should mean. In AtomPub, it means "create". In HTML, it means "process this form". It might be emulating GET. It might be appending something. You don't know unless you know the hypermedia context it was linked in.

    HTTP is not CRUD. REST is not just file storage.
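
    As a hypothetical illustration (both requests are invented), the same verb carries different meaning depending on the hypermedia context that pointed the client at the target:

        POST /blog/entries HTTP/1.1
        Content-Type: application/atom+xml;type=entry

        <entry xmlns="http://www.w3.org/2005/Atom">...</entry>

        POST /search HTTP/1.1
        Content-Type: application/x-www-form-urlencoded

        q=hypermedia&scope=site

    The first POST is aimed at an AtomPub collection, so it means "create a member"; the second is the target of an HTML form, so it means "process this form data". Nothing in HTTP itself distinguishes them.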

  3. The utility of accessing and manipulating partial resource state.

    This is less of a point of confusion, and more of an additional critique against something like WS-ResourceProperties.

    Accessing partial state, like one can do with WS-ResourceProperties or (while it's not mentioned in the article) WS-ResourceTransfer with XPath fragments, makes the data model a lot less uniform, and a lot harder to work with. William explains this quite well.

    As an aside, I note that HTTP has the Range header for accessing partial state of a large representation; this wasn't discussed in the article.
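
    For completeness, a sketch of that mechanism (the URI and sizes are invented):

        GET /datasets/climate-2008.csv HTTP/1.1
        Host: example.org
        Range: bytes=0-1023

        HTTP/1.1 206 Partial Content
        Content-Range: bytes 0-1023/4194304
        Content-Type: text/csv

        ...the first kilobyte of the representation...

    Note that Range addresses octets of the representation, not named properties, so it is a deliberately narrower facility than something like WS-ResourceProperties.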

  4. Suggesting that RESTful HTTP requires "convention over specification" (a.k.a. ignoring the hypermedia constraint)

    "Note that whereas HTTP defines all the verbs used, the structure of the URIs and the format and semantics of the documents exchanged in order to implement the job service's operations are application specific. Thus, while the URIs appear to convey some semantic information based on their structure (for example, a /status at the end of a particular job URI may be interpreted by a human as the identifier of the status resource), this is an application-specific convention. "

    And again in the summary:

    Thus, when defining state management operations, the WS-RF and WS-Transfer approaches both use EPRs to refer to state components and to adopt conventions defined in the WS-RF and related specifications and in the WS-Transfer and related specifications, respectively. In contrast, the no-conventions and REST approaches adopt domain-specific encodings of operations, on top of SOAP and HTTP, respectively.

    This is a bad practice, and not indicative of robust REST approaches. Normally one would use specifications like:


    to describe how a URI (or, very commonly, an entire representation!) is constructed. Arguably none of these are "application specific", or even "domain specific". They're "binding environment specific" (e.g. they each have a different underlying object model), because RESTful HTTP doesn't assume an XML Infoset binding model the way WS-RF or WS-RT do.

    One should think of form hypermedia as "just in time interface description". Instead of coding against an operation like in RPC, you're coding against a message destination (the POST target), whose intent is hopefully described by the surrounding metadata. In practice, this is normally no different from most reflection-assisted RPC (e.g. CORBA DII, Java RMI with reflection, dynamically invoked WSDL, etc.), though eventually, if RDF takes off, one might see even richer forms of reflection than just "symbol lookup".
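
    A tiny, invented example of that "just in time interface description": the client codes against whatever form the server happens to serve, not against a pre-compiled operation signature.

        <!-- served to the client at run time -->
        <form action="http://example.org/jobs" method="post">
          <input type="hidden" name="type" value="render"/>
          <input type="text" name="priority"/>
          <input type="submit" value="Submit job"/>
        </form>

    The POST target, the field names, and even which fields exist are all discovered from the representation, much as a DII client discovers an operation's signature at call time rather than at compile time.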

  5. Confusing URIs with EPRs (and Skipping over WS-Management, which offers a resolution)

    One of the co-authors of this CACM article, Mark McKeown, describes in a 2007 blog entry the problem with EPRs:

    ...given an EPR that has ReferenceParameters you should NEVER share it with anyone else. You cannot know what those ReferenceParameters are for. They could be there for some identification purpose, in which case it would be OK to share them, but you cannot know that for sure. They could actually be for identifying a particular session, or client. Sharing EPRs with ReferenceParameters would be like sharing your HTTP cookies; you simply wouldn't do it. Now, imagine a Web where you were not able to share URIs.

    I wasn't involved with the standards work for WS-Addressing, WS-Resource, or WS-Management. But, as a potential consumer of these specifications, here's what I understand of the mental journey:


    1. URIs are identifiers.
    2. EPRs are not identifiers. Quoth section 2.6 of the spec:

      The Architecture of the World Wide Web, Volume One [AoWWW] recommends [AoWWW, Section 2] the use of URIs to identify resources. Using abstract properties of an EPR other than [destination] to identify resources is contrary to this recommendation. In certain circumstances, such a use of additional properties may be convenient or beneficial; however, when building systems, the benefits or convenience of identifying a resource using reference parameters should be carefully weighed against the benefits of identifying a resource solely by URI as explained in [AoWWW, Section 2.1] of the Web Architecture.

    3. WS-Resource claims EPRs are identifiers, ignoring the advice of the WS-Addressing specification.
    4. WS-Management, the major spec that adopts WS-Transfer, introduced the wsman:ResourceURI reference parameter to remain consistent with the WS-Addressing recommendation, at the expense of admitting that they now have TWO URIs in the header, one for the "endpoint", and one for the "resource".
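
    Roughly, and purely as a hand-written illustration (this fragment is not copied from either specification, and the URIs are invented), the resulting addressing headers carry two URIs side by side:

        <wsa:To>http://example.org/wsman-endpoint</wsa:To>
        <wsman:ResourceURI>http://example.org/resources/disk-volume</wsman:ResourceURI>

    One URI names the "endpoint" the message is sent to, the other names the "resource" being managed; on the Web, a single URI would play both roles.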

    The moral of the story: Most technical decisions are a reflection of the economic and political context they were made in.

  6. Confusing the relative sizes of deployment base

    Proponents of the HTTP/REST approach emphasize that it provides for more concise requests and permits the use of simpler client tooling than approaches based on Web services. Critics point out that the use of HTTP/REST means that users cannot leverage the significant investment in Web services technologies and platforms for message-based interactions.

    While I agree there has been significant investment in Web services technologies,


    1. the vast majority of it does not use WS-Addressing,
    2. an even smaller fraction uses WS-RF or WS-Man (which leverages WS-Transfer)
    3. even without this, it is probably several orders of magnitude less than the amount of deployed HTTP-based hypermedia technologies.

    Now, granted, there is not much in the way of HTTP/REST tooling that's specifically targeted at advanced enterprise use of that technology (such as interaction or business process management), but given that the context of this article is "state management", this counter-argument doesn't really apply.

    WS-RF is a world unto itself, with no bridges to the World Wide Web. WS-Management at least tries to remain consistent with the web architecture by exposing a visible EPR Reference Parameter with wsman:ResourceURI that could be dereferenced via HTTP.

    In this debate, the real question is how "general" distributed hypermedia is. Can it be used as a general-purpose systems architecture? Is it, in many cases, a superior set of constraints to just "contracted message exchange"? That's the core of the debate, in my mind.


In summary, while I appreciate the tone of this article, I think the content was very selective in its omissions, and as such it should not be seen as a definitive analysis of "modeling state" with these specifications. I really hope it wasn't written to drive a political debate underground, but its primary author seems to be using it in that way, based on what I read of his recent blog entry. That would lead me to interpret this article as well-written sophistry. It should, rather, serve as a useful starting point for continued discussion and improvement of our technologies.


About Me
(C) 2003-2010 Stuart Charlton


Disclaimer: All opinions expressed in this blog are my own, and are not necessarily shared by my employer or any other organization I am affiliated with.