WS-REST Musings: Interoperability is Hard Work


Part of my continued series of reflections on the WS-REST workshop.

It may continue to amaze, but even the most basic aspects of interoperability share a common problem: implementations almost always violate the protocol specs they claim to support. And those violations often get encoded into practice, and then back into the spec. Sometimes this is for good reasons, sometimes not.

Sam drew from his experience with early SOAP interoperability to illustrate the problem. Apache SOAP, for example, a descendant of IBM's SOAP4J and the predecessor to Apache Axis, supported the early SOAP spec from the days when it was a vendor-led W3C Note. It encoded data using the SOAP encoding section of the SOAP specification, because the XML Schema datatype system was not due out until mid-2001.

Of course, early specifications often under-specify, and that leads to implementation details leaking through. A classic borderline case is representing "Infinity". XML Schema says that infinity should be represented as the characters "INF". Apache SOAP, predating this, used java.lang.Double, whose toString() method generates "Infinity".

The fix is simple enough, but deployed systems were already out there. The solution was to continue producing "Infinity" while accepting both "INF" and "Infinity". Later SOAP stacks like Axis, which were built with XSD in mind, both produced and consumed "INF".
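The liberal-in/strict-out compromise can be sketched in a few lines. This is a hypothetical Python illustration, not the actual Apache SOAP or Axis code: emit the XSD-blessed token, but accept the legacy spelling that older stacks put on the wire.

```python
def serialize_double(value: float) -> str:
    """Produce the XML Schema lexical form for a double (strict on output)."""
    if value == float("inf"):
        return "INF"
    if value == float("-inf"):
        return "-INF"
    return repr(value)

# Spellings seen in the wild; "Infinity" is what Java's Double.toString emitted.
_LEGACY = {
    "INF": float("inf"), "Infinity": float("inf"),
    "-INF": float("-inf"), "-Infinity": float("-inf"),
}

def parse_double(text: str) -> float:
    """Accept both spec-compliant and legacy infinity tokens (liberal on input)."""
    if text in _LEGACY:
        return _LEGACY[text]
    return float(text)

print(serialize_double(float("inf")))  # INF
print(parse_double("Infinity"))        # inf
```

The asymmetry is the whole point: the producer converges on the spec, while the consumer keeps absorbing history.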

My first reaction to a lot of this was, "conform to the (damn) specification". If only it were so simple. Sometimes the spec is wrong or incomplete. Sometimes implementers miss a boundary condition. And organizational inertia makes it hard to upgrade. So documenting what is actually implemented becomes the preference.

None of this is really any different with REST. One of the takeaways: contracts matter.

I know there's trepidation about the term "contract" in the REST community, since it may conjure up a rigid interface. I think it's a price worth paying -- you cannot eliminate the notion of a "contract" when interoperability is a concern. Contracts are not about how rigid or flexible the interface is. They are about ensuring expectations are complete, understood, and complied with. And, frustratingly, they cannot be well specified without implementation experience.

One might argue that the Web has proven very good at composing contracts together -- transfer protocol, media types, identifiers, etc. It didn't eliminate the painstaking need for agreement; it depends on it. Those interested in philosophy might find this an interesting paradox: the explosion of freedom and diversity in web content and applications requires strong control at the lower layers -- an idea Alexander Galloway explores in his book, Protocol: How Control Exists After Decentralization.

Practically speaking, the areas of the Web with lax specification and/or complex implementation compromises are the areas without much diversity -- such as browsers, which have to deal with HTML, CSS, JavaScript, etc. Servers haven't been immune, but they have somewhat more understandable specs to follow in HTTP, MIME, and URI, though ambiguity in HTTP has led to a dearth of complete HTTP 1.1 deployments (most implement a subset). This is what prompted the HTTPbis work to clean up and clarify the spec based on over a decade of experience.

Considering today's new generation of RESTful APIs on social media and cloud platforms, there are some curious trends. JSON-based RESTful services, for example, don't specify date or formal decimal types. XML Schema does have standardized simple types, but its syntax and infoset are off-putting because they map poorly onto popular programming languages. Which is better for interoperability? Maybe the answer is that because JSON is so easy to parse, it's OK that each endpoint needs an adapter for its particular quirks, like differing date/time formats. But that sounds like a very developer-centric argument, something that serves the lazy side of the brain. We can do better.
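To make that adapter burden concrete, here is a hypothetical sketch (the payloads and field names are invented, not drawn from any real API): two endpoints encode the same timestamp differently, so the client ends up carrying normalization glue for each convention.

```python
import json
from datetime import datetime, timezone

# Two invented services, two encodings of the same instant.
payload_a = '{"created": "2010-05-11T07:43:00Z"}'  # ISO 8601 string
payload_b = '{"created": 1273563780}'              # Unix epoch seconds

def parse_created(doc: str) -> datetime:
    """Normalize either convention to a timezone-aware datetime."""
    raw = json.loads(doc)["created"]
    if isinstance(raw, (int, float)):  # epoch-seconds style
        return datetime.fromtimestamp(raw, tz=timezone.utc)
    # ISO 8601 "Zulu" style
    return datetime.strptime(raw, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)

assert parse_created(payload_a) == parse_created(payload_b)
```

Neither encoding is wrong, because nothing in JSON itself says which one is right -- which is exactly the kind of gap a contract would close.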


2 Comments

JSON wins not only because it's easy to parse, but because of dynamic languages like Ruby, Groovy, and C# 4. And, lest we forget, JavaScript (which is the flavor du jour, with things like Node.js and Mongo letting you use JS on the server side).

It's easy to parse XML in those languages (XmlSlurper in Groovy, Hpricot in Ruby, etc.), but the real draw is that you can open a class and stuff it with the JSON data structure (Expando in Groovy, open classes in Ruby, the dynamic type in C#).
JSON just makes serialization and deserialization that much simpler.
Not that you couldn't do everything with XML; it just needs XML that represents the metadata as well as the data itself.
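A rough Python analogue of the commenter's point (with SimpleNamespace standing in for Groovy's Expando): a JSON document lands directly on a dynamic object, with no schema class declared up front.

```python
import json
from types import SimpleNamespace

doc = '{"user": {"name": "Stu", "posts": 42}}'

# object_hook turns every JSON object into an attribute-accessible namespace
data = json.loads(doc, object_hook=lambda d: SimpleNamespace(**d))

print(data.user.name)   # Stu
print(data.user.posts)  # 42
```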

What's next? OData-like data dictionaries for RESTful data services ;)?


About this Entry

This page contains a single entry by Stu published on May 11, 2010 7:43 AM.


(C) 2003-2011 Stuart Charlton

Disclaimer: All opinions expressed in this blog are my own, and are not necessarily shared by my employer or any other organization I am affiliated with.