December 2006 Archives

What's a disruptive innovation?

On the Yahoo! SOA mailing list, I read the following quote...

REST is _not_ a silver bullet, remote invocation is _not_ a challenge and REST is only disruptive in that it stops people looking for the true disruption which will come when we consider remote invocation a true commodity.

And I couldn't help feeling that, in 1972, this might have read:
"Relations are _not_ a silver bullet, data management is _not_ a challenge, and relational databases are only disruptive in that they stop people looking for the true disruption which will come when we consider data management a true commodity."

I liken REST to relational databases as a disruptive innovation, with Fielding's thesis akin to Codd's paper in 1970. And yet relations still generate debate, confusion, and doubt to this day, with the weary, battle-scarred evangelists, tired after 30 years of debate, still trying to promote logic & clarity in the IT industry.

Ultimately, these debates won't change whether REST is or isn't a disruptive innovation -- the market determines that. In one sense, it's already proven to be so via the success of the web. In the systems integration realm, where established approaches rule, it's mostly a matter of the market shaking out the right complements required to make REST effective. (WS-* is an established approach, by the way: old message-exchange wine in new XML bottles.)

RESTful security

I've been following the latest REST security discussions from Gunnar Peterson, Pete Lacey, and Tim Bray.

Gunnar is set on convincing us that message-layer security is superior to transport-layer security in his Stephenson analogy:

Transport level security assumes good security on both endpoints in a point to point scenario and everything beyond those endpoints within the transaction span. Message level security lets the message traverse numerous business, organizational, and technical boundaries, with a modicum of security intact.

I think transport vs. message level security is a false dichotomy.

SSL/TLS traverses numerous business (network provider), organizational, and technical (multiple network technology) boundaries, and keeps security relatively intact. It's all next-hop routing -- when I send the message out, all I know is my default gateway, otherwise it's up to the infrastructure to figure out the path. Do I trust all of those network providers with my private information? Nay!

We also often tunnel over application-layer protocols when it suits our needs, such as when SOAP tunnels over HTTP. Both TLS and HTTP/1.1 are transport independent; they just expect a reliable lower layer, which might itself be a tunnel. I don't believe there is anything preventing someone from building the TLS record and handshake protocols on top of an application-layer internetwork, such as a chain of HTTP proxies/intermediaries. HTTP arguably already does this, to some degree, with HTTP CONNECT. Yes, I'm aware there are risks with HTTP CONNECT, but preventative measures are known.
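To make the "TLS only needs a reliable byte stream" point concrete, here is a minimal Python sketch of running a TLS handshake through an HTTP proxy tunnel opened with CONNECT. The proxy host and port are placeholder assumptions, not real infrastructure; the point is that the proxy forwards opaque bytes and only ever sees ciphertext.

```python
import socket
import ssl

def build_connect_request(host, port, proxy_auth=None):
    """Build an HTTP CONNECT request asking a proxy to open a raw tunnel
    to host:port. proxy_auth, if given, is a pre-encoded Basic credential."""
    lines = [f"CONNECT {host}:{port} HTTP/1.1", f"Host: {host}:{port}"]
    if proxy_auth:
        lines.append(f"Proxy-Authorization: Basic {proxy_auth}")
    return ("\r\n".join(lines) + "\r\n\r\n").encode("ascii")

def tls_via_proxy(proxy_host, proxy_port, dest_host, dest_port=443):
    """Connect to the proxy, establish a CONNECT tunnel, then run the TLS
    handshake end-to-end through it. The proxy relays opaque bytes only."""
    sock = socket.create_connection((proxy_host, proxy_port))
    sock.sendall(build_connect_request(dest_host, dest_port))
    reply = sock.recv(4096)
    if b" 200 " not in reply.split(b"\r\n", 1)[0]:
        raise ConnectionError(f"proxy refused tunnel: {reply[:64]!r}")
    ctx = ssl.create_default_context()  # normal cert validation still applies
    return ctx.wrap_socket(sock, server_hostname=dest_host)
```

A caller would use it as `tls_via_proxy("proxy.internal", 3128, "example.org")` and then read and write the returned socket as usual; the intermediary never holds the plaintext.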

I do, however, agree that HTTP is missing at least two key security features:

  • A multi-party secure conversation protocol, vs. SSL's two-party model. This would enable a client to pick and choose which intermediaries can be trusted for any particular representation transfer.

  • An extended HTTP digest authentication protocol with the ability to sign headers and/or the whole representation, similar to AWS's approach. Signatures can verify the integrity of a representation AND keep it visible to intermediaries; SSL/TLS can't do that.

The question is: why did we need to build these in a SOAP/XML stack that breaks the semantics of HTTP and treats all other forms of data as second-class citizens?
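The header-signing idea above can be sketched with an HMAC over selected headers and a hash of the body, loosely in the style of AWS request signing. The header names, canonical form, and key handling here are illustrative assumptions, not any real specification:

```python
import hashlib
import hmac

def sign_request(secret_key, method, path, headers, body,
                 signed_headers=("host", "date", "content-type")):
    """Compute an HMAC-SHA256 signature over the method, path, selected
    headers, and a digest of the body. Unlike TLS, the body stays visible
    to intermediaries; the signature only proves integrity and origin."""
    canonical = [method.upper(), path]
    for name in signed_headers:
        canonical.append(f"{name}:{headers.get(name, '').strip()}")
    canonical.append(hashlib.sha256(body).hexdigest())
    to_sign = "\n".join(canonical).encode("utf-8")
    return hmac.new(secret_key, to_sign, hashlib.sha256).hexdigest()

headers = {"host": "example.org",
           "date": "Fri, 01 Dec 2006 00:00:00 GMT",
           "content-type": "text/plain"}
sig = sign_request(b"shared-secret", "PUT", "/reports/q4", headers, b"hello")
# The receiver recomputes the same HMAC and compares in constant time:
assert hmac.compare_digest(
    sig, sign_request(b"shared-secret", "PUT", "/reports/q4", headers, b"hello"))
```

Any intermediary can still read and cache the representation; only a holder of the shared secret can vouch for it, which is exactly the property SSL/TLS alone doesn't give you.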

I don't think XML is the centre of the web universe -- JSON is catching on like wildfire, binary media types continue to grow in variety, and so on. For some reason, people thought that all businesses want is text data -- the binary stuff can be shoved into Base64 or MIME attachments. What happens when we need to apply our XML security specs on top of them? Oops -- enter MTOM. Today, if I want to secure non-XML data within an XML-based security network, I have many layers of inert redundancy and complexity.
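The cost of the Base64 workaround is easy to see: shoving binary data into a text element inflates it by roughly a third before any security layer even touches it.

```python
import base64

binary = bytes(range(256)) * 4        # 1024 bytes of arbitrary binary data
encoded = base64.b64encode(binary)    # what would sit inside the XML element

print(len(binary), len(encoded))      # 1024 -> 1368: roughly 4/3 the size
assert base64.b64decode(encoded) == binary  # round-trips, but at a cost
```

Every intermediary that wants the original bytes then has to decode again, which is part of why MTOM was invented to move the raw binary back out of the XML.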

The XML protocols have learned, slowly, that they need to play nicely with others, lest they remain a complex island unto themselves. These specs have proven useful behind firewalls; they have had some, but limited, success outside the firewall.

The challenge I see is that because the XML protocols such as WS-* don't treat resource URIs, non-XML media types, or HTTP's semantics with much respect, they risk becoming yet another legacy technology before their prime -- one that hinders businesses from consuming the new, cool, cost-reducing and revenue-generating webby things on the horizon.

Does WS-* help the new systems built with AJAX/Comet, mashups, wikis, blogs, microformats, tags, etc.? Not really. Yet that's where all the excitement in the consumer space is, and where new leaps of productivity in development are emerging. It would be great if we could salvage things from the XML camp for this realm -- SAML, for example, seems to be one that could thrive, given its browser profiles. But new specs will be created to work in and extend the webby world, and they'll overlap with WS-*. It looks more and more like we are going to have two incompatible world-views and protocol pillars.

About this Archive

This page is an archive of entries from December 2006, listed from newest to oldest.


About Me
(C) 2003-2010 Stuart Charlton


Disclaimer: All opinions expressed in this blog are my own, and are not necessarily shared by my employer or any other organization I am affiliated with.