Following the latest REST security discussions from Gunnar Peterson, Pete Lacey, and Tim Bray.
Gunnar is set on convincing us that message-layer security is superior to transport-layer security, as in his Stephenson analogy:
Transport level security assumes good security on both endpoints in a point to point scenario and everything beyond those endpoints within the transaction span. Message level security lets the message traverse numerous business, organizational, and technical boundaries, with a modicum of security intact.
I think transport vs. message level security is a false dichotomy.
SSL/TLS traverses numerous business (network provider), organizational, and technical (multiple network technologies) boundaries, and keeps security relatively intact. It's all next-hop routing -- when I send a message out, all I know is my default gateway; beyond that, it's up to the infrastructure to figure out the path. Do I trust all of those network providers with my private information? Nay!
We also often tunnel over application-layer protocols when it suits our needs, such as how SOAP tunnels over HTTP. Both TLS and HTTP 1.1 are independent of the underlying transport; they just expect a reliable lower layer, and that layer might itself be a tunnel. I don't believe anything prevents someone from building the TLS record & handshake protocols on top of an application-layer internetwork, such as a chain of HTTP proxies / intermediaries. HTTP arguably already does this, to some degree, with HTTP CONNECT. Yes, I'm aware there are risks with HTTP CONNECT, but preventative measures are known.
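To make the layering concrete, here is a minimal sketch of the CONNECT exchange a client sends an HTTP proxy to open an opaque tunnel; once the proxy replies "200 Connection Established", the client can run the TLS handshake over that byte stream, since TLS only needs a reliable pipe, not a particular transport. The function name and the host names are illustrative, not from any spec.

```python
def connect_request(host: str, port: int) -> bytes:
    """Build the CONNECT request that asks an HTTP proxy for a raw tunnel."""
    return (
        f"CONNECT {host}:{port} HTTP/1.1\r\n"
        f"Host: {host}:{port}\r\n"
        f"\r\n"
    ).encode("ascii")

# In practice Python's standard library does this for you:
#   import http.client
#   conn = http.client.HTTPSConnection("proxy.example", 3128)
#   conn.set_tunnel("origin.example", 443)  # sends CONNECT, then wraps TLS
print(connect_request("origin.example", 443).decode("ascii"))
```

Everything after the blank line that ends the CONNECT headers is opaque to the proxy -- which is exactly why TLS can ride on top of it.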
I do, however, agree that HTTP is missing at least two key security features:
A multi-party secure conversation protocol, vs. SSL's two-party model. This would enable a client to "pick and choose" which intermediaries can be trusted for any particular representation transfer.
An extended HTTP Digest authentication protocol with the ability to sign headers and/or the whole representation (similar to AWS's approach). Signatures can verify the integrity of a representation AND keep it visible to intermediaries; SSL/TLS can't do that.
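The signing idea above can be sketched with stock HMAC primitives. This is not AWS's actual algorithm -- the canonicalization scheme and the `sign_request` name are assumptions for illustration -- but it shows the key property: any intermediary can read the message, yet any tampering with the signed headers or body invalidates the signature.

```python
import base64
import hashlib
import hmac


def sign_request(secret: bytes, method: str, path: str,
                 headers: dict, body: bytes) -> str:
    """Sign a request over method, path, headers, and a body digest.

    Illustrative canonicalization: lowercase header names, sorted,
    one per line, followed by a SHA-256 hex digest of the body.
    """
    canonical_headers = "".join(
        f"{name.lower()}:{headers[name].strip()}\n"
        for name in sorted(headers, key=str.lower)
    )
    string_to_sign = (
        f"{method}\n{path}\n{canonical_headers}"
        f"{hashlib.sha256(body).hexdigest()}"
    )
    mac = hmac.new(secret, string_to_sign.encode("utf-8"), hashlib.sha256)
    # The base64 MAC would travel in an Authorization header,
    # leaving the representation itself in the clear for intermediaries.
    return base64.b64encode(mac.digest()).decode("ascii")
```

A proxy can cache or inspect the body freely; only the two parties holding the shared secret can produce or verify the signature.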
The question is -- why did we need to build these in a SOAP/XML stack that broke the semantics of HTTP and treated all other forms of data as second-class citizens?
I don't think XML is the centre of the web universe -- JSON is catching on like wildfire, binary media types continue to grow in variety, etc. For some reason, people thought that all businesses want is text data -- the binary stuff can be shoved into Base64 or MIME attachments. What happens when we need to apply our XML security specs on top of them? Oops! -- enter MTOM. Today, if I want to secure non-XML data within an XML-based security stack, I have to carry many layers of inert redundancy and complexity.
The XML protocols have learned, slowly, that they need to play nicely with others, lest they remain a complex island to themselves. These specs have proven to be useful behind firewalls. They have had some, but limited, success outside of the firewall.
The challenge I see is that because the XML protocols such as WS-* don't treat resource URIs, non-XML media types, or HTTP's semantics with great respect, they risk becoming yet another legacy technology before their prime. One that hinders businesses from consuming the new, cool, cost-reducing and revenue-opportunity webby things on the horizon.
Does WS-* help the new systems built with AJAX/Comet, mashups, wikis, blogs, microformats, tags, etc.? Not really. Yet that's where all the excitement in the consumer space is, and where new leaps of productivity in development are emerging. It would be great if we could salvage things from the XML camp for this realm -- SAML, for example, seems to be one that could thrive, given the Browser profiles. But new specs will be created to work in and extend the webby world, and they'll overlap with WS-*. It looks more & more like we are going to have two incompatible world-views and protocol pillars.