Benjamin Carlyle has been a very insightful blogger on topics such as REST, SOA, the semantic web, etc. His recent post comparing SOA and REST best practices is an enlightening one, but I do find myself in slight disagreement.
Firstly, I don't think Ashwin Rao's list of SOA best practices, which is Ben's starting point, is at all what people tend to think of as core SOA best practices.
One of the legitimate complaints against HTTP-uber-alles is that there are real business challenges that HTTP cannot meet today without extension, but that also don't really require HTTP's global scalability. Internet-scale pub/sub, for example, has seen some interesting experiments, but nothing HTTP-based has really caught on. Perhaps waka will some day solve this globally, but for now, many need local solutions -- either HTTP protocol extensions or non-HTTP protocols (like IM protocols, or WS-* protocols, etc.)
Ben proceeds to make further observations on how SOA and REST are at odds. I do not believe this growing rift between services-orientation and resource-orientation does anyone justice. It belittles the work of SOA practitioners to date, and IMHO further isolates REST arguments from the mainstream, which is already problematic given the "hacker" vs. "enterprisey" thread that runs through many of these debates.
First I'll note that SOA != web services. It's an architectural style, and I believe it is a less constrained version of REST: it loosens the uniform interface constraint into what I call "governed interfaces". It does not mandate universal identifiers for relevant data resources, or universal operations on those resources. REST requires these idealized constraints for a globally scalable SOA. But one cannot always follow these constraints, for reasons that are sometimes political, sometimes social, and sometimes technical.
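To make the contrast concrete, here is a hypothetical sketch in Python (the class and method names are invented for illustration, not drawn from any real API): a uniform interface fixes the same small set of operations for every resource, while a governed interface permits domain-specific operations whose contracts are merely agreed upon across the participating agencies.

```python
# Illustrative only -- all names here are invented for the sketch.

# REST's uniform interface: every resource answers the same small set of
# operations; variability lives in identifiers and representations.
class Resource:
    def get(self): ...
    def put(self, representation): ...
    def delete(self): ...

# A "governed" interface: operations are domain-specific, but their names,
# inputs, and outputs are agreed upon (governed) across the agencies that
# participate, rather than being universally uniform.
class PurchaseOrderService:
    def submit_order(self, order): ...
    def cancel_order(self, order_id): ...
    def query_status(self, order_id): ...
```

The governance lives outside the code: some body of people has to agree on what `submit_order` means and keep that agreement relevant over time.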
Ignoring REST does not mean one will fail in generating a scalable, profitable, successful networked application (witness instant messaging applications, World of Warcraft, etc). It means there are certain tradeoffs one must recognize when picking architectural styles, tradeoffs that will inhibit your ability to communicate at scale (where scale implies something beyond mere throughput: social scale, and tolerance for diversity and heterogeneity).
Now, onto the comments:
This implies a looser constraint than total uniformity of identifiers and operations, but it's still a constraint. I think some may not like it because governed interfaces are subject to situation, business, politics, etc., and don't have the binary nature of Roy's architectural style constraints (either you ARE client/server or you aren't; either you HAVE a uniform interface or you don't).
The reality is that there really is no such thing as total uniformity. You may be able to get uniformity in HTTP, but that's because the semantics of the operations have no business or political value. They have economic value only in terms of the network effects they promote -- there's no reason not to make them uniform, as one can't glean any competitive advantage from playing a competing-standards game.
Standardization, governed interfaces, and uniformity of interface are all cases of Adam Smith's increasing returns through specialization: you constrain the inputs & outputs of a task to enable innovation & freedom in how the work gets done. This can be done in the small (within a firm) or in the large (a marketplace). One can have a universal uniform standard, or sets of competing standards.
Uniformity is a constraining ideal. The goal (and paradox) of protocol-driven control is this: to enable greater freedom of action in what can participate in a decentralized system, you must have universal agreements of interaction. One should realize that without universal acceptance of an operation and its semantics, you're compromising your ability to communicate and scale. In economic terms, the more universally accepted your interface, the bigger the potential for a market to form around it. Which sounds rather obvious, and to some, even undesirable.
The two litmus tests I apply as to whether something is services-oriented are: are the operations aligned to something relevant in the domain, and are the operations constrained & governed to maintain relevance across more than one agency?
Even old crusty component software had this concept -- Microsoft COM specified a slew of well-known interfaces in a variety of technical domains that you had to implement to be useful to others. SOA requires governed, constrained, relevant operations. But making things constrained and governed, and determining relevance, requires political work, even if it's technical in nature. It's often in the eye of the beholder. It's painful, it's costly, and it doesn't always work as intended. HTTP managed to pull it off because no one was looking. And it specifies a very technical domain (hypermedia document transfer).
On the other hand, the astounding results of the web show the power of uniformity, even with technical semantics for the operations. I think the big design lesson that WS-* folks need to learn from REST is something like this: one generally shouldn't bother to constrain operations with social or business value unless you have lots of capital to throw at the problem. There is plenty of low-hanging fruit in technical domains where one can strive for common ground, even if the operations are not relevant to the higher-level domains of the actual business problem. And HTTP has already done this for many use cases! Sure, SOA requires services to expose operations with business relevance, but these should likely remain logical operations that can be decomposed into something more primitive and uniform, like a choreography of uniform operations on resources. Or else you're just going to re-invent, with questionable results, what others have already done.
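The decomposition described above can be sketched in a few lines. This is a minimal, hypothetical illustration, with an in-memory dict standing in for an HTTP server and the "register a customer" operation invented for the example: a business-relevant logical operation is decomposed into a choreography of uniform PUT operations on identified resources.

```python
# An in-memory stand-in for an HTTP server: identifiers -> representations.
store = {}

def put(uri, representation):
    """Uniform operation: store a representation under an identifier."""
    store[uri] = representation

def get(uri):
    """Uniform operation: retrieve the representation for an identifier."""
    return store.get(uri)

def register_customer(customer_id, name, address):
    """A logical, business-relevant operation, decomposed into uniform PUTs
    on resources with their own identifiers."""
    put(f"/customers/{customer_id}", {"name": name})
    put(f"/customers/{customer_id}/address", {"address": address})

register_customer("42", "Alice", "1 Main St")
```

The business vocabulary (`register_customer`) survives as a logical operation, but everything on the wire is uniform, so any intermediary that understands PUT can cache, route, or replicate it without knowing what a "customer" is.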
Having said this, even today people disagree on the utility of the distinctions between HTTP operations such as PUT and DELETE, and they're not all proven in practice. But it worked pretty well for GET & POST.
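One concrete way to see the distinction the debate turns on: PUT and DELETE are idempotent (repeating them leaves the same state), while POST is not. A small hypothetical sketch, where the dict-and-list "server" is purely illustrative:

```python
# Illustrative stand-ins for server-side state.
resources = {}   # state manipulated by PUT/DELETE
log = []         # state appended to by POST

def put(uri, representation):
    resources[uri] = representation   # repeating gives the same state: idempotent

def delete(uri):
    resources.pop(uri, None)          # repeating gives the same state: idempotent

def post(uri, entry):
    log.append((uri, entry))          # repeating appends again: NOT idempotent

# A retried PUT is harmless; a retried POST is a duplicate.
put("/a", 1); put("/a", 1)
post("/b", "x"); post("/b", "x")
```

An intermediary can safely retry a failed PUT or DELETE; with POST it cannot, which is part of why the distinctions matter even where practice has lagged.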
I'll end this point with a rather obvious note, but one that needs emphasis: technologically speaking, REST has not solved the bigger challenge -- which is not in uniform operations, it's in resource description and interpreting content types. This is an area where neither the REST community nor the SOA community has good answers -- though the semantic web community is trying hard to solve this one. And, though I digress, while the current fashion here is for search terms, probabilistic results, folksonomy, and truth driven by reputation, there are many old lessons to remember in this area that I fear are being tossed aside in a manner similar to the WS-* vs. REST debate.
From another perspective, I highly suggest a look at Part IV of Eric Evans' book Domain Driven Design. He introduces a number of patterns that effectively conclude that at sufficiently large scale, any object-oriented system must become services-oriented. Domain models must be bounded within a particular context, communicate through a published language (representation or content-type), and maintain some uniformity through a ubiquitous language (uniform interface). Sounds a lot like REST to me.
In summary, SOA is not new, and many of the architectural constraints of REST are not particularly new -- Roy's genius was in the synthesis of them. SOA is evolving from two sources: a loose community of practitioners of networked business applications, and a set of vendors pushing a new wave of infrastructure and applications. There's bound to be conflict and vagueness there, given the variety of backgrounds and vested interests. The same was true of OO back in the early 90's, but it seemed to survive.
My point is that uniformity, as REST's most important constraint, can be seen, in glimpses, throughout the history of networking software. But as an industry, the users of these technologies often don't grasp all of the implications and insights in their tools. Sometimes these technologies made poor architectural choices for expedience, which made them unscalable or unwieldy. We often forget the lessons of our ancestors and have to fail several times before remembering them.
REST vs. WS-* seems to be another stab at this, and I hope the WS-* community eventually learns the lessons that REST embodies.
Beyond this, I hope the REST community learns the lesson that many in the SOA community take for granted: uniformity, as a technological constraint, is only possible in the context of social, political, and economic circumstances. It's an ideal that so far is achievable only in technological domains. HTTP, while applicable to a broad set of use cases, does not cover a significant number of other use cases that are critical to businesses. And HTTP over-relies on POST, which really pushes operational semantics into the content type, a requirement that is not in anybody's interest. In practice, we must identify business-relevant operations, constrain and govern interfaces to the extent possible in the current business, industry, and social circumstances, and attempt to map them to governed operations & identifiers -- whatever identifiers & operations are "standard enough" for your intended audience and scale.