Benjamin Carlyle has been a very insightful blogger on topics such as REST, SOA, the semantic web, etc. His recent post on comparing SOA vs. REST best practices is an enlightening one, but I do find myself in slight disagreement.
Firstly, I don't think Ashwin Rao's list of SOA best practices, which is Ben's starting point, is at all what people tend to think of as core SOA best practices.
- Coarse-grained services - This one is a common SOA practice, but it's pretty vague. Thinking in terms of resources absolutely helps here. It implies that services cannot be identified solely as activities in a process model: they may be shared, and they decompose into further activities with greater uniformity than the process model requires.
- Mostly asynchronous interactions - I don't believe this is mainstream opinion. I believe the starting assumption is that there are many kinds of message exchange patterns (MEPs), with synchronous being the most prevalent. Asynchronous interactions are necessary at times, but should be constrained to well-understood asynchronous MEPs, like publish/subscribe, or parallel fan-out + join.
One of the legitimate complaints against HTTP-uber-alles is that there are real business challenges that HTTP cannot meet today without extension, but that also don't really require HTTP's global scalability. Internet-scale pub/sub, for example, has seen some interesting experiments, but nothing HTTP-based has really caught on. Perhaps waka will some day solve this globally, but for now, many need local solutions -- either HTTP protocol extensions or non-HTTP protocols (like IM protocols, or WS-* protocols, etc.)
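To make the fan-out + join MEP concrete, here's a minimal Python sketch. The "services" are stand-in callables rather than real network endpoints, so all names here are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def fan_out_join(request, services):
    """Fan one request out to several services in parallel, then join the replies."""
    with ThreadPoolExecutor(max_workers=len(services)) as pool:
        futures = [pool.submit(svc, request) for svc in services]
        # Join step: block until every branch has replied, preserving order.
        return [f.result() for f in futures]

# Hypothetical 'services': each prices the same request independently.
quotes = fan_out_join(100, [lambda r: r + 10, lambda r: r + 20, lambda r: r - 5])
```

The point of constraining yourself to a pattern like this, rather than ad-hoc callbacks, is that the join semantics (wait for all branches, surface failures) are well understood and easy to reason about.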
- Conversational services - This one floored me. I've never, ever seen conversational as a best practice for SOA. SOA infrastructure is stateless, state is managed when necessary at the edge (the application).
- Reliable messaging - While it should be possible to have a variety of QoS, they are subject to huge performance and scalability trade-offs. It's a very dangerous "best practice" to mandate a default as expensive as this.
- Orchestrated - Agree with Benjamin on this. And why not choreographed services? In the case of orchestration, all one really needs is a programming language. Choreography may prove, long-term, to be more important, as it describes dynamic expectations.
- Registered & discovered - Yes, interfaces need to be registered and discovered, but this is kind of Services 101. A best practice would be that interfaces should be "governed" - that is, constrained to ensure enhanced communication across more than one agency.
Ben proceeds to make further observations on how SOA and REST are at odds. I do not believe this growing rift between services-orientation and resource-orientation does anyone justice. It belittles the work of SOA practitioners to date, and IMHO further isolates REST arguments from the mainstream, which is already problematic given the "hacker" vs. "enterprisey" thread that's common to many of these debates.
First I'll note that SOA != web services. It's an architectural style, and I believe it is a less constrained version of REST: it loosens the uniform interface constraint into what I call "governed interfaces". It does not mandate universal identifiers for relevant data resources, or universal operations on those resources. REST requires these idealized constraints for a globally scalable SOA. But one cannot always follow these constraints, for a variety of reasons -- sometimes political, social, or even technical.
Ignoring REST does not mean one will fail in generating a scalable, profitable, successful networked application (witness instant messaging applications, World of Warcraft, etc). It means there are certain tradeoffs one must recognize when picking architectural styles, which will inhibit your ability to communicate at scale (where scale implies something beyond mere throughput: social scale, and tolerance for diversity and heterogeneity).
Now, onto the comments:
- "SOA seems to be fundamentally about a message bus" - only if you're talking to a vendor that sells one. ;-) The difference between an ESB and an EAI broker, to me, is that an ESB is a decentralized tool for mediation. It's not at the centre. It's not required. I think it's needed as a way to mediate between non-uniform and uniform interfaces, and to transform between well-understood representations and private ones. Maybe it should perform some monitoring & version management too. But you could do without one if you're so inclined.
- "Tim Bray is wrong when he talks about HTTP Verbs being a red herring." - I think Tim's half right, but perhaps there's a different interpretation of context. I think his comment is that the problem with REST, as implemented in HTTP today, is that it distinguishes between the semantics of verbs in very subtle and technical ways - idempotent update vs. non-idempotent update, side-effect-free vs. state changing, etc. It can get rather silly to argue about these differences beyond GET and POST because PUT and DELETE are used so rarely in practice. On the other hand, the HTTP POST operation is way too overloaded - in the past, Roy Fielding has even commented on the REST mailing list on the downsides of POST.
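The idempotence distinction is easy to show with a toy in-memory resource store (a Python sketch, not a real HTTP stack -- the URIs and representations are illustrative):

```python
import itertools

# A toy resource store illustrating the verb semantics at issue.
store = {}
_ids = itertools.count(1)

def put(uri, representation):
    """Idempotent: repeating the same request leaves the store in the same state."""
    store[uri] = representation
    return uri

def post(collection_uri, representation):
    """Non-idempotent: each request creates a new subordinate resource."""
    uri = f"{collection_uri}/{next(_ids)}"
    store[uri] = representation
    return uri

put("/orders/42", {"status": "paid"})
put("/orders/42", {"status": "paid"})   # repeated PUT: still exactly one resource
post("/orders", {"status": "new"})      # creates /orders/1
post("/orders", {"status": "new"})      # creates /orders/2 -- a second resource
```

The subtlety Tim is pointing at is that this distinction, while technically real (a proxy may safely retry a PUT but not a POST), rarely carries any weight in how people actually build applications.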
- SOA has no equivalent concept [to REST verbs] - I disagree, and think this is a crucial point. SOA is all about governing your network operations into as-uniform-as-possible interfaces, but recognizes that uniformity is often impossible when specifying operations with social, political or business value.
Now, this implies a looser constraint than total uniformity of identifiers and operations, but it's still a constraint. I think some may not like it because governed interfaces are subject to situation, business, politics, etc., and don't have the binary nature of Roy's architectural style constraints (either you ARE client/server or you aren't; you HAVE uniform interfaces, or you don't).
The reality is that there really is no such thing as total uniformity. You may be able to get uniformity in HTTP, but that's because the semantics of the operations have no business or political value. They have economic value only in terms of the network effects they promote -- there's no reason not to make them uniform, as one can't glean any competitive advantage from playing a competing standards game.
Standardization, governed interfaces, and uniformity of interface are all cases of Adam Smith's increasing returns through specialization: you constrain the inputs & outputs of a task to enable innovation & freedom in how the work gets done. This can be done in the small (within a firm) or in the large (a marketplace). One can have a universal uniform standard, or sets of competing standards.
Uniformity is a constraining ideal. The goal (and paradox) of protocol-driven control is this: to enable greater freedom of action in what can participate in a decentralized system, you must have universal agreements of interaction. One should realize that without universal acceptance of an operation and its semantics, you're compromising your ability to communicate and scale. In economic terms, the more universally accepted your interface, the bigger the potential for a market to form around it. Which sounds rather obvious, and to some, even undesirable.
- "It seems to accept that there will be a linear growth in the set of WSDL files in line with the size of the network, cancelling the value of participation down to at best a linear curve." - That seems to be the naive approach to integration that many web services adherents are taking. But it isn't SOA.
The two litmus tests I apply as to whether something is services-oriented are: are the operations aligned to something relevant in the domain?, and are the operations constrained & governed to maintain relevance across more than one agency?
Even old crusty component software had this concept - Microsoft COM specified a slew of well-known interfaces in a variety of technical domains that you had to implement to be useful to others. SOA requires governed, constrained, relevant operations. But making things constrained and governed, and determining relevance, requires political work, even if it's technical in nature. It's often in the eye of the beholder. It's painful, it's costly, and it doesn't always work as intended. HTTP managed to pull it off because no one was looking. And it specifies a very technical domain (hypermedia document transfer).
On the other hand, the astounding results of the web show the power of uniformity, even with technical semantics for the operations. I think the big design lesson that WS-* folks need to learn from REST is something like this: one generally shouldn't bother to constrain operations with social or business value unless you have lots of capital to throw at the problem. There is plenty of low-hanging fruit in technical domains where one can strive for common ground, even if the operations are not relevant to the higher level domains of the actual business problem. And HTTP has already done this for many use cases! Sure, SOA requires services to expose operations with business relevance, but likely these should remain as logical operations that can be decomposed into something more primitive and uniform, like a choreography of uniform operations on resources. Or else you're just going to re-invent, with questionable results, what others have already done.
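One way to picture "logical operations decomposed into uniform operations" is a sketch like the following Python, where a business-relevant transfer never appears as a verb on the wire -- only GETs and PUTs on identified account resources do (all names here are hypothetical):

```python
class ResourceClient:
    """Stand-in for a uniform-interface client: only GET and PUT on URIs."""
    def __init__(self):
        self.resources = {}
    def get(self, uri):
        return self.resources.get(uri)
    def put(self, uri, state):
        self.resources[uri] = state

def transfer(client, src, dst, amount):
    """A logical, business-relevant operation, decomposed into a small
    choreography of uniform operations on resources."""
    src_state = client.get(src)
    dst_state = client.get(dst)
    client.put(src, {"balance": src_state["balance"] - amount})
    client.put(dst, {"balance": dst_state["balance"] + amount})

client = ResourceClient()
client.put("/accounts/a", {"balance": 100})
client.put("/accounts/b", {"balance": 50})
transfer(client, "/accounts/a", "/accounts/b", 30)
```

This glosses over the genuinely hard parts (atomicity and failure across the two PUTs), but it shows the shape of the decomposition: the business semantics live above the uniform interface, not in it.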
Having said this, even today people disagree on the utility of the distinctions between HTTP operations such as PUT and DELETE, and they're not all proven in practice. But it worked pretty well for GET & POST.
I'll end this point with a rather obvious note, but one that needs emphasis: technologically speaking, REST has not solved the bigger challenge -- which is not in uniform operations, it's in resource description & interpreting content types. This is an area where neither the REST community nor the SOA community has good answers -- though the semantic web community is trying hard to solve this one. And, though I digress, while the current fashion here is for search terms, probabilistic results, folksonomy, and truth driven by reputation, there are many old lessons to remember in this area that I fear are being tossed aside in a manner similar to the WS-* vs. REST debate.
- Object-Orientation is known to work within a single version of a design controlled by a single agency, but across versions and across agencies it quickly breaks down. - This is contrary to my understanding of the design principles of the web. The web is absolutely object-oriented, with resources being the new term for objects. REST is merely calling for the old recognition that one requires uniform protocols to enable your objects to work well with others. In some languages, like Smalltalk, this was by convention. In others it was an abstract base class or an interface. New programmers rarely understood the value of this, and it's similar to what we're seeing with naive web services implementations with WSDL - a blooming of interfaces, without any real focus on enhancing broad, interoperable, networked communication. Experienced programmers knew how to use protocols & interfaces to make their systems constrained and useful across many contexts and agencies. They're called class libraries! But I'll observe again the same thing as I do with HTTP verbs: the distinction between operations and their semantics was technical in nature. If I create a common set of operations for a List, it has no political or business value, so there is no competition to make it uniform.
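Python's standard library still makes this class-library point nicely: implement the two required operations of the uniform `Sequence` protocol from `collections.abc`, and the rest of the List-like interface (membership tests, iteration, `index`, `count`) comes along for free.

```python
from collections.abc import Sequence

class Playlist(Sequence):
    """By implementing only the two required operations of the uniform
    Sequence protocol, we inherit 'in', iteration, index(), count(), etc."""
    def __init__(self, tracks):
        self._tracks = list(tracks)
    def __getitem__(self, i):
        return self._tracks[i]
    def __len__(self):
        return len(self._tracks)

p = Playlist(["a", "b", "a"])
```

Nobody fought over what a List's interface should be, because there was nothing commercially at stake -- which is exactly why it could become uniform.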
From another perspective, I highly suggest a look at Part IV of Eric Evans' book Domain Driven Design. He introduces a number of patterns that effectively conclude that at sufficiently large scale, any object-oriented system must become services-oriented. Domain models must be bounded within a particular context, communicate through a published language (representation or content-type), and maintain some uniformity through a ubiquitous language (uniform interface). Sounds a lot like REST to me.
In summary, SOA is not new, and many of the architectural constraints of REST are not particularly new -- Roy's genius was in the synthesis of them. SOA is evolving from two sources: a loose community of practitioners of networked business applications and a set of vendors pushing a new wave of infrastructure and applications. There's bound to be conflict and vagueness there, given the variety of backgrounds and vested interests. This was a similar case with OO back in the early 90's, but it seemed to survive.
My point is that uniformity, as REST's most important constraint, can be seen, in glimpses, throughout the history of networking software. But as an industry, the users of these technologies often don't grasp all of the implications and insights in their tools. Sometimes these technologies made poor architectural choices for expedience, which makes them unscalable or unwieldy. We often forget the lessons of our ancestors and have to fail several times before remembering them.
REST vs. WS-* seems to be another stab at this, and I hope the WS-* community eventually learns the lessons that REST embodies.
Beyond this, I hope the REST community learns the lesson that many in the SOA community take for granted: uniformity, as a technological constraint, is only possible in the context of social, political, and economic circumstances. It's an ideal that so far is achievable only in technological domains. HTTP, while applicable to a broad set of use cases, does not cover a significant number of other use cases that are critical to businesses. And HTTP over-relies on POST, which really pushes operational semantics into the content type, a requirement that is not in anybody's interest. In practice, we must identify business-relevant operations, constrain and govern interfaces to the extent that's possible in the current business, industry, and social circumstances, and attempt to map them to governed operations & identifiers -- whatever identifiers & operations are "standard enough" for your intended audience and scale.