A motivating paper for an OOPSLA 2007 Tutorial
The dream of a pervasive, utopian "Distributed Object" platform built atop the Web has been pursued for almost as long as the Web has existed. At one time or another, industry players such as Netscape, the Object Management Group, Apple, and Sun Microsystems not only talked about the need for this platform, but committed millions of dollars to deploying software and standards which attempted to realize it. More recently, another attempt has been made under the name "Web services": although it has avoided associating itself with the term "distributed objects" because of the failures of its predecessors, the architectural similarity of Web services to distributed objects is unmistakable.
Not everyone was convinced that this platform needed building, though. Or, more precisely, some believed it had already been built. In 1997, Dan Connolly of the W3C wrote an editorial for Web Apps Magazine [2] which began:
I must take this opportunity to dispel a myth that is all too pervasive in the scientific and product literature surrounding the Web: that distributed objects are something that can be, or some day will be, added to the Web. Distributed objects are the very heart of the Web, and have been since its invention.
So what's going on here exactly? Before we attempt to answer that question, some context is in order.
The study of the architecture of software systems has been a reasonably well-understood field since at least Perry and Wolf's "Foundations for the Study of Software Architecture" paper [3], published in 1992. Refinements by many of their contemporaries, such as Shaw, Kazman, Clements, Fielding, and others, improved upon the model, but the fundamentals remained: architectural properties are realized by the application of constraints on the relationships between components, connectors, and data. The architectural properties we need - sometimes affectionately referred to as the "ilities" (e.g. scalability, reliability, etc.) - are determined by our requirements, including those of the environment that our system will inhabit. So, for example, if we're building a system to be used by millions of users, we require scalability, and therefore architectural constraints which induce a sufficient degree of it.
Unfortunately, this model of reasoning about and evaluating software hasn't been used nearly as much as it probably should have been, both in the large - for distributed systems frameworks themselves, such as CORBA or Web services - and in the small - for enterprise systems built atop those frameworks.
This behavior is arguably due to the influence of other architectural professions, including general systems architecture and civil architecture, which focus on the architect as a "client's advocate": one who works jointly with the client and builder on problem and solution definition. In this view, reasoning about and evaluating software from a structuralist perspective forces clients to adopt an unfamiliar language and to trust unfamiliar analyses, and is disturbingly independent of a client's purpose. In the words of Mark Maier and Eberhardt Rechtin [4], "The development of requirements is part of architecting, not part of its preconditions".
In many contexts, however, this assumption is inaccurate. There are many cases - such as an enterprise's collection of information systems, the Internet, or electrical power systems - where the overall system is not under central control; and even where it is, as in enterprise IT, there are active conflicts of interest and disagreements among the stakeholders. Continuing with Maier & Rechtin: "These systems are all collaborative in the sense that they are assembled and operate through the voluntary choices of the participants, not the dictates of an individual client. These systems are built and operated only through a collaborative process."
Thus, structural approaches to architectural evaluation are highly important in network-based software architectures, where there is no clearly defined client, and the architecture is attempting to introduce emergent properties, independent of the functions that comprise the components of the system.
With this in mind, it is possible to examine technology systems and frameworks with this model retroactively. What we discover when we do so is that frameworks like CORBA and Web services are largely unconstrained, and therefore largely devoid of desirable emergent architectural properties. This means that it's almost entirely up to the users of those frameworks to provide the properties they need through the application of architectural constraints: a difficult task for all but the most expert architects. Is it any wonder that success with CORBA - and now, Web services - has been largely hit and miss?
We suggest that the Web itself is a distributed object framework, like CORBA, but one which incorporates key architectural constraints that make it viable at very large scale, including between untrusted parties. Those constraints - as documented in Roy Fielding's dissertation on REST [5] (which used to be called the "HTTP Object Model") - are primarily: uniform interface, resource identification, and hypermedia as the engine of application state.
The resource identification and uniform interface constraints are what give the Web most of its object-like nature. Resource identification requires that every thing (dare we say "object"?) in the system be given a standard identifier (a URI); in a CRM application, for example, one might have resources such as individual customers, contacts, calls, and leads, each with its own identifier. The uniform interface constraint then requires that these resources all be accessible through the same application interface. This might at first sound unintuitive, but consider that in some object oriented programming languages, objects derive from a common "root" object which has its own interface. For example, in Java, java.lang.Object has operations which all objects implement, such as toString(). On the Web, the uniform interface manifests itself in the HTTP application protocol, and the GET operation is actually quite similar to toString() in the sense that it asks the resource to return something which represents itself. More precisely, GET requests that the resource provide a representation of its current state. Similarly, an HTTP PUT request asks the identified resource to set its state to the state represented in the body of the request message.
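To make the analogy concrete, here is a minimal sketch in Java (the language of the toString() analogy above) showing both operations applied to a single resource. The host crm.example.org, the URI paths, and the representation format are invented for illustration; any HTTP resource would answer the same interface.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class UniformInterfaceSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical customer resource; host, path, and
            // representation format are invented for illustration.
            URL customer = new URL("http://crm.example.org/customers/42");

            // GET: ask the resource for a representation of its current
            // state, much as toString() asks a Java object to describe itself.
            HttpURLConnection get = (HttpURLConnection) customer.openConnection();
            get.setRequestMethod("GET");
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(get.getInputStream()));
            for (String line; (line = in.readLine()) != null; ) {
                System.out.println(line);
            }
            in.close();

            // PUT: ask the same resource to replace its state with the
            // state represented in the request body.
            HttpURLConnection put = (HttpURLConnection) customer.openConnection();
            put.setRequestMethod("PUT");
            put.setDoOutput(true);
            put.setRequestProperty("Content-Type", "text/plain");
            OutputStream out = put.getOutputStream();
            out.write("name=Ada Lovelace&status=active".getBytes("UTF-8"));
            out.close();
            System.out.println("PUT response: " + put.getResponseCode());
        }
    }

Note that nothing in this client is specific to customers: contacts, calls, and leads would be manipulated through exactly the same handful of methods, which is the point of the uniform interface.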
The "hypermedia as the engine of application state" constraint is simply a very long-winded way of requiring that applications make progress via links (and associated information) provided the server. So when our CRM system records a call for a customer, a link is also formed between the call and the customer such that a representation of the state of the customer includes a direct link to the call resource within that representation. And perhaps even vice-versa. "Forms" are another form of hypermedia as they use not only a link, but additional information which the client can use to construct a follow-on request. For example, an HTML GET form instructs clients how to populate query parameters on a URI, while an HTML POST form describes the kind of content that can be posted to the URI. In all cases, the client takes action based on information provided by the server, so one of the things that this constraint prevents us from doing is, for example, to take a customer id (not a URI) and turning that into a link ourselves, as doing so reduces coupling, enabling client and server implementations to evolve independently.
The RESTful approach is the beginning of a new breed of network-based systems architectures whose goal is to enable the serendipitous sharing of information in the face of an evolving world with many organizational and trust boundaries. As we apply this approach in more situations, traditional distributed systems considerations will increasingly be incorporated into, and standardized with, the Web. Many promising standards already exist and are evolving, from OpenID for digital identity, to the Atom Publishing Protocol for editing, to S/MIME for message integrity and confidentiality. Work remains to be done, but practical solutions already exist for many problems, and an understanding of how to design applications in a "resource-oriented" way continues to spread.
REST is clearly not the "end of history" with regard to network-based software architecture. At smaller scales, with somewhat different desirable properties, other styles, such as Remote Data Access or Event-Based Integration, continue to solve important organizational and technological challenges. Beyond the current iteration of the Web, new requirements will demand a re-evaluation of REST's constraints and the properties they induce. This may lead to relaxing some of REST's constraints, or to the introduction of new ones [6]. Whatever direction this takes, a continued means to successful systems architecture will be the discipline of objectively evaluating constraints, and the properties they induce, for the new generation of global-scale, network-based software systems.
[2] http://www.w3.org/People/Connolly/9703-web-apps-essay.html
[3] http://users.ece.utexas.edu/%7Eperry/work/papers/swa-sen.pdf
[4] Maier and Rechtin. The Art of Systems Architecting, Second Edition. CRC Press, 2000. ISBN 0849304407.
[5] http://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm
[6] http://www.ics.uci.edu/~rohit/ARRESTED-ICSE.pdf