January 2007 Archives
This originally was a comment on TechDirt. I've edited it and provided more context.
The Apple iPhone is causing a lot of buzz and seems to have garnered the world record for "quickest digerati backlash" in recent history, perhaps ever. On Tuesday, the tech gadget world was wrapped in a collective orgasm: Steve buzz-jammed CES, the iPhone was everything one hoped for and more, and the iPhone appeared on the front pages of several national dailies.
Then the harsh reality set in by week's end: it's a controlled platform, it's on Cingular, there's no tactile keyboard, "where's my 3G?", and Apple is going to be fighting the first of many intellectual property battles -- currently with Cisco over trademarks, and eventually over the large portfolio of patents it has amassed to protect the iPhone from poachers.
One concern is the closed nature of the iPhone platform, particularly its patents. Should Apple rely on these, or should it avoid patents and protection and just "innovate its way" past the competition?
This goes beyond Apple; it's really a classic problem, one posed in various guises by the Free Software Foundation, open source advocates, and others: should you restrict others' freedom to use your product? I think it's becoming clearer that the answer is "partially and temporarily".
So I'm going to use the iPhone hype to talk about the economics of openness when dealing with knowledge-intensive products -- like new gadgets, software, music, etc.
Continuous innovation is absolutely necessary in a growing economy, but it isn't sufficient. One requires a functioning marketplace in order to cover the costs of innovation today and tomorrow (through entrepreneurial profit). Schumpeter was one of the first to propose a model of economic growth driven by entrepreneurship and innovation, one implying that markets really weren't in "equilibrium" -- they were constantly changing in a disequilibrium of what we would now call "monopolistic competition".
There's a whole body of theory emerging from New Growth Theory in economics, starting with Paul Romer around 1990 and his paper "Endogenous Technological Change", about how innovation that involves new applied knowledge leads to growth.
According to Romer, "growth is driven fundamentally by the accumulation of a partially excludable, nonrival input". Non-rival goods are goods where more than one person can possess them, e.g. an idea, a copy of an MP3, software, etc. Goods that are partially excludable imply that the creator of the good can prevent people from using or reproducing it to a certain degree. This isn't just about law, by the way; it's about the theoretical ability for something to be excluded (whether through physical, technological, legal, or other means).
Knowledge -- one of several inputs into successful innovation -- is non-rival and at best partially excludable (it will eventually leak out). Intellectual property law is one way to enact this exclusion, though it's got problems (it doesn't typically stop copyright violation, for example). DRM is another means of exclusion (one with debatable effectiveness).
A fully non-rival, non-excludable good is a "public good", and exchange of such goods really isn't governable by a marketplace. Non-excludability is the death knell of market dynamics: if I can't prevent someone from using or reproducing something, I can't really exchange it in a market, as there's no demand incentive. That means I can't cover my costs of today, and I can't cover my costs of tomorrow through profit, leading to no supply incentive. This dilemma is also known as the "free rider" problem.
The environment, for example, could be considered a public good. Any restrictions on the use of the environment can't really be settled well through a market; they have to come through legal means. One can invest in the environment, but there's no market-oriented way to monetize the benefits, as others will 'free ride' on your investment.
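The two properties discussed above -- rivalry and excludability -- yield the standard four-way classification of goods from the economics literature. A toy sketch (the example goods in the comments are illustrative, not from this post):

```python
def classify_good(rival: bool, excludable: bool) -> str:
    """Return the standard economic label for a good, given whether it is
    rival (one person's use precludes another's) and excludable (the
    producer can prevent others from using it)."""
    if rival and excludable:
        return "private good"          # e.g. a sandwich
    if not rival and excludable:
        return "club good"             # e.g. DRM-protected software
    if rival and not excludable:
        return "common-pool resource"  # e.g. fish stocks
    return "public good"               # e.g. clean air, a freely copied idea

# An MP3 is non-rival; DRM is an attempt to make it excludable:
print(classify_good(rival=False, excludable=True))   # club good
# Strip away the exclusion and it drifts toward a public good:
print(classify_good(rival=False, excludable=False))  # public good
```

Romer's point is that knowledge sits awkwardly in this grid: it is non-rival, and only *partially* excludable, so in practice it slides between the two bottom cases in this sketch.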
So, the point is this -- if we feel that human capital and knowledge capital are valuable and scarce, or that time is scarce and time is a big factor in how much knowledge is generated in a new technology, we have two choices:
- have a public or private body invest in the knowledge-intensive good(s) and fund it via taxes, retained profits from other goods, or donations. The latter (donations) could even be viewed as a capital market where the outcome is a public good. These are all solutions to the "free rider" issue when one chooses not to exclude one's good from others.
- subject knowledge-intensive goods to a market of monopolistic competition where the good is partially excludable in some way, to avoid the 'free rider' problem. It's "monopolistic" because there is no standard equilibrium price-taking competition when exchanging non-rival goods.
Solution #1 has the benefit of working when the investment is directed at something that can create new customers, new markets, and avenues for growth. Examples include the TCP/IP protocol suite and many open source projects. The big problem is that results are hard to measure. This approach has the potential to squander tremendous resources because it's not directly subject to market demand. At an extreme, one can look to a government industrial monopoly. As a smaller example, witness the explosion of duplicate open source projects out there that all "scratch that itch" just a bit differently...
Solution #2 works too, and is often more efficient at allocating resources toward opportunities in demand, but by nature it will lead to onerous temporary restrictions on freedom of use.
My view: I caution people against having delusions of one system being clearly superior to the other. In a market system, the trick is to find the balance between utility and restriction. Which shouldn't be surprising -- it's the same advice Hal Varian and Carl Shapiro give in their wonderful book Information Rules.
There's a lot of innovation happening on the copyright front with software. Red Hat, for example, absolutely takes advantage of partial excludability in its products: it offers limited avenues for accessing the software, requires a paid subscription for the most accessible channel, and provides incentives to pay for complementary services (support, certification, etc.), all of which adds up to a balancing act between choices #1 and #2. Not many have pulled this off successfully at scale, and it's great that Red Hat is profitable doing so, serving as a model to others.
Back to Apple and the iPhone. It strikes me that the reason so many get angry with Apple is that we feel helpless -- the market has failed to generate enough suppliers that truly care about synthesizing a balance of features for broad appeal while focusing on usability and aesthetics. So we almost feel dependent on Apple's monopoly on "cool, fun and usable".
Apple's major advantage and contribution to economic growth is in its ability to keep finding the right combination of new knowledge and integrating it into a mainstream digital consumer product. In that light, patents make sense to me. There's way too much opportunity for a free rider problem. The industry is littered with "me too" products, or products that are so specialized in their appeal that they don't garner much widespread interest. If too many take Apple's R&D and create a "me too" product, that undermines what makes Apple so valuable to the economy.
Similarly, if the iPhone is "too open" from day one to 3rd party developers, it may also suffer death by a thousand cuts, where quality and consistency suffer due to early-release bugs. This too undermines Apple's speciality of delivering an integrated synthesis and balance of features. As a comparison, the Java platform was undermined at first by a proliferation of varying-quality implementations (Sun's vs. IBM's vs. Microsoft's vs. Netscape's) that doomed its potential on the desktop. It was through Sun's careful control over licensing that Java flourished on the server. Some may debate this and claim that open sourcing Java would have enabled a better outcome -- and they may be right, as it would have meant one codebase at least. But the point is that some partial, temporary excludability is often very useful.
Does the patent system need reform to get rid of trivial or obvious patents? Yes. Does it make sense to reduce the number of years a patent is valid? Yes -- innovation cycles are quicker, markets respond faster, etc.
Should Apple avoid patents and let the innovation in the iPhone become more of a 'public good'? Probably not in the short run. It's not in the interests of their employees, shareholders, or customers, who want Apple to continue to invest in technological change as a way of increasing its capacity for creating wealth.
Sometimes it's better for all to 'spread the wealth' rather than keep it localized, but to me, it's not clear that this is such a case.
Pat Helland is one of my technology heroes. As one of the leads on Tandem's TP monitor and eventually Microsoft's COM+, he knows transactions.
At Microsoft's PDC 2003 architecture symposium, I felt Pat's talks were worth the price of admission on their own. He single-handedly summarized why SOA was a good thing, in practical, technical detail. He understood services, he understood their implications for data consistency, and it remains a testament to the dysfunction of our industry that we stay confused about SOA when Pat had it nailed back then and was communicating it in simple terms. I was so jazzed I even wrote an article back in early 2004 that was largely influenced by Pat Helland, fused with a bit of my own perspective and long-windedness.
Pat's overall theory concerned the nature of data and interoperability at scale. One couldn't use distributed transactions at scale, as they implied a level of trust one couldn't give in a multi-agent system (you don't hand your lock manager to a 3rd party in Taipei when you're in Brussels). He's had a number of metaphors for the same idea over the years: fortresses vs. emissaries, service-agents vs. service-masters. Retrospectively, viewed in the context of Roy Fielding's work, this is clearly user-agent vs. origin-server.
In terms of "data elements", Pat suggested a distinction between resources vs. activity data (and reference data transferred between them), and now, in this recent paper, entities vs. activity data. (link via Mark Baker, via Mark McKeown)
So, while the two Marks are suggesting Pat's reached REST the hard way, I would suggest this is something he's been saying for years, which is why I've never seen SOA at odds with REST. In 2003, here was Microsoft's lead architecture guru suggesting all of this WS activity would culminate in this new architectural view of scalable interoperability. Then he left MS in late 2004, and people seemed to ignore him.
Anyway, in REST terms, reference data is representations, entity data is a resource (keyed by a resource identifier), and the set of representations as seen by a user agent is activity data. This latest paper seems to have added the importance of keys/identifiers for the entities (the resource identifier in REST or URI in HTTP).
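The mapping just described can be sketched in code. This is a toy model of the vocabulary only -- the class and field names are my own, not Pat's or Roy's:

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    """Entity data: lives at the origin server, keyed by its identifier
    (a URI, in HTTP terms)."""
    uri: str
    state: dict

    def representation(self) -> dict:
        # Reference data: an immutable snapshot of the entity's state,
        # suitable for transfer to other parties.
        return {"uri": self.uri, **self.state}

@dataclass
class UserAgent:
    """A user agent accumulates the representations it has seen --
    the "activity data" -- keyed by resource identifier."""
    activity_data: dict = field(default_factory=dict)

    def fetch(self, resource: Resource) -> dict:
        rep = resource.representation()
        self.activity_data[resource.uri] = rep  # remember what we saw
        return rep

# Hypothetical example: an order resource and an agent acting on it.
order = Resource(uri="http://example.org/orders/42", state={"status": "shipped"})
agent = UserAgent()
rep = agent.fetch(order)
print(rep["status"])                     # shipped
print(list(agent.activity_data.keys()))  # the URIs this agent has acted on
```

Note that nothing stops an origin server from holding a `UserAgent` of its own toward other services -- which is the machine-to-machine point taken up below.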
Rather than being "REST the hard way", this is exactly the kind of paper that people in this debate need to see, understand, and debate. It talks about a topic that's often cited as a reason why HTTP is not enough and why WS-* protocols are needed -- data consistency and reliable messaging. It also closes an implicit loop in REST when dealing with machine-to-machine interoperability: origin servers can also be user agents, managing a set of known representations (activity data). That's the point of "hypermedia as the engine of application state". Which may be obvious if you've understood Roy's thesis for years, but it's less obvious to those who come from a distributed objects or transaction processing background.