Is Web 2.0 anything more than just postmodernism?

By Dr Paddy Byers

In a post several weeks ago on the AppExchange devnet blog, parallels were drawn between Web 2.0 and postmodernist philosophy. Put simply, postmodernism is the prominence of chance over design, of anarchy over hierarchy, and of participation over distance. The commentary in that post asserted that Web 2.0 is to Web 1.0 as postmodernism is to modernism, and that we can learn lessons from history and be on the lookout for the backlash that will eventually occur if the anarchic element becomes a barrier to effective exploitation of the fruits of community effort.

In fact, none of these parallels with postmodernism are new. In 1990 an HP executive likened the “appliance culture” that HP had been so successful in cultivating to postmodernism, in contrast to the centralised procurement and administration of technology in earlier years. The runaway success of the fax machine, for example, took the phone companies completely by surprise: fax machines were purchased as information appliances by individual consumers and by office managers spending delegated budgets originally intended for stationery. There was no phone directory for fax; the modernist, centralised and controlled systems of the telecommunications industry had been bypassed by individual participation. Other technology developments over the years have succeeded on distinctively postmodernist principles – PGP, and the PC itself, for example.

Web services and WSDL are an excellent example of the power of this approach. In the modernist era there were substantial, centrally coordinated standardisation efforts that attempted to define formats and semantics for data interchange to support B2B data transfers of all types. These efforts were based on the view that a centrally coordinated definition was the only effective way to achieve a common standard that met all known requirements and guaranteed interoperability. That agenda foundered, failing on all fronts – the standards defined were too narrowly focussed, too complex, and too late to be useful. The postmodernist approach taken by web services and WSDL shattered that entire world view: any unilateral definition is good enough, provided that it can be expressed unambiguously, that generic mechanisms exist to arrange interoperability between competing unilateral efforts after the fact, and that a (centrally provided) mechanism allows namespace and other issues to be dealt with in a decentralised way. On this basis the real problem owners can write the specifications, ensure they actually address the problem in hand, and deliver them in time to be useful.
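To make that concrete, here is a minimal sketch – the service name, namespace and endpoint are all hypothetical – of how generic tooling (in this case the standard JAX-WS Dispatch API in Java) can consume an interface that a single party defined unilaterally. No standards body signed off on the interface; an unambiguous contract and a publisher-controlled namespace are enough.

```java
import java.io.StringReader;
import javax.xml.namespace.QName;
import javax.xml.transform.Source;
import javax.xml.transform.stream.StreamSource;
import javax.xml.ws.Dispatch;
import javax.xml.ws.Service;
import javax.xml.ws.soap.SOAPBinding;

public class UnilateralClient {
    public static void main(String[] args) {
        // Hypothetical names: the publisher chose this namespace and
        // endpoint unilaterally; nothing here was centrally coordinated.
        QName serviceName = new QName("http://example.com/orders", "OrderService");
        QName portName = new QName("http://example.com/orders", "OrderPort");

        Service service = Service.create(serviceName);
        service.addPort(portName, SOAPBinding.SOAP11HTTP_BINDING,
                "http://example.com/orders");

        // Generic, interface-agnostic invocation: the client needs no
        // generated stubs, only the (unilaterally published) contract.
        Dispatch<Source> dispatch =
                service.createDispatch(portName, Source.class, Service.Mode.PAYLOAD);
        Source request = new StreamSource(new StringReader(
                "<getOrderStatus xmlns=\"http://example.com/orders\">"
                + "<orderId>12345</orderId></getOrderStatus>"));
        Source response = dispatch.invoke(request);  // SOAP call over HTTP
    }
}
```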

So what’s the difference between that and Web 2.0? Postmodernism, in technology, is characterised by unilateralism and the decentralisation of architecture and specification; by de facto rather than de jure standards; by unilateral action where there is no central provision of what’s needed; and by the recognition that diversity of requirements must be embraced and catered for, not stifled. Web 2.0, however, is more than that; its distinctive characteristic is not unilateralism but collectivism – the way that collective effort and intelligence are harnessed.

Much of the discussion of Web 2.0 confuses these two philosophies: commentators, swept up in the excitement of the power of collectivism, fail to spot that the postmodernist idea has been changing the landscape for a lot longer. You could argue that anywhere there’s a URI, there’s a postmodernist principle underlying what’s going on.

So, which of the Web 2.0 “principles” below belongs to which philosophy?

- Harnessing collective intelligence: collectivism
- Data is the next Intel Inside: collectivism
- Meeting the needs of the long tail: postmodernism
- Podcasting, narrowcasting: postmodernism
- The perpetual beta: (arguably) postmodernism

Why is any of this relevant? Because now, when faced with a problem, you have to ask two questions:

- what is the postmodernist solution to this problem; and

- what is the collectivist solution to this problem?

When these questions have different answers, you then have to decide which one is right. It’s not necessarily the case that the collectivist answer is the right one. (And what happens if neither is right?) Nick Carr’s “The Amorality of Web 2.0” is a provocative take on this question that makes the difference quite stark.

Mobile phone features: who is the customer for service enablers?

Dean Bubley, in his Disruptive Wireless blog, recently discussed the tension between operators, as subsidisers of mobile phones, and manufacturers, who need to include differentiating and value-adding features. His point was about the effective subsidy of features that are incidental to the primary (operator-sponsored) capabilities of the phone.

However, this issue is about to become much more serious as operators migrate away from the creation of specific service offerings and towards the broadband business model, as mere providers of IP and other basic data services. This is starting to happen, even for the big-brand operators, and it is only a matter of time before off-portal services, such as gaming, take a significant or dominant share of business that was once an operator monopoly. Vodafone, for example, has started to face up to the reality that some operator-sponsored services have been dismal failures. It’s only a short step from there to a strategy of enabling off-portal services and capitalising on the uptake of lower-level data services that should result.

One consequence of this shift is that the operator will no longer be (directly) incentivised to demand the technology enablers required for service delivery in its handsets. Today, the operator is effectively a gate to the introduction of all new functionality and technology into handsets – typically, no new feature that has any impact on the bill of materials gets incorporated by the manufacturer unless it is explicitly demanded by the operator. At the same time, the operators’ continuous push to launch new services has historically driven the inclusion of the underpinning technical enablers – WAP2 and IP, browser capabilities, multimedia, Java, MMS, etc. The operator was able to justify the increase in handset cost based on an investment appraisal that took account of the specific services that would be enabled as a result. So, for example, we have JSR 184 (the 3D graphics API for Java) being specified by operators, based on the push to create higher-value games for purchase via the operator portal. Other relevant technology examples are advanced audio (eg 3D positioning and advanced formats like XMF), Ajax, Flash, DRM, barcode reading, etc.
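As an illustration of what such an enabler buys the service provider, here is a minimal sketch of JSR 184 in use – the scene file and class name are hypothetical – rendering a retained-mode 3D scene inside a Java ME Canvas. None of this can run unless the manufacturer built the javax.microedition.m3g library into the handset.

```java
import java.io.IOException;
import javax.microedition.lcdui.Canvas;
import javax.microedition.lcdui.Graphics;
import javax.microedition.m3g.Graphics3D;
import javax.microedition.m3g.Loader;
import javax.microedition.m3g.World;

public class SceneCanvas extends Canvas {
    private World world;

    public SceneCanvas() throws IOException {
        // Load a scene packaged in the midlet JAR; "/scene.m3g" is a
        // hypothetical resource name.
        world = (World) Loader.load("/scene.m3g")[0];
    }

    protected void paint(Graphics g) {
        Graphics3D g3d = Graphics3D.getInstance();
        try {
            g3d.bindTarget(g);    // direct 3D output at this Canvas
            g3d.render(world);    // draw the whole retained-mode scene
        } finally {
            g3d.releaseTarget();  // always release the target, even on error
        }
    }
}
```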

As soon as the operator de-prioritises its specific services in favour of raw data transport, there is no longer any sponsor for inclusion of the technology enablers within the handset. For the majority of device manufacturers, the result will be quite simple – if the operator hasn’t asked for it, it doesn’t get included.

So who pays? It’s not the device manufacturer, because he typically doesn’t participate in the data services business conducted using his handsets. It’s not the service providers – at least not directly – because each one in the “long tail” will typically only ever deliver service to a tiny fraction of the handsets enabled with the relevant technology. And it’s not the operator, because he no longer derives any directly related service revenue and is much less likely to attempt to predict the technology that third-party providers might want to exploit.

If nobody is prepared to pay, the features simply won’t be there at the point of manufacture. What happens after that? A small proportion of phones will have an open OS and will permit the addition of enabling technology (eg as installable libraries); but most phones don’t have an open OS, and even where one exists it’s often just not possible to add the technology as an aftermarket download. Even when it is possible, the technology providers will need to construct business models with service providers so that they share service revenues to recoup the technology investment.
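The practical consequence for service providers is the familiar Java ME pattern of probing for each optional enabler at run time and degrading when it is absent. A minimal sketch (the class being probed is real; the surrounding class is illustrative):

```java
// Detect whether an optional enabler – here JSR 184, the Mobile 3D
// Graphics API – was built into the handset at manufacture. On a
// closed-OS phone there is no way to install it afterwards, so the
// service can only fall back to a reduced experience.
public class EnablerCheck {
    public static boolean has3dGraphics() {
        try {
            Class.forName("javax.microedition.m3g.Graphics3D");
            return true;                 // enabler shipped in the firmware
        } catch (ClassNotFoundException e) {
            return false;                // enabler absent; degrade or exit
        }
    }
}
```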

So what do we expect to happen? The more forward-looking device manufacturers, who are prepared to invest in their future brand potential, have a significant opportunity: to take control of the software technology agenda, something the operator previously prevented them from doing. Those who succeed will be the manufacturers with sufficient market footprint to represent a targetable population large enough to attract service providers to their platform. Perhaps the introduction of technology only happens hand-in-hand with the creation of services by the device manufacturers themselves – see the Nokia Next Generation Mobile Gaming services, for example. There is also an opportunity for the more forward-looking operators who are prepared to predict the relevant technologies and demand them in their handsets.

As for the rest – the lower-tier manufacturers and operators – we may well see the introduction of service-enabling technologies stall, or become fragmented with limited interoperability and features. Far from being the liberating development that off-portal service providers hoped for, the withdrawal of operators from portal services could have the opposite effect, resulting in service limitations, poor footprint and uptake, interoperability problems and technical workarounds. If this does happen, it won’t be the result of any shortage of technology, but of a breakdown in the value chain that carries enabling technology through to profitable commercial deployment.

What do you think?