Excellent TechCrunch article: For The Future Of The Media Industry, Look In The App Store

This is an excellent article from TechCrunch:

Source: For The Future Of The Media Industry, Look In The App Store

Sections I liked are:

a) Media scarcity is dead. In the future my son will have a flash drive that he will pay $29 for that will have the capacity to hold all movies and music ever released by a major label, studio or TV/cable network.

b) But the entertainment industry has a vested interest in the success of this new type of convergence, as within it lies the secret to its continuing prosperity. The only way to block the incredible ease of pirating any content a media company can generate is to couple said experiences with extensions that live in the cloud and enhance that experience for consumers. Not just for some fancy DRM but for real value creation.

c) They must begin to create a product that is not simply a static digital file that can be easily copied and distributed, but rather view media as a dynamic “application” with extensions via the web.

d) Even today if you look in the iTunes App Store you will see a myriad of “Apps” that are just evolved ways to package media. While the traditional part of iTunes still mirrors the product taxonomy of a Tower Records, the App Store is creating a folksonomy of media products. It is where new ideas evolve, thrive and go extinct based on market power. The App Store is where the action is. This is where evolution is unfolding as direct consumer spending spurs media development.

e) In preparing this post, Erick asked me, “Is Apple a media company?” I thought about that and the answer is really that Apple is what media companies are missing. The missing part of the puzzle is what made media conglomerates such juggernauts in the past. Namely, distribution.

f) The internet is stripping them of their control over how their products are distributed. Media companies used to be able to create scarcity merely by delaying the distribution of their products across different channels–theaters, pay-per-view, DVD, cable channels, network TV, and so on. The internet disrupts this ability to create media scarcity. It is such a huge disruption, in fact, that it threatens the fundamental profit engine of the media business.

g) If you are a media exec and you look at your product and at the end of the day it’s a digital file that can be copied, then you have a serious problem with your format. Think of your product like a pie chart of the value you are giving the consumer. If 100% of the value is in that file, it is not a sound approach for defending the future of your business. However, if a portion of the experience is derived through an integration with a Web component that will yield additional value in functionality or social elements, then it will be more sustainable. There are many such examples emerging in the App Store (I am T-Pain, TapTap and many more): applications that let consumers interact with the media, create things and share them with their friends. These will not only make the consumer the one who markets your product, but also create an unprecedented level of engagement. That level of engagement will directly map to a reduction in piracy, as consumers will pay for this experience and won’t be able to copy it.

h) Sell access and experiences, not media files.

Do we need a global back channel of volunteers to facilitate issue-based collaboration?


Synopsis and Background

This year, I have been nominated to the World Economic Forum’s Global Agenda on the future of the Internet. This article is based on a discussion paper I created for the World Economic Forum. Ideas and comments are welcome. If you want to email me, please contact me at ajit.jaokar at futuretext.com.

This discussion paper explores the need to create a global back channel of volunteers to facilitate issue-based collaboration. In diplomacy, the idea of a back channel, i.e. an unofficial channel of communication between states or other political entities, is well known. As the threats and opportunities for the Internet become global, there is potentially a need to create a corresponding global network of volunteers who are experts in their own subject matter. The Internet itself was built on these ideas at a lower layer of the stack. In that sense, it is a matter of extending the collaborative ethos of the Internet from the network layer to the service layer. The Web makes such collaboration feasible.

Introduction

The Global agenda on the future of the Internet aims to look at the international structures of cooperation, how they should be improved, and whether there is room to make them more effective. The main difference between this initiative and others is that it aims to develop policy recommendations that have an international component. The goal of the GRI report is to highlight areas where there is potentially an opportunity to address global challenges in a systemic way and to identify medium- to long-term solutions.

On a broader level, three dimensions of ideas relevant to drafting a discussion paper were identified:

1. Foundational dimension: what does the Internet stand for on a principles level?

2. Challenges in scoping out the principles.

3. Recommendations for the future

The Challenges

The current Internet has been built upon the end-to-end principle, which advocates that, whenever possible, communications protocol operations should be defined to occur at the end-points of a communications system, or as close as possible to the resource being controlled. In other words, the nodes are smart (intelligence shifts to the edges of the network) and the network itself is agnostic.

This simplicity of the network has led to its phenomenal growth. Any technology that comes to be widely adopted attracts the scrutiny of those who would control it, and the Internet is no different. Ironically, its very success has led to some issues that could not have been foreseen, and the Internet is now seen to be under threat from real or perceived dangers.

The challenge now becomes how to preserve the ethos of the Internet (i.e. the end-to-end principle which has led to its growth) and at the same time overcome some of the problems facing it. We also need to overcome the secondary challenges, e.g. mobility, threats to generativity, the incorporation of intelligent devices, etc.

Note that the Internet itself offers no guarantees across the stack – from the packet level (i.e. no guarantee that a packet will be delivered) to the service level. In spite of this, it clearly ‘works’, because the ethos of the Internet (based on collaboration, meritocracy and co-operation) is its core strength. This ethos has been shown to work at the network layer (for instance, the management of Internet protocols) and the platform layer (open source collaboration models).

However, crucially, the ethos of the Internet and its underlying spirit of co-operation are not well understood at the service/user/application layer. We all take the workings of the Internet for granted without appreciating the underlying conversations (often between volunteers) which keep this mechanism growing in a scale-free manner.

Technology transforms us from a hierarchy to a network

Technology raises fundamental questions about the way our societies and economies work.

Specifically,

• The Internet transforms us from a hierarchy to a network.

• The network model is pervasive and natural (brains, neurons, ant colonies, etc.). It has stood the test of time in nature and is proving resilient on the Internet.

• Intelligence, the capacity to manage, and also responsibility shift to the edge – and that means to the people! Users can share files, content and data, but also computing and storage resources. A shift of value to the edge of the network empowers the individual – and hence is a sacrosanct principle.

• Convergence is here. We are going from the Internet of computers, to the Internet of people (web 2.0) to the Internet of things. This convergence will ‘talk’ many languages: through open source, private clouds, Twitter, Skype, LTE.

Collaboration

• The technological mechanisms underlying the Internet are understood, but its collaboration mechanisms are not. Technology is good at connecting – but connections are not collaboration. Thus, we need ‘something else’, i.e. the missing social/collaborative overlay on top of the Internet.

• In a network structure, self-organization is the paradigm, i.e. the spontaneous creation of a globally coherent pattern out of local interactions. This is why collaboration is the key.

• On first impressions, individuals acting independently could be seen as a mechanism that destroys limited resources. In 1968 Garrett Hardin wrote the article “The Tragedy of the Commons”, which describes a dilemma in which multiple individuals, acting independently according to their self-interest, ultimately destroy a shared limited resource (the commons). However, when economists began to look at ecosystems of commonly managed resources, they discovered that these often work quite well.

• Natural selection (for instance in ant colonies, wolves, geese, etc.) favours cooperation, but under specific conditions. In the paper “A Simple Rule for the Evolution of Cooperation on Graphs and Social Networks”, Ohtsuki et al. demonstrate that if the benefit of the altruistic act, b, divided by its cost, c, exceeds the average number of neighbours, k, then cooperation is favoured; cooperation is a consequence of social (and network) viscosity.

• We are witnessing a Cambrian explosion in new ideas, enterprises and concepts globally.

• In addition to mass collaboration, the Internet takes us from ‘mass to niche’. It creates new business models like carpools and couch surfing which are based on collaboration at niche levels
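The simple rule quoted above (b/c > k) is easy to sketch in code. Here is a minimal, illustrative check; the function name and the example numbers are my own, not from the paper:

```python
def cooperation_favoured(benefit: float, cost: float, avg_neighbours: float) -> bool:
    """Ohtsuki et al.'s simple rule: on a graph, natural selection
    favours cooperation when b / c > k, where k is the average number
    of neighbours (the degree of the social network)."""
    return benefit / cost > avg_neighbours

# Dense network (k = 50): the same altruistic act does not pay off.
print(cooperation_favoured(benefit=10, cost=1, avg_neighbours=50))  # False
# Sparse, "viscous" network (k = 4): cooperation is favoured.
print(cooperation_favoured(benefit=10, cost=1, avg_neighbours=4))   # True
```

The intuition: the sparser (more viscous) the network, the fewer neighbours an altruistic act must benefit, so cooperation can take hold.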

Recommendations and ideas for discussion

If everyone is separated by just six links (six degrees of separation), how can we leverage this property going forward (for trust, identity, etc.)?

In keeping with the emphasis on collaboration, here are some thoughts to consider to extend the ideas of collaboration to a global scale:

• Creation of a global ‘back channel’, i.e. a collaborative channel, to overcome issues like spam. These mechanisms of collaboration are the foundation of the Internet and already exist in diplomacy and other circles; they could be extended to specific issues. The Web makes such collaboration feasible.

• A study and taxonomy of collaboration models and their applicability to the issues facing the Internet today

• A greater understanding of Peer to Peer and its role in the future of the Internet

• The creation of a ‘best practices’ document for collaboration and the lessons learnt from it

• Creating an awareness of the ethos of the Internet and how the same spirit can apply at the service layer of the stack

• A blueprint of a social overlay on top of the Internet which would incorporate Trust and Identity

• Creating an awareness of the threats to the Internet including threats to the generative nature of the Internet

Conclusions

In diplomacy, the idea of a back channel, i.e. an unofficial channel of communication between states or other political entities, is well known. As the threats and opportunities for the Internet become global, there is potentially a need to create a corresponding global network of volunteers who are experts in their own subject matter. The Internet itself was built on these ideas at a lower layer of the stack. In that sense, it is a matter of extending the collaborative ethos of the Internet from the network layer to the service layer. The Web makes such collaboration feasible.

Comments and ideas welcome! ajit.jaokar at futuretext.com.

Image source: http://ayanthianandagoda.org/

Is the uptake of casual games/games for women a myth?

Here is a thought ..

A few years ago, casual gaming was expected to be a big beneficiary in the near future.

Same with games for women (i.e. games not aimed at the typical male audience).

Now .. the iPhone is a successful platform across the sexes, i.e. we see many women who use the iPhone.

But the top iPhone games of all time show mainly male-oriented games and/or old favourites.

There does not seem to be an uptake of casual games or games for women – even when the platform (the iPhone) is adopted and used by a wider segment.

Hence, is the uptake of casual games/games for women a myth?

Update

Many people have said that there is some evidence in terms of comments on games from women etc. – but my point is that the iPhone offers many data points, e.g. we know how many women have bought phones, how many games have been downloaded from those phones, and so on. Yet there does not seem to be any direct empirical study. There is some good analysis about casual games and the iPhone, e.g. from VentureBeat.

Eric Schmidt, Magic of Cloud computing and mobile – and my blog as the phone becomes a magic wand ..

I once wrote a blog post called The phone becomes a magic wand to the cloud services: Mobile sensor based interface to the cloud to jump start the Internet of things .. Hence it is interesting to see a brief video from Eric Schmidt, and coverage from Mike Arrington, on the magic of Cloud computing and mobile .. with which I totally agree .. The magic is just beginning!

The blog again ..

The phone becomes a magic wand to the cloud services: Mobile sensor based interface to the cloud to jump start the Internet of things ..

Mobile Monday Barcelona on Mobile Innovation

Mobile Monday Barcelona has an event on Mobile Innovation. Mobile innovation is definitely picking up speed, and there may be some interesting insights at this event, considering it also runs alongside the Innovation festival in Barcelona.

What are the possible end user centric abstractions for Cloud computing ..

The definition of cloud computing is nebulous .. (excuse the pun ..) but in general it is framed in commercial/vendor language, i.e. we could say that Cloud computing is a way to consume hardware and software as a service (the classic capex vs. opex argument).

This is good .. but it is not end user centric .. i.e. it makes sense for a vendor, but as an end user it needs to appeal to me in more concrete terms .. not just capex vs. opex ..

I saw this video about What is Cloud computing, and although there are some wacky definitions .. Kevin Marks’ (Google) ideas are interesting ..

Essentially .. the cloud comes from the ‘network diagram’ model where the network was the Cloud. And in that case, everything is a ‘message’ – you send a message to the cloud and receive a message from the other side based on the processing in the cloud. This makes sense to an end user.

Hence, we can see the Cloud is an ‘abstraction’. For example: The Internet is an abstraction of the Cloud around packets. The Web is an abstraction of the Cloud around documents etc ..

Thus, different abstractions are possible – for instance the social abstraction (social layer), privacy abstraction (privacy cloud), regional abstraction (EU cloud), national abstraction (national cloud), the secure cloud, the mobile cloud and so on .. Each of these will have different principles and best practices ..
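The “everything is a message” framing above can be sketched in a few lines of code: the cloud is a function from a message to a message, and each abstraction is a layer wrapped around it. This is a minimal sketch; the layer name and message fields are invented for illustration:

```python
from typing import Callable, Dict

# The cloud as a message processor: send a message in, get a message back.
Cloud = Callable[[Dict], Dict]

def base_cloud(message: Dict) -> Dict:
    """A bare cloud that simply processes the payload it is sent."""
    return {"result": message.get("payload")}

def privacy_abstraction(cloud: Cloud) -> Cloud:
    """A hypothetical 'privacy cloud' layered on top of another cloud:
    it strips identifying fields before the message reaches the layer below."""
    def wrapped(message: Dict) -> Dict:
        scrubbed = {k: v for k, v in message.items() if k != "user_id"}
        return cloud(scrubbed)
    return wrapped

private_cloud = privacy_abstraction(base_cloud)
print(private_cloud({"user_id": "alice", "payload": "hello"}))  # {'result': 'hello'}
```

A regional cloud, secure cloud or mobile cloud would just be further wrappers with different policies – which is what makes the abstraction end user centric: each layer changes what the message is allowed to do.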

This is interesting and end user centric ..

How many such abstractions could be possible? And then .. maybe each could have best practices?

You could take this idea to many levels .. e.g. best practices for a local government cloud, best practices for a university cloud, etc. All of which would be based on a collaboration paradigm ..

That’s all I could think of .. so far .. comments welcome ..

Fraunhofer FOKUS event – IMS 2009 – Open APIs panel


On Nov 11-12, I will be chairing a session at the Fraunhofer FOKUS IMS workshop in Berlin organised by Dr Thomas Magedanz, Niklas Blum and team at Fraunhofer FOKUS.

Fraunhofer FOKUS events are always informative and are a part of my yearly agenda which I very much look forward to. This year my session is about APIs, and I am trying to create a discussion about the topic here.

A couple of weeks ago, when I chaired the mobile app stores event at CTIA, there was a discussion about APIs based on the talk by Mike Lurye of Amdocs and others. The discussion raised some important questions, which I list here from my notes.

Why the sudden emphasis on APIs?

APIs are about leveraging Operator assets and they have been discussed for some time now. There has been an attempt at standardization from the GSMA Open APIs initiative which I have blogged about before. So, what has changed?

In a nutshell, it is apps .. i.e. apps provide a use case for APIs.

Some notes about synergies between apps and APIs which I hope to discuss in Berlin (these are derived from the discussions at CTIA):

a) Operator assets include access to network resources such as billing, customer insights and location.

b) For the Operator, the goal is to reduce churn, attract customers and gain revenue from their resources. Given a choice, more than 80% of customers will use Operator billing (according to Nokia).

c) Apps bring with them a multi-sided business model, the facets of which include marketers, consumers, carriers, devices and developers.

d) The idea of a smart pipe is related to the monetization of APIs. However, some APIs are becoming commoditised or free. Also, the same function can be implemented at different layers of the stack: e.g. location APIs can be implemented at the application layer (as opposed to the network layer) via cell-id databases. This implementation may be imperfect (for instance, the network layer implementation of location may be more accurate) but it is cheaper (or worse – free). So, the monetization of APIs is an issue.

e) Various business models are possible: freemium (free until a certain point), subscriber pays, a la carte (pay as needed), pre-pay (API access bundles) and post-pay, among others.

f) APIs can add value through providing usage history, personalization, recommendations, reviews etc for apps.
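The business models in (e) can be sketched as a simple per-call pricing function. This is only an illustrative sketch, not any real Operator's billing logic; the plan names, quota and prices are invented:

```python
def charge_for_call(plan: str, calls_used: int, free_quota: int = 1000,
                    price_per_call: float = 0.001) -> float:
    """Price of one API call under a few of the models listed in (e).

    - 'freemium': free up to a quota, then per-call pricing
    - 'a_la_carte': pay for every call as needed
    - 'prepay': covered by a bundle bought up front, so no charge per call
    """
    if plan == "freemium":
        return 0.0 if calls_used < free_quota else price_per_call
    if plan == "a_la_carte":
        return price_per_call
    if plan == "prepay":
        return 0.0
    raise ValueError(f"unknown plan: {plan}")

print(charge_for_call("freemium", calls_used=500))   # 0.0 (still within quota)
print(charge_for_call("freemium", calls_used=2000))  # 0.001
print(charge_for_call("prepay", calls_used=2000))    # 0.0
```

Even a toy model like this makes the commoditisation point in (d) concrete: if a free application-layer alternative exists, the price an Operator can charge per call tends towards zero.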

I will blog more about device-side APIs as well – but I seek views about APIs and their renewed significance and emphasis for the session in Berlin.

If you are at this event, I would be happy to meet up! I very much recommend it.

LTE Americas is on Nov 4-5 in Dallas ..


LTE Americas is on Nov 4-5 in Dallas .. Although I cannot attend, I believe it will be a great event considering how successful LTE Berlin was earlier this year. US Operators – especially Verizon – are taking the lead with LTE, and I believe there will be many interesting insights here ..

If you are attending, happy to cross post your views on this blog. Please contact me at ajit.jaokar at futuretext.com

Why LTE is needed next year ..


LTE is needed next year, claims a study as reported in FierceWireless.

My view is: I think LTE is needed next year onwards but not for the reasons listed in the report

I give my thoughts below on why I believe that is the case.

Background

Firstly, let us clarify something:

1) We all know that ‘LTE deployed’ has a more complex meaning (devices available etc).

2) Secondly, Operators will run multiple radio networks (HSDPA, 3G and LTE) simultaneously. Some will run HSPA+; some will run LTE in hotspots or even rural areas.

3) LTE handsets will be delayed relative to optimistic projections, as will chipsets.

4) Consumers buying phones may not notice much difference between LTE and non-LTE devices.

5) Operators will raise the spectre of spectrum scarcity for political and commercial reasons.

6) Backhaul complicates matters, i.e. it is not possible to say whether the scarcity is due to backhaul issues or to the radio network itself.

7) Over and above all this, vendors will claim LTE is needed because they want to sell to Operators.

Against this background, let us consider some more aspects:

a) Operators have recently been complaining about smartphone traffic.

b) Data consumption from smartphones is increasing. The Nokia N95 user consumes 10 times the data of the average user, the iPhone user 30 times, and the HSDPA (laptop) user 100 times.

c) As of March 2009, Facebook had doubled in size over the previous 8 months.

d) In an era of fixed rate pricing and net neutrality, Operators will have to embrace services where the network truly offers a unique advantage over the Web. Machine-to-machine computing and secure transactions offer a way to do this.

e) Operators have to distinguish themselves. In spite of opinions to the contrary, Operators are innovative, and some, like Verizon in the USA, are pushing for innovative services.
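The consumption figures in (b) compound quickly. A back-of-the-envelope sketch, using the multipliers quoted above (the subscriber mix per 1000 users is an invented example, not measured data):

```python
# Relative data load per user type, from the figures in (b) (average user = 1x).
MULTIPLIER = {"average": 1, "n95": 10, "iphone": 30, "hsdpa_laptop": 100}

# Hypothetical mix per 1000 subscribers: only 100 are heavy users.
mix_per_1000 = {"average": 900, "n95": 50, "iphone": 40, "hsdpa_laptop": 10}

total_load = sum(MULTIPLIER[kind] * count for kind, count in mix_per_1000.items())
print(total_load)  # 3600 "average-user units", vs. 1000 if everyone were average
```

In other words, under these assumed numbers a 10% minority of smartphone and laptop users more than triples the network load – which is why Operators are complaining.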

Would we buy mobile broadband from a network that cannot handle smartphones?

When I see Operators complain about Smartphone traffic, I have to ask the question:

Would we buy mobile broadband from a network that cannot handle smartphones? That is, if an Operator cannot handle the iPhone, won’t their mobile broadband be even worse?

If a specific Operator complains about smartphone traffic, it will be an alarm bell in the minds of customers and will cause them to churn from that network to ones that can handle data.

As it turns out, though, that is not quite the case ..

There are two components to the traffic load: one is the raw data (this comes from laptops, direct consumption, etc.) and the second is the signalling load. Signalling traffic is caused by a lot of ‘bursty’ traffic from social networks, updates, lookups and the like. Raw data offloading can be done by various means such as WiFi, femtocells and so on.

However, signalling is the real issue. And as we see from the rise of social networks and the Mobile Web, signalling traffic will only increase.

Signalling

Martin Sauter explains the issues arising from signalling in a blog post, Continuous Packet Connectivity (CPC) Is Not Sexy – Part 1.

Release 7 introduces Continuous Packet Connectivity, and the section below is simplified from Martin’s blog:

With HSPA (HSDPA and HSUPA), mobile devices now have a multi-megabit data bearer to both send and receive their data. As devices do not send data all the time, the device is managed through various states. During short periods of inactivity (less than around 10 seconds), the network keeps the high-speed channels in both the uplink and downlink direction in place so the mobile can resume transferring data without delay. Keeping the high-speed channels in place means that the mobile has to keep transmitting radio-layer control information to the network, which has a negative impact on battery life and also decreases the bandwidth available to other devices in the cell. 10 seconds is a compromise which is not always ideal: during a web browsing session, for example, it often takes the user longer than this to click on a new link.

Continuous Packet Connectivity aims to reduce the shortcomings described above by introducing enhancements that keep a device on the high-speed channels (i.e. in the active state) for as long as possible while no data transfer is ongoing, while reducing the negative effects of doing so, i.e. reducing power consumption and the bandwidth required for radio-layer signalling during that time.
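The inactivity-timer trade-off Martin describes can be sketched as a toy state model. The class, timeout constant and state names below are my own simplification for illustration, not the actual 3GPP state machine:

```python
HIGH_SPEED_TIMEOUT = 10.0  # seconds a device is kept on the high-speed channels

class RadioState:
    """Toy model of the trade-off described above: the network keeps a
    device on the high-speed channels for a short inactivity window so
    it can resume transfers instantly, at the cost of continuous
    radio-layer control signalling during that window."""

    def __init__(self, timeout: float = HIGH_SPEED_TIMEOUT):
        self.timeout = timeout
        self.last_activity = 0.0

    def transfer(self, now: float) -> None:
        """Record a data transfer at time `now` (seconds)."""
        self.last_activity = now

    def state_at(self, now: float) -> str:
        # Within the window: channels kept in place, signalling ongoing,
        # battery drains faster, but the next transfer starts with no delay.
        if now - self.last_activity < self.timeout:
            return "high_speed"
        # After the window: channels released; the next transfer pays a
        # reconnection delay instead.
        return "idle"

radio = RadioState()
radio.transfer(now=0.0)
print(radio.state_at(5.0))   # high_speed – a quick click resumes instantly
print(radio.state_at(15.0))  # idle – a slower reader pays the wake-up delay
```

Bursty social-network traffic is pathological for exactly this model: many small transfers arrive just outside the window, so devices repeatedly pay both the signalling cost of the window and the reconnection cost after it.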

Tweaks are possible but may not be enough ..

My contention is that tweaks are possible (e.g. CPC enhancements) but may not be enough, considering the sheer volume of growth in social networking applications, which leads to bursty traffic and to signalling problems. LTE is needed to overcome the signalling issue, and the projections on the social use of mobile data make LTE imperative. LTE is not mandatory, but the trends in data growth are very strong, and that provides an impetus for network upgrades.

Richer applications are possible

Besides the reason mentioned above (which is defensive), the more offensive (revenue-earning) reason is to exploit new applications. These involve innovative use of the network, but will also lead to bursty machine-to-machine traffic, which will need LTE.

Mobile health is very interesting – there is a lot of Operator focus, as you can see from a forthcoming conference on mobile health in London. Secure cloud computing (for example, initiatives from Verizon) and Smart Grids are also relevant.

Investments in these applications will help Operators to distinguish themselves and earn revenue from services which cannot be fulfilled by the Web.

To conclude:

LTE will be needed next year ..

The mistake lies in extrapolating the past into the future, i.e. thinking that LTE phones may be the most important drivers ..

The trends for bursty traffic driven by the social use of mobile data are exponential. This is the main defensive driver. Some tweaks are possible but will not be enough. The biggest opportunities for LTE may be in non consumer areas like Smart Grids, Mobile health and Cloud computing.

In an era of fixed rate pricing and net neutrality, smart Operators will embrace services where the network truly offers a unique advantage in comparison to the Web. Machine to Machine computing and secure transactions offer a way to do this

Acknowledgements

I clarified some of my thinking through a series of discussions I have been having at forumoxford (www.forumoxford.com), and many thanks to Todd Spraggins, Vladimir Dimitroff, Dean Bubley, Martin Sauter, Zigurd Mednieks and Chetan Sharma for their insights and feedback.

Tips to overcome the postal strike ..


As the postal strike starts today, many small businesses will be affected. Here is a tip: try using a service like YouSendIt. We have used it for a while and it’s great. You can send 100MB for free and beyond that for a nominal fee.

This works well – and for the first time we used the paid service from YouSendIt instead of the post.

I have no commercial affiliations with this company. But postal services will soon realise that the Internet gives people more options than posting paper!