The mirage of home screen apps – Why the home screen application may never be mass market ..

I conducted a workshop on Mobile Web and Mobile apps strategies at ICE Amsterdam, which went well and had great engagement from the participants

I was discussing home screen apps in response to an insightful question from Maartje van derleij, and I proposed that the home screen application may never be mass market ..

I have covered home screen applications here before, especially in the context of hiplogic and, before that, on-device portals and surfkitchen, where I said that the strategy has some merits but with caveats – e.g. in the case of hiplogic, if I don't want an 'astrology' application, can it be removed? The surfkitchen application had an overhead .. etc etc ..

But in Amsterdam .. I thought of this in response to Maartje’s question:

Who REALLY owns the first screen?

The operators long thought that they controlled the first screen

The handset vendors thought the same

And then came 'home screen replacement' platforms like hiplogic and others who also wanted to 'manage' the first screen, often on behalf of the handset vendor or the Operator

But guess who the screen REALLY belongs to?

A vast majority of people have a picture of their child, boyfriend, girlfriend or other loved one on that screen ..

It's hard for anyone to compete against that ..

If icons clutter up a picture of your child, guess who will stay and who will go ..

Thus, I said, home screen applications may always be a niche service, since a vast majority of people are always going to use a picture of their loved one, and no commercial entity can win against that!

Hence, the ‘mirage’ of the home screen apps ..

Image source gotoknow

Am I the only one who is cautiously optimistic about the Nokia – MSFT deal?

Much has been said about the Nokia – MSFT deal now that we know it's for real

But the strategy could work if the combined entity managed to orchestrate a segment of the ecosystem

Here is a question:

If you were a Nokia user, when did you switch away from Nokia and why?

I was a Nokia user from 1998 until about 2006, when I switched to Blackberry, and later added an Android phone as a second device alongside the Blackberry

Why?

There was nothing wrong with Nokia. It was simply that there was much more choice and my needs were very specific

I discussed this yesterday when I said: Nokia’s problems and INQ facebook phone on android

That probably sums up the issue for Nokia ..

Another way to put it is: What is the key differentiator for the customer?

Is it the OS (Symbian)?
Is it the browser?
Is it content?
Is it look and feel?
Is it apps?

It could be all or none of the above

But these responses are very simplistic

You could answer this in two ways:

Either customers buy a SPECIFIC device to solve a problem (e.g. I use a Blackberry for business)

OR

They buy a GENERIC device to solve many problems.

This explains my own choice of a Blackberry plus an Android phone, since the Android lets me use Skype, chat etc etc.

Most people will subconsciously follow this argument in one way or the other.

This means the market leader will not be the one with the largest number of devices, but rather the one whom customers choose based on the above analysis

By that reckoning, Nokia basically fell somewhere in the middle and stood for nothing.

By that I mean, if you liked content you got an iPhone.

If you liked business communications, you got a Blackberry.

If you wanted an Open mobile phone, you got an Android device.

There was no real compelling reason to buy Nokia

So, how to be a market leader?

Number of devices, OS, browsers and many specific hardware features are a red herring.

To be a market leader, you would have to unify and orchestrate the ecosystem around your product

Apple unified three elements of the ecosystem:
Customers (vastly superior product),
Operators (differentiation) and
third party developers (appstore)

Google also unified three separate elements of the ecosystem:
Customers (open device),
handset vendors (including new handset vendors like Dell) and
Operators (who did not have the iPhone)

Now, which elements of the ecosystem can the combined Nokia – MSFT offering orchestrate?

There is the next generation home (where MSFT has leverage). You could have the business ecosystem, or even an emerging market ecosystem

The point is – it has to be SOME ecosystem which the customer values AND in which the Nokia/MSFT products are dominant

My bet would be the home content ecosystem but that needs more thought.

Thus, I am cautiously optimistic although I can understand the short term pain especially in the developer community

So, this was the EASY bit .. BUT it was necessary

The HARD work comes now .. in painting the vision and getting the backing of the customers, Operators and the developers

We know the vision is not based on Symbian, Qt, Meego.

In that sense, there is a silver lining for today because developers know what not to do (and this benefits Android and the iPhone of course in the short term)

But the question is: Is there space for a third ecosystem (after iPhone and Android)?

I think so

How that ecosystem shapes up, and more specifically, which segment it dominates, is the real question

Comments welcome.

If you are at MWC, I am speaking; see below for my speaking schedule

Mobile world congress – my talks and schedule next week ..

Image source komonews

Nokia’s problems and INQ facebook phone on Android ..

In response to the Nokia memo: I am glad that reality finally seems to have dawned at Nokia.

While Tomi Ahonen and others have launched a passionate denial/ contradiction of the ideas in the memo, the reality is very simple ..

I do not mind if the memo is genuine, a blog, a rumour or a combination thereof.

The point is: The memo (if you can call it that ..) does encapsulate the problem .. which is

a) Lack of ecosystem management and dominance.

Google orchestrates the Android ecosystem.

Apple owns the iPhone ecosystem.

Nokia sells a very large number of devices, but it is no longer enough to JUST sell a large number of devices

What ecosystem does Nokia manage/dominate?

If none, can it be called a market leader?

b) The rate of change

c) Taking emerging markets for granted

So, related to above, the questions then become:

a) Can Nokia develop and dominate an ecosystem, and/or leverage existing ecosystems?

b) Forget iPhone, consider Samsung which has (so far) managed the rate of change much better. Someone called it a ‘fast follower’. No matter what we call Samsung, they are managing to leverage mindshare

c) Refusal to acknowledge the new manufacturers in India and China, and hoping that these markets will be loyal for ever, i.e. ignoring Spice Mobility, Micromax, Olive Telecom and others.

Now consider that today INQ announced the facebook phone .. on ANDROID

Let's read the back story behind this .. and the irony of a facebook phone on Google's Android .. then think of the rate of change ..

I remember going to a Nokia booth at MWC last year and a woman showing me an idea of new Nokia services.

It was a 'green' service for travellers (reducing carbon footprint). As someone who travels extensively and is a heavy user of mobile devices, I am, in theory, an ideal target user of the service

But to get it, I had to get a Nokia ID and a Nokia phone.

I told her that there are OTHER ways to get that service, and added that as a traveller the most important site for me is 'time and date', but she rattled on in a pre-scripted manner about how great this new green service was if ONLY I switched to Nokia ..

That's basically completely missing the point (that I, as a customer, have options, and that the carbon footprint reduction service can be obtained in many different ways)

And in my view, apps are a long tail service .. and most customers are now defining what they want in a much more granular way

Now this brings us to the facebook phone ..

When I talked of mobile web 2.0, I often said that it should be called 'web mobile 2.0', i.e. the web drives the agenda

That's why the facebook phone is more important.

It's the service which customers want ..

So, I am finally glad that someone (at the top) in Nokia has woken up to a new reality: the dynamics of the market itself have changed completely, and old style strategies and approaches will not work

In April 2008, I posted a blog based on a talk called The ASUS effect : Mobile innovation triggered by open source, long tail devices and a shift in the device value chain

That has been highly prophetic .. although I framed it in the context of Linux and not Android .. the principles are the same ..

Open source introduces a MUCH higher rate of change .. that explains Android's success and the Facebook phone on Android .. and on the other hand we have the iPhone

Think about it: A young person (a traditional Nokia demographic) goes to a phone shop

They have two choices – a Nokia phone OR a Facebook Android phone

Which will they choose?

Will it matter that it's an Android phone? It's the same analogy as me at the MWC Nokia booth ..

In any case, let's wait and see what happens now ..

Could the facebook app be responsible for high phone bills?

I am surprised by the default options of the facebook app on Blackberry:
- Sync with photos
- Check every hour
- Check for new facebook updates etc etc.

Imagine syncing all photos from your contacts when roaming!

Or if a new facebook update is pushed to you when you are roaming.

There does not appear to be a way to prevent these updates. I have shut off all facebook updates now. But I can see why this could cause high bills for people.

We need apps that are 'bearer aware', and the facebook app fails to do this!

There is no option to prevent it from updating when roaming. (There is an option to block all data when roaming, but that's not what I want. For example, I want to browse the web, but I don't want to receive facebook updates, sync the latest pictures of all my facebook friends, or get the latest updates from facebook.)
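To make 'bearer aware' concrete, here is a minimal sketch of the kind of policy I mean. This is my own illustration, not the facebook app's actual behaviour or API – the function names (should_sync, sync_friend_photos etc) are hypothetical:

```python
# Sketch of a "bearer aware" sync policy. All names are illustrative;
# the real facebook client exposes no such hooks as far as I know.

from dataclasses import dataclass

@dataclass
class Connection:
    bearer: str      # e.g. "wifi" or "cellular"
    roaming: bool    # True when attached to a foreign network

def sync_friend_photos() -> None:
    """Hypothetical heavy background job (photo sync)."""

def poll_for_updates() -> None:
    """Hypothetical hourly check for new facebook updates."""

def should_sync(conn: Connection, allow_roaming_sync: bool = False) -> bool:
    """Gate heavy BACKGROUND jobs on the bearer.

    User-initiated web browsing is not touched by this policy; only the
    background photo sync and update polling are held back.
    """
    if conn.bearer == "wifi":
        return True                  # effectively free
    if conn.roaming:
        return allow_roaming_sync    # default off: no surprise bills
    return True                      # home cellular network

def background_tick(conn: Connection) -> None:
    if should_sync(conn):
        sync_friend_photos()
        poll_for_updates()
    # otherwise skip quietly until a cheaper bearer is available
```

The point is not the specific code – it is that the expensive background jobs are gated on the bearer and on roaming status, while the browsing I actually want to do is left alone. That is the option I would like to see on the settings screen.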

Image source : Blackberry cool

Update


Dean Bubley has a nice post on this subject as well, with which I agree. See Why developers need to take responsibility and create more network-aware applications

Is Wikileaks (releasing the source information and allowing us to make the analysis) the future of journalism?

On a day when Julian Assange is arrested, it is interesting to think beyond the arrest and to the wider issues it represents

Firstly, would we need wikileaks if our ‘journalists’ did a good job in the first place?

Here are some thoughts:

1) If we break down 'news' – news involves:
- Acquiring the content
- Editing the content
- Providing independent analysis
- Packaging

The source needs authority to be accepted.

2) Now, we don't need journalists to acquire news. There are many alternate sources. Most of my 'latest news' I get from Twitter. We don't need journalists or newspapers to edit the story and provide analysis (and add their bias!). As readers, we want to make our own decisions. The packaging is done by the Web (for free). News is often designed to 'sell newspapers'. We (as readers) don't have any desire to support the coffers of media companies.

3) This leaves the case of 'authority'. Is CNN authoritative? Yes and no. How can they (and other sources) be 'independent' if they are 'embedded in military assignments' like in Iraq? How 'unbiased' can newspapers be when the likes of Rupert Murdoch 'direct' their newspapers to support political parties in the UK? So, we got a very tame, watered down and consistent view from ALL the traditional media

4) Wikileaks gives us the SOURCE and then leaves US (or in many cases the old media journalists) to analyse it, which leads us to the curious position of the mainstream media reporting on wikileaks (when they should be earning their salary by going to the source themselves). Going to the source is not hard. When the Swine flu threat was on, I often checked the CDC web site (Centers for Disease Control and Prevention) – and I often knew the same 'news' way before it appeared in the media .. since the media were simply repackaging the CDC information ..

So, the wikileaks model of 'releasing the source' could well be the future of media .. one which empowers citizens but questions the need and relevance of old media practices

Policy for US Smart Grid Interoperability – A Discussion with George Arnold – Dr George Arnold, NIST, Ajit Jaokar of Futuretext and Kevin Doran – University of Colorado

I have been a bit busy and could not post this before, but it's very insightful. I hope you gain some insights from this discussion. Comments welcome

Policy for US Smart Grid Interoperability – A Discussion with George Arnold, US National Coordinator for Smart Grids

Transcript – Policy for US Smart Grid Interoperability – A Discussion with George Arnold – Dr George Arnold, NIST, Ajit Jaokar of Futuretext and Kevin Doran

Why did Gartner fail to spot 77 million devices and what it means for prediction methodologies ..

In a recent analysis, Gartner failed to spot 77 million devices from emerging Indian and Chinese manufacturers

Unverified sources in the Guardian article point to a leaning towards 'preserve the growth rates; to hell with the actual numbers' and 'Thing is, real executives got real compensation based on our numbers' ..

Then again, over at Fortune, Philip Elmer-DeWitt was contacted by someone who says they used to work at Gartner, and analysed the PC market – "but the methodology is the same for phones". (You can find their comment below the main article, timed at 11.29; the comments run in reverse chronological order.)

S/he says, inter alia:

“So, in 3Q98, I analyzed the “choke points,” those parts of the supply chain where the channel narrowed enough to get a definitive count. At the time, it was OS, processor, graphics, and hard drive. As I recall, I found 20 million processors with no homes. The market at the time was about 100 million, so this was a 20% discrepancy.

“The process that ensued was a marvel of obfuscation. The leader of the Tracker team figured out a way to rationalize away all the extra units (e.g., multiprocessor servers, inventory, speculation, etc.). It was politically impossible to force the extra units on the regions because it would introduce gross distortions to the historical trends.

So, the mantra became, preserve the growth rates; to hell with the actual numbers. Even the growth rates are fiction. The fudge is in the “others” category, which is used as a plug to make the numbers work out. In fairness, we did do survey work, calling around, and attending white box conferences and venues to try to get a feel for that market, but in the end, the process was political. I used to tell customers which parts of the data they could trust, essentially the major vendors by form factor and region. The rest was garbage.

The industry itself was aware of these issues, but agreed to maintain the fiction because it was convenient. Most vendors kept their own numbers, but referred to IDC for public purposes. Thing is, real executives got real compensation based on our numbers. There were other games played, but that’s for another time.”

This is familiar territory. The analyst forecasts for location based services in the early 2000s now look laughable

“Successful plays in mobile data will ultimately exploit that which makes wireless unique. There is an element inherent to wireless that wired networks, by definition, will never possess – untethered mobility. Mobility, and hence location, is therefore a critical attribute to be exploited by all involved in the wireless value chain,” said Cliff Raskind, Sr. Industry Analyst with Strategy Analytics.

My view is:

Analysts can work on incremental trends

These are nice to ‘model’

These models have their accompanying assumptions

Mainly to protect the analyst's anatomy ..

But ..

Disruptive trends are not incremental ..

They cannot be modelled by applying an incremental formula to a historical trend

That's why the LBS forecasts were so wrong (Google, foursquare etc are the big winners for LBS), and that's why the unconfirmed report in the Guardian about Gartner's methodology – "So, the mantra became, preserve the growth rates; to hell with the actual numbers" – is so interesting .. more so because much of the industry analysis is geared towards what the industry (in this case the Operators) wants to hear

But its not only Gartner ..

The process which analysts use to 'predict' works only under incremental conditions, where it has some limited utility, but the methodology fails to detect disruptive trends
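A toy illustration of why this happens – my own sketch with made-up shipment numbers, not anyone's real forecast data: fit a simple growth model to the historical series and the forecast looks respectable, but the same method confidently extrapolates straight past a disruptive break.

```python
# Illustrative only: invented unit-shipment numbers, not real market data.

def fit_growth_rate(history):
    """Average year-on-year growth rate implied by the historical series."""
    rates = [history[i + 1] / history[i] for i in range(len(history) - 1)]
    return sum(rates) / len(rates)

def forecast(history, years):
    """Extrapolate the fitted growth rate forward - the 'incremental' method."""
    g = fit_growth_rate(history)
    out, last = [], history[-1]
    for _ in range(years):
        last *= g
        out.append(round(last, 1))
    return out

incremental = [100, 110, 121, 133]            # steady ~10% growth
disrupted   = [100, 110, 121, 133, 90, 60]    # new entrants take share

print(forecast(incremental, 2))    # plausible continuation, roughly 146 then 161
print(forecast(disrupted[:4], 2))  # fitted before the break: same cheerful numbers,
                                   # while the actual series collapses to 90, then 60
```

A model built only on historical growth rates can never see the 77 million devices from the new entrants, because they sit outside the trend it was fitted on – which is the whole point about disruptive versus incremental change.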

See also: With the warp speeds of android, can Klingons win or do we need faster features?

Image: rajkumar1220 on Flickr

Broadband penetration – the social dimension: The statistics hide the human element

This is the text of my talk at OxOnLine 2010 – Connecting Oxfordshire - How can Oxfordshire provide the best digital infrastructure for the future?

Thanks to David Doughty, Chief Executive at Oxfordshire Economic Partnership, for inviting me, to Prof Steven Cowley for hosting the event, and of course to MEP James Elles for recommending me to speak at this event

PERSPECTIVE

- Perspective of a practitioner i.e. someone who has direct experience of social media

- Also that of an analyst

BACKGROUND

• Broadband Internet access is high data rate Internet access (64 kbit/s up to 2.0 Mbit/s and above), typically contrasted with dial-up access using a 56k modem

• In 2002, Gartner Dataquest said that the impact of ubiquitous broadband in the U.S. could total as much as $500 billion worth of goods and services produced over a span of ten years. But it also said the estimate is based on what it calls “true” broadband, defined as 10 megabytes per-second data transmission speeds.

• In 2010, Broadband celebrates its tenth birthday in Britain.

• A recent study by the Saïd Business School ranked the UK 31st out of 66 countries. The study found that the average global download speed was 4.75Mbps (megabits per second). It is estimated that countries will need an average download speed of 11.25Mbps to handle future apps

• Top 5 : South Korea, Japan, Hong Kong, Sweden , Switzerland

• Sweden has the highest quality broadband in Europe and is the most successful country in closing the broadband quality gap, with residents outside the most populated cities enjoying better quality than those in the cities. Sweden has reached an average download speed of 12.8Mbps. It has also pledged that 90 per cent of its population will be able to get speeds of 100Mbps by 2020, with 40 per cent already having it by 2015

• The South Korean government recently promised universal speeds of up to 1Gbps (gigabit per second) by 2012.

• Brazil has a goal of 40 million broadband households by 2014.

QUESTIONS
A provocative question: Does Broadband REALLY make a difference? Networks are only potential; it is the applications that make a difference. Does it make a difference to the GDP if rural Scotland can download videos?

THE STATISTICS HIDE THE HUMAN ELEMENT

1) The old – the 90-year-old blogger: Phyllis Greene, who is in hospice care in Ohio, talks about why she decided to start a blog at the age of 90 and how technology has brought a new dimension to her life.

‘heavens knows I have time on my hand’

‘Blogging has been a life saver’

Sunday, September 19, 2010: All of you wonderful readers have made me realize that, even though I was assuming that I was writing for my own amusement and pleasure, there are people all over the world who now will search out this simple little blog.

I’m about to take a nap– after all, those of us in the public eye :-) need to get our rest. But first, I want to post this on my Facebook page-

I hope they think of me fondly when I am gone

link at BBC for video

2) The young – Arrowes
Arrowes – how my son uses technology and Google

SUPPORT AND COMMUNITY
Communities are not uniform. They are niche. There is a community for the mainstream (BabyCentre) but also for your specific situation – from catholic mothers online to single parent fun

BabyCentre
Catholicmothersonline
Singleparentfun

3) Society, Human rights and liberty
- Saudi Arabia – bloggers now need a license

- News has become a social experience: Pew Internet research study shows that an overwhelming majority of Americans — more than 90 percent — use multiple platforms to get their daily news, that the days of loyalty to a specific news outlet or brand are gone and that news has “become a social experience.”

4) Facebook, Twitter – conversations and a mirror of society
- Facebook and Bebo abuse – if a crime were committed by post, would we call it the 'Royal Mail killer'?

- Quit facebook campaign – only 15,000 pledged (not left!), vs 500 million active accounts and 150,000 new members per day

- Zuckerberg (FB founder) richer than Rupert Murdoch (79)

5) Jobs
3D printing could revolutionize manufacturing and lead to local manufacturing: A 3-D printer, which has nothing to do with paper printers, creates an object by stacking one layer of material — typically plastic or metal — on top of another, much the same way a pastry chef makes baklava with sheets of phyllo dough.

CONCLUSION
It's all about conversations. And emergence, i.e. spontaneous collaboration with no central structure.

That’s how humanity has always worked!

My top 5 mobile trends for the next decade for Africa ..

Our friend @Rudy De Waele asked me to give 5 mobile trends for Africa over the next decade for his talk at Mobile Web Africas. I have long supported the mobile web in Africa (long before this blog became famous) – see The mobile Internet will do more for Africa than live 8! – and also the Mobile Web Africas conference created by @Matthew Dawes. So, I want to be particularly radical.

Here are my top five trends for Africa over the next decade – you can add some more below or send them to Rudy

1) Mobile commerce creates efficient economies

2) The mobile phone enables knowledge, so repressive regimes are changed to democracies

3) Wildlife is protected better (less poaching, due to sensors)

4) Companies emerge FROM Africa to EXPORT mobile expertise to the west!

5) An augmented reality application is developed that maps genes to regions. Thus, visitors to Africa are able to see, as they travel, where their ancestors probably lived based on their gene pool. This is based on the Out of Africa theory and the study of mitochondrial DNA

AT&T, Easyjet and fixed rate pricing: Be careful what you wish for – you just might get it


Earlier this week, I was speaking to a telecoms exec who said that the biggest problem in the industry is lack of spectrum, the data tsunami etc etc etc .. (same old story, seen mainly from the Operator perspective ..). At which point, I interrupted him and asked: did he not mean that it was LAST week's biggest problem?

In other words, now that the industry has accepted tiered pricing last week with AT&T (and I don’t have any objections to it as long as it is transparent and specific applications are not banned) .. The data tsunami problem may be behind us ..

So, NOW, the onus is on the Operators to deliver, since the tiered pricing structures exist (or will soon exist) everywhere.

Ironically, this means that there will be MORE customer and regulatory pressures on Operators to be transparent and clear about their pricing

All networks have a very basic problem i.e. it is hard to price a network service since it does not follow a basic ‘cost plus’ model.

That simple reality is painfully apparent in another type of network .. airlines ..

On a recent trip to Berlin, I realized that the airline easyjet charges 19 pounds for a single piece of check-in baggage (i.e. the first bag we check in). This is ridiculous and ad hoc, and there is no real justification for that price. (The same idea applies to pricing tickets very expensively close to the time of departure, i.e. the pricing is independent of the cost of the ticket, and also to complex tariffs on mobile devices.)

In fact, that's why fixed rate pricing worked in the first place, i.e. customers understood it (the alternative was pricing which customers never understood, and that meant they never used the services)

Now that we have tiered pricing, we have the following effects:

a) Operators will see profits and returns from their network investments

b) But by the same token, they have now lost the excuse of the bandwidth hogs

c) This means they need to take more responsibility for their shortcomings (there is no one else to blame)

d) Operators will need to be more transparent with their customers, since there is an element of complexity (see the sketch below)
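To illustrate point (d): here is a minimal sketch of what a transparent tiered bill calculation could look like. The tier boundary and prices are invented for illustration, not any Operator's actual tariff:

```python
# Hypothetical tariff, for illustration only: a flat monthly fee that
# includes 1 GB of data, then a per-GB overage rate. Real tariffs differ.

FLAT_FEE_GBP = 15.00        # monthly fee, includes the first GB
INCLUDED_GB = 1.0
OVERAGE_PER_GB_GBP = 10.00

def monthly_bill(usage_gb: float) -> float:
    """Flat fee plus overage for usage beyond the included allowance."""
    overage_gb = max(0.0, usage_gb - INCLUDED_GB)
    return round(FLAT_FEE_GBP + overage_gb * OVERAGE_PER_GB_GBP, 2)

for usage in (0.4, 1.0, 2.5):
    print(f"{usage} GB -> £{monthly_bill(usage):.2f}")
# 0.4 GB -> £15.00
# 1.0 GB -> £15.00
# 2.5 GB -> £30.00
```

The arithmetic is trivial; the transparency problem is that the customer rarely knows the usage figure in real time (especially when roaming), which is exactly where point (d) bites.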

One effect of this scenario is that Operators may ACCEPT the 'golden pipe' model (a money-making pipe!).

In other words, they will see a steady income from the tiered pricing structure and not be motivated to compete on the services front (which they will leave to handset vendors and developers, since carriage will be more profitable). They will EXTEND the network to domains where the network itself has an advantage, since once the network investment is made, the incremental costs are zero. This means areas like Machine to Machine – Internet of Things / Smart Grids / Secure Cloud / Mobile healthcare / Digital signage / Smart cars etc – will be a priority

Ultimately, this is good for the industry and also good for the Operator itself.

But now that tiered pricing is being accepted and it has implications as I indicate above, the Operators may well be reminded of that old adage: Be careful what you wish for – you just might get it :)

Image: A wishing well Source Stonehillgraphics