Global Forum 2013 – DRIVING THE DIGITAL FUTURE, Opportunities for Citizens and Businesses


I was looking forward to speaking at this event, but I have to be in the USA.

This 22nd edition of the Global Forum is co-organized with the Foundation Stock Weinberg and the Sophia Antipolis Foundation

Operating since 1992, the Global Forum/Shaping the Future is an independent, high-profile, international, not-for-profit think tank dedicated to business and policy issues affecting the successful evolution of the Digital Society. The evolving agenda is HERE

Among the topics this year are Incentives for Investment, Cross-Boundary Services Challenges, Broadband/4G Infrastructures and Evolving Mobile Technologies. These are relevant considering the Digital Single Market. Here are some thoughts which balance open systems, infrastructure investments, innovation and growth.

The goals of the single market are: “In the face of the deep crisis affecting its economy and society, Europe needs to tap into new sources of growth in areas that will reinforce its competitiveness, drive innovation and create new job opportunities.”

So, it’s a question of balancing investments (public and private) to create a viable ecosystem that creates growth.

An analogy is sustainable forest management:

The stewardship and use of forests and forest lands in a way, and at a rate, that maintains their biodiversity, productivity, regeneration capacity, vitality and their potential to fulfill, now and in the future, relevant ecological, economic and social functions, at local, national, and global levels, and that does not cause damage to other ecosystems.

In simpler terms, the concept can be described as the attainment of balance – balance between society’s increasing demands for forest products and benefits, and the preservation of forest health and diversity. This balance is critical to the survival of forests, and to the prosperity of forest-dependent communities.

So, if we take an ecosystem perspective to achieve a balance between infrastructure, investment, innovation and growth, we have to consider that any finite resource – whether forestry, spectrum, capital investment etc. – would behave the same way.

So, taking a pan-European perspective to create investment, growth and jobs for the telecoms sector, we need to compare markets where investments have worked.

According to the CTIA, since 2000 wireless providers have invested more than $296 billion, not including the more than $35 billion in spectrum auction revenues paid to the U.S. government.

So, if spectrum is considered the limited resource to drive investments, growth and jobs, the question is: how do we encourage additional investment (beyond the cost of the spectrum) to create more growth and jobs?

Comparing to the American market, we need:

a) More flexibility to reduce market fragmentation. This means fluidity in the secondary markets (allowing trading, aggregating etc.)

b) Harmonising spectrum by creating a more homogeneous footprint across European markets

c) Within guidelines, encourage long-term ownership for companies who have proven that they see the buying of spectrum only as a first step. To compare with the CTIA stats: the follow-on investment of $296 billion is more than 8 times the licence fees ($35 billion). This shows commitment beyond the fee (and discourages short-term speculators).

In other words, any long-term investment has some basic fundamental truths – and they apply to forestry and spectrum in the same way.

I will probably blog about this event’s findings after I am back from the USA.


Using the Raspberry Pi for STEM education by interconnecting STEM domains ..

STEM education is a big topic (A U.S. Makeover for STEM Education: What It Means for NSF and the Education Department) .. But many of the proponents of STEM education take a policy view (i.e. STEM education is necessary) but not a practitioner view (how exactly do we foster STEM education?).

At my edtech start-up Feynlabs, we take a Computer Science approach which naturally leads to STEM education, because Computer Science relates to applying Computing to other scientific and technical domains.

The question is – How can we practically make a difference?

My thinking is that platforms like the Raspberry Pi offer interesting opportunities for interconnecting STEM domains.

Here is an interesting paper that provides some background – Why STEM Topics are Interrelated: The Importance of Interdisciplinary Studies in K-12 Education (pdf) by David D. Thornburg, PhD

1) The difference between science and engineering: At a high level, it is useful to think of science as the study of the “found,” and engineering as the study of the “made.” Scientists concern themselves with the advancement of knowledge in the realm of natural phenomena. Even the most abstract theoretical scientists are concerned (at their core) with the explanation of natural phenomena that might be observed under the proper conditions. Engineers, on the other hand, use scientific knowledge for another purpose: the design and fabrication of objects for the advancement of mankind. Whether it is the design of a new telescope, or crafting a more flexible space suit, engineers generally have a specific goal in mind when they start their projects: a goal that relates to having something fabricated (rather than discovered as naturally occurring).

 2)  At the core, science involves the “scientific method,” a process of hypothesis formulation and verification that is taught to students at multiple grade levels. Engineering, on the other hand, has at its core the more flexible notions of creativity and innovation – attributes that are harder to quantify and teach, but that are essential in the engineering domain nonetheless. The creative process can be nurtured, but it takes a special effort and classroom climate to stimulate creativity.

3) Computers are technology, but technology is more than computers: In the K-12 world, our tendency is to think of “technology” and “computers” as synonymous. While it is true that personal networked computers are powerful technologies, there are myriad other technologies of benefit to education. Some of these (e.g., telescopes) are high-tech marvels, and others (e.g., duct tape) are not. The point is that they are all technologies. It is essential, when thinking about the development of STEM skills, to be sure that “technology” is not restricted to computers, but, in fact, expanded to include all kinds of devices, instruments, and tools that can be applied in both domains of science and engineering.

4) And most importantly .. This brief look at the interrelationships among the four STEM topics reveals something of great power: they all reinforce each other in support of the overall growth of each topic.

 So, when it comes to the Pi – how does it play out?

Firstly, because the Pi gives us so much freedom to explore Computing, it also gives us the freedom to apply Computing capabilities to different domains.

When it comes to STEM, there are many examples in Engineering domains – for example Pi in the Sky, ocean/shark monitoring, and Chemistry (AirPi).

But we can also slowly see applications in science itself (as described above).

For example – using the Pi to build a supercomputer, or exploring Maths (Wrong result with log10 math function in armv6 on Raspberry Pi).

The Pi also allows much deeper exploration of the hardware stack – for example, tweeting the CPU temperature by using the vcgencmd commands.
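As a sketch of that idea: on the Pi, `vcgencmd measure_temp` prints the CPU temperature. The parsing is split out into its own function so it can be demonstrated on a sample string, since vcgencmd itself only exists on the Pi.

```python
# Read the Pi's CPU temperature via vcgencmd (Raspberry Pi only);
# the parsing is shown separately on a sample output string.
import subprocess

def parse_temp(output):
    """Extract degrees C from vcgencmd output like "temp=42.8'C"."""
    return float(output.split("=")[1].split("'")[0])

def read_cpu_temp():
    # Runs on the Pi itself: vcgencmd measure_temp prints e.g. temp=42.8'C
    out = subprocess.check_output(["vcgencmd", "measure_temp"]).decode()
    return parse_temp(out)

print(parse_temp("temp=42.8'C"))  # 42.8
```

The value returned by `read_cpu_temp` could then be passed straight into a tweeting script like the one later in this post.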

However, the most significant area where the Pi can be applied for STEM is simply the possibility of creating interconnections between the disciplines and exploring across the stack, which highlights the interplay between the STEM domains (science, technology, engineering and mathematics).


Comments welcome

Image source: Why STEM Topics are Interrelated: The Importance of Interdisciplinary Studies in K-12 Education (pdf) by David D. Thornburg, PhD

Raswik by Ciseco – wireless inventors kit for the Raspberry Pi

Here is a review of an interesting product called the Wireless Inventors Kit for Raspberry Pi or, by its much shorter name, Raswik by Ciseco.

Ciseco is well known in the geek/hacker community for their radios and for simplifying IOT technology.

I have been following them in the context of my work with my edtech startup Feynlabs.

As a matter of transparency, I have no commercial relationship with Ciseco – but I like their ethos (Open/Hacker oriented).

I met Ciseco CTO and co-founder Miles Hodkinson at an event organised by Rob von Kranenburg, founder of the Internet of Things Council, in London, and I invited Miles to speak at an event in London on Nov 22 which I am co-chairing with Rob (Accelerating the Open Source IOT ecosystem).

PS – The event is free and you are welcome to attend – especially if you have an interest in IOT/ Open technologies.

So, back to Raswik ..

The kit contains the following components, which are designed to run a series of Raspberry Pi experiments based on sensors and actuators like the temperature sensor, light sensor etc.

  • 1 x Ciseco Slice of Radio
  • 1 x Ciseco XinoRF development board
  • 1 x 4GB SD card with Pi OS and sample software
  • 1 x USB cable
  • 1 x small breadboard
  • 5 x red LED
  • 5 x yellow LED
  • 5 x green LED
  • 1 x blue LED
  • 1 x transistor
  • 1 x diode
  • 10 x 10K resistor
  • 20 x 470R resistor
  • 1 x light dependent resistor (light sensor)
  • 1 x thermistor (temperature sensor)
  • 1 x piezo sounder
  • 3 x push buttons
  • Jump wire (assorted colours)
  • Length of hook-up wire

The two significant components are the Slice of Radio and the XinoRF development board.

Essentially, the two components enable the Raspberry Pi and the Arduino to work together, and it’s important to understand why this is significant and how it compares to the alternatives.

The Raspberry Pi and the Arduino

The Arduino and Raspberry Pi are both inexpensive, small electronics boards, but that’s where the similarity ends. The major difference is technology: the Pi is a computer and the Arduino is a microcontroller. A microcontroller is a much lower-powered and simpler device than a computer. You find them all around us – in your microwave, washing machine, car ABS system, TV, DVD player etc.

The Pi is powered by a 700MHz 32-bit processor, similar to what drives most smartphones; the Arduino, with its 16MHz 8-bit processor, has roughly the equivalent processing power of a 1980s Sinclair Spectrum. The Pi has an operating system, whereas the Arduino does not.

Why choose one over the other?

A microcontroller is the perfect tool for doing a single task very well, with utmost reliability, for the entire life of the product. The Pi has a whole operating system to run, so it is impossible to pare down to just a single process (there are tens to hundreds of processes running even when idle). The Pi would not, for example, be the best choice for a calculator (a single task, low power) but a microcontroller is perfect. The Pi, by having a full operating system, has support for sound, video, a keyboard, mouse and networking. It makes the perfect decision engine and user interface, and the Arduino makes the perfect end node.

Having now established that the Pi and the Arduino are beneficial when working together, there are several ways in which we could connect them – which brings us to the Raswik approach ..

Using radio to communicate between the Pi and Arduino

Raswik uses a radio approach to enable the Pi and the Arduino to speak to each other.

Raswik has two components which make this radio communication possible:

The Slice of Radio wireless RF transceiver for the Raspberry Pi – At the Pi end, the Slice of Radio is a two-way RF transceiver for the Raspberry Pi. It comes as a pre-built module and it utilises the Raspberry Pi’s on-board serial port (UART @ 9600bps) for communication, and hence needs no driver.

The XinoRF 100 Arduino Uno R3 based dev board with radio transceiver – At the Arduino end, the XinoRF 100 is a digital electronics development board composed of a hybrid of the Arduino Uno R3 and a wireless module called SRF-U. The combination provides built-in wireless (which means you don’t need an XBee shield plus radio module or similar).
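Ciseco’s radios exchange short fixed-length ASCII messages (their LLAP protocol): 12 characters, starting with ‘a’, then a two-character device ID, then a payload padded with ‘-’. Here is a rough sketch of building and parsing such messages on the Pi – the device ID ‘XX’ and the serial port in the comment are illustrative, not Raswik defaults.

```python
# Sketch of building/parsing LLAP-style radio messages: 12-character
# ASCII frames of the form 'a' + 2-char device ID + payload padded with '-'.
import re

def build_llap(device_id, payload):
    """Pack a device id and payload into a 12-character LLAP message."""
    msg = "a" + device_id + payload
    return msg.ljust(12, "-")[:12]

def parse_llap(msg):
    """Return (device_id, payload) from a 12-character LLAP message."""
    m = re.match(r"^a(..)(.*)$", msg)
    if not m or len(msg) != 12:
        raise ValueError("not a valid LLAP message: %r" % msg)
    return m.group(1), m.group(2).rstrip("-")

# On a real Pi you would read these frames from the UART, e.g. with pyserial:
#   import serial
#   port = serial.Serial("/dev/ttyAMA0", 9600)  # Slice of Radio: Pi UART @ 9600bps
#   device, payload = parse_llap(port.read(12))

print(parse_llap(build_llap("XX", "TMPA22.5")))  # ('XX', 'TMPA22.5')
```

The sample Raswik software handles this framing for you; the sketch just shows how little protocol there is underneath.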

Analysis

This approach of using radio to communicate between the Pi and Arduino is interesting. It reduces complexity (no need to install drivers). It provides accessories, sensors and actuators in a box – which means you can quickly start doing real physical measurements, like temperature sensing, via a series of Raspberry Pi experiments based on sensors and actuators.

There are other ways to connect the Pi and Arduino – for example over the serial GPIO interface, over a USB cable, or using a stackable Arduino clone like alamode.

Also, the two platforms are each a moving target. The Arduino Due is much in line with the capabilities of the Pi, and the Intel partnership with Arduino makes it interesting.

So, while the platforms will evolve, more complexity will be introduced.

Ironically, that means there will be a greater need for simplicity, and for getting it all working together in a simple way for learners.

And therein lies the value of the Raswik kit. We got it to work quite easily .. and quickly started working with sensors. The code is open source as well, so it is good for learning.

We tested it out and got our temperature graph working with a Raspberry Pi

Also, the code for the Python GUIs and all the Arduino sketches is open sourced and available at this link to download


Using the Raspberry Pi and a web cam to tweet images


I managed to get code to work to tweet from the Pi – we followed exactly this strategy.

The code below

Notes and how to run it (first to tweet from the Pi)

To run

python feyn_tweet.py 'hello from Aditya'

1) feyn_tweet.py is our program

2) ‘hello from Aditya’ is our tweet

3) Our tweets are below (we can tag people)

4) XXXX represents your Twitter keys (see the link above)

5) Uses a library called [Twython which is on github](https://twython.readthedocs.org/en/latest/)

Our tweets

[email protected] @tonyfish – tweeting from the RPi very cool!_

_hello @leeomar from Rpi_

_hello from Aditya_

_https://twitter.com/feynlabs_

code below

#!/usr/bin/env python
# Tweet the first command line argument using Twython
import sys
from twython import Twython

# Replace XXXXX with your own Twitter API credentials
CONSUMER_KEY = 'XXXXX'
CONSUMER_SECRET = 'XXXXX'
ACCESS_KEY = 'XXXXX'
ACCESS_SECRET = 'XXXXX'

api = Twython(CONSUMER_KEY, CONSUMER_SECRET, ACCESS_KEY, ACCESS_SECRET)
api.update_status(status=sys.argv[1])

Now, the code for the webcam ..
This takes a picture and tweets it.
Our webcam tweet from the Raspberry Pi is HERE

To run ..
**python feyn_tweet_camera.py (where feyn_tweet_camera.py is our file which we made into an executable)**

#!/usr/bin/env python
# Take a picture with the webcam and tweet it using Twython
import sys
import os
import pygame
import pygame.camera
from pygame.locals import *

from twython import Twython

# Replace XXXXX with your own Twitter API credentials
CONSUMER_KEY = 'XXXXX'
CONSUMER_SECRET = 'XXXXX'
ACCESS_KEY = 'XXXXX'
ACCESS_SECRET = 'XXXXX'

api = Twython(CONSUMER_KEY, CONSUMER_SECRET, ACCESS_KEY, ACCESS_SECRET)

# Initialise pygame and grab a single frame from the webcam
pygame.init()
pygame.camera.init()
cam = pygame.camera.Camera("/dev/video0", (640, 480))
cam.start()
image = cam.get_image()
pygame.image.save(image, 'webcam.jpg')

# Tweet the saved image
photo = open('webcam.jpg', 'rb')
api.update_status_with_media(media=photo, status='My RPi be tweeting images now => ')

A temperature graph using the Raspberry Pi and Arduino

Using Raswik, we (my son Aditya and I) interfaced the Pi to the Arduino using a one-metre radio link (through Raswik) and then used a temperature sensor to create a temperature graph. This is all fascinating stuff. It’s amazing how much you can learn and teach .. and how much I have learnt myself through experimentation. We will be demoing this at the Feynlabs launch in Miami.

The circuit was also easy to hook up using a breadboard.
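As a sketch of the graphing step – assuming the readings have already arrived over the radio link as (seconds, degrees C) pairs; the sample data below is made up, and the plotting uses matplotlib (installable on the Pi via apt-get):

```python
# Turn a list of (seconds, degrees C) readings into a temperature graph.
# The readings here are made-up sample data.
readings = [(0, 21.5), (10, 21.8), (20, 22.4), (30, 23.1), (40, 22.9)]

def split_series(pairs):
    """Split (time, temp) pairs into two lists, one per plot axis."""
    times = [t for t, _ in pairs]
    temps = [c for _, c in pairs]
    return times, temps

def plot_series(pairs, filename="temp_graph.png"):
    """Render the temperature graph to a PNG with matplotlib."""
    import matplotlib
    matplotlib.use("Agg")               # draw to a file, no display needed
    import matplotlib.pyplot as plt
    times, temps = split_series(pairs)
    plt.plot(times, temps)
    plt.xlabel("time (s)")
    plt.ylabel("temperature (C)")
    plt.savefig(filename)

# plot_series(readings)   # writes temp_graph.png
```

In the live version, the `readings` list grows as each new temperature frame arrives from the Arduino.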


Looking for demo participants/speakers for free event Accelerating the Open Source IOT ecosystem


Morning all

The event is coming along great and I am looking for demo participants/speakers for the free event Accelerating the Open Source IOT ecosystem, 22 November 2013, from 08:30 to 15:30.

The highlights

1) The event is Free at a great venue Campus London  

2) All attendees get a free Pi (with the webinos evaluation kit)

3) We already have some Great speakers

4) I am also looking for speakers/ Demos partners

If you want to attend just register HERE for free 

If you want to speak/demo – contact me at ajit.jaokar at futuretext.com


Webinos and IOT – To boldly go where no node.js has gone before

Accelerating the Open IOT ecosystem

On Nov 22, I am co-moderating the event Accelerating the Open Source IOT ecosystem. The event brings together thought leaders and practitioners who are passionate about open source and IOT.

Here, we are primarily speaking of royalty-free, non-proprietary, open source software for the Internet of Things. Of course, that does not exclude other software paradigms, which are also a part of the ecosystem.

In this longish blog post, I will discuss how the webinos project fits in with open source IOT, especially in the context of its role for node.js.

I have been leading the webinos IOT hub efforts – this post comprises insights and contributions from others at webinos, especially Dr Paddy Byers, Dr Nick Allott, Dave Raggett (W3C) and Giuseppe la Torre.

It’s hard to describe webinos .. and I once jokingly said applying a Star Trek analogy that ‘webinos boldly takes node.js where no node.js has gone before!’

So, I will use the paradigm of node.js to explain these ideas.

Node.js and webinos

Essentially, in webinos we embed an agent into a device that allows it to be part of the Personal Zone of devices managed by a person. The agent is implemented with Node.js and it enables secure mutual authentication of devices in the Zone. Thus, webinos extends the traditional web runtime with a suite of APIs for discovery, messaging etc.

The analogy of an email server is applicable here. Like an email server, messages are stored in the ‘cloud’ but can be accessed by local devices. But webinos also adds distributed functionality, i.e. services owned by one person can be shared with others (under policy limitations). In an IOT sense, that means a sensor owned by one user can be discovered and shared with another user.

Webinos has the following characteristics:

  • Non-proprietary
  • Cross-device
  • Secure
  • Distributed
  • Privacy enabling, i.e. helping users re-establish control over their devices and personal data.

webinos can be applied to many industries and applications, and is initially focussed on four specific areas or gateways: TV, Automotive, Health and Home Automation Gateways. Note that this blog and discussion relate only to the Home Automation/IOT areas of webinos.

The description of webinos (a non-proprietary, cross-device, secure, distributed platform which helps in re-establishing control over your devices and personal data) sounds daunting, but in practice it means:

a)      Devices you own can be translated into services that can be discovered and shared with others (based on policy settings), and

b)      Similarly, devices owned by others can be discovered by you as services and can be accessed (again, subject to policy approval).

This has implications for IOT/Home automation/Smart cities

Consider a Smart city scenario:

One department of the city has deployed pollution sensors and temperature sensors. Another department of the city wants access to the same real time information. Indeed, considering Open Data principles, it could be any person – for example developers running a hackathon. In this scenario, the department which owns the sensors can grant access to the sensors to third parties based on Policy scenarios. Indeed, these sensors could become ‘discoverable’ and could be accessed by any third party as needed.
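A hypothetical sketch of that scenario in Python may make the idea concrete: a sensor owner grants selective access, and discovery only returns the services the requester is allowed to use. The names (`SensorService`, `Policy`, `discover`) are illustrative, not webinos APIs.

```python
# Toy model of policy-based discovery: the owner grants access per party,
# and discovery filters services through the policy.

class SensorService:
    def __init__(self, name, owner):
        self.name = name
        self.owner = owner

class Policy:
    def __init__(self):
        self.grants = {}  # service name -> set of parties allowed access

    def grant(self, service, party):
        self.grants.setdefault(service.name, set()).add(party)

    def allows(self, service, party):
        return party == service.owner or party in self.grants.get(service.name, set())

def discover(services, policy, party):
    """Return only the services this party is allowed to access."""
    return [s.name for s in services if policy.allows(s, party)]

pollution = SensorService("pollution-sensor", owner="dept-environment")
policy = Policy()
policy.grant(pollution, "hackathon-devs")   # open the sensor to third parties

print(discover([pollution], policy, "hackathon-devs"))   # ['pollution-sensor']
print(discover([pollution], policy, "random-party"))     # []
```

In webinos proper, the equivalent policy rules live in the Personal Zone and are enforced in a distributed fashion, as described below.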

This is achieved in three ways:

  • Open technologies (specifically node.js)
  • Implementation of Personal zones and
  • The webinos Dashboard

Significance of Node.js

(this section – acknowledgements to Dr Paddy Byers)

node.js, or just node, is a runtime environment based on JavaScript (JS). It uses the V8 JS engine from Google – the same one as in Chrome – and exposes a series of APIs needed to build networking applications. Libraries include basic things like filesystem and network access, but also HTTP, crypto, SSL, streams – all of the building blocks for apps that either serve or consume network services.

Most important of all, though, is the ability to build apps using external modules – not built in to the core – provided by third parties. There is a very active ecosystem of developers of these node “modules” which gives you access to a massive catalogue of libraries and frameworks. By having this structure, node can concentrate on maintaining a focussed, high performance, stable and common core, and the module ecosystem can provide huge diversity in libraries and frameworks. Unlike other environments – say Ruby with Rails – there isn’t a single framework architecture that becomes an encumbrance or constrains how things are built. There is diversity in the ecosystem and it isn’t held back by centralised coordination or the need for a single view on how things are done.

Node was one of the first projects whose community engagement was fuelled by Github and that mindset – free, decentralised, and open – has been the core ethos of the developer community for node’s core, the module ecosystem and end-user developers. Although node is now owned by Joyent which has its own commercial mission, node remains open and sees contributions from many individuals and organisations.

node is primarily used for building the “front end” for web sites (i.e. the part that directly handles incoming requests and sends responses). Some organisations use it just for the front end but many sites are built top-to-bottom with node.

Node has a number of key advantages.

1)      The principal advantage is scalability. node is based on JS; it is event-driven and single-threaded. While this might at first sight seem to be a disadvantage – running an inherently parallel service on a single-threaded runtime – it turns out to be its key advantage. The reason is that the cost of handling each new request, and in particular the cost of each outstanding request, is very small compared with systems that spawn threads or processes to handle each request. Each request is handled – processing the request, resolving the request path and parameters, triggering database or other IO – but then instead of waiting the system then returns to the idle state ready to handle a new request. The resources occupied by the pending request are simply a few objects and buffers, so many thousands of requests can then be pending on a single server. Secondly, state is easier to share between requests, which minimises the state that needs to be persisted somewhere. A single server can therefore handle tens of thousands of connections and concurrent requests.

2)      The next key feature is its accessibility. node is small at its core – which means a small learning curve to get started – but has a rich ecosystem of modules that enable you to add functionality quickly. The openness of the platform and modules, the support available from the community, and the sheer diversity of things being created, mean that you’re rarely on your own when trying something new. If you look through the various testimonials on the nodejs.org site you see multiple organisations using node to power their mobile apps or mobile sites. There are several reasons why it is well-suited to this.

3)      Suitability for mobile apps – First, these mobile backends – whether serving html or APIs – require huge scale. Any mobile app with even modest adoption can generate hundreds or thousands of requests a second. node allows these services to scale to this level much more readily and cheaply than with competing platforms. Many organisations, even though they have an existing backend for their mainstream website, will take a “clean sheet” approach to building their mobile platform and node is then a natural choice.

4)      Further, mobile apps are increasingly dependent on realtime connections where data can be pushed from the server to the device (e.g. with long polling or websocket connections), rather than being solely conventional sites or http APIs. node provides ready support for realtime connections (either directly or with helpers such as socket.io) and realtime push-dependent systems can be built far more easily than would be possible with Rails or PHP, say. LinkedIn, for example, have built their entire mobile backend in node and you can see other examples on the node.js site

5)      You can also run node on the mobile itself. node is inherently portable – V8 supports multiple CPU architectures and Chrome itself obviously runs on ARM and MIPS and other CPUs as well as x86. node’s footprint on the OS API surface is small – networking, filesystem and events essentially – so it makes it readily portable to multiple environments. There is a port of node to Android and a framework that allows you to build Android apps with node, and there is also an experimental port to iOS.
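The event-driven, single-threaded model in point 1 can be illustrated by analogy with Python’s asyncio (standing in for node’s event loop): each pending “request” is a cheap coroutine object rather than a thread, so one thread can keep thousands of them in flight.

```python
# Analogy for node's event loop: one thread, many pending requests.
# Each request "waits" on simulated IO without blocking the loop.
import asyncio

async def handle_request(request_id):
    # Simulate the IO wait (database, network); the event loop is free
    # to service other requests while this one is pending.
    await asyncio.sleep(0.01)
    return "response-%d" % request_id

async def main():
    # Ten thousand concurrent requests, all on one thread.
    pending = [handle_request(i) for i in range(10000)]
    return await asyncio.gather(*pending)

responses = asyncio.run(main())
print(len(responses))  # 10000
```

All ten thousand sleeps overlap, so the whole batch completes in roughly the time of one – which is the essence of the scalability argument above.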

As devices grow ever more connected, they will increasingly be simultaneously both clients and servers for network services. That doesn’t necessarily mean they will be serving web apps, but your phone has a wide range of data sources that are interesting to exploit – location, camera, proximity via Bluetooth, say, as well as the personal information in contacts, etc. node is a framework that allows you to create servers very quickly for all sorts of functionality.

 You would think performance is an issue, but it’s not; modern devices are so powerful that they have plenty of processing power for the kinds of services you would think of. Anywhere you can run a browser you can also run node.

Having services that are always on is an issue for battery life. There needs to be a way of ensuring that an idle service is really idle and doesn’t drain the battery.

Intermittent network connectivity is obviously an issue – services won’t always be reachable.

Webinos and node.js

Webinos has gone further than most other projects in exploring node.js on different platforms. Specifically, it is addressing two separate issues:

a)      How to expose device functionality as network-accessible services, and

b)      How to create a portable application environment based on JS.

These have implications for IOT.

The main technical contribution of webinos has been that of privacy and access control for services exposed by a device such as a phone, car or TV. Webinos has the idea that a “personal cloud” can be augmented by devices and the services they can each expose, and has created a framework for access to those services, both peer-to-peer and via the cloud.

This is similar to a distributed “plug and play” for personal services; it’s not just about enabling discovery and access, but enabling the owner of the device to give access selectively and to set policies for access. Webinos addresses the range of trust scenarios on which that access might be based – social network relationships, physical proximity, etc.

Webinos is itself built with node, and you can download the specifications from the webinos site and from the webinos GitHub.


Webinos technology

An overview of webinos technology

Within this context now, it’s easier to understand the significance of webinos for IOT

  • Today companies provide services, but require centralization of personal data over which you have little control, making it hard to switch companies
  • Personal Zones provide an architecture for reclaiming control
  • You decide what/when to share with 3rd parties
  • This facilitates intent based smart search
  • Your data is managed within your zone, by the services you install
  • This works well for IoT devices

 webinos Personal Zone Hub (PZH)

The Personal Zone is a conceptual construct that is implemented on a distributed basis from a single Personal Zone Hub (PZH) and multiple Personal Zone Proxies (PZPs).

The critical functions that a Personal Zone hub provides are:

  • A fixed entity to which all requests and messages can be sent and routed on – a personal postbox, as it were
  • A fixed entity on the web through which requests and messages can be issued, for security and optimisation reasons.
  • An authoritative master copy of a number of critical data elements that are to be synced between Personal Zone Proxies (PZPs) and the Personal Zone Hub (PZH), specifically:
    • Certificates for Personal Zone Hub (PZH) and Personal Zone Proxy (PZP) mutual authentication
    • All policy rules, for distributed policy enforcement
    • All relevant context data
  • The functions, therefore, that a Personal Zone Hub (PZH) can support are:
    • User authentication service
    • Personal Zone Proxy (PZP) secure session creation for transport of messages and synchronisation
  • A webinos service host: a Personal Zone Hub (PZH) can host directly Services/APIs that other applications can make use of.
  • Context sync: the Personal Zone Hub (PZH) should act as the master repository for all context data
  • A webinos executable host: a Personal Zone Hub (PZH) will be able to run server-resident webinos applications (these will be JavaScript program files wrapped in a webinos application package)

webinos Personal Zone Proxy (PZP)

  • The webinos Personal Zone satellite proxy acts in place of the Personal Zone Hub when there is no internet access to the central server.
  • In order to act in its place, certain information needs to be synchronised between the satellites and the central hub.
  • This information has already been listed above.
  • The Personal Zone Proxy (PZP) fulfils most, if not all, of the functions described above when there is no Personal Zone Hub (PZH) access
  • In addition to the Personal Zone Hub (PZH) proxy function, the Personal Zone Proxy (PZP) is responsible for all discovery using local hardware-based bearers (Bluetooth, ZigBee, NFC etc.)
  • Unlike the PZH, the PZP does not issue certificates and identities.
  • For optimisation reasons, PZPs are capable of talking directly PZP-to-PZP, without routing messages through the PZH

webinos Application

  • A webinos application runs “on device” (where that device could also be internet-addressable, i.e. a server).
  • A webinos application is packaged, as per packaging specifications, and executes within the WRT.
  • A webinos application has its access to security-sensitive capabilities mediated by the active policy.
  • A webinos application can expose some or all of its capability as a webinos service

webinos Service

A webinos service is a collection of functions and events that are accessible by a webinos application.

These functions and events are always presented to the application developer as sets of JavaScript functions, no matter where the implementation resides.

A webinos service must take note of the following parts of the webinos specifications:

  • Discovery: a service must be discoverable and be able to describe itself to the application in accordance with the discovery specification
  • Messaging: a service must be able to receive and respond to incoming RPC messages
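As an illustrative sketch of that messaging requirement – assuming JSON-RPC 2.0 style request objects (webinos’ actual wire format may differ) – a service is essentially a map from method names to handlers that answers incoming request messages:

```python
# Minimal RPC responder: maps method names to handlers and answers
# JSON-encoded request objects. The message shape is an assumption.
import json

def make_service(handlers):
    """Return a responder closed over a dict of method-name -> function."""
    def respond(raw):
        req = json.loads(raw)
        method = handlers.get(req.get("method"))
        if method is None:
            return json.dumps({"id": req.get("id"), "error": "method not found"})
        result = method(*req.get("params", []))
        return json.dumps({"id": req.get("id"), "result": result})
    return respond

# A toy temperature service exposing one method
service = make_service({"getTemperature": lambda: 22.5})

reply = service(json.dumps({"id": 1, "method": "getTemperature", "params": []}))
print(reply)  # {"id": 1, "result": 22.5}
```

The discovery step would hand an application a stub that sends exactly these request objects to wherever the service implementation actually lives.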

Local Connections

One of the critical innovations of webinos is the virtual overlay network that allows different applications and services to talk to each other over many different interconnect technologies. Not only are there different interconnect technologies for local messaging, there are also three different scenarios in which this communication can take place. These are highlighted below.

  1. Connecting to a full smart device, that hosts both a PZP (and can therefore host native APIs presented as services) and a WRT (and so can host webinos applications exposing webinos services)
  2. Connecting to a dumb device: it hosts a PZP but not a WRT. This means that it can expose only native APIs, not webinos applications
  3. Connecting to a super-dumb device: it hosts neither a PZP nor a WRT, but can expose webinos services – if the client PZP hosts a customised driver

Two other aspects complete the webinos vision – the microPZP and the Dashboard


A MicroPZP is an implementation of the PZP for devices whose specification is too low to deploy a full PZP; it targets devices in the 2MB range.

Dashboard


(acknowledgements to Giuseppe la Torre for this section)


The dashboard brings it all together for the user. In the near future, our houses will be populated with several “smart objects” which can be remotely controlled by users. Some effort will be necessary to create a common platform for this “physical object virtualization”. webinos provides support for the IoT domain, defining and implementing APIs for generic sensors and actuators.

Webinos provides drivers for Arduino boards, OBD electronic control units and ANT health sensors.

One of the most important features we expect from the IoT ecosystem is the physical mashup.

The webinos home controller is a web application which, relying on the webinos platform, allows users to:

i) Create a customizable UI to display information from the user’s sensors. Using the drag-and-drop paradigm, the user can create his or her own interface with charts, gauges, text labels and so on, and display information about all the sensors which belong to his personal zone. This UI can be saved and then displayed on each kind of user device (TV, in-car, tablet). This part of the app can be easily extended; it was recently improved with the possibility to display the user’s position (a webinos service) on a Google map.

 

ii) Add “logic” among the smart objects by means of defining rules. This is a good example of physical mashup: sensors and actuators (but theoretically any type of webinos service) can be used together to create logic rules of the type: if CONDITION then TRIGGER.

 

Using the drag-and-drop paradigm, the user can place on the UI:

  • Input elements (sensors, user input text fields)
  • Condition elements (<, >, AND, OR)
  • Output elements (actuators)
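As a sketch of how such a rule engine can work (hypothetical names; the home controller itself implements this in JavaScript on the webinos platform), each rule wires an input element, a condition element and an output element together:

```python
import operator

# Condition elements supported by this sketch (a subset of <, >, AND, OR).
OPS = {"<": operator.lt, ">": operator.gt}

class Rule:
    """if CONDITION then TRIGGER, wired between a sensor and an actuator."""

    def __init__(self, read_sensor, op, threshold, trigger_actuator):
        self.read_sensor = read_sensor            # input element
        self.op = OPS[op]                         # condition element
        self.threshold = threshold
        self.trigger_actuator = trigger_actuator  # output element

    def evaluate(self):
        # Read the input, test the condition, fire the output if it holds.
        if self.op(self.read_sensor(), self.threshold):
            self.trigger_actuator()
            return True
        return False

# Example rule: if the temperature sensor reads above 25, switch the fan on.
fan_state = {"on": False}
rule = Rule(read_sensor=lambda: 28.0,
            op=">",
            threshold=25.0,
            trigger_actuator=lambda: fan_state.update(on=True))
fired = rule.evaluate()
```

In the real application the input and output elements are webinos services discovered in the personal zone rather than local lambdas.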

 

An important webinos feature which has been integrated into the home controller application is the Explorer.

 

The Explorer is a common interface for webinos applications which allows them to access the services exposed by the user’s devices inside the personal zone.

In the case of the home controller app, the Explorer allows users to pick services (sensors or actuators) from those inside their own personal zone or those belonging to a friend’s personal zone.

Conclusions

The webinos project takes node.js to different platforms – including IOT. But it does a lot more. We will be discussing these platforms (webinos, OSIOT, Hypercat, Web of Things etc.) at the Accelerating the Open Source IOT Ecosystem event

The big picture of webinos is shown below

Contributions

Many thanks to the webinos project, especially Dr Nick Allott, Dr Paddy Byers, Dave Raggett (W3C) and Giuseppe la Torre


Policy news for September 2013


Considering my work in contributing to the EIF Digital World in 2030 report and the impact of Big Data insights, we focus these newsletters on Data with a Policy slant.

Data affects us all and it will continue to impact many policy matters in future. I have been tracking Big Data trends on social media – especially Twitter – and here provide a perspective/edited view for policy matters.

Tomorrow’s cities: How big data is changing the world

Let’s start with Smart cities, a trend which brings many other trends together.

Should happiness become a general measurement of city life? The Hedonometer project sets out to map happiness levels in cities across the US using data from Twitter.

Using 37 million geolocated tweets from more than 180,000 people in the US, the team from the Advanced Computing Centre at the University of Vermont rated words as either happy or sad.

“Cities looking to understand changes in the behaviour of their citizens, for example to locate ads for public health programmes, can look to social media for real-time information,” said Chris Danforth, one of the project leaders.
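The underlying technique can be sketched in a few lines of Python: rate each recognised word as happy or sad and average the ratings over a body of tweets (the word scores below are invented for illustration; the Hedonometer itself uses crowd-sourced valence ratings on a 1–9 scale):

```python
import re

# Invented word ratings on a Hedonometer-style 1-9 happy/sad scale.
WORD_SCORES = {"happy": 8.3, "love": 8.4, "rain": 4.8, "sad": 2.4, "traffic": 3.2}

def happiness(tweets):
    """Average the scores of all rated words found across a list of tweets."""
    scores = [WORD_SCORES[w]
              for t in tweets
              for w in re.findall(r"[a-z']+", t.lower())
              if w in WORD_SCORES]
    return sum(scores) / len(scores) if scores else None

# Two invented city-level tweet samples.
city_a = happiness(["Love this sunny day, so happy!", "happy hour downtown"])
city_b = happiness(["Stuck in traffic again", "sad and rain all week"])
```

With geolocated tweets, the same averaging per city (or per hour) yields the kind of real-time happiness map the article describes.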

The article also provides some interesting data points for policy makers:

  • In 2013 internet data, mostly user-contributed, will account for 1,000 exabytes (an exabyte is a unit of information equal to one quintillion bytes)
  • Open weather data collected by the National Oceanic and Atmospheric Administration has an annual estimated value of $10bn
  • Every day we create 2.5 quintillion bytes of data
  • 90% of the data in the world today has been created in the past two years
  • Every minute 100,000 tweets are sent globally

Back in 2010 Google chief executive Eric Schmidt noted that the amount of data collected since the dawn of humanity until 2003 was the equivalent to the volume we now produce every two days.

In Norway, more than 40,000 bus stops are tweeting, allowing passengers to leave messages about their experiences, and in London the mayor’s office has just begun a project to tag trees so that people can learn about their history.

Supermarket chain Tesco is installing sensors across its stores to reduce heating and lighting costs. The records of the fridge systems in one store alone produce 70 million data points a year.

Vancouver is making sense of data using a 3D visualisation of the city

Computer-aided design company Autodesk has been working with San Francisco, Vancouver and Bamberg, in southern Germany, to build 3D visualisations over which government can overlay data sets to see how a city is performing at any time.

Presenting data in new ways has had surprising consequences. For example, in Germany the model was used to show people what the impact of a new railway line would be.

 And finally the quote: “We are basically building a digital copy of our physical world and that is having profound consequences.”

 To Go from Big Data to Big Insight, Start with a Visual

The Harvard Business Review asks whether data visualization is actionable, by looking at a large data set.

How big? Massive: We are documenting every tweet, retweet, and click on every shortened URL from Twitter and Facebook that points back to New York Times content, and then combining that with the browsing logs of what those users do when they land at the Times.

 3 Inconvenient Truths About Big Data In Security Analysis

HD Moore at UNITED Security Conference predicts: “We’ll see a large breach from one of the analytics providers in the next 12 months”

 Big Data That’s Good for the Public

The DOPA project – funny-sounding name but doing something very serious (from MIT Sloan Review).

 A program funded by the EU promises to semantically link open data like never before.

Facts: 900 million. Active sources: more than 100,000. Data sets: 30,000, with 200 million time-series and 1.5 billion fact values.

Link all these data sources together and what do you get? Timely, if not crucial, contextual information about markets, trends, competitors, products and consumer opinions.

This is the promise of DOPA, a project funded under the umbrella of the European Union’s Seventh Framework (a made-for-HBO series title if I’ve ever heard one) implemented to further European research and economic development.

DOPA’s goal is to semantically link massive amounts of open economic and financial data — quantitative, qualitative, structured, unstructured and polystructured (as in audio, video, images, free-form text, tables and XML files) — and make it available through a framework that standardizes data sets. Its hoped-for outcomes include a bevy of innovations based on new ways of looking at publicly available data.

 Big Data without good analytics can lead to bad decisions

Experts warn that the temptation to let the computers do it all, without the human element, can lead to trouble

 Clinical data analytics next big thing

Population health needs clinical analytics solutions, new report finds

The clinical data analytics market is about to get red hot. With the shift toward new payment models and the sheer amount of clinical data contained in electronic health records, more and more healthcare groups are looking to analytics solutions for population health management, according to a new report released Tuesday.

 Predictive Analytics: Harnessing the Power of Big Data

From a blog for the book - Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die – comes a range of areas affected by Big Data

 This learning process discovers and builds on insightful gems such as:

Early retirement decreases your life expectancy

Online daters more consistently rated as attractive receive less interest

Vegetarians miss fewer flights

Local crime increases after public sporting events

 This man thinks big data and privacy can co-exist, and here’s his plan

 Dr Alexander Dix – Berlin’s privacy chief proposes a compromise for Big Data and Privacy

That is also something for governments to support and finance — business models or research, for instance, to improve the tools for self-protection for the internet user, and possibly to develop a kind of European cloud model which is less [vulnerable] to detection by the intelligence services. There could also be a competitive advantage for European businesses.

From Malaysia (PULSATE Announces Big Data Collaboration with Dell, Intel and Revolution Analytics) to Peru (IBM Opens New Cloud Data Center in Peru to Meet Demand for Big Data Analytics) – there is emphasis on Big Data.

Impact on marketing and sales

Sales Of Public Data To Marketers Can Mean Big $$ For Governments

The Rise Of Big Data and Predictive Analytics in Marketing

Visualization: The Simple Way to Simplify Big Data

 And finally, the new jobs – always an interest for policy makers

Where Do Data Scientists Come From?

Data experts aren’t uniformly distributed around the globe. Prepare to be surprised at some of the best countries and cities for data expertise.

In growing field of big data, jobs go unfilled

To conclude, here is more about my course on Big Data and Algorithms for Smart cities – an effort to bring about education in algorithms and algorithm transparency.


My course on Big Data algorithms for Smart cities at City science – UPM (Madrid)


Background

I have blogged before about the need for algorithm transparency for Big Data algorithms for Smart cities. The same sentiment is expressed in Rage Against the Algorithms – how can we know the biases of a piece of software? By reverse engineering it, of course.

After a year or so, I have made some progress on the idea of Big Data algorithms for Smart cities, and I will try to elaborate in this longish blog post, which you can also download as a pdf. In addition to my Oxford University course on Big Data for Telecoms, from Jan 2014 onwards I am pleased to also be teaching a course about Big Data Algorithms for Smart Cities. This also includes IOT, Mobile and M2M data.

At the newly launched City Sciences program at UPM – Technical University of Madrid (Universidad Politécnica de Madrid) – I will be teaching about applying Big Data algorithms (specifically Mahout, real-time algorithms like Twitter Storm, predictive algorithms and machine learning algorithms) to Telecoms, IOT and Smart cities.

I am excited about this and always wanted to do this!

Also, I don’t see many other places where this is being done, so it’s truly pioneering. Spain is a hotbed of Smart city and mobile activity, especially with initiatives like Smart Santander and the GSMA Connected Living initiative.

One of the reasons for this blog post is to reach out to companies and other researchers who are working in this space (e.g. IBM Smarter Planet, SAP and GE (Industrial Internet) are all doing some interesting work here, as are research institutes like Fraunhofer FOKUS). I am already doing some interesting work in this space, especially at the Liverpool Smart cities project – Connected Liverpool – so we are already looking at real-world applications.

Another way to look at it is to think of the role of a Data Scientist for a city. The Harvard Business Review says that the role of the Data Scientist will be one of the hottest roles going forward.

Here are some ideas about my thinking.

Note that this is not the actual syllabus – it shows more of my thought process.

Approach

My approach involves applying insights from one domain (Big Data algorithms) to data from Smart cities, Mobile and IOT.

So, we start with the maths – for example:

  • Differential calculus
  • Discrete maths
  • Probability theory
  • Linear algebra

and then techniques such as:

  • Decision trees
  • Nearest neighbour
  • Unsupervised learning
  • Probabilistic modelling (pdf)
  • Bayesian learning
  • Predictive analysis techniques (Predicting the future: What is predictive analytics – Part 1 and Part 2)
  • Machine learning algorithms
  • Real-time algorithms like Twitter Storm
  • Apache Mahout etc.

We then apply these to optimization problems based on data streams from Smart city verticals (like transportation), IOT, Mobile data and Open Data streams, all within the context of the R programming language – albeit there is some great work in Python as well, e.g. scikit-learn.
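As a small worked example of one of these techniques, here is a nearest-neighbour classifier written from scratch in Python (the sensor readings and congestion labels are invented; in practice one would use scikit-learn or the R equivalents):

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    by_distance = sorted(train, key=lambda p: math.dist(p[0], query))
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

# Invented (noise_db, vehicles_per_min) readings from a roadside sensor,
# labelled by observed congestion level.
train = [((55, 5), "light"), ((60, 8), "light"), ((58, 6), "light"),
         ((80, 40), "heavy"), ((85, 45), "heavy"), ((82, 38), "heavy")]

label = knn_predict(train, query=(79, 36))
```

The same pattern scales from this toy stream to real smart-city feeds once the features are normalised and k is tuned.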

 Why now?

Both IOT and Open Data are maturing; many new initiatives will make IOT data increasingly common. Apart from mobile phones, apps and sensors, we also have initiatives like AllJoyn, IFTTT and webinos for IOT, and operators like Telefonica using Open Data in innovative ways in partnership with the Open Data Institute.

So, soon we will be presented with an abundance of Data. How to optimise it to get real insights will be the next challenge – hence the algorithms.

This also brings us to Data. I was trying to find a taxonomy of mobile data. The closest I came to was this paper: although from 2007, the principles still apply – Towards a Taxonomy of Mobile Applications (pdf).

Mobile data streams

Candidate dimensions for a mobile taxonomy

Temporal dimension. (Synchronous: user and application interact in real time, Asynchronous: user and application interact in non-real time)

Communication dimension. (Informational, Reporting, Interactional)

Transaction dimension: (Transactional, non-transactional)

Public dimension: (Public, Private)

Multiplicity (or participation) dimension:  (Individual, Group)

Location dimension. Some mobile applications provide customized information or functionality based on the user’s location, whereas other applications do not depend on where the user is located.

The identity dimension relates to whether the identity of the user is used to modify the application’s behaviour.
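One way to make the taxonomy concrete is to encode each dimension as a field and position sample applications along them (a sketch: the dimension values follow the paper, but the application entries are illustrative):

```python
from dataclasses import dataclass

@dataclass
class MobileApp:
    """One application positioned along the taxonomy's dimensions."""
    name: str
    temporal: str        # "synchronous" or "asynchronous"
    communication: str   # "informational", "reporting" or "interactional"
    transactional: bool  # transaction dimension
    public: bool         # public dimension
    multiplicity: str    # "individual" or "group"
    location_aware: bool # location dimension
    identity_aware: bool # identity dimension

apps = [
    MobileApp("mobile banking", "synchronous", "interactional",
              True, False, "individual", False, True),
    MobileApp("local information purchase", "synchronous", "informational",
              True, True, "individual", True, False),
]

# Query the taxonomy: which sample apps are location-aware?
location_aware = [a.name for a in apps if a.location_aware]
```

Once applications are encoded this way, the taxonomy becomes queryable: one can filter the catalogue by any dimension when deciding which apps yield which kinds of mobile data.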

Categorization of Sample Mobile Applications

Purchasing location-based contents (local information, routing, etc.):

Mobile inventory management for a company:

Product location and tracking for individuals (e.g., searching for a certain plasma TV in a given city):

Mobile auctions:

Mobile games:

Mobile financial services (mobile banking):

Mobile advertisement (both user-specific and location-specific):

Mobile entertainment services (stored contents-on-demand, live events):

Mobile personal services (mobile dating):

Mobile distance education (synchronous and asynchronous versions):

Mobile product recommendation systems:

Wireless patient monitoring:

Mobile telemedicine:

So, potentially, all these applications (and many more from apps) could provide mobile data. We also need a taxonomy of city data

A taxonomy of City Data

Domains like Transportation will be early providers of City data – but in the blog Big Data for Smart cities – How do we go from Open Data to Big Data for Smart cities – I listed many more:

Environmental data (particulate matter, CO2, pollen)
Markets (weekly, flea, Christmas markets)
Events (festivals, concerts, long night of …, sports events)
Disposal (appointments in my street, recycling centres, container sites, hazardous waste)
Infrastructure (cycle paths, toilets, mailboxes, ATMs, telephones)
Traffic (construction sites, traffic jams, road closures)
Transport (delays, cancellations, special trips)
Opening times (libraries, museums, exhibitions)
Management (forms, responsibilities, authorities, opening times)
Consumer advice, debt counselling
Family (parental allowance, day nurseries, kindergartens)
Education (schools, community colleges, colleges and universities)
Housing (housing benefit, rent prices, real estate, land prices)
Health (hospitals, pharmacies, emergency services, specialist counselling services, blood donation)
Pets (veterinarians, animal shelters, animal care)
Control (bathing, food, restaurants, prices)
Legal (laws, regulations, guidance, arbitrators, evaluators)
Police Online (current events, investigations, crime atlas)
City Planning (zoning, construction, transport, airports)
Population (number, regional distribution, demographics, purchasing power, employment/unemployment, children)

And of course, wearable mobile data technology could create its own data streams.

 What makes a city Smart?

How do we bring this all together?

The ex-Chinese Premier Wen Jiabao once said: “Internet + Internet of things = Wisdom of the earth”

Indeed the Internet of Things revolution promises to transform many domains ..

As the term Internet of Things (IOT) implies, IOT is about Smart objects.

For an object (say a chair) to be ‘smart’, it must have three things:

- An identity (to be uniquely identifiable – via IPv6)
- A communication mechanism (i.e. a radio), and
- A set of sensors/actuators

For example – the chair may have a pressure sensor indicating that it is occupied

Now, if it is able to know who is sitting in it, it could correlate more data by connecting to the person’s profile.

If it is in a cafe, whole new data sets can be correlated (about the venue, about who else is there etc.).
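A minimal sketch of such a smart object in Python (the identifier and sensor reading are invented):

```python
from dataclasses import dataclass, field

@dataclass
class SmartObject:
    """A 'smart' object: an identity, a radio, and a set of sensors."""
    identity: str                     # e.g. an IPv6 address
    radio: str                        # communication mechanism
    sensors: dict = field(default_factory=dict)

    def read(self, sensor):
        # Sample the named sensor.
        return self.sensors[sensor]()

# The smart chair: a pressure sensor indicates whether it is occupied.
chair = SmartObject(
    identity="2001:db8::c4a1",           # illustrative IPv6 address
    radio="bluetooth-le",
    sensors={"pressure": lambda: 62.0},  # kg on the seat, invented reading
)

occupied = chair.read("pressure") > 0
```

Linking the occupancy reading to a person’s profile or a venue is then a data-correlation problem, which is exactly the point: IOT is all about Data.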

Thus, IOT is all about Data ..

By 2020, we are expected to have 50 billion connected devices

To put in context:

The first commercial citywide cellular network was launched in Japan by NTT in 1979

The milestone of 1 billion mobile phone connections was reached in 2002

The 2 billion mobile phone connections milestone was reached in 2005

The 3 billion mobile phone connections milestone was reached in 2007

The 4 billion mobile phone connections milestone was reached in February 2009.

So, 50 billion by 2020 is a large number

 Smart cities can be seen as an application domain of IOT

In 2008, for the first time in history, more than half of the world’s population was living in towns and cities. By 2030 this number will swell to almost 5 billion, with urban growth concentrated in Africa and Asia and many mega-cities (10 million+ inhabitants). By 2050, 70% of humanity will live in cities.

That’s a profound change and will demand a different management approach than what is possible today.

Also, the economic wealth of a nation could be seen as Energy + Entrepreneurship + Connectivity (at the sensor level + network level + application level).

Hence, if IOT is seen as part of a network, then it is a core component of GDP.

 So, what makes a city ‘smart’?

Building upon the previous discussion, my view is that a Smart city is a city that behaves like the Internet, i.e. is a platform/enabler for its citizens. Thus, the citizens make the city ‘smart’ by adding knowledge, value, data etc. This is part of a wider socio-economic trend from ‘mass production’ to ‘smaller individualized services’ – e.g. in music, in urban farming, in the Bristol Pound, in local sourcing of food etc.

Holy grail – improved services

In conclusion, the payoff for a city is improved services. This is already happening, for instance in a far-off place like Abidjan (AllAboard: a system for exploring mobility and optimizing transport in developing countries using cellphone data) and in healthcare, and we are seeing many new forms of radios, like Cell dot from Ericsson, and Internet-connected super highways using white space.

We could thus see a new value chain of sensor – Data – Algorithms – visualization
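That value chain can be sketched end to end in a few lines (all readings invented; the ‘visualization’ is a plain text bar chart standing in for a real charting layer):

```python
# Sensor -> Data: a stream of invented hourly passenger counts from a bus stop.
readings = [12, 30, 55, 41, 18]

# Data -> Algorithms: a trivial analytic, the moving average over a window of 3.
def moving_average(xs, window=3):
    return [sum(xs[i:i + window]) / window for i in range(len(xs) - window + 1)]

smoothed = moving_average(readings)

# Algorithms -> Visualization: render each smoothed value as a text bar.
chart = ["#" * round(v / 5) for v in smoothed]
```

Each arrow in the chain is a point where a vendor, a researcher or a city can add value.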

If you are a vendor, company or researcher working in this space, I am happy to discuss solutions, joint papers etc. Please contact me at ajit.jaokar at futuretext.com

Image – shutterstock

Hack the curriculum – I will be attending this event ..

I heard about Hack the curriculum from a teacher, @Laura Dixon.

Laura is part of a group of dedicated teachers with the objective of creating an inclusive opportunity for learning computer science regardless of gender, race, socio-economic status, SEN or disabilities.

I will be attending this event as a developer, considering my interest in feynlabs.

More details below

hosted by #include

in partnership with the University of Warwick

9 November 2013 10.00 – 17.00

This is no ordinary hack – instead of creating a piece of software,
the aim is to create resources for use in the teaching of Computer
Science in the classroom.

Teachers, developers and academics will team up to tackle the new
curriculum, sharing their expertise to produce interesting learning
opportunities. We want to support diversity so the resources should
aim to be inclusive for as many students as possible.

£8 per ticket including lunch and refreshments

Programme (tbc)

9.45 Registration begins
10.15 Welcome and explanation of the day – Laura Dixon and Carrie Anne Philbin
10.30 Keynote presentation – Anastasia Beaumont-Bott of TOTKO
10.45 Keynote presentation – Mark Dorling
11.15 Coffee and team organisation

Hack the Curriculum
13.30 Lunch served
16.00 Presentations and judging
17.00 Close

We look forward to seeing you. Please bring your own laptop or tablet.

Room details and directions will be confirmed via email nearer the event.

Registration page - Hack the curriculum