Implementing Enterprise AI course – new batch – April 2017

 

The January batch of the Implementing AI course is completely sold out!

I am pleased to announce a new cohort of the Implementing Enterprise AI course, starting April 24, 2017. We are accepting registrations now. As usual, numbers are limited and we have an early bird discount.

Implementing Enterprise AI is a unique, limited-edition course focused on AI Engineering / AI for the Enterprise.

Created in partnership with H2O.ai, the course uses Open Source technology to work with AI use cases. Successful participants will receive a certificate of completion and also validation of their project from H2O.ai.

 To sign up or learn more, email info@futuretext.com

The course covers

  • Design of Enterprise AI
  • Technology foundations of Enterprise AI systems
  • Specific AI use cases
  • Development of AI services
  • Deployment and Business models

The course targets developers and architects who want to transition their careers to Enterprise AI. The course relates new AI ideas to familiar concepts like ERP and data warehousing, which makes the transition easier.

The implementation / development for the course is done using the H2O APIs for R, Python & Spark. 

 

Duration:
Starting April 2017; approximately six months (three months for the content and up to three months for the Project).
Course includes a certificate of completion and also validation of the project from H2O.ai. (Projects will be created in a team)

 Course Logistics:

Offered Online and Offline (London and Berlin)

When:    April 2017
Duration: Approximately six months (including project)
Fees:      contact us

To sign up or learn more, email info@futuretext.com

 Outline

April – May 2017

 

  • Understanding the Enterprise AI layer
  • Introduction to Machine Learning
  • Unsupervised Learning
  • Supervised Learning
  • Generalized Linear Modeling
  • Gradient Boosting Machine
  • Ensembles
  • Random Forest
  • Programming foundations (see notes below)

 

June

  • Introduction to Deep Learning
  • Multilayer Perceptrons
  • Autoencoders
  • Deep Convolutional Networks
  • Recurrent Neural Networks
  • Reinforcement learning
  • Programming foundations (see notes below)

 

July 2017

 

  • Natural language processing
  • Basics of Text Analytics
  • POS Tagging
  • Sentiment Analysis
  • Text Classification
  • Intelligent bots
  • Programming foundations (see notes below)

 

Aug – Oct 2017 – Projects and deployment

 

  • Deploying Enterprise AI
  • Acquiring Data and Training the Algorithm
  • Processing and hardware considerations
  • Business Models – High Performance Computing – Scaling an AI system
  • Costing an AI system
  • Creating a competitive advantage from AI
  • Industry Barriers for AI

 

Implementation of Enterprise AI use cases (in groups)

 

  • Healthcare
  • Insurance
  • Adtech
  • Fraud detection
  • Anomaly detection
  • Churn, classification
  • Customer analytics
  • Natural Language Processing, Bots and Virtual Assistants

 

Notes

  • The course covers Design of Enterprise AI, Technology foundations of Enterprise AI systems, Specific AI use cases, Development of AI services and Deployment and Business models
  • The implementation / development for the course is done using R, Python and Spark using the H2O APIs
  • For Deep Learning, we also work with GPUs, TensorFlow, MXNet and Caffe
  • We focus on large scale problems
  • Notes on Programming foundations: We assume that you have significant programming knowledge. However, we do not assume that you are familiar with Python, R or Spark.
  • The course provides you with a background in these languages over the first three months. You will then use this knowledge to work on the use cases in the Project phase.
  • Certification of completion is based on completing a quiz for each module.
  • Project certification (validated by H2O.ai) is based on Projects working in groups
  • Note that the syllabus is subject to change

Project certification by h2o.ai

Twelve types of Artificial Intelligence (AI) problems

Background – How many cats does it take to identify a Cat?

In this article, I cover the 12 types of AI problems, i.e. I address the question: in which scenarios should you use Artificial Intelligence (AI)? We cover this space in the Enterprise AI course.

Some background:

Recently, I conducted a strategy workshop for a group of senior executives running a large multinational. In the workshop, one person asked: How many cats does it take to identify a Cat?

This question refers to Andrew Ng’s famous paper on Deep Learning, in which his system correctly identified images of cats from YouTube videos. On one level, the answer is very clear, because Andrew Ng lists that number in his paper: 10 million images. But that answer is incomplete, because the question itself is limiting; there are many more details in the implementation – for example, training on a cluster of 1,000 machines (16,000 cores) for three days. I wanted to present a more detailed response to the question. Also, many problems can be solved using traditional Machine Learning algorithms – as per an excellent post from Brandon Rohrer, “Which algorithm family can answer my question?”. So, in this post I discuss problems that can be uniquely addressed through AI. This is not an exact taxonomy, but I believe it is comprehensive. I have intentionally emphasized Enterprise AI problems because I believe AI will affect many mainstream applications – although a lot of media attention goes to the more esoteric ones.

 

What problem does Deep Learning address?

What is Deep Learning?

Firstly, let us explore what Deep Learning is.

 

Deep learning refers to artificial neural networks that are composed of many layers; the ‘Deep’ refers to these multiple layers. In contrast, many other machine learning algorithms, like SVMs, are shallow because they lack such a multi-layer architecture. The Deep architecture allows subsequent computations to build upon previous ones. We currently have deep learning networks with 10+ and even 100+ layers.

 

The presence of multiple layers allows the network to learn more abstract features. Thus, the higher layers of the network can learn more abstract features building on the inputs from the lower layers.  A Deep Learning network can be seen as a Feature extraction layer with a Classification layer on top. The power of deep learning is not in its classification skills, but rather in its feature extraction skills. Feature extraction is automatic (without human intervention) and multi-layered.
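The layered computation described above can be sketched as a toy forward pass in plain Python. The weights and inputs below are made-up numbers for illustration, not a trained network: each layer applies a weighted sum plus bias followed by a non-linearity, and the output of one layer feeds the next.

```python
def relu(xs):
    # Simple non-linearity applied after each hidden layer
    return [max(0.0, v) for v in xs]

def layer(inputs, weights, biases):
    # One dense layer: weighted sum of inputs plus bias, per neuron
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

# A toy network: 2 inputs -> 3 hidden -> 2 hidden -> 1 output.
# Higher layers can only see the features computed by lower layers,
# which is what lets them build more abstract representations.
x = [0.5, 1.2]
h1 = relu(layer(x, [[0.1, 0.4], [-0.3, 0.8], [0.5, 0.2]], [0.0, 0.1, -0.1]))
h2 = relu(layer(h1, [[0.2, -0.5, 0.7], [0.6, 0.1, -0.2]], [0.05, 0.0]))
out = layer(h2, [[1.0, -1.0]], [0.0])
print(out)
```

In a real network the final layer would feed a classifier, and the weights would come from training rather than being written by hand.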

 

The network is trained by exposing it to a large number of labelled examples. Errors are detected and the weights of the connections between the neurons adjusted to improve results. The optimisation process is repeated to create a tuned network. Once deployed, unlabelled images can be assessed based on the tuned network.
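That train-measure-adjust cycle can be sketched in a few lines. This is an illustrative toy (a single sigmoid neuron on a tiny made-up labelled set, updated by gradient descent), nothing like the scale of the networks the text describes, but the loop is the same: predict, measure the error, nudge the connection weights, repeat.

```python
import math

# Tiny labelled dataset: the label happens to equal the first input.
data = [([0.0, 0.0], 0), ([0.0, 1.0], 0), ([1.0, 0.0], 1), ([1.0, 1.0], 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.5

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid activation

def loss():
    # Mean squared error over the labelled examples
    return sum((predict(x) - y) ** 2 for x, y in data) / len(data)

start = loss()
for _ in range(200):                     # repeated optimisation passes
    for x, y in data:
        err = predict(x) - y             # error on this example
        for i in range(len(w)):          # adjust each connection weight
            w[i] -= lr * err * x[i]
        b -= lr * err
print(start, loss())                     # loss falls as the net tunes itself
```

Once tuned, `predict` can be applied to unlabelled inputs, which mirrors the deployment step described above.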

 

Feature engineering is the process of finding connections between variables and packaging them into a new, single variable. Deep Learning performs automated feature engineering, and this automation is its defining characteristic, especially for unstructured data such as images. This matters because the alternative, engineering features by hand, is slow, cumbersome and dependent on the domain knowledge of the person performing the engineering.
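As a contrast to the automated case, hand-engineering a feature looks like this. The BMI example and the field names are ours, purely for illustration: a human decides that a particular combination of raw variables is informative and packages it as one new variable.

```python
def add_bmi(record):
    # record: {"weight_kg": ..., "height_m": ...}
    # BMI is a classic hand-crafted feature: a domain expert decided
    # that weight / height^2 is the informative combination.
    out = dict(record)
    out["bmi"] = record["weight_kg"] / record["height_m"] ** 2
    return out

patient = {"weight_kg": 70.0, "height_m": 1.75}
print(add_bmi(patient)["bmi"])
```

A deep network, by contrast, would be left to discover such combinations itself from many labelled examples.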

 

Deep Learning suits problems where the target function is complex, and where datasets are large but include examples of positive and negative cases. Deep Learning also suits problems that involve hierarchy and abstraction.

Abstraction is a conceptual process by which general rules and concepts are derived from the usage and classification of specific examples. We can think of an abstraction as the creation of a ‘super-category’ which comprises the common features that describe the examples for a specific purpose but ignores the ‘local changes’ in each example. For example, the abstraction of a ‘Cat’ would comprise fur, whiskers etc. In Deep Learning, each layer is involved with detecting one characteristic, and subsequent layers build upon previous ones. Hence, Deep Learning is used where the problem domain comprises abstract and hierarchical concepts. Image recognition falls in this category. In contrast, a spam detection problem that can be modelled neatly as a spreadsheet is probably not complex enough to warrant Deep Learning.

A more detailed explanation of this question can be found in this Quora thread.

AI vs. Deep Learning vs. Machine Learning

Before we explore types of AI applications, we also need to discuss the differences between three terms: AI, Deep Learning and Machine Learning.

The term Artificial Intelligence (AI) implies a machine that can Reason. A more complete list of AI characteristics (source: David Kelnar) is:

  1. Reasoning: the ability to solve problems through logical deduction
  2. Knowledge: the ability to represent knowledge about the world (the understanding that there are certain entities, events and situations in the world; those elements have properties; and those elements can be categorised.)
  3. Planning: the ability to set and achieve goals (there is a specific future state of the world that is desirable, and sequences of actions can be undertaken that will effect progress towards it)
  4. Communication: the ability to understand written and spoken language.
  5. Perception: the ability to deduce things about the world from visual images, sounds and other sensory inputs.

 

The holy grail of AI is artificial general intelligence (think Terminator!), which would allow machines to function independently in a normal human environment. What we see today is mostly narrow AI (e.g. the NEST thermostat). AI is evolving rapidly. A range of technologies currently drives AI, including: image recognition and auto labelling, facial recognition, text to speech, speech to text, auto translation, sentiment analysis, and emotion analytics in image, video, text and speech (source: Bill Vorhies). AI apps have also reached accuracies of 99%, in contrast to 95% just a few years back.

 

Improvements in Deep Learning algorithms drive AI. Deep Learning algorithms can detect patterns without the prior definition of features or characteristics. They can be seen as a hybrid form of supervised learning, because you must still train the network with a large number of examples, but without the requirement of predefining the characteristics of those examples (features). Deep Learning networks have made vast improvements both due to the algorithms themselves and due to better hardware (specifically GPUs).

 

Finally, in a broad sense, the term Machine Learning means the application of any algorithm against a dataset to find a pattern in the data. This includes supervised and unsupervised algorithms for segmentation, classification or regression. Despite their popularity, there are many reasons why Deep Learning algorithms will not make other Machine Learning algorithms obsolete.

12 types of AI problems

With this background, we now discuss the twelve types of AI problems.

1) Domain expert: Problems which involve Reasoning based on a complex body of knowledge

This includes tasks based on learning a body of knowledge, like legal or financial knowledge, and then formulating a process whereby the machine can simulate an expert in the field.

2) Domain extension: Problems which involve extending a complex body of Knowledge

Here, the machine learns a complex body of knowledge, like information about existing medication, and can then suggest new insights for the domain itself – for example, new drugs to cure diseases.

3) Complex Planner: Tasks which involve Planning

Many logistics and scheduling tasks can be done by current (non-AI) algorithms. But as the optimization becomes increasingly complex, AI could help. One example is the use of AI techniques in IoT for sparse datasets. AI techniques help in this case because we have large and complex datasets where human beings cannot detect patterns but a machine can do so easily.

4) Better communicator: Tasks which involve improving existing communication

AI and Deep Learning benefit many communication modes, such as automatic translation, intelligent agents etc.

5) New Perception: Tasks which involve Perception

AI and Deep Learning enable newer forms of Perception, which in turn enable new services such as autonomous vehicles.

6) Enterprise AI: AI meets Re-engineering the corporation!

While autonomous vehicles etc. get a lot of media attention, AI will be deployed in almost all sectors of the economy. In each case, the same principles apply, i.e. AI will be used to create new insights from automatic feature detection via Deep Learning, which in turn help to optimize, improve or change a business process (over and above what can be done with traditional machine learning). I outlined some of these processes in financial services in a previous blog: Enterprise AI insights from the AI Europe event in London. In a wider sense, you could view this as Re-engineering the Corporation meets AI / Artificial Intelligence. This is very much part of the Enterprise AI course.

 

7) Enterprise AI: adding unstructured data and Cognitive capabilities to ERP and Datawarehousing

For reasons listed above, unstructured data offers a huge opportunity for Deep Learning and hence AI.  As per Bernard Marr writing in Forbes:  “The vast majority of the data available to most organizations is unstructured – call logs, emails, transcripts, video and audio data which, while full of valuable insights, can’t easily be universally formatted into rows and columns to make quantitative analysis straightforward. With advances in fields such as image recognition, sentiment analysis and natural language processing, this information is starting to give up its secrets, and mining it will become increasingly big business in 2017.” I very much agree with this. In practice, this will mean enhancing the features of ERP and Data Warehousing systems through Cognitive systems.

8) Problems which impact domains due to second order consequences of AI

David Kelnar says in The fourth industrial revolution: a primer on artificial intelligence:

“The second-order consequences of machine learning will exceed its immediate impact. Deep learning has improved computer vision, for example, to the point that autonomous vehicles (cars and trucks) are viable. But what will be their impact? Today, 90% of people and 80% of freight are transported via road in the UK. Autonomous vehicles alone will impact: safety (90% of accidents are caused by driver inattention) employment (2.2 million people work in the UK haulage and logistics industry, receiving an estimated £57B in annual salaries) insurance (Autonomous Research anticipates a 63% fall in UK car insurance premiums over time) sector economics (consumers are likely to use on-demand transportation services in place of car ownership); vehicle throughput; urban planning; regulation and more. “

 

9) Problems in the near future that could benefit from improved algorithms

A catch-all category for things which were not possible in the past but could be possible in the near future due to better algorithms or better hardware. For example, in speech recognition, improvements continue to be made, and the abilities of the machine now equal those of a human. From 2012, Google used LSTMs to power the speech recognition system in Android. Just six weeks ago, Microsoft engineers reported that their system reached a word error rate of 5.9% – a figure roughly equal to that of human abilities, for the first time in history. The goal-post continues to move rapidly; for example, loom.ai is building an avatar that can capture your personality.

10) Evolution of Expert systems

Expert systems have been around for a long time.  Much of the vision of Expert systems could be implemented in AI/Deep Learning algorithms in the near future. If you study the architecture of IBM Watson, you can see that the Watson strategy leads to an Expert system vision. Of course, the same ideas can be implemented independently of Watson today.

 

11) Super Long sequence pattern recognition

This domain is of personal interest to me due to my background with IoT (see my course at Oxford University, Data Science for Internet of Things). I got this title from a slide by Uber’s head of Deep Learning, whom I met at the AI Europe event in London. The application of AI techniques to sequential pattern recognition is still an early-stage domain (it does not yet get the kind of attention that, for example, CNNs do), but in my view this will be a rapidly expanding space. For some background, see this thesis from Technische Universitat Munchen (TUM), Deep Learning For Sequential P… , and also this blog by Jakob Aungiers, LSTM Neural Network for Time Series Prediction.
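For readers unfamiliar with the mechanics, the usual first step in sequence prediction (with LSTMs or any other model) is to slide a fixed-size window over the series so that each window becomes an input and the value right after it becomes the target. A minimal sketch, with a made-up series and an arbitrary window size:

```python
def make_windows(series, window=3):
    """Turn a time series into (window -> next value) training pairs,
    the standard supervised framing for sequence models."""
    pairs = []
    for i in range(len(series) - window):
        pairs.append((series[i:i + window], series[i + window]))
    return pairs

series = [1, 2, 3, 4, 5, 6]
for x, y in make_windows(series):
    print(x, "->", y)
```

A sequence model is then trained on these pairs; what distinguishes an LSTM is that it can also carry state across very long windows, which is exactly the "super long sequence" case discussed above.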

 

12) Extending Sentiment Analysis using AI

The interplay between AI and Sentiment Analysis is also a new area. There are already many synergies between them, because many functions of AI apps need sentiment analysis features.

“The common interest areas where Artificial Intelligence (AI) meets sentiment analysis can be viewed from four aspects of the problem and the aspects can be grouped as Object identification, Feature extraction, Orientation classification and Integration. The existing reported solutions or available systems are still far from being perfect or fail to meet the satisfaction level of the end users. The main issue may be that there are many conceptual rules that govern sentiment and there are even more clues (possibly unlimited) that can convey these concepts from realization to verbalization of a human being.” source: SAAIP

Notes: the post The fourth industrial revolution: a primer on artificial intelligence also offers a good insight into AI domains; also see #AI application areas – a paper review of AI applications (pdf).

 

Conclusion

To conclude, AI is a rapidly evolving space. Although AI is more than Deep Learning, advances in Deep Learning drive AI, and automatic feature learning is its key characteristic. AI also needs many detailed and pragmatic strategies which I have not covered here; a good AI designer should be able to suggest more complex strategies like pre-training or transfer learning.

AI is not a panacea. AI comes with a cost (skills, development and architecture) but provides an exponential increase in performance. Hence, AI is ultimately a rich company’s game. But AI is also a ‘winner takes all’ game and hence provides a competitive advantage. The winners in AI will take an exponential view, addressing very large-scale problems, i.e. what is possible with AI which is not possible now?

We cover this space in the  Enterprise AI course 


Meet me at AI-europe in London (Uber, Nvidia, Kayak, UBS, Bell Labs + others speaking)

I am at AI-europe next week. It should be a great event, where Uber, Nvidia, Kayak, UBS, Bell Labs and others are speaking.

I am very much looking forward to the Nvidia talk (I work with Nvidia for my Data Science for Internet of Things course, which I teach at Oxford University).

I am also looking forward to the following talks. I believe that the event is almost full, but there are a few places left. See more at AI-europe.

  • Opening Speech: De-mystifying AI – Terry Jones, Founding Chairman, KAYAK
  • AI as a game-changer for every industry: disruption now and perspectives for 2025 – Robin Bordoli, Chief Executive Officer, CrowdFlower
  • Deploying Deep Learning everywhere: cutting-edge research teams, hyper-scale data centers, enterprises using AI – Serge Palaric, Vice President EMEA Embedded & OEMs, NVIDIA
  • Contact centers: how Artificial Intelligence is revolutionizing the customer experience – Dr Nicola J. Millard, Head of Customer Insight & Futures, BT Global Services
  • Banking: why UBS is interested in AI and other fintech innovations – Annika Schröder, Director, UBS Group Innovation, UBS AG
  • Health: the value of integrating deep learning – use case: applying deep learning in devices to diagnose cancer – Carlos Jaime, Head of Health & Medical Equipment Division, Samsung Electronics France
  • Bringing Machine Learning to every corner of your business – Luming Wang, Head of Deep Learning, UBER
  • Augmented Reality – Danny Lopez, COO, Blippar
  • Virtual assistants: their impact on the Internet and society – why AI-based digital assistants will contribute to revolutionize the Internet and place technology at the service of humans – Julien Hobeika, Juliedesk
  • Image analysis: research and its applications in the real world – Miriam Redi, Bell Labs

 

 

 

Implementing Enterprise AI course

 


 

 


Implementing Enterprise AI is a unique, limited-edition course focused on AI Engineering / AI for the Enterprise.

The course is launched for the first time and has limited spaces.

Created in partnership with H2O.ai, the course uses Open Source technology to work with AI use cases. Successful participants will receive a certificate of completion and also validation of their project from H2O.ai.

 

The course covers

  • Design of Enterprise AI
  • Technology foundations of Enterprise AI systems
  • Specific AI use cases
  • Development of AI services
  • Deployment and Business models

 

The course targets developers and architects who want to transition their careers to Enterprise AI. The course relates new AI ideas to familiar concepts like ERP and data warehousing, which makes the transition easier. The course is based on a logical concept called an ‘Enterprise AI layer’. This AI layer is focused on solving domain-specific problems for an Enterprise. We could see such a layer as an extension to the Data Warehouse or the ERP system (an Intelligent Data Warehouse / Cognitive ERP system). Thus, the approach provides tangible and practical benefits for the Enterprise, with a clear business model. The implementation / development work for the course is done using the H2O APIs for R, Python and Spark.

 

 

The course covers the following Enterprise AI Use Cases

 

  • Healthcare
  • Insurance
  • Adtech
  • Fraud detection
  • Anomaly detection
  • Churn, classification
  • Customer analytics
  • Natural Language Processing, Bots and Virtual Assistants

 

The course comprises three parts

 

Section One: Implementing Enterprise AI

  • Introduction
  • Machine learning
  • Neural networks and deep networks
  • NLP
  • Reinforcement learning
  • Bots
  • Implementation in H2O of the use cases above

 

Section Two: Deploying Enterprise AI
Here, we cover the actual deployment issues for Enterprise AI including

  • Acquiring Data and Training the Algorithm
  • Processing and hardware considerations
  • Business Models – High Performance Computing – Scaling an AI system
  • Costing an AI system
  • Creating a competitive advantage from AI
  • Industry Barriers for AI

 

Section Three: Projects  

Enterprise AI project created for AI use cases in teams.

 

Duration:

Starting Jan 2017; approximately six months (three months for the content and up to three months for the Project).

Course includes a certificate of completion and also validation of the project from H2O.ai. (Projects will be created in a team)

 

Course Logistics:

Offered Online and Offline (London and Berlin)

When:  Jan 2017

Duration: Approximately six months (including Project)

Fees:         please contact us

Contact :  info@futuretext.com


 

 

 

 

Please contact us to sign up or to know more info@futuretext.com

Enterprise AI Data Scientist: Implementing Enterprise AI Course

Overview

Introduction

Enterprise AI Data Scientist is a niche course targeting developers who want to transition their careers towards Enterprise AI.

The course covers:

  • Design of Enterprise AI
  • Technology foundations of Enterprise AI systems
  • Specific AI use cases
  • Development of AI services
  • Deployment and Business models

The course targets developers and architects who want to transition their careers to AI. The course relates new AI ideas to familiar concepts like ERP and data warehousing, which makes the transition easier.

 

According to Deloitte, by the “end of 2016 more than 80 of the world’s 100 largest enterprise software companies by revenues will have integrated cognitive technologies into their products”. Gartner also predicts that 40 percent of new enterprise investment will be in predictive analytics by 2020. AI is moving fast into the Enterprise, and AI developments can create value for the Enterprise.

The Enterprise AI Layer

The course is based on a logical concept called an ‘Enterprise AI layer’. This AI layer is focused on solving relatively mundane, domain-specific problems for an Enterprise. While this is not as ‘sexy’ as the original vision of AI, it provides tangible and practical benefits to companies. We could see such a layer as an extension to the Data Warehouse or the ERP system (an Intelligent Data Warehouse / Cognitive ERP system). Thus, the approach provides tangible and practical benefits for the Enterprise, with a clear business model. For instance, an organization could transcribe call centre agents’ interactions with customers to create a more intelligent workflow, bot etc. using Deep Learning algorithms.

 

So, if we imagine such a conceptual AI layer for the enterprise, what does it mean in terms of new services that can be offered by an Enterprise?  Here are some examples

  • Bots: Bots are a great example of the use of AI to automate repetitive tasks like scheduling meetings. Bots are often the starting point of engagement for AI, especially in Retail and Financial services
  • Inferring from textual/voice narrative: Security applications to detect suspicious behaviour, algorithms that can draw connections between how patients describe their symptoms, etc.
  • Detecting patterns from vast amounts of data: Using log files to predict future failures, predicting cybersecurity attacks, etc.
  • Creating a knowledge base from large datasets: for example, an AI program that can read all of Wikipedia or GitHub
  • Creating content at scale: Using robots to replace writers or even to compose pop songs
  • Predicting future workflows: Using existing patterns to predict future workflows
  • Mass personalization: in advertising
  • Video and image analytics: Collision avoidance for drones, autonomous vehicles, agricultural crop health analysis, etc.

 

These applications provide competitive advantage, differentiation, customer loyalty and mass personalization for any Enterprise. They have simple business models (deployed, for example, as premium features, new products or cost reduction).

 

Course Outline

AI – A conceptual Overview

In this section, we cover the basics of AI and Deep Learning. We start with machine learning concepts and relate how Deep Learning/AI fits with them. We explore the workings of the algorithms and the various technologies underpinning AI. AI enables computers to do some things better than humans, especially when it comes to finding insights from large amounts of unstructured or semi-structured data. Technologies like Machine Learning, Natural Language Processing (NLP), speech recognition and computer vision drive the AI layer. More specifically, AI applies to an algorithm which is learning on its own. We explore the design and principles behind these algorithms.

 

Understanding the Enterprise AI Technology Landscape

In this section, we focus on various implementations of Machine Learning and Deep Learning, including: linear models (GLM), Ensembles (e.g. Random Forest), Clustering (k-means), Deep neural networks (Autoencoders, CNNs, RNNs) and Dimensionality reduction (PCA). We also cover the various Deep Learning libraries, i.e. TensorFlow, Caffe, MXNet and Theano, and discuss ancillary technologies like Natural Language Processing and Computer Vision.
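The course itself works through these algorithms with the H2O APIs; purely to illustrate what one of them does, here is a toy pure-Python k-means (not the H2O implementation, and the sample points are invented): assign each point to its nearest centre, then move each centre to the mean of its assigned points, and repeat.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Toy k-means on 2-D points: alternate between assigning points
    to their nearest centre and recomputing each centre as the mean
    of its assigned points."""
    random.seed(seed)
    centres = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centres[c][0]) ** 2
                                + (p[1] - centres[c][1]) ** 2)
            clusters[i].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # leave a centre in place if no points were assigned
                centres[i] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return centres

# Two obvious groups of points: one near (0, 0), one near (5, 5)
pts = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0), (5.1, 5.0), (4.9, 5.2), (5.0, 4.8)]
print(sorted(kmeans(pts, 2)))
```

The GLM, Random Forest and PCA implementations in the course follow the same spirit: a simple fitting loop hidden behind a library API.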

Enterprise AI Use Cases

Here, we discuss the following use cases

  • Healthcare
  • Insurance
  • Adtech
  • Fraud detection
  • Anomaly detection
  • Churn, classification
  • Customer analytics
  • Natural Language Processing, Bots and Virtual Assistants

Implementing Enterprise AI

Building on the above, we discuss the implementations of the use cases.

Deploying Enterprise AI

Here, we cover the actual deployment issues for Enterprise AI including

  • Acquiring Data and Training the Algorithm
  • Processing and hardware considerations
  • Business Models
  • High Performance Computing – Scaling an AI system
  • Costing an AI system
  • Creating a competitive advantage from AI
  • Industry Barriers for AI

 

Course Logistics

Where:  London

When:  Jan 2017

Duration: Approximately six months

Online?:  Yes. Please contact us.

Contact :  info@futuretext.com

Fees:         contact us

The AI layer for the Enterprise and the role of IoT

Introduction 

According to Deloitte, by the “end of 2016 more than 80 of the world’s 100 largest enterprise software companies by revenues will have integrated cognitive technologies into their products”. Gartner also predicts that 40 percent of new enterprise investment will be in predictive analytics by 2020. AI is moving fast into the Enterprise, and AI developments can create value for the Enterprise. This value can be captured and visualized by considering an ‘Enterprise AI layer’. This AI layer is focused on solving relatively mundane, domain-specific problems. While this is not as ‘sexy’ as the original vision of AI, it provides tangible benefits to companies.

 

In this brief article, we propose a logical concept called the AI layer for the Enterprise. We could see such a layer as an extension to the Data Warehouse or the ERP system. This has tangible and practical benefits for the Enterprise, with a clear business model. The AI layer could also incorporate IoT datasets and unite the disparate ecosystem. The Enterprise AI layer theme is a key part of the Data Science for Internet of Things course. Only a few places remain for this course!

 

Enterprise AI – an Intelligent Data Warehouse/ERP system?

AI enables computers to do some things better than humans, especially when it comes to finding insights from large amounts of unstructured or semi-structured data. Technologies like Machine Learning, Natural Language Processing (NLP), speech recognition and computer vision drive the AI layer. More specifically, AI applies to an algorithm which is learning on its own.

 

To understand this, we have to ask ourselves: How do we train a Big Data algorithm?  

There are two ways:

  • Start with the Rules and apply them to Data (Top down) OR
  • Start with the data and find the rules from the Data (Bottom up)

 

The top-down approach involves writing enough rules for all possible circumstances, but it is obviously limited by its finite rule base. The bottom-up approach applies in two cases. Firstly, when rules can be derived from instances of positive and negative examples (SPAM / NO SPAM); this is traditional machine learning, where the algorithm can be trained. The more extreme case is when there are no examples to train the algorithm at all.
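The contrast between the two approaches can be sketched as follows; the keyword rules, the word-count heuristic and the example messages are all invented for illustration. The top-down rule is fixed in advance; the bottom-up version derives its "rules" (here, trivially, a set of spam-indicative words) from labelled examples.

```python
# Top-down: a hand-written rule, fixed in advance by a human.
def spam_rule(msg):
    return "winner" in msg.lower() or "free money" in msg.lower()

# Bottom-up: derive a rule from labelled data instead. The "learning"
# here is a toy: keep words that appear far more often in spam than ham.
def learn_spam_words(examples, ratio=3):
    spam_counts, ham_counts = {}, {}
    for text, is_spam in examples:
        for word in text.lower().split():
            counts = spam_counts if is_spam else ham_counts
            counts[word] = counts.get(word, 0) + 1
    return {w for w, c in spam_counts.items()
            if c >= ratio * ham_counts.get(w, 0)}

examples = [
    ("claim your prize now", True),
    ("prize draw winner claim now", True),
    ("meeting moved to noon", False),
    ("lunch at noon today", False),
]
learned = learn_spam_words(examples)
print("prize" in learned, "noon" in learned)
```

The top-down function can only ever catch what its author anticipated; the bottom-up function improves as more labelled examples arrive, which is the essential difference the text describes.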

 

What do we mean by ‘no examples’?

 

a) There is no schema

b) Linearity (sequence) and hierarchy are not known

c) The output is not known (non-deterministic)

d) Problem domain is not finite

 

Hence, this is not an easy problem to solve. However, there is a payoff in the Enterprise if AI algorithms can be created to learn and self-train on manual, repetitive tasks, especially when the tasks involve both structured and unstructured data.

 

How can we visualize the AI layer?

One simple way is to think of it as an ‘Intelligent Data Warehouse’, i.e. an extension to either the Data Warehouse or the ERP system.

 

For instance, an organization could transcribe call-centre agents’ interactions with customers and then, using deep learning algorithms, create a more intelligent workflow, a bot, and so on.

Enterprise AI layer – what it means to the Enterprise

So, if we imagine such a conceptual AI layer for the enterprise, what does it mean in terms of new services that can be offered? Here are some examples:

  • Bots: Bots are a great example of the use of AI to automate repetitive tasks like scheduling meetings. Bots are often the starting point of engagement for AI, especially in Retail and Financial services
  • Inferring from textual/voice narrative: security applications to detect suspicious behaviour, algorithms that can draw connections between how patients describe their symptoms, etc.
  • Detecting patterns from vast amounts of data: using log files to predict future failures, predicting cybersecurity attacks, etc.
  • Creating a knowledge base from large datasets: for example, an AI program that can read all of Wikipedia or GitHub
  • Creating content at scale: using robots to replace writers or even to compose pop songs
  • Predicting future workflows: using existing patterns to predict future workflows
  • Mass personalization: in advertising
  • Video and image analytics: collision avoidance for drones, autonomous vehicles, agricultural crop health analysis, etc.
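As a toy illustration of one item above – predicting failures from log files – the sketch below flags time windows whose error count deviates sharply from the norm of the series. The data and threshold are hypothetical; a real system would use far richer features and models.

```python
import statistics

def flag_anomalies(error_counts, sigma=2.0):
    """Return indices whose count exceeds mean + sigma * std of the series."""
    mean = statistics.mean(error_counts)
    spread = statistics.pstdev(error_counts)
    threshold = mean + sigma * spread
    return [i for i, count in enumerate(error_counts) if count > threshold]

# Hourly error counts from a hypothetical server log: hour 6 spikes.
counts = [3, 4, 2, 5, 3, 4, 30, 3]
print(flag_anomalies(counts))  # [6]
```

Even this crude statistical flag captures the essence of the use case: surfacing the handful of unusual windows in a log stream that a human (or a downstream model) should look at before a failure occurs.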

 

These applications provide competitive advantage, differentiation, customer loyalty and mass personalization. They have simple business models (for example, deployment as premium features, new products, or cost reduction).

 

The Enterprise AI layer and IoT

 

So, the final question is: What does the Enterprise layer mean for IoT?

 

IoT has tremendous potential but faces an inherent problem. Currently, IoT is implemented in verticals/silos, and these silos do not talk to each other. To realize the full potential of IoT, an over-arching layer above individual verticals could ‘connect the dots’. For those coming from the telco industry, these ideas are not new: the winners of the mobile/telco ecosystem were the iPhone and Android, which succeeded in doing exactly that.

 

Firstly, the AI layer could help in deriving actionable insights from the billions of data points which come from IoT devices across verticals. This is the obvious benefit, since IoT data from various verticals can act as an input to the AI layer. Deep learning algorithms play an important role in IoT analytics because machine data is sparse and/or has a temporal element to it. Devices may behave differently under different conditions, so capturing all scenarios in the data pre-processing/training stage of an algorithm is difficult. Deep learning algorithms help mitigate these risks by enabling algorithms to learn on their own. This concept of machines learning on their own can be extended to ‘machines teaching other machines’. The idea is not so far-fetched and is already happening: a Fanuc robot teaches itself to perform a task overnight through observation and reinforcement learning. After eight hours or so it reaches 90 percent accuracy or above, which is almost the same as if an expert had programmed it. The process can be accelerated if several robots work in parallel and then share what they have learned. This form of distributed learning is called cloud robotics.
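The reinforcement learning loop described above – try an action, observe the reward, improve – can be sketched with tabular Q-learning on a toy "corridor" task. This is a minimal illustration of the technique, not Fanuc's actual system; the state space, rewards and hyperparameters are all invented for the example.

```python
import random

random.seed(0)

N_STATES = 5            # positions 0..4; the reward sits at state 4
ACTIONS = [-1, +1]      # move left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.3

# Q-table: expected long-term reward for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        # Standard Q-learning update.
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# After training, every state should prefer +1 (move right, towards the reward).
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

The "sharing" idea in cloud robotics corresponds, in this sketch, to several agents merging their Q-tables instead of each learning alone.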

 

We can extend the idea of ‘machines teaching other machines’ more generically within the Enterprise. Any entity in an enterprise can train other ‘peer’ entities in the Enterprise. That could be buildings learning from other buildings – or planes, or oil rigs. We see early examples of this approach in Salesforce.com and Einstein. Longer term, reinforcement learning is the key technology driving the IoT and AI layer for the Enterprise – but initially, any technology that implements self-learning algorithms will help with this task.

Conclusion

In this brief article, we proposed a logical concept called the AI layer for the Enterprise. We could see such a layer as an extension to the Data Warehouse or the ERP system. This has tangible and practical benefits for the Enterprise, with a clear business model. The AI layer could also incorporate IoT datasets and unite the disparate ecosystem. This will not be easy, but it is worth it, because the payoffs for creating such an AI layer around the Enterprise are huge! The Enterprise AI layer theme is a key part of the Data Science for Internet of Things course. Only a few places remain for this course!

Data Science for Internet of Things course – Strategic foundation for decision makers

To sign up or learn more, email info@futuretext.com. The course starts in Sep 2016.

We have had a great response to the Data Science for Internet of Things course. The course takes a technological focus, aiming to enable you to become a Data Scientist for the Internet of Things. I have also had many requests for a strategic version of the Data Science for Internet of Things course for decision makers.

Today, we launch a special edition of the course, only for decision makers.

The course is based on an open problem solving methodology for IoT analytics which we are developing within the course.

 Why do we need a methodology for Data Science for IoT?

 

IoT will create huge volumes of data, making the discovery of insights more critical. Often, the analytics process will need to be automated. By establishing a formal process for extracting knowledge from IoT applications by IoT vertical, we capture best practice.

This saves implementation time and cost. The methodology is more than data mining (i.e. the application of algorithms); rather, it leans towards KDDM (Knowledge Discovery and Data Mining) principles. It is thus concerned with the entire end-to-end knowledge-extraction process for IoT analytics.

This includes developing scalable algorithms that can be used to analyze massive datasets, interpreting and visualizing results and modelling the engagement between humans and the machine. The main motivation for Knowledge Discovery models is to ensure that the end product will be useful to the user.

Thus, the methodology includes aspects of IoT analytics such as validity, novelty, usefulness, and understandability of the results (by IoT vertical). The methodology builds on a series of interdependent steps with milestones. The steps often include loops and iterations, and cover all the processes end to end (including KPIs, business case and project management). We explore Data Science for IoT analytics at multiple levels, including the process, workflow and systems levels.
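Schematically, this kind of iterative, end-to-end process can be expressed as a loop over stages with an explicit usefulness check at the end of each pass. The stage names and toy data below are illustrative only, not the course's actual methodology.

```python
def run_pipeline(data, stages, evaluate, max_iterations=3):
    """Run the stages in order, repeating until the result is judged useful
    (or the iteration budget runs out) - the loops/iterations the text describes."""
    result = data
    for _ in range(max_iterations):
        result = data
        for stage in stages:
            result = stage(result)
        if evaluate(result):          # the 'useful to the user' check
            return result
    return result

stages = [
    lambda d: [x for x in d if x is not None],   # clean / pre-process
    lambda d: [x * 2 for x in d],                # transform
    lambda d: sum(d) / len(d),                   # model / summarize
]
print(run_pipeline([1, None, 2, 3], stages, evaluate=lambda r: r > 0))  # 4.0
```

The key point the sketch carries over from the text is that evaluation is part of the process itself, not an afterthought: a pass that fails the usefulness check loops back rather than shipping its output.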

The concept of a KDDM process model was discussed in the 1990s by Anand, Brachman, Fayyad, Piatetsky-Shapiro and others. In a nutshell, we build upon these ideas and apply them to IoT analytics. We also create open source code for this methodology.

As a decision maker, by joining the course, you have early and on-going access to both the methodology and the open source code.

Please contact us to sign up or to learn more: info@futuretext.com

Testimonials for our current course

 Jean Jacques Bernand – Paris – France

“Great course with many interactions, either group or one to one that helps in the learning. In addition, tailored curriculum to the need of each student and interaction with companies involved in this field makes it even more impactful.

As for myself, it allowed me to go into topics of interests that help me in reshaping my career.”

Johnny Johnson, AT&T – USA

“This DSIOT course is a great way to get up-to-speed.  The tools and methodologies for managing devices, wrangling and fusing data, and being able to explain it are taking form fast; Ajit Jaokar is a good fit.  For me, his patience and vision keep this busy corporate family man coming back.”

Yongkang Gao, General Electric, UK.

“I especially thank Ajit for his help on my personal project of the course — recommending proper tools and introducing mentors to me, which significantly reduced my pain in the beginning stage.”

Karthik Padmanabhan, Manager – Global Data Insight and Analytics (GDIA) – Ford Motor Pvt Ltd.

“I am delighted to provide this testimonial to Ajit Jaokar who has extended outstanding support and guidance as my mentor during the entire program on Data science for IoT. Ajit is a world renowned professional in the niche area of applying the Data science principles in creating IoT apps. Talking about the program, it has a lot of breadth and depth covering some of the cutting edge topics in the industry such as Sensor Fusion, Deep Learning oriented towards the Internet of things domain. The topics such as Statistics, Machine Learning, IoT Platforms, Big Data and more speak about the complexity of the program. This is the first of its kind program in the world to provide Data Science training especially on the IoT domain and I feel fortunate to be part of the batch comprising of participants from different countries and skill sets. Overall this journey has transformed me into a mature and confident professional in this new space and I am grateful to Ajit and his team. My wish is to see this program accepted as a gold standard in the industry in the coming years”.

Peter Marriott – UK – www.catalystcomputing.co.uk

Attending the Data Science for IoT course has really helped me in demystifying the tools and practices behind machine learning and has allowed me to move from an awareness of machine learning to practical application.

Yair Meidan Israel – https://il.linkedin.com/in/yairmeidandatamining

“As a PhD student with an academic and practical experience in analytics, the DSIOT course is the perfect means by which I extend my expertise to the domain of IoT. It gradually elaborates on IoT concepts in general, and IoT analytics in particular. I recommend it to any person interested in entering that field. Thanks Ajit!”

Parinya Hiranpanthaporn, Data Architect and Advanced Analytics professional Bangkok

“Good content, Good instructor and Good networking. This course totally answers what I should know about Data Science for Internet of Things.”

 

Sibanjan Das – Bangalore

Ajit helped me to focus and set goals for my career that is extremely valuable. He stands by my side for every initiative I take and helps me to navigate me through every difficult situation I face. A true leader, a technology specialist, good friend and a great mentor. Cheers!!!

Manuel Betancurt – Mobile developer / Electronic Engineer. – Australia

I have had the opportunity to partake in the Data Science for the IoT course taught by Ajit Jaokar. He has crafted a collection of instructional videos, code samples, projects and social interaction with him and other students of this deep knowledge.

Ajit gives an awesome introduction and description of all the tools of the trade for a data scientist getting into the IoT. Even when I really come from a software engineering background, I have found the course totally accessible and useful. The support given by Ajit to make my IoT product a data science driven reality has been invaluable. Providing direction on how to achieve my data analysis goals and even helping me to publish the results of my investigation.

The knowledge demonstrated on this course in a mathematical and computer science level has been truly exciting and encouraging. This course was the key for me to connect the little data to the big data.

Barend Botha – London and South Africa – http://www.sevensymbols.co.uk

This is a great course for anyone wanting to move from a development background into Data Science with specific focus on IoT. The course is unique in that it allows you to learn the theory, skills and technologies required while working on solving a specific problem of your choice, one that plays to your past strengths and interests. From my experience care is taken to give participants one to one guidance in their projects, and there is also within the course the opportunity to network and share interesting content and ideas in this growing field. Highly recommended!

- Barend Botha

Jamie Weisbrod – San Diego - https://www.linkedin.com/in/jamie-weisbrod-3630053

Currently there is a plethora of online courses and degrees available in data science/big data. What attracted me to joining the futuretext class “Data Science for IoT” is Ajit Jaokar. My main concern in choosing a course was how to leverage skills that I already possessed as a computer engineer. Ajit took the time to discuss how I could personalize the course for my interests.

I am currently in the midst of the basic coursework but already I have been able to network with students all over the world who are working on interesting projects. Ajit inspires a lot of people of all ages, as he is also teaching young people data science using space exploration.

 Robert Westwood – UK – Catalyst computing
“Ajit brings to the course years of experience in the industry and a great breadth of knowledge of the companies, people and research in the Data Science/IoT arena.”

Overall, the syllabus covers the following themes in 6 months.

Note that the schedule is personalized and flexible for the strategic course, i.e. we discuss and personalize your schedule at the start of the course.

  • Principles
  • Problem solving with Data Science: an overall process for solving Data Science problems (agnostic of language), covering aspects such as exploratory data analysis
  • IoT analytics (includes analysis for each vertical within IoT; this will be ongoing throughout the course, including in the methodology)
  • Foundations of R: the basics of one programming language (R) and how to implement Data Science algorithms in R
  • Time Series – which forms the basis of most IoT data (code in R)
  • Spark and NoSQL databases: code in Scala and implementation in Cassandra
  • Deep Learning
  • Data Science for IoT Methodology
  • Maths and Stats (this will also be ongoing but will be a core module)

We also have, from day one, what we call foundation projects, where you work in groups on projects for which you already have code etc., so you apply the concepts in the context of a real situation.

 

Data Science for Internet of Things: A coaching approach

In the Data Science for Internet of Things course, I take a coaching approach. I have alluded to this in the post about foundation projects and constructivism.

Coaching has a questionable reputation – with some justification

But here, we are talking of high performance coaching strategies

For example, consider the approach of a book like The Talent Code.

The author explores the world’s greatest talent hotbeds: tiny places that produce huge amounts of talent, e.g. a small gym in Moscow that produces a large number of gold medallists in athletics. He found that there is a pattern common to all of them: methods of training, motivation, and coaching. They also place an emphasis on hard skills.

So, what does this mean for participants in context of foundation projects?

a) Start with what you know (ideally)

b) Work collaboratively

c) Push your limits (you can choose something different)

d) Each group for a project will have one or more members who are knowledgeable

e) Your outcomes should be specific

f) You can see the big picture through the methodology for problem solving with Data Science for Internet of Things

g) Your contribution should be measurable

h) Your contribution should be based on acquiring a specific skill

i) Foundation projects have a quiz

From my perspective – as tutor / coach

  • I need to understand what the participants already know (baseline)
  • Provide measurable feedback
  • Extend your capabilities/push limits
  • Ensure you acquire definite skills
  • Keep you motivated
  • Keep your learning at the right pace
  • Foster a sense of community
  • Provide alternative mentors in the community
  • Use newer methods of learning, e.g. concept maps
  • Create great conversations
  • Allow room for unplanned expansion

I think these techniques, applied online, are new – and there is much for everyone to learn.

If you are interested in the Data Science for Internet of Things course, please email us at info at futuretext.com

Data Science for IoT – role of foundation projects(constructivist learning)

 

In the Data Science for Internet of Things course, I use some elements of constructivism through the use of foundation projects.

Foundation projects allow the participant to choose a learning context which is most familiar to them, based on their existing experience.
Foundation projects are different from the Capstone projects for each participant.
This form of context-based learning is not familiar to most people, hence some notes.
1) Context-based learning is based loosely on constructivism.
A concise description: constructivism is a pedagogy / learning theory which advocates that people construct their own understanding and knowledge of the world through experiencing things and reflecting on those experiences. The teacher makes sure she understands the students’ preexisting conceptions, and guides the activity to address them and then build on them (adapted from source).
“The most important single factor influencing learning is what the learner already knows. Ascertain this and teach him accordingly” – a quote by Ausubel, one of the pioneers of this form of education.
In Holland and Germany, this form of science education is called by various names, e.g. concept-context learning (pdf).
What it means for learning in the Data Science for IoT course:
1) We follow two modes of learning in parallel – instructivist (via the video-based modules) and constructivist (via the foundation projects).
2) For the foundation projects, the participants choose a context most familiar to them from their prior experience (e.g. healthcare, renewables, Industrial IoT, etc.).
The downside of applying constructivist methods to learning is that they take a relatively long time – hence the longer duration of the course.
For the current batch, the foundation projects and project leaders are:
Wearables: led by Quang Nam Tran (London)

Renewables: led by Vaijayanti Vadiraj (Bangalore)

Python for Data Science: temporarily led by me

Big Data – Spark and Cassandra for IoT: temporarily led by me, looking to hand over to Trenton Potgieter (Austin)
Deep Learning with Nvidia: led by Jean Jacques Bernard (Paris) and Yongkang Gao (UK)
Data visualization with R: Barend Botha (London)
Predix (Industrial IoT): temporarily led by me – looking to hand over
ETL/Pentaho –
Deep learning and Machine learning with H2O: led by Sibanjan Das (Bangalore)
Remote monitoring of elderly/patient care / healthcare: Manuel Betancurt (Sydney)

More details about the course:  Data Science for Internet of Things course

Image: Jean Piaget – the founder of Constructivism