Personalized #AI and #Datascience for IoT courses

Pioneering a personalized education platform for learning complex subjects 

(Artificial Intelligence and IoT)

As a child, did you have a favourite teacher?

You can probably recall a teacher whose commitment and individual attention inspired you to achieve in even the most challenging subjects. That same passion and care is missing in today’s online education environment. Until now!

Welcome to futuretext

I am Ajit Jaokar and I created futuretext.

Currently, we focus on teaching two courses: Artificial Intelligence for fintech and Data Science for the Internet of Things.

I have taught Artificial Intelligence and the Internet of Things at some of the best universities globally.   With futuretext, I am pioneering new ideas for online education.

Most online courses try to mirror classroom procedures online. This does not work for complex topics, nor does it mesh well with the online student, who is often a busy professional juggling several priorities.

At futuretext, we’re pioneering  a new way of teaching complexity in an online setting, blending  the flexibility and convenience of Internet learning with the individual attention  and care of the best classroom teachers.

We express this through student-centered values. Here’s how:

  • Patience and Flexibility You have up to a year to complete an online course. We understand that you value your time and don’t pressure you to meet contrived deadlines.
  • Flexible Grading  Our grading system is based on milestone quizzes, evaluating you on what you know, not when you know it! No cramming or high-stress all-nighters.
  • Personalized content  With a choice of modules, schedule and area of focus (projects – see below), you feel comfortable and supported from Day One.
  • Pastoral care (in a secular sense) Online teaching too often ignores the importance of personal care, attention and openness that makes the teacher/student relationship such a critical aspect of student success. We place that care and attention at the forefront of our work. Our teachers take a personal interest in the wellbeing of each student.
  • A Student-Centered milestone approach  Our courses are based on three fundamental milestones that inform our teaching process and your learning: a Concepts milestone, a Development milestone and a Deployment milestone. You get flexibility, plenty of time and no artificial pressures: perfect for working professionals aspiring to transform their career. The quizzes are directly related to milestone content. No trick questions!
  •  Openness Your instructors  are available to share observations and advice or to listen to your specific needs and concerns.  We encourage a strong teacher/student relationship.
  • We’re there with you  As practicing Data Scientists, we bring the perspective of the practitioner to the class setting. We understand the unique challenges you manage in your work.


  • We believe that Education should transform Like your favourite teachers, we are passionate about the power of learning to transform lives and broaden horizons. Our classes not only directly correspond to your immediate career/start-up goals, but our sense of mission will also help you feel supported and even more motivated.
  • Affordability We recognize transformative education can’t help if it is out of reach. We strive to make our class affordable for all.


  • We bring an AI perspective to both IoT and fintech  Our focus is on the best use of AI in a small number of industry verticals (currently fintech and IoT).


  • Our custom tools aid in instruction  These include development platforms we’ve built specifically for this course. In most cases, you will have access to the underlying code, released under the Apache version 2 license.
  • Unique teaching methods  Like the best teachers, we have built the coursework around our experience, using a unique problem-solving methodology.
  • You become part of a Supportive Community  As a student in our program, you become part of our unique support community, benefitting from and contributing to the experience of others for years to come.


The meaning of Projects

AI and IoT are complex subjects, requiring both an understanding of their foundations and of the context in which you are applying them.

To manage this complexity, we developed a unique Project-oriented approach.

For us, a Project is a learning and group artefact, an ongoing creation of students and teacher evolving over the duration of the course.

This approach is grounded in context-based learning, or more broadly, in the pedagogy of Constructivism.

For this reason, we emphasise small groups, both for the class and the projects.

Overall Process

A futuretext Project has three milestones:

  • Concepts milestone
  • Development milestone
  • Deployment milestone


Assessment is also built around milestones.

The Concepts milestone phase typically lasts two months. Six months are devoted to the Development milestone, and the final four months to the Deployment milestone.

Thus, the course normally spans a year.

After establishing a firm foundation in the Concepts milestone phase, you choose a project relevant to your context and define the problem statement.

You then continue to grow with an emphasis on development and deployment in an area which is already of interest to you.

You receive a new video/module every two weeks, and we meet face-to-face via online conference every two weeks.

At the end of a year, you have a sound conceptual grasp in the field, and a fully researched and developed project relevant to your immediate career position. Just as important, you are now part of a professional community that shares your interests and is devoted to mutual assistance long after the course is complete.


Business philosophy

A word about our business philosophy:

Although the Internet and the Web are dominated by big business, we believe the web will always have a place for niche businesses in almost every domain.

Our inspiration for the type of business we strive to be is rooted in the German Mittelstand companies, which value long-term focus, social responsibility, innovation and customer focus.


Notes about the course content

  • We currently run two courses: “Data Science for IoT” and “AI for fintech”
  • The courses are comprehensive and can be customized by agreement when you start
  • The courses are not associated with any academic institution.
  • Both courses take an AI/Deep learning perspective



  • The foundations of Data Science for IoT and how it differs from traditional data science.
  • A methodology for solving IoT projects. This methodology considers various options such as edge processing, time series, sensor fusion/complex event processing, streaming, etc.
  • An end-to-end application (in code) for implementing Data Science for IoT
  • Designing an IoT product
  • Understanding IoT platforms
  • Understanding IoT products and how analytics fits into them
  • Machine learning algorithms in TensorFlow and Keras
  • QUIZ

Designing with Deep Learning algorithms

  • Understanding AI and Deep Learning
  • An introduction to TensorFlow and Keras
  • Algorithms (ML and DL): Multilayer Perceptron, autoencoders, Deep Convolutional Networks, Recurrent Neural Networks, reinforcement learning, natural language processing
  • Implementations covering both time series and images
  • QUIZ
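The course implements these algorithms in TensorFlow and Keras. As a dependency-free illustration (a sketch, not the course's actual code), here is what a single forward pass of a tiny Multilayer Perceptron computes: each layer applies a weighted sum plus a bias, followed by a non-linearity, and later layers build on the outputs of earlier ones. All weights below are hand-picked for illustration only.

```python
import math

def relu(x):
    # Rectified linear unit: a common hidden-layer non-linearity
    return max(0.0, x)

def sigmoid(x):
    # Squashes the output into (0, 1), e.g. for binary classification
    return 1.0 / (1.0 + math.exp(-x))

def dense(inputs, weights, biases, activation):
    """One fully connected layer: weighted sum + bias + non-linearity per neuron."""
    return [activation(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# A tiny 2-input -> 2-hidden -> 1-output MLP with illustrative weights.
hidden = dense([0.5, -1.0],
               weights=[[1.0, 0.5], [-0.5, 1.0]],
               biases=[0.0, 0.1],
               activation=relu)
output = dense(hidden,
               weights=[[1.0, -1.0]],
               biases=[0.0],
               activation=sigmoid)
print(output)  # [0.5]
```

In Keras, the same structure would be expressed declaratively as a stack of `Dense` layers; the point here is only to show the computation each layer performs.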

Deploying IoT products

  • In this section, we consider the deployment models for IoT applications, both for consumer and industrial IoT
  • These include Edge, Microservices, etc.
  • A systems thinking approach for deploying complex IoT applications
  • Streaming and streaming analytics (e.g. Spark/Kafka)
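The core idea behind streaming analytics is updating results incrementally as each event arrives, rather than batch-processing stored history. As an illustrative, dependency-free sketch (not Spark or Kafka code), here is an online running mean over a stream of sensor readings:

```python
def running_mean(stream):
    """Incrementally update the mean as each reading arrives,
    without storing the full history -- the per-event computation
    a streaming analytics engine performs at scale."""
    count, mean = 0, 0.0
    for value in stream:
        count += 1
        mean += (value - mean) / count  # incremental mean update
        yield mean

readings = [10.0, 12.0, 11.0, 13.0]
means = list(running_mean(readings))
print(means)  # [10.0, 11.0, 11.0, 11.5]
```

Engines such as Spark Structured Streaming generalize exactly this pattern: stateful, per-event aggregations distributed across a cluster.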

Data Science for IoT: advanced/specific topics

Below is a list of IoT-specific topics which we cover in the course:

  • AI in IoT (time series and images based processing)
  • Time series data analysis
  • Time series feature selection
  • Time series databases
  • Knowledge discovery from Time series
  • Edge computing and hardware acceleration
  • Handling data with noise and skew
  • IoT feature engineering
  • Complex event processing
  • Multivariate time series
  • Correlations between time series
  • IoT and mobile (TensorFlow Mobile and Core ML)
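IoT feature engineering often means turning a raw sensor series into summary features computed over sliding windows, which downstream models then consume. As a minimal illustrative sketch (not the course's actual pipeline), assuming simple fixed-width windows:

```python
def window_features(series, width):
    """Turn a raw time series into per-window features (mean, min, max),
    a simple form of IoT feature engineering for downstream models."""
    features = []
    for i in range(len(series) - width + 1):
        window = series[i:i + width]
        features.append({
            "mean": sum(window) / width,
            "min": min(window),
            "max": max(window),
        })
    return features

temps = [21.0, 21.5, 22.0, 30.0, 21.0]  # a spike at index 3
feats = window_features(temps, width=3)
print(feats[0])  # {'mean': 21.5, 'min': 21.0, 'max': 22.0}
```

The spike shows up immediately in the window max, which is why even simple window features are useful for anomaly detection on noisy IoT data.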




  • Foundations of Enterprise AI
  • Understanding the application of AI for fintech
  • Introduction to TensorFlow and Keras
  • End-to-end implementation for an AI application
  • Theory of AI


At the conclusion of this stage, you choose a project and a problem statement to solve in a group. You will focus the rest of your studies within the context of that theme. For example, you may decide to work with insurance (within fintech) or to explore the impact of IoT on the industry.


Concepts milestone + QUIZ

Designing an AI product for fintech

  • Basics of designing an AI product
  • Understanding the complexities of fintech and AI design
  • Understanding Deep Learning
  • Machine learning algorithms in TensorFlow and Keras
  • Designing with Deep Learning algorithms:
     - Multilayer Perceptron
     - Autoencoders
     - Deep Convolutional Networks
     - Recurrent Neural Networks
     - Reinforcement learning
     - Natural language processing
     - Basics of text analytics


Development milestone + QUIZ

Deploying AI products for fintech

  • Deploying Deep Learning models at scale
  • Understanding fintech applications of AI from a deployment standpoint
  • Understanding the big picture with Kubernetes and other platforms
  • Methodology for AI + fintech products
  • Deploying AI and fintech products
  • Acquiring data and training the algorithm
  • Processing and hardware considerations
  • Business models, high-performance computing, scaling an AI system
  • Costing an AI system
  • Creating a competitive advantage from AI
  • Specific considerations for fintech, e.g. the EU payment directive (PSD2)

Deployment milestone + QUIZ

Contact [email protected] – next batches starting in Nov and Jan 

Workshop: A hands-on Introduction to Deep Learning using Keras and TensorFlow

On Saturday, July 22, we have a very interesting workshop in London: A hands-on Introduction to Deep Learning using Keras and TensorFlow, by Ajit Jaokar and others.

Both Keras and TensorFlow are hot technologies, and they are a great way to get started with Deep Learning.

What you will learn: First, you learn the theory and principles of Deep Learning (enough to understand Deep Learning code). Then we will work through a hands-on tutorial in Keras and TensorFlow, building a basic neural network.

What you need to know: Python
What you need to bring: A laptop with Keras and TensorFlow already installed

Where: @fablab in Notting Hill

Barclay’s Eagle Lab
81 Palace Gardens Terrace
Notting Hill Gate
W8 4AT


This is free for participants of the AI course (but not for the IoT course); otherwise, there is a small fee.

Please email me at ajit.jaokar at if you are interested in attending

AI for Fintech course – Early discounts and limited places



AI for fintech is a new course with limited places, focused on AI design (product, development and data) for the fintech industry.

We will first explain the end-to-end principles of AI and Deep Learning, and then describe specific applications and the implications of deploying them in the context of fintech.

The course will be conducted by Ajit Jaokar and Jakob Aungiers 




  • Foundations of Enterprise AI
  • Understanding the application of AI for fintech
  • Introduction to TensorFlow and Keras
  • End-to-end implementation for an AI application


Designing an AI product

  • Basics of designing an AI product
  • Understanding Deep Learning
  • Machine learning algorithms in TensorFlow and Keras
  • Designing with Deep Learning algorithms:
     - Multilayer Perceptron
     - Autoencoders
     - Deep Convolutional Networks
     - Recurrent Neural Networks
     - Reinforcement learning
     - Natural language processing
     - Basics of text analytics


Deploying AI products for fintech

  • Methodology for Enterprise AI projects
  • Deploying Enterprise AI
  • Understanding the Enterprise AI layer
  • Acquiring data and training the algorithm
  • Processing and hardware considerations
  • Business models, high-performance computing, scaling an AI system
  • Costing an AI system
  • Creating a competitive advantage from AI
  • Specific considerations for fintech, e.g. the EU payment directive (PSD2)


Course Logistics

The course targets designers or developers who work with fintech.

Strategic Option: You can choose the strategic option (no coding)

Developer Option:  The full course based on development in TensorFlow and Keras

Duration: Starting July 31, 2017; approximately six months

Offered online with video-based content

Fees:         Please contact us

Contact :  [email protected]

AI for Smart Cities Lab launch

I have written about the AI for Smart Cities lab before, and we are close to launching it.

Created by futuretext in collaboration with citysciences (UPM) and Nvidia, the AI for Smart Cities lab explores complex problems in the deployment of AI for smart cities.

We have been working with the Nvidia Jetson product, but are really looking forward to working with both Metropolis and Isaac for smart cities.

The lab will initially focus on projects and events in London (based out of fablab), Berlin and Madrid.

Most of the initial development on the Nvidia platform will use TensorFlow.

We will have an event in London during the week of June 12.

More details of collaborators/team coming soon.

Any questions – please email me on ajit.jaokar at

Re-thinking Enterprise business processes using Augmented Intelligence


In the 1990s, there was a popular book called Re-engineering the Corporation. Looking back now, Re-engineering certainly has had a mixed success – but it did have an impact over the last two decades. ERP deployments led by SAP and others were a direct result of the Business Process re-engineering phenomenon.

So now, with the rise of AI, could we think of a new form of Re-engineering the Corporation, using Artificial Intelligence? The current group of robotic process automation companies focus on the UI layer. We could extend this far deeper into the enterprise. Leaving aside the discussion of the impact of AI on jobs, this could lead to augmented intelligence at the process level for employees (and hence an opportunity for people to transition their careers in the age of AI).

Here are some initial thoughts; I am exploring these ideas in more detail. This work is also part of an AI lab we are launching in London and Berlin, in partnership with UPM and Nvidia, for both enterprises and cities.


How would you rethink Enterprise business processes using Augmented Intelligence?

To put the basics into perspective, we consider a very ‘grassroots’ meaning of AI. AI is based on Deep Learning, which involves automatic feature detection from the data. You could model a range of data types (or combinations thereof) using AI:

a)      Images and sound – convolutional neural networks

b)      Transactional data – e.g. loan approval

c)      Sequences – including handwriting recognition via LSTMs and recurrent neural networks

d)      Text processing – e.g. natural language detection

e)      Behaviour understanding – via reinforcement learning

To extend this idea to Process engineering for Enterprises and Cities, we need to

a)      Understand existing business processes

b)      Break the process down into its components

c)       Model the process using Data and Algorithms (both Deep Learning and Machine Learning)

d)      Improve the efficiency of the process by complementing the human activity with AI (augmented intelligence)

But this is just the first step: you would also have to consider the wider impact of AI itself.

So, here is my list / ‘stack’:

  • New processes due to disruption at the industry level (e.g. Uber)
  • Change of behaviour due to new processes (e.g. employees collaborating with robots as peers)
  • Improvements in current business processes for enterprises: customer services, supply chain, finance, human resources, project management, corporate reporting, sales and logistics, management
  • The GPU-enabled enterprise, e.g. Nvidia Grid; more broadly, GPUs will democratize delivery of modern apps, enable more efficient hybridization of workflows, and unify compute and graphics
  • The availability of bodies of labelled data
  • New forms of communication: text analytics, natural language processing, speech recognition, chatbots

I am exploring these ideas in more detail as part of my work on the Enterprise AI lab we are launching in London and Berlin, in partnership with UPM and Nvidia, for both enterprises and cities. I welcome your comments at ajit.jaokar at or @ajitjaokar



Young Data Scientist – more about forthcoming book/kickstarter

The role of a data scientist is one of the hottest jobs in the industry today. But how do we inspire the next generation of data scientists?

I have been working on the idea of Young Data Scientist for years now, with various iterations and pivots.

It is now ready to launch in its next version as a book/Kickstarter.

The easiest way to inspire the next generation of data scientists is to go back to basics, i.e. the maths, because maths is the universal language that underpins progress and innovation.

This also aligns closely with my day job and my teaching of Data Science for Internet of Things at Oxford University.

It also allows me to create the book/Kickstarter using personal insights from my years of teaching.

So, if you consider the maths foundations needed to learn Data Science, you could divide them into key areas:

Linear Algebra

Probability Theory and Statistics

Multidimensional Calculus


All of these are taught (in some shape or form) in high school (ages 13 to 17).

So, the book aims to build upon these foundations for a high-school audience, to inspire them to take up Data Science.

The challenge here is to simplify and correlate to existing maths knowledge, considering the audience (13 to 17 year olds), and most importantly to inspire!

It would also set them on a path to becoming Artificial Intelligence (AI) aware.

Young Data Scientist will be a book, a Kickstarter and a community.

It will have open-source foundations.

The Young Data Scientist community will also work with teachers.

And finally, the Young Data Scientist community will draw upon interesting examples in space exploration, genomics, ecology, etc.

Coding will be in Python (including NumPy and, sometimes, TensorFlow).

Please email me at ajit.jaokar at

Best of my recent articles on KDnuggets and Data Science Central

I have been regularly featured on both KDnuggets and Data Science Central.

Here is a list of my recent top articles. I discuss these ideas in the Implementing Enterprise AI course.


  • Continuous improvement for IoT through AI / Continuous learning
     - 25 Nov 2016

    In reality, especially for IoT, it is not the case that once an analytics model is built, it will give results with the same accuracy until the end of time. Data patterns change over time, which makes it absolutely important to learn from new data and improve/recalibrate the models to get correct results. The article explains this phenomenon of continuous improvement in analytics for IoT.

  • Data Science for Internet of Things (IoT): Ten Differences From Traditional Data Science
     - 26 Sep 2016

    Connected devices (the Internet of Things) generate more than 2.5 quintillion bytes of data daily. All this data will significantly impact business processes, and Data Science for IoT will take an increasingly central role. Here we outline the 10 main differences between Data Science for IoT and traditional Data Science.
  • The Evolution of IoT Edge Analytics: Strategies of Leading Players
     - 02 Sep 2016

    This article explores the significance and evolution of IoT edge analytics. Since the author believes that hardware capabilities will converge for large vendors, IoT analytics will be the key differentiator.

  • How to Become a (Type A) Data Scientist
     - 23 Aug 2016

    This post outlines the difference between a Type A and Type B data scientist, and prescribes a learning path on becoming a Type A.

Implementing Enterprise AI course – new batch – April 2017


The January batch of the Implementing AI course is completely sold out!

I am pleased to announce a new cohort for the Implementing Enterprise AI course, starting April 24, 2017. We are accepting registrations now. As usual, numbers are limited and we have an early-bird discount.

Implementing Enterprise AI is a unique, limited-edition course focused on AI engineering / AI for the Enterprise.

Created in partnership with , the course uses Open Source technology to work with AI use cases. Successful participants will receive a certificate of completion and also validation of their project from

 To sign up or learn more, email [email protected]

The course covers

  • Design of Enterprise AI
  • Technology foundations of Enterprise AI systems
  • Specific AI use cases
  • Development of AI services
  • Deployment and Business models

The course targets developers and architects who want to transition their careers to Enterprise AI. The course correlates new AI ideas with familiar concepts like ERP and data warehousing, and helps make the transition easier.

The implementation / development for the course is done using the H2O APIs for R, Python & Spark. 


Starting April 2017; approximately six months (three months for the content and up to three months for the project).
The course includes a certificate of completion and also validation of the project from (projects will be created in a team).

 Course Logistics:

Offered online and offline (London and Berlin)

When:    April 2017
Duration: Approximately six months (including project)
Fees:      Please contact us

To sign up or learn more, email [email protected]


April – May 2017


  • Understanding the Enterprise AI layer
  • Introduction to Machine Learning
  • Unsupervised Learning
  • Supervised Learning
  • Generalized Linear Modeling
  • Gradient Boosting Machine
  • Ensembles
  • Random Forest
  • Programming foundations (see notes below)



  • Introduction to Deep Learning
  • Multilayer Perceptron
  • Autoencoders
  • Deep Convolutional Networks
  • Recurrent Neural Networks
  • Reinforcement learning
  • Programming foundations (see notes below)


July 2017


  • Natural language processing
  • Basics of Text Analytics
  • POS Tagging
  • Sentiment Analysis
  • Text Classification
  • Intelligent bots
  • Programming foundations (see notes below)


Aug – Oct 2017 – Projects and deployment


  • Deploying Enterprise AI
  • Acquiring Data and Training the Algorithm
  • Processing and hardware considerations
  • Business Models – High Performance Computing – Scaling and AI system
  • Costing an AI system
  • Creating a competitive advantage from AI
  • Industry Barriers for AI


Implementation of Enterprise AI use cases (in groups)


  • Healthcare
  • Insurance
  • Adtech
  • Fraud detection
  • Anomaly detection
  • Churn, classification
  • Customer analytics
  • Natural Language Processing, Bots and Virtual Assistants



  • The course covers the design of Enterprise AI, the technology foundations of Enterprise AI systems, specific AI use cases, the development of AI services, and deployment and business models
  • The implementation/development for the course is done in R, Python and Spark, using the H2O APIs
  • For Deep Learning, we also work with GPUs, TensorFlow, MXNet and Caffe
  • We focus on large-scale problems
  • Notes on programming foundations: We assume that you have significant programming knowledge. However, we do not assume that you are familiar with Python, R or Spark.
  • The course provides you with background in these languages over the first three months. You will then use this knowledge to work on the use cases in the Project phase. Certification of completion is based on quizzes related to the modules.
  • Project certification (validated by ) is based on projects completed in groups
  • Note that the syllabus is subject to change

Project certification by

Twelve types of Artificial Intelligence (AI) problems

Background: How many cats does it take to identify a Cat?


In this article, I cover the 12 types of AI problems, i.e. I address the question: in which scenarios should you use Artificial Intelligence (AI)? We cover this space in the Enterprise AI course.

Some background:

Recently, I conducted a strategy workshop for a group of senior executives running a large multinational. In the workshop, one person asked: how many cats does it take to identify a Cat?

This question is in reference to Andrew Ng’s famous paper on Deep Learning, in which he was able to correctly identify images of cats from YouTube videos. On one level, the answer is very clear, because Andrew Ng lists that number in his paper: 10 million images. But the answer is incomplete, because the question itself is limiting; there are many more details in the implementation, for example training on a cluster of 1,000 machines (16,000 cores) for three days. I wanted to present a more detailed response to the question. Also, many problems can be solved using traditional Machine Learning algorithms, as per an excellent post from Brandon Rohrer on which algorithm family can answer your question. So, in this post I discuss problems that can be uniquely addressed through AI. This is not an exact taxonomy, but I believe it is comprehensive. I have intentionally emphasized Enterprise AI problems because I believe AI will affect many mainstream applications, although a lot of media attention goes to the more esoteric applications.


What problem does Deep Learning address?

What is Deep Learning?

First, let us explore what Deep Learning is.


Deep learning refers to artificial neural networks that are composed of many layers; the ‘deep’ refers to those multiple layers. In contrast, many other machine learning algorithms, like SVMs, are shallow because they do not have a deep architecture of multiple layers. The deep architecture allows subsequent computations to build upon previous ones. We currently have deep learning networks with 10+ and even 100+ layers.


The presence of multiple layers allows the network to learn more abstract features. Thus, the higher layers of the network can learn more abstract features building on the inputs from the lower layers.  A Deep Learning network can be seen as a Feature extraction layer with a Classification layer on top. The power of deep learning is not in its classification skills, but rather in its feature extraction skills. Feature extraction is automatic (without human intervention) and multi-layered.


The network is trained by exposing it to a large number of labelled examples. Errors are detected and the weights of the connections between the neurons adjusted to improve results. The optimisation process is repeated to create a tuned network. Once deployed, unlabelled images can be assessed based on the tuned network.
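The error-driven weight adjustment described above can be reduced to its simplest possible form. The following is an illustrative sketch (a single weight and a single labelled example, not a real network): repeatedly nudge the weight in the direction that shrinks the squared error.

```python
# Fit y = w * x to one labelled example (x=2.0, y=4.0)
# by gradient descent on the squared error.
x, y = 2.0, 4.0
w = 0.0           # initial weight
lr = 0.1          # learning rate

for _ in range(100):
    prediction = w * x
    error = prediction - y
    gradient = 2 * error * x   # d(error^2)/dw
    w -= lr * gradient         # adjust the weight to reduce the error

print(round(w, 3))  # converges toward 2.0
```

A real deep network does exactly this, but for millions of weights at once, with the gradients propagated backwards through the layers (backpropagation).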


Feature engineering is the process of finding connections between variables and packaging them into a new, single variable. Deep Learning performs automated feature engineering, and automated feature engineering is the defining characteristic of Deep Learning, especially for unstructured data such as images. This matters because the alternative is engineering features by hand, which is slow, cumbersome and depends on the domain knowledge of the person performing the engineering.
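To make the contrast concrete, here is a classic hand-engineered feature (an illustrative example, not from the course): two raw variables packaged into a single, more informative one. This is precisely the manual step that Deep Learning's automated feature extraction replaces for unstructured data.

```python
def bmi(weight_kg, height_m):
    """Hand-crafted feature: combine two raw variables (weight, height)
    into one new, more predictive variable. Choosing this combination
    requires domain knowledge -- the step Deep Learning automates."""
    return weight_kg / (height_m ** 2)

print(bmi(80.0, 2.0))  # 20.0
```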


Deep Learning suits problems where the target function is complex and the datasets are large, but with examples of positive and negative cases. Deep Learning also suits problems that involve hierarchy and abstraction.

Abstraction is a conceptual process by which general rules and concepts are derived from the usage and classification of specific examples. We can think of an abstraction as the creation of a ‘super-category’ which comprises the common features that describe the examples for a specific purpose, but ignores the ‘local changes’ in each example. For example, the abstraction of a ‘Cat’ would comprise fur, whiskers, etc. In Deep Learning, each layer is involved with the detection of one characteristic, and subsequent layers build upon previous ones. Hence, Deep Learning is used in situations where the problem domain comprises abstract and hierarchical concepts. Image recognition falls in this category. In contrast, a spam detection problem that can be modelled neatly as a spreadsheet is probably not complex enough to warrant Deep Learning.
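The ‘super-category’ idea can be sketched in a few lines. This is purely illustrative (a network does not represent abstractions as explicit sets): keep the features shared by all examples and ignore the local variations.

```python
# Each example is described by its observed features.
cats = [
    {"fur", "whiskers", "tail", "black"},
    {"fur", "whiskers", "tail", "ginger"},
    {"fur", "whiskers", "tail", "short-haired"},
]

# The abstraction keeps the common features, ignoring the 'local
# changes' (colour, coat length) in each individual example.
cat_abstraction = set.intersection(*cats)
print(sorted(cat_abstraction))  # ['fur', 'tail', 'whiskers']
```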

A more detailed explanation of this question can be found in this Quora thread.

AI vs. Deep Learning vs. Machine Learning

Before we explore types of AI applications, we need to also discuss the differences between the three terms AI vs. Deep Learning vs. Machine Learning.

The term Artificial Intelligence (AI) implies a machine that can reason. A more complete list of AI characteristics (source: David Kelnar) is:

  1. Reasoning: the ability to solve problems through logical deduction
  2. Knowledge: the ability to represent knowledge about the world (the understanding that there are certain entities, events and situations in the world; those elements have properties; and those elements can be categorised.)
  3. Planning: the ability to set and achieve goals (there is a specific future state of the world that is desirable, and sequences of actions can be undertaken that will effect progress towards it)
  4. Communication: the ability to understand written and spoken language.
  5. Perception: the ability to deduce things about the world from visual images, sounds and other sensory inputs.


The holy grail of AI is artificial general intelligence (like the Terminator!), which would allow machines to function independently in a normal human environment. What we see today is mostly narrow AI (e.g. the NEST thermostat). AI is evolving rapidly, and a range of technologies currently drive it. These include: image recognition and auto-labelling, facial recognition, text to speech, speech to text, auto-translation, sentiment analysis, and emotion analytics in image, video, text and speech (source: Bill Vorhies). AI apps have also reached accuracies of 99%, in contrast to 95% just a few years back.


Improvements in Deep Learning algorithms drive AI. Deep Learning algorithms can detect patterns without the prior definition of features or characteristics. They can be seen as a hybrid form of supervised learning, because you must still train the network with a large number of examples, but without the requirement of predefining the characteristics of the examples (features). Deep Learning networks have made vast improvements due both to the algorithms themselves and to better hardware (specifically GPUs).


Finally, in a broad sense, the term Machine Learning means the application of any algorithm that can be applied to a dataset to find a pattern in the data. This includes supervised, unsupervised, segmentation, classification and regression algorithms. Despite their popularity, there are many reasons why Deep learning algorithms will not make other Machine Learning algor…

12 types of AI problems

With this background, we now discuss the twelve types of AI problems.

1) Domain expert: Problems which involve Reasoning based on a complex body of knowledge

This includes tasks which are based on learning a body of knowledge, such as law or finance, and then formulating a process whereby the machine can simulate an expert in the field.

2) Domain extension: Problems which involve extending a complex body of Knowledge

Here, the machine learns a complex body of knowledge, such as information about existing medications, and can then suggest new insights within the domain itself – for example, new drugs to cure diseases.

3) Complex Planner: Tasks which involve Planning

Many logistics and scheduling tasks can be handled by current (non-AI) algorithms, but as the optimization becomes more complex, AI can help. One example is the use of AI techniques in IoT for sparse datasets. AI techniques help in this case because we have large and complex datasets in which human beings cannot detect patterns but a machine can do so easily.

4) Better communicator: Tasks which involve improving existing communication

AI and Deep Learning benefit many modes of communication, such as automatic translation and intelligent agents.

5) New Perception: Tasks which involve Perception

AI and Deep Learning enable newer forms of perception, which in turn enable new services such as autonomous vehicles.

6) Enterprise AI: AI meets Re-engineering the corporation!

While autonomous vehicles get a lot of the media attention, AI will be deployed in almost all sectors of the economy. In each case, the same principles apply: AI will be used to create new insights from automatic feature detection via Deep Learning, which in turn helps to optimize, improve or change a business process (over and above what can be done with traditional Machine Learning). I outlined some of these processes in financial services in a previous blog: Enterprise AI insights from the AI Europe event in London. In a wider sense, you could view this as Re-engineering the Corporation meets AI / Artificial Intelligence. This is very much part of the Enterprise AI course.


7) Enterprise AI: adding unstructured data and Cognitive capabilities to ERP and Data Warehousing

For the reasons listed above, unstructured data offers a huge opportunity for Deep Learning and hence AI. As Bernard Marr writes in Forbes: “The vast majority of the data available to most organizations is unstructured – call logs, emails, transcripts, video and audio data which, while full of valuable insights, can’t easily be universally formatted into rows and columns to make quantitative analysis straightforward. With advances in fields such as image recognition, sentiment analysis and natural language processing, this information is starting to give up its secrets, and mining it will become increasingly big business in 2017.” I very much agree with this. In practice, this will mean enhancing the features of ERP and Data Warehousing systems through Cognitive systems.

8) Problems which impact domains due to second order consequences of AI

David Kelnar says in The fourth industrial revolution: a primer on artificial intelligence:

“The second-order consequences of machine learning will exceed its immediate impact. Deep learning has improved computer vision, for example, to the point that autonomous vehicles (cars and trucks) are viable. But what will be their impact? Today, 90% of people and 80% of freight are transported via road in the UK. Autonomous vehicles alone will impact: safety (90% of accidents are caused by driver inattention); employment (2.2 million people work in the UK haulage and logistics industry, receiving an estimated £57B in annual salaries); insurance (Autonomous Research anticipates a 63% fall in UK car insurance premiums over time); sector economics (consumers are likely to use on-demand transportation services in place of car ownership); vehicle throughput; urban planning; regulation and more.”


9) Problems in the near future that could benefit from improved algorithms

A catch-all category for tasks which were not possible in the past but could become possible in the near future due to better algorithms or better hardware. For example, in speech recognition, improvements continue to be made, and the abilities of the machine now equal those of a human. Since 2012, Google has used LSTMs to power the speech recognition system in Android. Just six weeks ago, Microsoft engineers reported that their system reached a word error rate of 5.9% – a figure roughly equal to human ability, for the first time in history. The goal-posts continue to move rapidly – for example, building an avatar that can capture your personality.
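As an aside, a figure like "word error rate of 5.9%" is computed in a standard way: WER is the word-level Levenshtein (edit) distance between the system transcript and the reference transcript, divided by the number of reference words. A minimal sketch (example sentences invented here):

```python
def word_error_rate(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i          # deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j          # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

ref = "the cat sat on the mat"
hyp = "the cat sat on a mat"
print(word_error_rate(ref, hyp))  # one substitution over six words -> ~0.167
```

A 5.9% WER therefore means roughly one word in seventeen is inserted, deleted, or substituted relative to what a human transcriber produced.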

10) Evolution of Expert systems

Expert systems have been around for a long time.  Much of the vision of Expert systems could be implemented in AI/Deep Learning algorithms in the near future. If you study the architecture of IBM Watson, you can see that the Watson strategy leads to an Expert system vision. Of course, the same ideas can be implemented independently of Watson today.


11) Super Long sequence pattern recognition

This domain is of personal interest to me due to my background with IoT (see my course at Oxford University, Data Science for Internet of Things). I got this title from a slide by Uber’s head of Deep Learning, whom I met at the AI Europe event in London. The application of AI techniques to sequential pattern recognition is still an early-stage domain (and does not yet get the kind of attention that, for example, CNNs do) – but in my view, this will be a rapidly expanding space. For some background, see this thesis from Technische Universität München (TUM), Deep Learning For Sequential P… , and also this blog by Jakob Aungiers, LSTM Neural Network for Time Series Prediction.
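To give a flavour of the domain, here is a minimal sketch (an assumed preprocessing step, not taken from the cited sources) of how a time series is typically framed for sequence models such as LSTMs: slide a fixed-length window over the series so that each window of past values becomes an input and the value that follows it becomes the target.

```python
def make_windows(series, window):
    """Turn a 1-D series into (input_window, next_value) training pairs."""
    pairs = []
    for i in range(len(series) - window):
        pairs.append((series[i:i + window], series[i + window]))
    return pairs

# A toy series of 7 readings; a window of 3 yields 4 training pairs.
series = [10, 12, 15, 14, 18, 21, 19]
pairs = make_windows(series, window=3)
print(pairs[0])    # -> ([10, 12, 15], 14)
print(len(pairs))  # -> 4
```

The "super long sequence" challenge is precisely that interesting patterns may span far more steps than any practical window, which is where recurrent architectures like LSTMs come in.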


12) Extending Sentiment Analysis using AI

The interplay between AI and Sentiment analysis is also a new area. There are already many synergies between AI and Sentiment analysis because many functions of AI apps need sentiment analysis features.

“The common interest areas where Artificial Intelligence (AI) meets sentiment analysis can be viewed from four aspects of the problem and the aspects can be grouped as Object identification, Feature extraction, Orientation classification and Integration. The existing reported solutions or available systems are still far from being perfect or fail to meet the satisfaction level of the end users. The main issue may be that there are many conceptual rules that govern sentiment and there are even more clues (possibly unlimited) that can convey these concepts from realization to verbalization of a human being.” source: SAAIP
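To see why the quote says existing systems "fail to meet the satisfaction level of the end users", consider the simplest sentiment-analysis baseline: a lexicon lookup that sums per-word polarity scores (a toy illustration; all words and weights below are invented for this sketch). Deep Learning approaches replace the fixed lexicon with learned representations that can capture context, negation, and the potentially unlimited clues the quote mentions.

```python
# A tiny hand-built polarity lexicon (illustrative only).
LEXICON = {"good": 1, "great": 2, "love": 2,
           "bad": -1, "awful": -2, "hate": -2}

def sentiment(text):
    """Classify text as positive/negative/neutral from summed word scores."""
    score = sum(LEXICON.get(word, 0) for word in text.lower().split())
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this great phone"))  # -> positive
print(sentiment("Not bad at all"))           # -> negative: negation is missed!
```

The second example shows the core weakness: word-level scores cannot see that "not bad" is mildly positive, which is exactly the kind of contextual clue AI methods aim to learn.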

Notes: the post The fourth industrial revolution: a primer on artificial intelligence also offers good insight into AI domains. See also #AI application areas – a paper review of AI applications (pdf).



To conclude, AI is a rapidly evolving space. Although AI is more than Deep Learning, advances in Deep Learning currently drive AI, and automatic feature learning is its key characteristic. AI also needs many detailed and pragmatic strategies which I have not covered here; a good AI designer should be able to suggest more complex strategies such as pre-training or transfer learning.

AI is not a panacea. It comes with a cost (skills, development, and architecture) but provides an exponential increase in performance. Hence, AI is ultimately a rich company’s game. But AI is also a ‘winner takes all’ game, and hence provides a competitive advantage. The winners in AI will take an exponential view, addressing very large-scale problems, i.e. asking: what is possible with AI that is not possible now?

We cover this space in the  Enterprise AI course 
