Do we need only AI or IoT or ML or BlockChain or all of them together?

02 FEB, 2019
by Bikash Mohanty

I recently read a few interesting articles about machine-generated jokes. Humans have made some progress in outlining what creates humour; can machines do the same using AI with the help of ML? Not really: humour has never been fully codified. As even full-time stand-up comedians will confess, there is no magic formula for the perfect joke. Much of what makes us laugh depends on subtle elements such as context or the comedian's own body language. Sometimes even we humans don't know why a joke is funny! What is comical for one person may not be amusing for another, and the same joke told by two people can produce different reactions. A joke is fun, and wit is subjective. So how can we teach AI to create jokes if we ourselves don't grasp the reasons a joke is funny? AI, which tends to focus on a very narrow range of tasks, is poorly equipped to spot the wide range of elements involved in humour, let alone understand what they mean. Lack of context is one thing that makes humour so hard to codify into algorithms.

Take a look at some machine-generated one-liner jokes:

  • I like my coffee like I like my war: cold. 
  • You know what really pushes my buttons? That guy that's in control of me.
  • And one my Alexa at home told me: “What is a pilot's favourite crisp flavour? Plane.”

That, in principle, defines the deficiencies of machine-controlled, code-based undertakings. The system learns to think from data: the AI makes decisions based on the trends and patterns in that data, and those decisions have to be logical and well defined. AI is basically an efficient decision-making machine. It can be taught to think, but it cannot create thought beyond the patterns and trends in the data it receives.

The purpose of creating AI systems is optimisation and efficiency. If they are liable to the same issues as humans, they are not needed. An AI that can make a joke is like an AI that can make mistakes: not just conscious mistakes or errors, but unconscious mistakes that are not errors. That defeats the purpose of AI. Incidentally, that is one of the things that makes humans stand out: we make mistakes and we learn from them, and that is really where art comes from. AI was built to be perfect; to build one that can do jokes is to purposely build a flawed one. But can it automatically learn from its mistakes the way we humans do? How long would that learning process take before it stopped making mistakes altogether? Can one machine's learning benefit other machines, or does every machine have to go through its own learning cycle? Or will there be a huge global data lake (a data ocean, maybe) from which all machines take their intelligence feed, so that every machine reaches the same level? And how long would that stabilisation take before machines overtake humans? That's the fear we are living with today, right?

AI has its place, but there are some lines it cannot cross. This is not just because the system cannot learn them; it is because the system would have to be anti-intelligent to achieve them, and anti-intelligence would take AI away from its standout edge. And if an AI could choose whether to be intelligent or stupid, then the world would be in trouble: what humans have not achieved in thousands of years could be achieved in days (the total destruction of the planet). It is possible, but better left unexplored. What AI would need for that is a really good model of the world, which it does not have at the moment. That model would presumably include knowledge of feelings, human and animal, and of cause and effect in every sense possible.

Now, let's look at cases where these technologies absolutely make sense.

Today, Artificial Intelligence, Machine Learning and Deep Learning (AI | ML | DL) are at the heart of digital transformation, enabling organizations to leverage their growing wealth of big data to optimize key business outcomes and operational use cases.

When the Internet became mainstream, a lot of business models were disrupted. Companies had to transform themselves or vanish, and we saw many examples: bookstores, retailers, DVD sellers. Something similar is about to happen with AI in the next few decades, but with one big difference: AI is not going to be a new industry, it is going to be in every industry. It will be in every application, in every process imaginable, and in every aspect of our lives, almost without exception. It won't remain content with the business world; it will increasingly reach into other areas like culture and art just as much.

It is our responsibility to make sure this new revolution turns out right, and our number one priority should be to make intelligent and cognitive capabilities available to everyone, much like the ease of creating a web app today: the tools and technologies underlying the web are easy to use, open source, and usually free, and learning resources are readily available for free or at a throwaway price. Of course, this is not about transitioning everyone to jobs that involve AI; it is about making sure that all those who have the potential to create value with AI are able to do so freely. This is about ensuring that no human potential goes to waste. And that is possible: first by creating awareness, encouraging discussion, creating and mass-circulating knowledge, and creating alternative occupations and opportunities for those who are affected, and then finally by landing the change, through a sustained, step-by-step approach in collaboration between government and the private sector. Definitely not like the landing of the Brexit poll, without any preparation, without proper awareness and public knowledge; otherwise the outcome will be the same, a lack of acceptance. We are busy debating whether Brexit was good or bad. We should actually be debating whether the way it was introduced was good or bad. The same applies here.

First thing first; let’s start with some of the most commonly used acronyms and their definitions:

  • Artificial Intelligence (AI) - is the overarching discipline that covers anything related to making machines smart. Whether it’s a robot, a refrigerator, a car, or a software application, if we are making them smart, then it’s AI.
  • Machine Learning (ML) - is commonly used alongside AI but they are not the same thing. ML is a subset of AI. ML refers to systems that can learn by themselves. Systems that get smarter and smarter over time without human intervention.
  • Deep Learning (DL) – A subset of ML that uses multi-layered artificial neural networks, typically trained on large data sets.
  • Artificial Neural Networks (ANN) - Refers to models of human neural networks that are designed to help computers learn. There are many techniques and approaches to ML; one of those approaches is artificial neural networks (ANN), sometimes just called neural networks. A good example of this is Amazon’s recommendation engine. Amazon uses artificial neural networks to generate recommendations for its customers, suggesting products by showing us “customers who viewed this item also viewed” and “customers who bought this item also bought”. Amazon assimilates data from all its users’ browsing experiences and uses that information to make effective product recommendations.
  • Natural Language Processing (NLP) - Refers to systems that can understand language. NLP is the processing of the text to understand meaning.
  • Automated Speech Recognition (ASR) - Refers to the use of computer hardware- and software-based techniques to identify and process the human voice, i.e. the processing of speech to text. Because humans speak with colloquialisms and abbreviations, it takes extensive computer analysis of natural language to produce accurate output.
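The Amazon-style “customers who bought this item also bought” idea in the ANN bullet above can be approximated in spirit, without any neural network at all, by a simple co-occurrence count over purchase baskets. This is a hedged illustration only, not Amazon's actual system; the baskets and item names are made up:

```python
from collections import Counter

def also_bought(baskets, item, top_n=2):
    """Count how often other items co-occur with `item` across baskets."""
    counts = Counter()
    for basket in baskets:
        if item in basket:
            counts.update(p for p in basket if p != item)
    return [p for p, _ in counts.most_common(top_n)]

# Invented purchase history for illustration
baskets = [
    {"kettle", "teapot", "mug"},
    {"kettle", "mug"},
    {"kettle", "teapot"},
    {"laptop", "mouse"},
]
print(also_bought(baskets, "kettle"))  # mug and teapot co-occur most often
```

A real recommender would weight by recency, popularity and learned embeddings, but the co-occurrence signal is the intuitive starting point.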


There we go:

  • ASR and NLP fall under AI.
  • ML and NLP have some overlap as ML is often used for NLP tasks.
  • ASR also overlaps with ML. It has historically been a driving force behind many machine learning techniques.
  • Most AI work now involves ML because intelligent behaviour requires considerable knowledge, and learning is the easiest way to get that knowledge.
  • Data Analytics, Predictive Analytics and Prescriptive Analytics are different applications of Artificial Intelligence.

That is the relationship between AI & ML. The image below captures the relationship between AI, ML, and DL.

AI involves machines that can perform tasks that are characteristic of human intelligence. While this is rather general, it includes things like planning, understanding language, recognizing objects and sounds, learning, and problem solving.

We can put AI in two categories, general and narrow. General AI would have all of the characteristics of human intelligence, including the capacities mentioned above. Narrow AI exhibits some facet(s) of human intelligence, and can do that facet extremely well, but is lacking in other areas. A machine that’s great at recognizing images, but nothing else, would be an example of narrow AI.

At its core, machine learning (ML) is simply a way of achieving AI. We could get AI without machine learning, but that would require building millions of lines of code with complex rules and decision trees. So instead of hard-coding software routines with specific instructions to accomplish a particular task, machine learning is a way of “training” an algorithm so that it can learn how. “Training” involves feeding huge amounts of data to the algorithm and allowing it to adjust itself and improve. It reminds me of the man-and-fish saying: “Give a man a fish and you feed him for a day; teach a man to fish and you feed him for a lifetime.” To give an example, machine learning has been used to make drastic improvements to computer vision (the ability of a machine to recognize an object in an image or video). We gather hundreds of thousands or even millions of pictures and have humans tag them; for example, tagging pictures that contain a cat versus those that do not. The algorithm then tries to build a model that can tag a picture as containing a cat or not as accurately as a human can. Once the accuracy level is high enough, the machine has “learned” what a cat looks like. I would imagine Facebook's tag suggestions are built on a similar principle.
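The “training” loop described above, feeding labelled examples to an algorithm and letting it adjust itself, can be sketched with a tiny logistic model and made-up two-number “image features” standing in for real pictures. This is a minimal illustration of the idea, not how production vision systems work:

```python
import math

def train(samples, labels, steps=2000, lr=0.5):
    """Fit a tiny logistic model: repeatedly nudge weights to reduce error."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(steps):
        for x, y in zip(samples, labels):
            # current guess as a probability between 0 and 1
            p = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
            err = p - y  # how wrong the guess is
            # adjust each weight a little in the direction that reduces the error
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0

# Invented "cat picture" features (ear pointiness, whisker score); label 1 = cat
samples = [(0.9, 0.8), (0.8, 0.9), (0.1, 0.2), (0.2, 0.1)]
labels = [1, 1, 0, 0]
w, b = train(samples, labels)
print([predict(w, b, x) for x in samples])  # [1, 1, 0, 0]
```

Real computer vision replaces the two hand-picked features with millions of pixels and the two weights with millions of learned parameters, but the adjust-until-accurate loop is the same.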

Deep learning is one of many approaches to machine learning. Other approaches include decision tree learning, inductive logic programming, clustering, reinforcement learning, and Bayesian networks, among others. Deep learning was inspired by the structure and function of the brain, namely the interconnection of many neurons. Artificial Neural Networks (ANNs) are algorithms that mimic the biological structure of the brain. In ANNs, there are “neurons” arranged in discrete layers, with connections to other “neurons”. Each layer picks out a specific feature to learn, such as curves or edges in image recognition. It's this layering that gives deep learning its name: depth is created by using multiple layers as opposed to a single layer.
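The “discrete layers of neurons” idea is, at its simplest, repeated weighted sums passed through a non-linearity, with the output of one layer feeding the next. This toy forward pass (the weights are arbitrary numbers chosen only to show the shape of the computation) illustrates where the “depth” comes from:

```python
import math

def layer(inputs, weights, biases):
    """One layer: each neuron takes a weighted sum of all inputs, then squashes it."""
    return [
        1 / (1 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
        for ws, b in zip(weights, biases)
    ]

def forward(x, layers):
    """Stack layers: the output of one becomes the input of the next (the 'depth')."""
    for weights, biases in layers:
        x = layer(x, weights, biases)
    return x

# Two hidden layers of two neurons each, then one output neuron (arbitrary weights)
net = [
    ([[0.5, -0.3], [0.8, 0.2]], [0.1, -0.1]),
    ([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.2]),
    ([[1.5, -0.5]], [0.0]),
]
out = forward([1.0, 0.5], net)
print(out)  # a single value between 0 and 1
```

Training (adjusting those weights via backpropagation) is what real frameworks add on top; this sketch only shows the layered structure itself.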

AI and IoT are Indivisibly Entangled:

The relationship between AI and IoT is much like the connection between the human brain and body.

Our bodies collect sensory input such as sight, sound, and touch. Our brains take that data and make sense of it, turning light into recognizable objects and sounds into understandable speech. Our brains then make decisions, sending signals back out to the body to command movements like picking up an object or speaking.

All of the connected sensors that make up the Internet of Things are like our bodies: they provide the raw data about what's going on in the world. Artificial intelligence is like our brain, making sense of that data and deciding what actions to perform. And the connected devices of IoT are again like our bodies or body parts, carrying out physical actions or communicating with others.

Unleashing Each Other’s Potential:

The value and promise of AI and IoT are being realized through one another. Machine learning (ML) and deep learning (DL) have led to huge leaps for AI in recent years. As mentioned above, machine learning and deep learning require massive amounts of data to work, and this data is being collected by the billions of sensors that continue to come online in the Internet of Things: IoT makes better AI. Improving AI will in turn drive adoption of the Internet of Things, because AI makes IoT useful, creating a virtuous cycle in which both areas accelerate drastically.

  • On the industrial side, AI can be applied to predict when machines will need maintenance, or to analyse manufacturing processes for big efficiency gains, saving millions of dollars.
  • On the consumer side, rather than having to adapt to technology, technology can adapt to us. Instead of clicking, typing, and searching, we can simply ask a machine for what we need. We might ask for information like the weather, or for an action like preparing the house for bedtime (turning down the thermostat, locking the doors, turning off the lights, etc.). (Remember Hive Home in the UK using an IoT platform to deliver this? Or Amazon Alexa using ASR technology to make it happen?)
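The “prepare the house for bedtime” request above is, at its simplest, one spoken intent fanned out into several device-level commands. A minimal sketch of that fan-out (the device names, actions and routine table are all invented for illustration; a real hub like Alexa routes intents through its own skill APIs):

```python
# Map one high-level intent to many device-level commands (all names invented)
ROUTINES = {
    "bedtime": [
        ("thermostat", "set", 17),   # turn down the heating
        ("front_door", "lock", None),
        ("lights", "off", None),
    ],
}

def run_intent(intent):
    """Expand a spoken intent into the commands an IoT hub would dispatch."""
    commands = ROUTINES.get(intent, [])
    return [
        f"{device}:{action}" + (f"={arg}" if arg is not None else "")
        for device, action, arg in commands
    ]

print(run_intent("bedtime"))  # ['thermostat:set=17', 'front_door:lock', 'lights:off']
```

The ASR and NLP layers described earlier sit in front of this table, turning speech into the `"bedtime"` intent string.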

Blockchain solutions:

Blockchain is redefining how trusted transactions ought to be carried out. The internet itself is highly vulnerable, and blockchain offers a way to address that. One problem blockchain solves for AI and IoT is their security fault lines. Most IoT devices are connected to each other via public networks, and it is needless to say how vulnerable public networks really are. Blockchain addresses this with linear, permanent, indexed records that the general public can reference without censorship. It can also smooth the commerce process by providing a payment mechanism as well as a communication channel: the public is the authority, not a centralized entity as is the case with banks. Hacking or tampering with the data, such as taking control of a device or its records, is extremely difficult because of the way blocks are stored and guarded in the blockchain system. Every IoT device is a point of vulnerability, and the risks are higher still when AI is making decisions for users. Hence blockchain can be used to provide a secure, scalable and verifiable platform with very strong security properties.
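The “linear and permanent indexed records” property comes from each block embedding the hash of the previous block, so editing any past record breaks every link after it. A minimal sketch using only the standard library (a real blockchain adds consensus, signatures and networking on top; the sensor readings here are invented):

```python
import hashlib
import json

def make_block(index, data, prev_hash):
    """A block commits to its own data AND to the hash of the previous block."""
    body = json.dumps({"index": index, "data": data, "prev": prev_hash},
                      sort_keys=True)
    return {"index": index, "data": data, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify(chain):
    """Recompute every hash; any tampered block breaks the chain after it."""
    for i, block in enumerate(chain):
        expected = make_block(block["index"], block["data"], block["prev"])["hash"]
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block(0, "sensor:22.1C", "0")]
chain.append(make_block(1, "sensor:22.4C", chain[-1]["hash"]))
chain.append(make_block(2, "sensor:21.9C", chain[-1]["hash"]))
print(verify(chain))             # True
chain[1]["data"] = "sensor:99C"  # tamper with a past record
print(verify(chain))             # False
```

This is why IoT records anchored in a chain are tamper-evident: a hacked device can fake new readings, but it cannot silently rewrite history that others hold copies of.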


Process flow:

Internet of Things (IoT) (sensors record task statistics, with blockchain solutions) ==> Big Data (the capacity to store large volumes of data, whether from sensors or from systems) ==> Machine Learning (ML) / Deep Learning (DL) (decision options for the AI, based on patterns in the data and statistics derived from Big Data) ==> Artificial Intelligence (AI) (the decision maker, which decides based on best-case scenarios).
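The flow above can be sketched end to end as a single toy pass: sensor readings land in a store, a simple learned rule derives a threshold from the data's pattern, and the “AI” step picks an action. Every value, name and threshold here is invented purely to show the shape of the pipeline:

```python
# IoT -> Big Data -> ML -> AI, as one illustrative pass (all values invented)

# IoT: temperature sensor samples arriving over time
readings = [21.5, 22.0, 29.8, 30.5, 22.3]

# Big Data: accumulate the raw readings
data_store = list(readings)

# ML: derive a simple pattern from the stored data (mean plus a margin)
mean = sum(data_store) / len(data_store)
threshold = mean + 3.0

# AI: decide an action for the latest reading based on the learned threshold
latest = data_store[-1]
action = "start_cooling" if latest > threshold else "do_nothing"
print(action)
```

In practice the “ML” step would be a trained model rather than a mean-plus-margin rule, but the division of labour between the four stages is the same.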


The common thread in all these technologies is DATA; data is the one thing linking all of them. Data has to be acquired, transmitted back and forth, stored and analysed; data patterns have to be explored; and data authentication and a ledger have to be maintained. The different technologies come in at different steps of this value chain. Not all of them have to come together: some may and some may not, based on the actual requirements of the use case.

These revolutionary technologies are expected to change our routine lives, just as the introduction of computers did, the internet did, and mobile and smart phones did. A considerate collaboration is required from both government and the private sector to bring cost-effectiveness for industrialisation, alternative choices for those who would be affected, and the further technological advancement needed to make it a sustained reality. It is not yet crucial for these technologies to be tried out with urgency; planning and assessment can certainly be done to check whether any real benefit can be achieved, and where there is benefit, investment must be made in line with the “considerate collaboration” discussed above. The current impression that these technologies are within reach of large corporations only, that they require a lot of investment to realise, and that they will take away a lot of human engagement must change through better awareness and, perhaps, economies of scale. It's happening, for sure. My favourite areas for applying these technologies are the revitalisation of the agriculture industry, combating global warming and extreme weather, and the pharmaceutical industry's search for cures for deadly diseases. And yours?