October 31, 2018 | Written by: Ameer Shahul
Categorized: Cognitive | Digital Reinvention
A global television company recently launched a TV set with ‘Artificial Intelligence.’
Another consumer durables major has just launched an ‘AI powered’ washer. The same company also launched an AI powered refrigerator and an intelligent air conditioner last month.
Companies across the globe are coming out with products and services tagged with ‘AI’.
AI to capture market share?
Increasingly, it is becoming a trend to re-brand products and services as ‘powered with AI.’ Brands feel that if they don’t incorporate ‘AI’ in their products and services, they will be left behind, lose mind-share and thereby fall behind in the marketplace, while newcomers with AI branding enter and capture the market. The result: from salt to rockets, everything now carries ‘AI’ branding.
How much ‘AI’ do these products and services actually contain? For instance, how can there be ‘AI’ in aerated, bottled water? Can there be ‘AI’ in sliced bread or in Italian risotto?
Most of the products that claim to be ‘AI powered’ use simple off-the-shelf technologies like voice recognition software, smart sensors and some amount of automation. The more resourceful among the lot wrap their products and services with Google Assistant, Amazon Alexa or a chatbot to claim the ‘AI’ tag.
On a random check, one would realize that most such offerings do not deploy any proprietary AI algorithms or Machine Learning tools to design, develop or distribute their products or services, let alone directly incorporate ‘AI’ in their offering.
AI Powered Filter Coffee and Explainable AI
Driving down from India’s Silicon Valley to the nearby port city of Chennai, one can find highway-side coffee shops with signage reading “Filter Coffee, Powered by Artificial Intelligence.” If you stop by and inquire about the role of ‘AI’ in filter coffee, you will be told that an ‘AI powered’ machine selects the best beans for roasting and grinding into coffee powder.
This growing misuse of ‘AI’ and ‘Machine Learning’ raises an immense challenge: identifying and isolating the ‘quacks’ and promoting the ‘aces’ among AI offerings. The challenge extends beyond separating the wheat from the chaff to the very right of companies to put products and services on the market with claims of ‘AI.’
Organizations like IBM have been advocating that companies that can’t explain the decisions made by ‘Artificial Intelligence’ in their products have no right to put those products on the market. This was well articulated by Ginni Rometty, IBM Chairman and President, in her message to the World Economic Forum 2018: “when it comes to the new capabilities of artificial intelligence, we must be transparent about when and how it is being applied and about who trained it, with what data, and how.”
Echoing the message, Data Responsibility @IBM argues for transparency and data governance policies that will ensure people understand how an AI system came to a given conclusion or recommendation.
When an algorithm is built and trained, and when the neurons in a neural network are trained, it is done to reach a logical conclusion – as logical as it can be. An algorithm or a network of neurons therefore produces an outcome that is as logical as possible.
For the same reason, inventors and their patrons should be able to explain the logic or rationale behind the decision-making process of the algorithm or the neural network. If they cannot, something is amiss. It raises the question of whether “Artificial Intelligence” is being misused as a marketing gimmick, and of the motivation behind not disclosing the logic.
The common justification advanced against this argument is that sharing the decision-making process of the algorithm would mean disclosing intellectual property. But sharing the logic of the algorithm and its decision-making process does not mean opening up the algorithm for everyone to see and thereby compromising its intellectual property. No one would be seeking to peep into the code or the training regimen of the algorithm; they would only want to satisfy themselves about the logic of the machine intelligence deployed.
In brief, inventors and their employers are only required to explain the rationale of the algorithm and how it resolves a problem. That is not intellectual property. That is defining the problem, and how the problem was approached for a solution.
You could call it Rational AI.
In brief, Artificial Intelligence tools have to be ‘rational’ – explainable in layman’s language. That would keep AI from remaining an unexplainable “black box.”
The test, therefore, is to comprehend, interpret and explain the rationale of codes and neuron-training for the benefit of its customers and prospective customers.
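As a toy illustration of that test – not any vendor’s actual system, and with product names and thresholds invented purely for this sketch – an ‘explainable’ decision system can be made to return a human-readable rationale alongside every prediction, so a customer can see exactly why a conclusion was reached:

```python
# Toy sketch of a "rational AI" decision: every outcome is paired with
# the reasons that produced it. The domain (coffee-bean grading, echoing
# the roadside signage above), attribute names and limits are invented
# for illustration only.

def grade_coffee_beans(moisture_pct, defect_count):
    """Accept or reject a bean lot, returning (decision, rationale)."""
    reasons = []
    if moisture_pct > 12.5:
        reasons.append(f"moisture {moisture_pct}% exceeds the 12.5% limit")
    if defect_count > 5:
        reasons.append(f"{defect_count} defects exceed the limit of 5")

    decision = "reject" if reasons else "accept"
    rationale = "; ".join(reasons) if reasons else \
        "all measured attributes are within limits"
    return decision, rationale

if __name__ == "__main__":
    print(grade_coffee_beans(10.0, 2))   # within limits -> accept
    print(grade_coffee_beans(13.2, 8))   # two rules fire -> reject
```

The point is not the triviality of the rules but the shape of the interface: whatever sits inside – hand-written rules or a trained model – the system exposes the rationale, not the code, which is exactly the distinction drawn above between explaining the logic and surrendering the intellectual property.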
How necessary is it?
Take the case of deploying AI in Medical Science and in Judicial Decision Making.
Can an unexplainable AI be entrusted with diagnosing a dementia patient? What if the AI decides the patient is suffering from Parkinson’s disease rather than Alzheimer’s, when in reality it is Alzheimer’s? Can an unexplainable AI be left to decide the fate of a murder suspect, handing down a death sentence where a life term was deserved?
Don’t the patients and the accused, and their near and dear ones, have a right to know what went into such decision-making?
It is therefore no surprise that the challenge of comprehending the decision-making processes of neural networks and deep learning algorithms has been bothering policy makers for a while, and it will continue to daunt them until a permanent solution is found.
That permanent solution may well be industry self-regulation, rather than governments imposing regulatory controls.
Such self-regulation should weed out branding and marketing gimmicks, and help users pick out the real ‘AI’ solutions.