January 11, 2019 | Written by: Wired Brand Lab
Categorized: AI for the Enterprise
You don’t have to be deeply enmeshed in the tech world to notice the spread of AI.
The average consumer now likely owns a smartphone with an AI-based, voice-activated personal digital assistant. When they take a trip, consumers use AI to help navigate traffic. And if they contact a company’s customer service, they’re increasingly likely to converse with a system that understands natural language in real time.
In 2019, it should come as no surprise that we’ll see AI pop up in more places and become increasingly useful for a range of tasks. This “democratization of AI” will include both customer touchpoints and back-office functions.
Meanwhile, developers are driving a shift in AI toward explaining the reasoning behind a recommendation rather than presenting only the end result. That shift will make AI more trusted.
Cracking open the black box
AI is already adept at recommending the right course of action. By studying patterns of interactions, AI has proven itself in tasks such as evaluating insurance claims and helping ensure fair results. Increasingly, platforms are able to explain those recommendations to regulators, business leaders and customers.
In a recent example, IBM announced AI OpenScale, a cloud-based service that provides continuous insight into how AI systems make decisions, checks for potential bias, and recommends adjustments to offset it.
The recommendations of decision-support tools become much more useful and powerful when the system can share its reasoning. Take credit card fraud detection: if a system declines your transaction, it’s vital to understand why. Explainability in these circumstances helps build trust in AI-based decisions.
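To make the idea concrete, here is a toy sketch of one common explainability approach: for a linear scoring model, each feature's contribution to the score can be listed directly, turning a bare decision into a ranked set of reasons. The feature names, weights, and values below are entirely hypothetical and are not drawn from any real credit-scoring product.

```python
# Toy sketch of explaining a linear decision model's output.
# All feature names, weights, and values are made up for illustration.

weights = {"income": 0.4, "debt_ratio": -1.8, "missed_payments": -2.5}
values = {"income": 1.2, "debt_ratio": 1.5, "missed_payments": 2.0}

# For a linear model, each feature's contribution is simply weight * value.
contributions = {name: weights[name] * values[name] for name in weights}
score = sum(contributions.values())
decision = "approve" if score >= 0 else "deny"

# Rank the reasons behind the decision, most negative (most damaging) first.
reasons = sorted(contributions.items(), key=lambda kv: kv[1])
print(decision)
for name, value in reasons:
    print(f"  {name}: {value:+.2f}")
```

Instead of just "deny," the applicant sees that missed payments contributed most heavily to the negative score — exactly the kind of transparency the article describes.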
Transparency in AI is a key focus area for scientists and researchers. In a recent paper that demonstrates a new method of breaking down the black box, IBM Research introduced ProfWeight, a process by which information is transferred from a deep neural network to a much simpler network so that it’s easier to see the reasoning behind decision-making.
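The core idea behind ProfWeight — using the deep network's confidence on each sample to weight the training of a much simpler, interpretable model — can be sketched in a few lines. This is a simplified illustration, not IBM's implementation: the synthetic data and the stand-in "probe confidence" scores below are assumptions for the sake of the example (in the actual method, confidences come from probe classifiers attached to the deep network's hidden layers).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 samples, 2 features, binary labels (toy assumption).
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Stand-in for per-sample confidences from the deep network's probes:
# samples far from the decision boundary are treated as "easy" (high confidence).
probe_confidence = 1 / (1 + np.exp(-3 * np.abs(X[:, 0] + X[:, 1])))

# Train a weighted logistic regression (the "simple" model) by gradient
# descent, scaling each sample's contribution by its probe confidence.
w = np.zeros(2)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))      # predicted probabilities
    grad = probe_confidence * (p - y)       # confidence-weighted residuals
    w -= lr * (X.T @ grad) / len(y)
    b -= lr * grad.mean()

accuracy = ((X @ w + b > 0).astype(int) == y).mean()
print(f"simple model accuracy: {accuracy:.2f}")
```

The resulting two-weight model is trivially inspectable — its reasoning is just the sign and size of `w` — which is the payoff the paper describes: decision logic a human can actually read.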
An essential element of boosting trust in AI systems is to offer transparency around the data they work with, where it came from, and how it was curated.
Beyond such tech fixes, we can expect more ethics advisory boards looking at algorithmic fairness, explainability, robustness and transparency, along with efforts to deploy AI for social good. In addition, the tech industry will need to encourage diversity and inclusion so that AI’s results are based on data representative of the entire population.
Game-changing quantum computing
Quantum computing, which promises exponential speedups over today’s binary-based systems for certain classes of problems, will continue to develop and give AI an assist in 2019. As AI’s complexity continues to grow, quantum computing could change how AI approaches computational tasks.
While quantum computing has been discussed for years, such machines exist now. As the technology matures, it could be applied to hard problems in transportation logistics, pharmaceutical research and food-supply optimization.
For businesses, the biggest change for AI in 2019 is likely to be an advance in user-friendliness. Tools such as Neural Network Synthesis, available in AI OpenScale, go so far as to help organizations train neural networks without extensive coding experience. Once the purview of data scientists and IT staff, AI will spread throughout the organization via such tools, presented in consumerized interfaces. As a result, every department, from marketing to HR to customer service, will gain access to AI, allowing employees to perform their jobs at a higher level and to automate repetitive processes so they can focus on more complex tasks.
AI OpenScale is the open platform for businesses to operationalize trusted AI and extend their deployments enterprise-wide.