October 26, 2018 | Written by: Christopher Markle
Categorized: Discovery and Exploration
The era of enterprise-level AI is here, and businesses are increasingly seeking to incorporate data into their decision-making processes. However, the idea that a business can simply plug in an AI black box, feed it terabytes of data, and immediately have an effective AI system is bound to fail. For many enterprises, embracing machine learning and artificial intelligence as new sources of solutions has been a journey filled with promise … and, as we’ve found out, pitfalls.
At Identity Guard, our work with Watson has gone from testing the AI waters to progressing well into our AI journey, ensuring our Identity Guard product stands out from the crowd. Along the way, we learned three important lessons.
Where to start?
When Identity Guard started its AI journey, our state-of-the-market product focused almost exclusively on credit reports. Using these reports, we could tell our customers if someone had stolen their identity and how they could take it back. However, after speaking with our customers, it became clear that they wanted something different, something no one else was offering: a service that would help prevent them from becoming a victim of identity theft in the first place. They wanted proactive, timely protection.
Lesson 1: AI products that serve customers start with customers.
To protect our customers, we developed a solution wherein we would monitor news events related to software vulnerabilities, data breaches, online scams and malware campaigns, and alert our customers if we found something that may threaten them personally while advising on how to address the threat. Such a solution empowers our users to take steps to prevent a possible compromise of their identity from turning into identity theft.
While the idea sounds great, it was fraught with implementation hurdles. A simple Google search for the term “breach” should give you a sense of our first hurdle: getting topically relevant news articles. A search for “breach” will likely offer some relevant news stories, but it will also have articles featuring whales jumping out of the ocean or articles about GDPR. Upon reaching this hurdle, our first solution was to hire an analyst to sift through the trash.
Enter the second hurdle: the internet has no shortage of trash. Our solution only works if we can be sure that we’ve caught every relevant news article: this means casting a wide net and consequently reeling in hundreds of articles a day. This hurdle could be surmounted by simply throwing more analysts at it. But that approach doesn’t scale. Instead, we threw Watson at it.
Lesson 2: AI products that scale are built on technology designed to scale.
Limit scope, be creative.
To sift through the near-endless supply of undesired news articles printed daily, we needed technology that could determine with a high degree of accuracy whether an article described a real and present danger to a customer. In data science, determining whether a given input such as an article should produce output A (don’t present to our customers) or B (present to our customers) is a pretty typical classification problem. However, ours was a well-known variant called “anomaly detection”. The defining characteristic of anomaly detection problems is a relative abundance of examples of A and a relative dearth of examples of B. This imbalance makes it difficult for AI systems to learn. This was the third hurdle.
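A toy example illustrates why this imbalance is so treacherous (the numbers here are invented for illustration, not our real article volumes): when relevant articles are rare, a degenerate model that labels everything "irrelevant" scores near-perfect accuracy while catching zero real threats.

```python
# A toy labelled feed where relevant articles (label 1) are rare.
# A classifier that always predicts "irrelevant" (0) looks accurate
# while providing no protection at all.
labels = [1] * 5 + [0] * 995      # 5 relevant articles out of 1,000
predictions = [0] * len(labels)   # degenerate "always irrelevant" model

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
true_positives = sum(p == 1 and y == 1 for p, y in zip(predictions, labels))
recall = true_positives / sum(labels)  # fraction of real threats caught

print(f"accuracy: {accuracy:.1%}")  # 99.5% -- looks great
print(f"recall:   {recall:.1%}")    # 0.0% -- misses every threat
```

A model trained naively on such data is pulled toward exactly this behavior, which is why accuracy alone is the wrong yardstick for anomaly detection.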
Our attempts to train a single neural network to do everything we wanted failed. Then, we found a solution – a simpler solution. Instead of training a single neural network to find a needle in a haystack, we trained many neural networks with each designed for a different purpose. When combined, these were capable of sifting through the hay, leaving only the needle behind.
We used Watson Natural Language Classifiers as ‘building blocks’ for processing content – we defined a target classifier, a practice classifier and an effect classifier – each trained to identify a different characteristic of an article. For example, when we find a story about a breach, one classifier might identify the ‘target’ as a database of a local retailer, another might identify the ‘practice’ as hacking, and yet another might identify the ‘effect’ as “names and credit card numbers were stolen”. This new information can be used in aggregate to drastically improve classification accuracy.
Our strategy of using many simple classifiers solved our anomaly detection problem, because each component classifier could be trained on its own dataset. So, while relevant news articles are rare, news articles describing a ‘target’, ‘effect’, or ‘practice’ are not. Thus, we were able to re-cast our anomaly detection problem as a set of ordinary classification problems.
Lesson 3: Successful AI products are often built of simpler AI products.
Using a system of inter-related Natural Language Classifiers, Identity Guard is able to identify threats to our customers as they develop, associate specific threats with specific customers, and ensure our customers only get updates on issues that are a real threat to them personally.
Don’t wait, jump in.
Building an AI product is an iterative process, with many small wins, failures, and lots of opportunity. We recommend you start your AI journey by identifying the customer’s need and the data necessary to address that need. De-construct the need into tasks with limited scope, and identify those tasks that can be addressed with AI building blocks. Leverage the power of conventional computing, human decision-making, and AI building blocks in a cohesive intelligent system. Lastly, test your solution. If it doesn’t work for your customers, it doesn’t work. Iterate – ask your customers “what worked and why” and “what didn’t and why”.
These are the steps we’ve taken to build and grow an intelligent solution for our customers at Identity Guard.
Create your own insight engine with IBM Watson.