A Q&A with machine learning and analytics expert, Marc Hayem
Businesses love AI as a concept, but the nuts and bolts of actually carrying out AI-based projects and initiatives can pose daunting challenges for many companies. And many AI efforts fail for that reason.
The key to preventing AI flops is understanding what it can and can’t do, and working incrementally to figure out whether AI can deliver the intelligence the business seeks. Not all AI projects will succeed, and not all that do will show sufficient ROI, explains Marc Hayem, Partner in IBM’s Global Advanced Analytics Practice.
Can the data you choose for AI projects help improve results?
Good data is of course a key enabler for good data science. There’s one family of data in particular that AI can unlock: what we call “dark data”—the unstructured data inside companies like customer call transcripts, marketing studies, blueprints, or consumer complaints. We can use AI to derive insights from information that might have seemed insignificant. For instance, HR data about how far employees live from the office, or whether they previously worked for a competitor, matched with manager reviews about their performance, could tell us about someone’s risk of leaving the company.
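To make the attrition example concrete, here is a minimal Python sketch of combining structured HR fields with a signal mined from unstructured manager reviews. The feature names, the negative-term list, and the weights are all illustrative assumptions, not the model Hayem’s team builds; a real project would learn these weights from historical attrition data.

```python
# Hypothetical attrition-risk scorer combining structured HR features
# (commute distance, competitor history) with a crude text signal
# extracted from unstructured manager reviews ("dark data").
# All weights and terms below are illustrative assumptions.

NEGATIVE_TERMS = {"disengaged", "frustrated", "overdue", "conflict"}

def review_negativity(review_text: str) -> float:
    """Fraction of words in a manager review matching negative terms."""
    words = review_text.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,") in NEGATIVE_TERMS for w in words) / len(words)

def attrition_risk(commute_km: float, worked_for_competitor: bool,
                   review_text: str) -> float:
    """Toy linear score clipped to [0, 1]; real projects fit these weights."""
    score = (0.01 * min(commute_km, 50)       # long commutes raise risk
             + 0.2 * worked_for_competitor    # prior competitor ties
             + 1.5 * review_negativity(review_text))
    return min(score, 1.0)

print(attrition_risk(10, False, "solid performance"))
print(attrition_risk(40, True, "frustrated and disengaged lately"))
```

The point of the sketch is the shape of the feature set, not the scoring rule: the structured columns and the text-derived signal each carry some predictive value, but combined they say more than either alone.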
AI can also help you derive answers from images, like photos taken on an assembly line or of finished products. You can use AI to locate quality issues in an assembly line – machine learning is better at spotting a production issue than humans are. If you combine image data with dark data, neither of which you could previously make use of, the results can be powerful.
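As a toy illustration of spotting production issues in images, the sketch below compares a product image against a known-good reference and flags large pixel deviations. The 4x4 grayscale grids and the threshold are assumptions for demonstration; a real assembly-line system would use a trained vision model rather than raw pixel differencing.

```python
# Hedged sketch: flagging likely defects by comparing a product image
# against a known-good reference. Toy grayscale grids stand in for real
# camera frames; a production system would use a trained vision model.

def defect_score(image, reference):
    """Mean absolute pixel difference between an image and the reference."""
    flat_img = [p for row in image for p in row]
    flat_ref = [p for row in reference for p in row]
    return sum(abs(a - b) for a, b in zip(flat_img, flat_ref)) / len(flat_ref)

reference = [[100] * 4 for _ in range(4)]            # known-good part
good_part = [[101, 99, 100, 100] for _ in range(4)]  # minor sensor noise
scratched = [[100, 100, 30, 100] for _ in range(4)]  # dark scratch, one column

THRESHOLD = 5.0  # assumed tolerance separating noise from real defects
for name, img in [("good_part", good_part), ("scratched", scratched)]:
    verdict = "DEFECT" if defect_score(img, reference) > THRESHOLD else "ok"
    print(name, defect_score(img, reference), verdict)
```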
What’s the value of starting small with AI projects?
AI is well-suited to the agile, incremental approach to solving problems. When you build a predictive model, you can continue to fine-tune it by adding more data, and iterate to see if you can produce the answers you want. If you have thousands of readings from the production chain, you don’t need to use them all – you should see if you can predict production yields with a smaller data sample, then add data incrementally.
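The incremental approach above can be sketched in a few lines: fit a simple yield predictor on growing samples of production readings and watch whether added data still improves the error. The synthetic temperature-vs-yield data and the hand-rolled line fit are stand-in assumptions for real production telemetry and a real model.

```python
import random

# Sketch of "start small, add data incrementally": fit a least-squares
# yield predictor on samples of increasing size and track its error on
# the full set of readings. Synthetic data stands in for real telemetry.

random.seed(0)
# (temperature, yield) readings with a linear trend plus noise
readings = [(t, 50 + 0.3 * t + random.gauss(0, 2)) for t in range(2000)]

def fit_line(sample):
    """Ordinary least-squares fit of yield against temperature."""
    n = len(sample)
    mx = sum(t for t, _ in sample) / n
    my = sum(y for _, y in sample) / n
    slope = (sum((t - mx) * (y - my) for t, y in sample)
             / sum((t - mx) ** 2 for t, _ in sample))
    return slope, my - slope * mx

errors = {}
for size in (100, 400, 1600):   # grow the sample, not the whole dataset
    slope, intercept = fit_line(random.sample(readings, size))
    errors[size] = sum(abs(y - (intercept + slope * t))
                       for t, y in readings) / len(readings)
    print(size, round(errors[size], 2))
```

If the error curve flattens early, a small sample already captures the signal and there is no need to pay for processing every reading; if it never reaches an acceptable level, that is a cheap early warning to stop.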
A good reason to start small is that if your model isn’t giving you the predictions you want, you can stop before you’ve invested too much time and money. You can write the algorithm first, then create the user interface for the application later if it works. You won’t have wasted several weeks of a developer’s time trying to build the application.
And what happens when you don’t get the results you’re seeking?
In my experience, about a third of AI projects result in a full production application rollout. If you’re starting small and iterating as discussed above, then you can succeed or fail fast. What I mean by failure is not just that you don’t get the predictions you need – even when the predictive models work technically, sometimes the ROI isn’t large enough for a full rollout.
How do you get everyone in the business to follow this strategy?
We recommend centralizing the analytics function into an analytics center of excellence. If you want to use data science as an engine for innovation, then you’ll need a C-level mandate. It’s the best way to demonstrate support from the top. You form a central team that will work on analytics projects for several departments. Eventually the skills become mainstream, and you can set up centers of excellence in each department. The idea is to grow analytics competence until it spreads across the entire company.