Five telltale symptoms of machine learning meltdown—and how to find a cure

By: Cecelia Shao

Machine learning is one of the most exciting areas of data science, with enormous potential to transform data into the pure gold of competitive advantage. Data scientists can seem like wizards when their models first accurately predict customer or market behavior, or reveal valuable insight from previously untapped data sources.

However, machine learning is still a relatively new discipline, and there is rarely an obvious way to implement it robustly and at scale. As a result, many businesses are still experiencing growing pains: as demand increases, their data scientists struggle to design, train and deploy accurate models, and to keep them aligned with business needs.

Let’s take a look at some of the common symptoms, and see if we can find a cure.

1. Disjointed data science workflows

If you are treating data science and machine learning as two separate entities—managed by different teams using different tools—then it’s likely that your end-to-end workflow is disjointed.

Machine learning models are hungry for data, but not just any data will do. You need to feed them with relevant, well-prepared data sets that may come from multiple sources. To deliver this data efficiently, you need to build a data pipeline that runs from initial exploration and refinement through to your model design, build and deployment processes.

If exploration, refinement and modeling are all managed separately, with time-consuming handoffs between each stage, your data scientists are probably spending too much of their valuable time on pipeline configuration, instead of focusing on real analysis.
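
To make the contrast concrete, here is a minimal sketch of what a single, connected pipeline can look like in code. scikit-learn and the toy dataset are illustrative choices here, not something the article prescribes:

```python
# A minimal sketch of one connected pipeline from data refinement
# through model build to evaluation. scikit-learn and the toy
# dataset are illustrative choices, not prescribed by the article.
from sklearn.datasets import load_breast_cancer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

pipeline = Pipeline([
    ("impute", SimpleImputer(strategy="median")),  # refinement: fill gaps
    ("scale", StandardScaler()),                   # refinement: normalize
    ("model", LogisticRegression(max_iter=1000)),  # model build
])

# One artifact to train, test, and hand off to deployment,
# instead of separate steps with manual handoffs between teams.
pipeline.fit(X_train, y_train)
print("held-out accuracy:", round(pipeline.score(X_test, y_test), 3))
```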

2. Framework fatigue

As one of the fastest-evolving domains in computer science, machine learning is currently going through its equivalent of the Cambrian Explosion. We’ve seen an incredible proliferation of frameworks and libraries, allowing data scientists to build models in a wide range of different languages and run them on all sorts of different technologies.

Choice is good, but having too much choice can be exhausting. The effort it takes to keep up with all the latest popular frameworks is becoming a burden. The lack of an obvious “right way” to do things can lead to analysis paralysis, as data scientists continually second-guess their tooling decisions. Again, this distracts them from focusing on the more important work of building, training and deploying new models.

3. Built-in biases

With more complex models—and particularly with the neural networks used in deep learning—it can be difficult for a data scientist to understand how the model has produced its results, which can also lead to serious compliance issues down the line. As a result, there’s a tendency among practitioners to treat model design as more of an art than a science.

Because the mechanics of the model are a black box—and because data scientists generally don’t have time to take a rigorously scientific approach to testing every single hyperparameter—model design decisions are often based on experience and intuition. Or to put it another way, data scientists allow their model building decisions to be influenced by their own personal biases, based on “what seemed to work best last time” or what they have the most expertise in.

When they rely on gut feeling, even the most careful and diligent data scientists will inevitably inject some of their own biases into their models. As a result, they may overlook or ignore promising avenues of inquiry, and the models they build may not be optimally tuned for accuracy, trainability, or reliability.
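
One practical antidote is to let systematic search, rather than intuition, pick hyperparameter values. Below is a minimal sketch using cross-validated grid search; the library, model and parameter grid are illustrative assumptions, not something the article prescribes:

```python
# A minimal sketch of systematic hyperparameter search as an
# alternative to "what seemed to work best last time". The library,
# model, and parameter grid are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)

param_grid = {
    "n_estimators": [100, 300],
    "max_depth": [None, 5, 10],
}

# Cross-validation scores every combination, replacing intuition
# with measured evidence.
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=5)
search.fit(X, y)
print("best params:", search.best_params_)
print("best CV accuracy:", round(search.best_score_, 3))
```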

4. Model maintenance

Once a model is moved into production, data scientists can’t just forget about it and move on to the next project. If the model is left running for too long without retraining, it may become less accurate, or even actively misleading.

This may happen for a number of reasons: perhaps the model didn’t take a key factor into account because that factor wasn’t captured in the training or testing data. Or perhaps the format of the data that the model is ingesting has changed (for example, due to a change in a data ingestion API), so that the data pipeline is no longer feeding the model correctly.

Monitoring the performance of machine learning models in production and refreshing them when necessary is typically a highly manual process, eating into the time data scientists have for new research and for building new models.
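
Much of that monitoring can be automated. As a rough sketch (the window size, accuracy floor, and retrain() hook are hypothetical assumptions, not from the article), a production service could track rolling accuracy on labeled outcomes and trigger retraining when it drops:

```python
# A rough sketch of automated accuracy monitoring with a retrain
# trigger. The window size, accuracy floor, and retrain() hook are
# hypothetical assumptions for illustration, not from the article.
from collections import deque

ACCURACY_FLOOR = 0.85   # assumed minimum acceptable accuracy
WINDOW = 500            # assumed size of the rolling window

recent_hits = deque(maxlen=WINDOW)

def record_outcome(prediction, actual, retrain):
    """Log one labeled outcome; trigger retraining if accuracy drifts."""
    recent_hits.append(prediction == actual)
    if len(recent_hits) == WINDOW:
        accuracy = sum(recent_hits) / WINDOW
        if accuracy < ACCURACY_FLOOR:
            retrain()            # hypothetical retraining job hook
            recent_hits.clear()  # start a fresh window after retraining

# Example wiring (names are hypothetical):
#   record_outcome(model.predict(x), ground_truth, retrain=retrain_job)
```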

5. Answering the wrong questions

Finally, a model may be capable of answering a given question with a very high degree of accuracy—but still not deliver any business value. In part, this is a problem of measurement: there may be a gap between the model’s performance metrics and business metrics.

To take a trivial example, a movie theater company might build a model that accurately predicts the proportion of sweet to salted popcorn that customers will buy—but it may not be clear whether this new insight leads to business advantages. Learning how to tickle your customers’ taste-buds might be an interesting exercise, but does it really increase popcorn sales, cut production costs, or reduce wastage? To get a true idea of whether the model is helpful, you need to not only monitor its accuracy over time, but also make sure that data science projects are linked to critical business metrics.
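
One lightweight way to keep that link visible is to record the model metric alongside the business metrics it is supposed to move. A minimal sketch, with the popcorn example carried over and the file name and fields purely illustrative:

```python
# A minimal sketch of pairing a model metric with the business
# metrics it is supposed to move. The file name and fields are
# illustrative assumptions, not from the article.
import csv
import datetime

def log_weekly_review(model_accuracy, popcorn_revenue, waste_pct):
    """Append one row linking model performance to business outcomes."""
    with open("model_business_metrics.csv", "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.date.today().isoformat(),
            model_accuracy,    # model metric: prediction accuracy
            popcorn_revenue,   # business metric: did sales actually move?
            waste_pct,         # business metric: did wastage drop?
        ])

# Reviewing these rows together over time shows whether gains in
# model accuracy are tracking the business metrics that matter.
```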

Fundamentally, you might find that the model is answering the wrong questions, or questions that no one is asking. When data science teams are disconnected from the rest of the business, there’s a significant risk that data scientists will find themselves building models that address niche problems (such as boosting popcorn sales) instead of addressing more important concerns (such as how to attract customers to the theater in the first place).

Finding a solution

You can begin to address most of these problems by taking a more holistic approach to machine learning. Solutions like IBM® Watson® Data Platform can help to put machine learning in its proper context within a wider data science workflow, providing well-integrated tools that make it easy to discover, explore and refine data, feed it seamlessly into the model-building process, and then deploy, monitor and retrain the models in production.

For example, data scientists can use IBM Data Catalog to find relevant data sets, and instantly load them into IBM Data Science Experience for exploration and visualization. If they seem to offer possibilities for machine learning, they can then be imported into IBM Watson Machine Learning to help with model design, training, testing, deployment, monitoring and maintenance.

Data Science Experience and Watson Machine Learning offer a lot of flexibility in terms of languages and libraries, including Spark MLlib, SPSS Modeler, and various Python machine learning libraries, while freeing your data scientists from needing to keep abreast of all the latest frameworks. This breadth of choice provides versatility, while helping teams build workflows that are ready for business use.
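
For a sense of the kind of model-building code these tools support, here is a minimal Spark MLlib sketch. It is generic PySpark with toy data, not a Watson-specific API:

```python
# A minimal PySpark MLlib sketch of the kind of model-building code
# these tools support. This is generic Spark with toy data, not a
# Watson-specific API.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("mllib-sketch").getOrCreate()

# Toy data: two features and a binary label, purely illustrative.
df = spark.createDataFrame(
    [(0.0, 1.1, 0), (2.0, 1.0, 1), (1.5, 0.3, 1), (0.2, 0.9, 0)],
    ["f1", "f2", "label"],
)

# MLlib models expect features assembled into a single vector column.
features = VectorAssembler(inputCols=["f1", "f2"],
                           outputCol="features").transform(df)

model = LogisticRegression(maxIter=10).fit(features)
print("coefficients:", model.coefficients)

spark.stop()
```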

IBM’s research teams are also working hard to extend Watson Machine Learning’s meta machine learning capabilities. By harnessing cognitive computing technologies to help with feature engineering and model design, these capabilities should make it easier to eliminate the influence of individual biases. For example, by suggesting different model-building techniques and making it easier to test different combinations of hyperparameters, the solution should help data scientists experiment with new algorithms and learn new approaches, instead of basing their design decisions on experience and intuition alone.

Looking at the platform on a higher level, IBM Watson Data Platform automates much of the important plumbing between different parts of the workflow. This significantly reduces the time that knowledge workers spend handing data off to different teams, and gives all participants better visibility of the end-to-end process. By breaking down silos and uniting the data science team with the business team, the platform also helps to clarify the role of data science within the broader business context. As a result, it becomes much easier for stakeholders to understand what data scientists are working on, and to ensure that projects are aligned with the right business strategies.

Finally, it’s not just about technology: Watson Data Platform gives organizations easy access to IBM’s own data science specialists, who can advise on best practices and help you focus on building the right models to solve the right business problems. With IBM tools and expertise working together to support your data science team, you can put your organization in a strong position to seize true competitive advantage.

To learn more about the capabilities of IBM Watson Data Platform, and explore our roadmap for the future, please visit ibm.co/watsondataplatform.
