Lifelong (machine) learning: how automation can help your models get smarter over time


Machine learning should happen constantly

Imagine you’re interviewing a new job applicant who graduated top of their class and has a stellar résumé. They know everything there is to know about the job, and have the skills your business needs. There’s just one catch: from the moment they join your team, they’ve vowed never to learn anything new again. You probably wouldn’t make that hire, because you know that lifelong learning is vital if someone is going to add long-term value to your team. Yet when we turn to the field of machine learning, we see companies making a similar mistake all the time. Data scientists work hard to develop, train and test new machine learning models and neural networks. However, once the models get deployed, they don’t learn anything new. After a few weeks or months, they become static and stale, and their usefulness as a predictive tool deteriorates.


Why models stop learning

Data scientists are well aware of this problem, and would love to find a way to give their models the equivalent of lifelong learning. However, moving a model into production is typically a tough task, and deployment requires help from busy IT specialists. When a single deployment can take weeks, it’s no wonder that most data scientists prefer to hand over their latest model and move on to the next project, rather than persist with the drudgery of continually retraining and redeploying their existing models.

Deployment isn’t just painful for data scientists—it can be a headache for IT teams too. Data scientists might have used any one of a wide variety of languages, frameworks and tools to build their models, and there is no guarantee that those choices will make the model easy to integrate into production systems. In a worst-case scenario, the model may need to be substantially refactored or even rebuilt from scratch before it can be deployed. As a result, if data scientists ask for their models to be redeployed too frequently, they may be met with significant resistance from the IT department.

Streamlining deployment to keep models in training

The good news is that model deployment isn’t inherently labor-intensive. Just as in other forms of software development, the principles of DevOps apply here. With the right platform, it is possible to create seamless continuous deployment pipelines that automate many aspects of the process, transforming deployment from weeks of manual effort to a matter of a few mouse-clicks. For example, with IBM® Watson® Machine Learning integrated in IBM Data Science Experience, data scientists can develop models using a wide range of languages (including Python, R and Scala) and frameworks (such as SparkML, Scikit-Learn, xgboost and SPSS). The solution abstracts each model behind a standardized API that can be integrated easily with production systems. This gives data scientists the flexibility they need to choose best-of-breed tools and techniques during development, without increasing the complexity of deployment for the IT team.
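To make the idea concrete, here is a minimal, hypothetical sketch of what "abstracting a model behind a standardized scoring API" can look like, using scikit-learn and Flask. This is not the Watson Machine Learning API itself; the /v1/score route and the payload shape are illustrative assumptions.

```python
# Hypothetical sketch: exposing a trained model behind a simple scoring API.
# Not the Watson Machine Learning API; it only illustrates hiding
# framework-specific details behind a standardized REST interface.
from flask import Flask, request, jsonify
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Train a small example model (a stand-in for whatever the data scientist built).
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

app = Flask(__name__)

@app.route("/v1/score", methods=["POST"])
def score():
    # Expects JSON like {"values": [[5.1, 3.5, 1.4, 0.2], ...]}
    payload = request.get_json()
    predictions = model.predict(payload["values"]).tolist()
    return jsonify({"predictions": predictions})

if __name__ == "__main__":
    app.run(port=8080)
```

Because production systems only ever see the REST contract, the data scientist is free to swap the underlying framework without forcing any change on the consuming applications.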

Watson Machine Learning combines with other elements of IBM Watson Data Platform to provide a continuous feedback loop. When your model is ready to move into production, you can specify how frequently you would like to retrain it, and automate the redeployment process. You can also monitor and validate the results of the retrained model to ensure that the new version is an improvement—and with integrated version control, you can easily roll back to the previous release if necessary.
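Conceptually, that feedback loop looks something like the sketch below. It is a simplified Python illustration, not the actual Watson Machine Learning workflow; load_latest_data and deploy are assumed, hypothetical hooks into your own data store and deployment target.

```python
# Hypothetical retrain-and-validate loop, not IBM's implementation.
# load_latest_data() and deploy() are assumed hooks supplied by the caller.
from sklearn.base import clone
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def retrain_and_promote(current_model, champion_score, load_latest_data, deploy):
    """Retrain on fresh data and redeploy only if the new model scores better."""
    X, y = load_latest_data()  # pull the most recent labeled data
    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2)

    # Train a "challenger" with the same configuration on the new data.
    challenger = clone(current_model).fit(X_train, y_train)
    challenger_score = accuracy_score(y_val, challenger.predict(X_val))

    if challenger_score > champion_score:
        deploy(challenger)  # promote the improved model
        return challenger, challenger_score

    # Otherwise keep (or roll back to) the existing deployment.
    return current_model, champion_score
```

Run on a schedule, a loop like this keeps the deployed model tracking the latest data while guarding against redeploying a version that performs worse than the one already in production.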

Giving data scientists more power

These capabilities help to reduce the need for IT teams to act as intermediaries in the deployment process, eliminating the biggest bottleneck for continuous improvement of machine learning models. They also place more power in the hands of data scientists, empowering them to focus on building and maintaining the most accurate models possible, instead of being forced to sacrifice quality for practicality. Most importantly, solutions like Watson Machine Learning give your models the chance to do what they were always meant to do: learn. By continuously retraining your models against the latest data, you can ensure that they continue to reflect today’s business realities, giving your organization the insight it needs to make smarter decisions and seize competitive advantage.

Get started with Data Science Experience for free
