February 8, 2018 | Written by: Anthony Stevens
Categorized: Data Science
Businesses today are eager to harness machine learning and deep learning for competitive advantage—yet few businesspeople realize that building a machine learning model or neural network is a marathon, not a sprint.
Although the basic principles of model design, training and evaluation tend to be reasonably well understood by businesses, it’s easy to underestimate how complex, time-consuming and labor-intensive real-world model development can be, particularly for neural networks.
Let’s take model design as a starting point. The Asimov Institute’s “Neural Network Zoo” gives an idea of the wide range of types and families of neural network that can be used to solve business problems. Making the right choice among various flavors of recurrent, convolutional, or other network types can be crucial to achieving the right outcome, and some degree of trial-and-error is necessary for even an experienced data scientist to pick the best option.
Selecting the right type of model or neural network is only the first step, though—in practice, every machine learning problem is unique, and requires a uniquely customized model. Deep learning models, for example, can be composed of any number of layers, and each layer will have multiple settings (known as hyperparameters) that need to be adjusted to produce the best output.
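To get a feel for how quickly those per-layer settings multiply, here is a rough illustration in plain Python. The specific choices (layer sizes, activations, dropout rates) are hypothetical, not tied to any particular framework:

```python
from itertools import product

# Hypothetical hyperparameter choices for a single hidden layer.
layer_sizes = [64, 128, 256]       # number of neurons
activations = ["relu", "tanh"]     # activation function
dropout_rates = [0.0, 0.25, 0.5]   # dropout probability

# Every combination of settings for one layer.
per_layer = list(product(layer_sizes, activations, dropout_rates))
print(len(per_layer))              # 18 configurations for a single layer

# Stacking three such layers multiplies the choices.
print(len(per_layer) ** 3)         # 5832 possible three-layer models
```

Even this toy example yields thousands of candidate models from just three tunable settings per layer, which is why exhaustive search is rarely practical.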
Models also tend to evolve over time—and the process of evolution tends towards sophistication, not simplicity. As an example, this blog visualizes how the convolutional neural networks used in state-of-the-art image recognition have grown in complexity over the past 15 years. Today’s most sophisticated neural networks contain more than 100 layers, with over 10,000 neurons per layer and over 10,000 connections between the neurons of each layer, adding up to over 100 million individual weights that must be trained.
That’s why most of the more complex neural networks in current use come from academic research, where teams have enough time to experiment thoroughly and identify optimal designs. By contrast, since they are generally under pressure from the business to get results quickly, commercial data scientists often need to balance quality and sophistication with expediency and speed to market.
The burden of training
One of the big problems is that you don’t just need to train one model: you need to train hundreds. You try different combinations of layers and hyperparameters, start training several candidate models, kill off the least promising ones, and iterate on the best, until you find the combination of layers and hyperparameters that delivers the results you need.
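The train-prune-iterate loop described above can be sketched in pure Python. Here `train_and_score` is a stand-in for a real training run, and the halving schedule (keep the better half, double the training budget) is just one common way to kill off weak candidates:

```python
import random

def train_and_score(config, epochs):
    """Stand-in for a real training run: returns a mock validation score.
    In practice this would train the model for `epochs` and evaluate it."""
    random.seed((hash(config) + epochs) % (2 ** 32))
    return random.random()

# Candidate models as (learning_rate, num_layers) pairs, purely illustrative.
candidates = [(lr, layers) for lr in (0.1, 0.01, 0.001) for layers in (2, 4, 8)]

epochs = 1
while len(candidates) > 1:
    # Train every surviving candidate a little longer...
    scored = [(train_and_score(c, epochs), c) for c in candidates]
    scored.sort(reverse=True)
    # ...then keep only the better half and double the training budget.
    candidates = [c for _, c in scored[: max(1, len(scored) // 2)]]
    epochs *= 2

best = candidates[0]
print(best)  # the single surviving configuration
```

The point of the structure is that weak candidates consume only a small training budget before being discarded, so most of the compute goes to the configurations that look promising.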
In a typical machine learning workflow, the experimentation process is time-consuming and is normally conducted overnight in batches of 10 to 20 training runs. With 30 to 50 batches executed to find an optimal model that meets the business requirements, the entire workflow—from initial data exploration through to model deployment—can produce hundreds of model variations over three to four months.
Moreover, even when a model makes it to deployment, the job isn’t over. Models need to keep evolving to remain relevant, and must be redesigned, retrained and redeployed regularly to maintain competitive advantage.
Streamlining an iterative process
As we have seen, model training is an inherently time-consuming process—but there are several approaches that you can take to optimize it:
- Focus on training 10-20 models at a time, so you explore enough of the hyperparameter space to be confident about which network configuration to explore next.
- Access elastic compute resources in the cloud to perform batch training overnight and focus daytime hours on analysis and planning.
- With hundreds of training runs to manage, track the hyperparameters and network configurations explored to avoid duplicating efforts.
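A minimal experiment log along the lines of the third point might keep runs keyed by their hyperparameters, so a duplicate configuration is caught before it wastes compute. The tracker API here is made up for illustration:

```python
import json

class ExperimentLog:
    """Tiny illustrative tracker: records each run's hyperparameters
    and result, and reports whether a configuration has been seen."""

    def __init__(self):
        self.runs = {}

    def key(self, params):
        # Canonical, order-independent representation of a hyperparameter dict.
        return json.dumps(params, sort_keys=True)

    def seen(self, params):
        return self.key(params) in self.runs

    def record(self, params, score):
        self.runs[self.key(params)] = score

log = ExperimentLog()
params = {"layers": 4, "lr": 0.01, "dropout": 0.25}

if not log.seen(params):
    log.record(params, score=0.91)  # score would come from a real training run

# Same configuration, different key order: still recognised as a duplicate.
print(log.seen({"lr": 0.01, "dropout": 0.25, "layers": 4}))  # True
```

In practice teams tend to use a dedicated experiment-tracking tool for this, but the underlying idea is the same: a run is identified by its full hyperparameter configuration, not by an arbitrary name.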
IBM Watson Data Platform aims to help with all three of these goals by enriching and streamlining the tooling that data scientists use to design, train, evaluate and deploy their models. Instead of using a disjointed set of different technologies to build, train and document your models, you can take advantage of a single, coherent ecosystem of well-integrated tools.
For example, IBM Watson Machine Learning enables you to create an updated version of a model, set its hyperparameters, and initiate the training process in just a few mouse-clicks. All the settings and results will be saved and stored together with the model, so it will be easy to compare performance between iterations and select the best candidate for production deployment.
Critically, the solutions that we are building within IBM Watson Machine Learning are designed to encourage a more experiment-centric approach to model development. For example, when you’re limited by available compute resources, you tend to train only a few models at a time and explore a limited range of the hyperparameter space. We’re working to eliminate these constraints by making it possible for you to train dozens of models simultaneously, monitor the progress of training, kill off the worst-performing models quickly, and focus your efforts on the best.
These new capabilities will help data science teams to “fail fast”—minimizing the amount of time and resources they need to spend on each iteration, and reducing the cost of failure to a point where experimentation becomes economically viable.
With more time to experiment, data scientists can take a more rigorous, scientific approach to exploring how a model’s hyperparameters affect its trainability and accuracy. As a result, it should be possible to develop better models in less time, and to continuously monitor, evaluate and retrain them to evolve with the business problems that they are designed to solve.
At IBM, we’re currently working on adding these capabilities to IBM® Watson® Machine Learning, which is available in the IBM Cloud. To learn more about the latest features and our roadmap for the future, please visit our website.