
Think that machine learning requires tons of data? Think again.


Bread is surprisingly simple, and surprisingly complex. There are only three basic ingredients (flour, water, and yeast), but there are loads of ways to combine them.

Machine learning is a bit like baking bread. Traditionally, the most important ingredient has been real-world data. You need to have lots of examples to teach a computer how to accomplish a task. If you want to teach a computer to recognize cats, you need lots of images of cats.

But getting the right data isn’t as simple as picking up flour at the supermarket. Imagine trying to teach a robot arm to pick up a plate from a table using machine learning. It won’t be able to do this effectively without knowing about lots of different types of tables and lots of different types of plates. Collecting this data might require you to set up lots of robot arms trying to pick up lots of different objects. Since the datasets you need to effectively train a machine learning system are large, these approaches can quickly become costly and time-consuming.

What’s more, big data collection projects often involve gathering sensitive, personally identifiable information, which isn’t always easy or desirable.

This might be changing. Some of the most interesting recent developments in machine learning have been discoveries that suggest that the expensive step of collecting real-world data isn’t always as necessary as we thought it was.

Again, it’s a little like bread. I used to think that making bread always involved a lot of flour; after all, it’s what bread is mostly made of. But it turns out that you can add significantly more water and less flour and still get bread that’s as good as, or better than, what you would have made with a more conventional ratio.

The same is true in artificial intelligence. Machine learning has depended on having both data (the examples that teach the machine its task) and computing power (the machines that actually process the data). If data is the flour, then computing power is the water, and it turns out that, under certain circumstances, you can still get excellent results with far less data and a whole lot more computing power.

Researchers have used two clever tricks to reduce this need for real-world data in certain situations. The first is simulation learning, which essentially trains the machine in virtual reality. Rather than having a robot arm try to pick up a plate in the real world, this method has a simulated robot pick up a simulated plate. Once the computer learns to solve the problem in virtual space, the same system can control a robot arm doing the task in the real world.
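To make this concrete, here is a minimal sketch of what learning in simulation can look like, written in Python. It uses a toy one-dimensional “reach the target” task as a hypothetical stand-in for the robot arm; the environment, the policy, and the simple hill-climbing training loop are all illustrative assumptions, not the setup used in any particular study.

```python
# A toy "simulation learning" loop: the policy is trained entirely inside a
# simulated environment, with no real robot involved. All names and numbers
# here are illustrative.
import random

def run_episode(weight, target, steps=20):
    """Roll out a simple linear policy in the simulated reach task."""
    position = 0.0
    for _ in range(steps):
        # Policy: move in proportion to the gap between arm and target.
        action = weight * (target - position)
        position += max(-1.0, min(1.0, action))  # clip to actuator limits
    return -abs(target - position)  # reward: negative final distance

def train(trials=200):
    """Hill-climb the policy weight using only simulated rollouts."""
    best_weight, best_score = 0.0, float("-inf")
    for _ in range(trials):
        candidate = best_weight + random.gauss(0, 0.5)  # perturb best policy
        # Evaluate against several randomly placed simulated targets.
        score = sum(run_episode(candidate, random.uniform(-5, 5))
                    for _ in range(10))
        if score > best_score:
            best_weight, best_score = candidate, score
    return best_weight

if __name__ == "__main__":
    print(f"learned policy weight: {train():.2f}")
```

Everything the learner experiences here is generated by the simulator itself, which is exactly what makes the data so cheap to produce.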

There are a couple of advantages to this approach. For one, the data is simulated and therefore much cheaper. You don’t have to purchase and physically set up a bank of robot arms to collect the data that you need. Since the environment is virtual, it’s also easy to have the robot arm try to accomplish this task in a wide range of simulated environments. With enough processing power, you can also simulate a huge number of robot arms, way more than would be practical to set up in the real world.
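One simple way to get that range of environments is to randomize the simulator’s parameters for every episode, an idea researchers call domain randomization. Extending the hypothetical toy task above, each rollout might sample a different target location and a different actuator strength, so the policy cannot overfit to one exact version of the simulated world:

```python
# Domain randomization on the toy reach task: each episode samples its own
# hypothetical physics parameters, so a good policy must work across them all.
import random

def randomized_episode(weight, steps=20):
    target = random.uniform(-5, 5)            # where the simulated plate sits
    actuator_gain = random.uniform(0.5, 1.5)  # randomized actuator strength
    position = 0.0
    for _ in range(steps):
        action = weight * (target - position)
        position += actuator_gain * max(-1.0, min(1.0, action))
    return -abs(target - position)

if __name__ == "__main__":
    # Score a candidate policy across many randomized simulated worlds.
    average = sum(randomized_episode(1.0) for _ in range(1000)) / 1000
    print(f"average reward across randomized worlds: {average:.2f}")
```

A policy that scores well across thousands of randomized simulators has a much better chance of coping with the messiness of the real world.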

This might sound like science fiction, but it isn’t. Researchers recently compared two robot arms, one trained in simulation and the other trained in the conventional way, and the simulation-trained arm performed much better in its first real-world tests. Simulation learning reduces the dependence on collecting real-world data, though it does require more computing power to simulate all those virtual environments.

Another method researchers have been seeing success with is self-play. One way to teach a computer to play the game of Go is to provide it with lots of examples of Go games that humans have played against one another. Another, demonstrated recently by researchers, is to set up two computers that play Go against one another. With significant computing power, it is possible to train machine learning systems “from scratch” this way, without any data from human players. The approach has proven hugely successful, with these systems beating human champions at Go, chess, and the Japanese game of shogi.
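Here is a minimal sketch of the self-play idea, using the much simpler game of Nim (players alternately take one to three sticks, and whoever takes the last stick wins) instead of Go, and a basic Monte Carlo value update instead of the sophisticated methods the real research systems use. The game, parameters, and update rule are all illustrative assumptions.

```python
# Self-play on Nim: two copies of the same learner play each other and learn
# from the outcomes, with no human game records involved.
import random
from collections import defaultdict

Q = defaultdict(float)     # Q[(sticks_left, action)] -> estimated value
EPSILON, ALPHA = 0.1, 0.5  # exploration rate and learning rate

def choose(sticks):
    """Epsilon-greedy choice over the legal moves (take 1-3 sticks)."""
    actions = list(range(1, min(3, sticks) + 1))
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(sticks, a)])

def self_play_game(sticks=10):
    """One game in which the learner plays both sides, then learn from it."""
    history = []  # (sticks_left, action) per move, players alternating
    while sticks > 0:
        action = choose(sticks)
        history.append((sticks, action))
        sticks -= action
    # Whoever moved last took the final stick and won; walk backwards
    # through the game, flipping the sign because the players alternate.
    reward = 1.0
    for move in reversed(history):
        Q[move] += ALPHA * (reward - Q[move])
        reward = -reward

if __name__ == "__main__":
    for _ in range(20000):
        self_play_game()
    # Nim has a known optimal strategy: leave a multiple of four sticks.
    # From 10 sticks the learner should discover that taking 2 is best.
    print(max(range(1, 4), key=lambda a: Q[(10, a)]))  # expected: 2
```

Both sides’ experience comes from games the system generates for itself, which is why no dataset of human play is needed.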

Now, it’s true that these techniques won’t be equally useful in all contexts. Building a recommendation engine with machine learning, for instance, still requires that you collect data from real people about what they like. But in certain domains, these methods make it seem likely that powerful and effective AI systems can be trained without access to much real-world data at all.

This has a few big implications for the business applications of AI. For one, incumbents already holding large datasets can no longer be sure that those datasets give them a decisive advantage in training the best machine learning systems. Particularly in markets where AI is being used to solve physical problems, simulation learning might make it possible for upstarts to compete cost-effectively without first having to collect huge amounts of data themselves.

Instead, competition will shift in these situations from a question of who has the most access to data to who has the most access to computing power. Companies that are able to simulate or self-play more rapidly and at large scale will be able to develop the most effective machine learning systems, giving their products an advantage in the market.

More generally, these developments might accelerate the overall economic impact of AI. Many tasks have not yet been automated because, in the past, it has been too expensive or otherwise impractical to collect the data necessary to train a computer to do them effectively. By easing the logistical hurdles of collecting data, these techniques may make it cost-effective to build machine learning systems for those tasks.

Ultimately, the business impact of machine learning will often depend less on what AI might theoretically be able to do, and more on the ingredients necessary to make it successful in practice. For all these reasons, it will be important to keep an eye on these techniques as researchers continue to explore them in the coming years.
