Think that machine learning requires tons of data? Think again.


Bread is surprisingly simple, and surprisingly complex. There are only three basic ingredients (flour, water, and yeast), but there are loads of ways to combine them.

Machine learning is a bit like baking bread. Traditionally, the most important ingredient has been real-world data. You need to have lots of examples to teach a computer how to accomplish a task. If you want to teach a computer to recognize cats, you need lots of images of cats.

But getting the right data isn’t as simple as picking up flour at the supermarket. Imagine trying to teach a robot arm to pick up a plate from a table using machine learning. It won’t be able to do this effectively without knowing about lots of different types of tables and lots of different types of plates. Collecting this data might require you to set up lots of robot arms trying to pick up lots of different objects. Since the datasets you need to effectively train a machine learning system are large, these approaches can quickly become costly and time-consuming.

What’s more, big data collection projects also often require the collection of sensitive, personally identifiable information, which isn’t always easy or desirable.

This might be changing. Some of the most interesting recent developments in machine learning have been discoveries that suggest that the expensive step of collecting real-world data isn’t always as necessary as we thought it was.

Again, it’s a little like bread. I used to think that making bread always involved a lot of flour—after all, it’s what bread is mostly made of. But, it turns out that you can add significantly more water and less flour, and still get bread that’s as good or better than the bread you would have made with more equal proportions.

The same is true in artificial intelligence. Machine learning has depended on having both data (the examples that teach the machine the task) and computing power (the actual machines that process the data). If data is the flour, then computing power is the water, and it turns out that, under certain circumstances, you can still get excellent results with far less data and a whole lot more computing power.

There have been two clever tricks that researchers have used to reduce this need in certain situations. The first is what is called simulation learning, which essentially tries to train the machine in virtual reality. Rather than having the robot arm try to pick up a plate in the real world, this method aims to have a simulated robot pick up a simulated plate. Once the computer learns to solve the problem in a virtual space, the same system can control a robot arm doing this task in the real world.

There are a couple of advantages to this approach. For one, the data is simulated and therefore much cheaper. You don’t have to purchase and physically set up a bank of robot arms to collect the data that you need. Since the environment is virtual, it’s also easy to have the robot arm try to accomplish this task in a wide range of simulated environments. With enough processing power, you can also simulate a huge number of robot arms, way more than would be practical to set up in the real world.

This might sound like science fiction, but it isn't. Researchers recently compared two robot arms, one trained in simulation and the other trained in the conventional way. The simulation-trained arm performed much better in its first real-world tests. Simulation learning reduces the dependence on collecting real-world data, though it does require more computing power in order to simulate all those virtual environments.
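The core recipe (simulate, train, then transfer) can be sketched in a few lines of Python. Everything here is a made-up toy: the "physics" is a single invented formula relating a plate's diameter to its weight, and the "learning" is ordinary least squares rather than a real robot controller. But it shows the key move — a simulator can generate unlimited labelled examples with randomized properties, a trick researchers often call domain randomization:

```python
import random

def simulate_plate(rng):
    """Toy simulator: a plate with a randomized diameter and a weight
    given by our invented 'physics' (0.02 * diameter^2) plus noise."""
    diameter = rng.uniform(15.0, 35.0)                  # cm, randomized
    weight = 0.02 * diameter ** 2 + rng.gauss(0, 0.5)   # simulated physics
    return diameter, weight

def train_weight_model(n_episodes=5000, seed=0):
    """Fit weight ~ a * diameter^2 + b by least squares, using only
    simulated examples -- no real-world data collection required."""
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n_episodes):
        d, w = simulate_plate(rng)
        xs.append(d ** 2)
        ys.append(w)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

a, b = train_weight_model()

# "Transfer": apply the simulation-trained model to a plate
# the simulator never generated.
real_diameter = 27.3
predicted_weight = a * real_diameter ** 2 + b
```

Because the data is free, the loop above can be run millions of times across randomized diameters, which is exactly the property that makes simulation attractive for physical tasks.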

Another method that researchers have been seeing success with is self-play. One way to teach a computer to play the game of Go is to provide it with lots of examples of Go games that humans have played against one another. Another method, demonstrated recently by researchers, is to set up two computers that play the game of Go against one another. With significant computing power, it is possible to train machine learning systems “from scratch” without the need for data from human players. This method has proven hugely successful, with these systems beating human champions at Go, chess, and the Japanese game of shogi.
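As an illustrative toy (a tabular learner, nothing like the neural-network systems in the actual research), here is self-play on the much simpler game of Nim: two copies of the same learner play each other from scratch, and the only "data" is the games they generate themselves. Players alternate taking 1-3 sticks from a pile; whoever takes the last stick wins.

```python
import random

def train_self_play(pile=21, episodes=20000, seed=1):
    """Learn Nim purely by self-play: no human games are ever seen.
    q maps (sticks_remaining, move) to an estimated value for the
    player about to move."""
    rng = random.Random(seed)
    q = {}
    alpha, epsilon = 0.2, 0.1   # learning rate, exploration rate
    for _ in range(episodes):
        state = pile
        history = []            # (state, move), players alternating
        while state > 0:
            moves = [m for m in (1, 2, 3) if m <= state]
            if rng.random() < epsilon:
                move = rng.choice(moves)            # explore
            else:                                    # exploit
                move = max(moves, key=lambda m: q.get((state, m), 0.0))
            history.append((state, move))
            state -= move
        # The player who took the last stick wins (+1); walking
        # backwards, the outcome flips sign for the other player.
        reward = 1.0
        for s, m in reversed(history):
            old = q.get((s, m), 0.0)
            q[(s, m)] = old + alpha * (reward - old)
            reward = -reward
    return q

def best_move(q, state):
    moves = [m for m in (1, 2, 3) if m <= state]
    return max(moves, key=lambda m: q.get((state, m), 0.0))

q = train_self_play()
```

After training, the learner discovers the classic strategy of leaving its opponent a multiple of four sticks (for example, taking 1 stick from a pile of 5), despite never being told the rules of good play.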

Now, it’s true that these techniques won’t be equally useful in all contexts. Building a recommendation engine with machine learning, for instance, still requires that you collect data from real people about what they like. But in certain domains, these methods certainly make it seem likely that it will be possible to train powerful and effective AI systems without access to very much real-world data at all.

This has a few big implications for the business applications of AI. For one, incumbents that already hold large datasets can no longer be so sure that those datasets give them a decisive advantage in training the best machine learning systems. Particularly in markets where AIs are being used to solve physical problems, simulation learning might make it possible for upstarts to compete cost-effectively without first having to collect huge amounts of data themselves.

Instead, competition will shift in these situations from a question of who has the most access to data to who has the most access to computing power. Companies that are able to simulate or self-play more rapidly and at large scale will be able to develop the most effective machine learning systems, giving their products an advantage in the market.

More generally, these developments might accelerate the overall economic impact of AI. Many tasks have not yet been automated because, in the past, it has been too expensive or otherwise impractical to collect the data needed to train a computer to do them effectively. By easing those logistical hurdles, these techniques may make it cost-effective to build machine learning systems for such tasks.

Ultimately, the business impact of machine learning will often depend less on what AI might theoretically be able to do, and more on the types of ingredients that are necessary to make these systems successful in practice. For all these reasons, it will be important to keep an eye on these techniques as researchers continue to explore them in the coming years.
