
What’s next in your industry? For Dario Gil, who leads an organization within IBM Research responsible for all physical sciences, it’s making matter compute in new ways.

That’s because he sees a future beyond traditional computing. For decades, computing power has doubled roughly every two years — a pattern known as Moore’s Law. Those advances have relied on making transistors ever smaller, thereby enabling each computer chip to have more calculation power. IBM has invented new ways to shrink transistors. But many, including Gil, believe that approach is finite, and its end is in sight.
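For a rough sense of what that doubling pattern implies, here is a minimal sketch in Python that compounds a two-year doubling over several decades; the starting transistor count is only an illustrative round number for early-1970s chips, not a specific IBM figure.

```python
# Illustrative only: compound a "double every two years" pattern.
transistors = 2_300  # roughly the scale of an early-1970s microprocessor (illustrative)
for year in range(1971, 2021, 10):
    print(f"{year}: ~{transistors:,} transistors per chip")
    transistors *= 2 ** 5  # five two-year doublings per decade
```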

“Underlying Moore’s Law is scaling, the ability to pack more and more transistors into a smaller and smaller space,” he says. “At some point … you’re going to reach atomic dimensions and that’s the end of that approach.”

With his team, Gil, vice president of science and technology for IBM Research, is looking beyond that point. They’re working with entirely different ways of computing and exploring new materials.

Portrait of Dario Gil

A quantum leap in computing

One of those new approaches is quantum computing, a system built on a completely different foundation from the logic underpinning current computers. Today’s computers rely on bits, embodied as switches that can be set to zero or one. Quantum computing turns that simple approach on its ear. “The unit we’re historically accustomed to is the bit. In quantum computing we have the quantum bit, or qubit,” says Gil. A qubit can hold a combination of zero and one at the same time, a condition known as superposition.

Gil explains: “Imagine a sphere, and within this sphere is an arrow. That arrow can only be in two positions, North or South Pole. That would be a bit. Zero is the South Pole; one, the North Pole. But in a quantum bit, that arrow can be anywhere pointing at any aspect of the internal surface of the sphere. You can represent a much larger number of states at the same time.

“The second aspect of that,” he continues, “is that we can exploit another principle of quantum mechanics called entanglement, what Einstein called ‘spooky action at a distance.’ And that allows us to be able to compute across all of those bits in parallel and is the basis of vast speed-up for algorithms that a quantum computer runs.”
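One minimal way to make those two ideas concrete is to write the state vectors out directly. The sketch below uses plain Python and NumPy (not any IBM tooling) to build a single-qubit superposition from the sphere picture Gil describes, then a two-qubit entangled state whose measurement outcomes are perfectly correlated; the angles and gates are standard textbook choices, not anything specific to IBM's hardware.

```python
import numpy as np

# A single qubit as a point on Gil's sphere: theta = 0 is one pole (|0>),
# theta = pi is the other (|1>); anything in between is a superposition.
theta, phi = np.pi / 3, np.pi / 4          # arbitrary illustrative angles
qubit = np.array([np.cos(theta / 2),
                  np.exp(1j * phi) * np.sin(theta / 2)])
print("P(0) =", round(abs(qubit[0]) ** 2, 2))   # 0.75
print("P(1) =", round(abs(qubit[1]) ** 2, 2))   # 0.25

# Two entangled qubits (a Bell state): start in |00>, apply a Hadamard gate to
# the first qubit, then a controlled-NOT between the two.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
state = np.array([1.0, 0.0, 0.0, 0.0])          # |00>
state = CNOT @ np.kron(H, np.eye(2)) @ state

# Only 00 and 11 remain possible: measuring one qubit instantly tells you the
# other, the correlation Gil calls entanglement.
for label, amp in zip(["00", "01", "10", "11"], state):
    print(f"P({label}) = {abs(amp) ** 2:.2f}")
```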

Qubits organized in a grid lattice

 


Colder than outer space

But before you put a quantum laptop on your gift list, consider that quantum computing has some severe physical requirements and won’t be available in your home anytime soon.

“Turns out,” says Gil, “it is very, very difficult to make this entangled space, to make these qubits have this superposition and compute together. They are very subject to ‘noise,’ thermal variations, etc. So we have to cool these qubits to very close to absolute zero (about -459 degrees Fahrenheit). The experimental systems that we’re building bring the devices to a state that is colder than outer space itself.”
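To put “colder than outer space” in numbers, the short sketch below converts a few reference temperatures from kelvins to degrees Fahrenheit. The 0.015 K operating point is a typical dilution-refrigerator figure used here for illustration, not a value quoted by Gil.

```python
def kelvin_to_fahrenheit(k):
    """Convert kelvins to degrees Fahrenheit."""
    return k * 9 / 5 - 459.67

references = {
    "absolute zero": 0.0,
    "typical qubit operating temperature (illustrative)": 0.015,
    "cosmic microwave background (outer space)": 2.7,
    "room temperature": 293.0,
}
for name, kelvin in references.items():
    print(f"{name}: {kelvin} K is about {kelvin_to_fahrenheit(kelvin):.2f} °F")
```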

And if that weren’t enough of a challenge, researchers have to find ways to communicate with the machine. “We send input in the form of microwave pulses that communicate the coding language to the qubits. The qubits do their function; they do the calculations and in the end we extract the signals from the refrigerator, and a conventional computer then interprets the results of what the quantum computer has done.”
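That hybrid workflow (classical instructions in, qubit computation, classical results back out) is roughly what a quantum programming toolkit exposes. Below is a minimal sketch using IBM's open-source Qiskit library, with its local Aer simulator standing in for the refrigerated hardware; the two-qubit circuit is just an illustrative example, not a program Gil describes.

```python
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

# Build the "program": a two-qubit circuit that creates an entangled pair.
circuit = QuantumCircuit(2, 2)
circuit.h(0)                      # put qubit 0 into superposition
circuit.cx(0, 1)                  # entangle qubit 1 with qubit 0
circuit.measure([0, 1], [0, 1])   # read both qubits out as classical bits

# On real hardware these instructions become microwave pulses sent into the
# refrigerator; here a local simulator stands in for the quantum processor.
job = AerSimulator().run(circuit, shots=1024)

# A conventional computer interprets the extracted signals as bit-string counts.
print(job.result().get_counts())  # e.g. {'00': ~512, '11': ~512}
```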

All of this seems like a lot of trouble to do something relatively simple like keep track of a payroll. But quantum computing holds the promise of huge increases in computation power applied to the right uses.

“For certain classes of problems — an example of this is cryptography — you can solve problems exponentially faster than even the fastest computers in the world today. Something that would take decades or years for the most powerful computer to solve, a quantum computer could solve in minutes. So it’s not universal. It doesn’t solve all problems, but there are patterns of problems where exponential speed-up is possible.”
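The gap Gil is pointing to is the difference between exponential and polynomial growth. The toy comparison below shows how quickly that gap opens up; the two formulas are illustrative growth rates only, not the actual cost model of any particular classical or quantum algorithm.

```python
# Illustrative growth rates only, not a real algorithmic cost model.
# Exponential: work doubles with every extra bit of problem size.
# Polynomial: work grows as the cube of the problem size.
for n in (32, 64, 128, 256):
    exponential = 2 ** n
    polynomial = n ** 3
    print(f"n={n:3d}  exponential steps: {exponential:.2e}  "
          f"polynomial steps: {polynomial:,}")
```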

Get inspired by biology

Another approach the team is exploring centers on computers that take inspiration from how biology performs computation.

“Today’s computers have been designed to be pretty much like a calculator. Very exact,” explains Gil. “We require a high degree of precision because we’re using them for accounting and all sorts of things that need exact answers. But if you look at many of the problems that brains solve, that exactness is not a requirement.”

Take, for example, comparing two apples. A human with normal vision can easily decide which apple is redder.

“You don’t have to be exact, you just have to make comparisons. You have to be able to recognize an object as ‘good enough’ to figure out that decision,” he says. “So ‘good enough’ as a measure allows you to think things like, ‘Could I be less exact when I perform this calculation? And could I use that freedom from exactness to implement some of these algorithms much, much more efficiently?’ And the answer looks like ‘very much so’.”

And that, says Gil, opens up entirely new approaches to computer systems that can learn and discern much more quickly and more economically than today’s machines.
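That trade-off is easy to demonstrate. The sketch below decides which of two synthetic “apples” is redder twice, once with full floating-point precision and once after crushing every pixel down to an 8-bit integer, and reaches the same answer both ways; the pixel data is randomly generated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic "apples": arrays of per-pixel redness values in [0, 1].
apple_a = rng.uniform(0.55, 0.85, size=10_000)
apple_b = rng.uniform(0.45, 0.75, size=10_000)

# Exact route: full float64 averages.
exact = "A" if apple_a.mean() > apple_b.mean() else "B"

# "Good enough" route: quantize every pixel to an 8-bit integer first,
# discarding most of the precision before averaging.
rough_a = np.round(apple_a * 255).astype(np.uint8).mean()
rough_b = np.round(apple_b * 255).astype(np.uint8).mean()
rough = "A" if rough_a > rough_b else "B"

print("exact answer:", exact, "  low-precision answer:", rough)  # same decision
```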


 


The goal: universal understanding

What does that mean in the world outside the lab? “In the future, all humans should be understood by our computers,” predicts Gil. “And what I mean by that is all languages, all dialects, all accents. People should be able to speak to their systems and they should be understood. It is a sensible goal, but today it’s a somewhat intractable problem. Because to train machine-learning networks for all languages, all dialects, all accents, with a very low error rate, is just computationally prohibitive. We just couldn’t do it. But if we can make these exponential increases in performance and capability of these neuromorphic architectures to address these problems, that vision could become a reality.”

“We’ve seen proof that technology could do it, but it’s not universal. There’s a quote about what the future may look like. It goes something like: ‘In many ways the future is already here; it’s just unevenly distributed.’ Part of our task is to sow the seeds of science and engineering that we know are possible and to break down the barriers that inhibit those advances from being universally available.”

Totally tubular

In addition to exploring new ways of computing, Gil and his team are experimenting with different materials, specifically carbon.

“Carbon is a very interesting material,” says Gil. “Carbon nanotubes are among the very few, if not the only, materials that have proven in laboratories to have much better performance than the best silicon devices. That makes them a potential candidate for replacing transistors as we push down below the 5 nanometer (5nm) node.”

Note the word “potential.” There are several challenges to overcome before nanotubes, which are 10,000 times thinner than a human hair, can be manufactured efficiently.

“When you grow carbon nanotubes they can be metallic or semiconductor. We do not want metallic ones because they short-circuit the transistor. So we want semiconductor ones. Historically it’s been a challenge to separate them because they grow together, but we’ve made major progress on that. We’re at 99.99 percent purity now. We still need two more nines after the decimal point.”
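A little arithmetic shows why those extra nines matter. The sketch below assumes, purely for illustration, one nanotube per transistor and a billion transistors on a chip, and counts how many metallic, short-circuiting tubes each purity level would leave behind.

```python
# Illustrative assumption: one nanotube per transistor, one billion transistors per chip.
transistors_per_chip = 1_000_000_000

for purity in (0.99, 0.9999, 0.999999):
    metallic_fraction = 1 - purity
    bad_tubes = metallic_fraction * transistors_per_chip
    print(f"purity {purity:.6%}: ~{bad_tubes:,.0f} short-circuiting tubes per chip")
```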

There’s also the matter of placing the nanotube transistors on a chip. It’s a very different process from that used to make traditional silicon computer chips. Gil figuratively compares the two this way: “With silicon, you have a piece of marble, you chip away and the statue is left behind. In carbon it’s totally different. In carbon you have dust, and somehow from the dust you’ve got to assemble a statue.”

And finally, you have to be able to make electrical contact with the tubes so they can perform their functions. As transistors get smaller, so do their metal electrical contacts. And smaller contacts have higher, often impractical, resistance to electricity. IBM researchers recently invented a process similar to microscopic welding that chemically binds the metal atoms to the carbon atoms at the ends of the nanotubes. That means the contacts can be smaller than 10nm without degrading the performance of the tubes.
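The scaling problem those welded contacts address follows from a simple relationship: for a fixed specific contact resistivity, resistance rises as the contact area shrinks. The sketch below uses a placeholder resistivity value, not measured device data, just to show the trend.

```python
# Contact resistance roughly scales as R = rho_c / A for a fixed specific
# contact resistivity rho_c. Values below are illustrative, not measured data.
rho_c = 1e-8  # ohm * cm^2, placeholder specific contact resistivity

for size_nm in (40, 20, 10):
    area_cm2 = (size_nm * 1e-7) ** 2   # square contact, nm converted to cm
    resistance = rho_c / area_cm2      # ohms
    print(f"{size_nm} nm contact: ~{resistance:,.0f} ohms")
```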

“This brings us a step closer to the goal of practical carbon nanotube technology,” says Gil.
