
The Bell Prize: Unlocking the Secrets of the Earth’s Crust


We’re on the cusp of a new era for science and computer science. We can measure natural phenomena more precisely than ever thanks to new sensors and the Internet of Things. We can handle immense amounts of data thanks to new data-management technologies. And we’re producing ever more sophisticated algorithms and computer simulations. Put it all together, and we are on the path to understanding nature more deeply than was ever possible before.

A group of scientists from the University of Texas at Austin, IBM Research, New York University and the California Institute of Technology that I’m a part of moved the needle of progress forward with our realistic simulations of the dynamics of the earth’s mantle and crust. For our work, yesterday we received the Gordon Bell Prize, one of the top honors in the computer science field.

The group, spearheaded by Omar Ghattas of the University of Texas, created a technology tool that geologists and seismologists can use to improve their understanding of the forces behind the origin of earthquakes and volcanoes. More broadly, this collection of advanced algorithms and know-how will enable scientists in multiple fields to make new discoveries. It will also make it possible for industries to greatly reduce the time it takes to invent, test, and bring new products to market, including, for example, new materials and energy sources.

Our focus was on mantle convection, the physical process responsible for the thermal and geological evolution of the planet, including plate tectonics—the movement of pieces of the earth’s crust. Here’s a paper about our work.

The project combines advances in mathematics, algorithms, and computer science, demonstrated by scaling simulations to run super-efficiently on two of the most powerful high-performance computers in the world. Our initial runs were made on a computer at the Juelich Supercomputing Centre in Germany, and then we further scaled up on Sequoia, a machine made by IBM that’s owned by the U.S. Lawrence Livermore National Laboratory. Sequoia possesses 1.6 million processor cores capable of a theoretical peak performance of 20.1 petaflops.

This project got started in an interesting way. Two years ago, a group of IBM and university scientists won the Gordon Bell Prize for our work simulating the behavior of clouds of bursting bubbles. Afterwards, my colleague Costas Bekas and I spoke to Omar, and he suggested that we combine forces to tackle the earth mantle convection problem he was working on. So we did. Once again, as with the bubble project, we involved scientists with a wide range of expertise from multiple institutions. A single organization could not have accomplished this alone; we needed a wide variety of skills and a spirit of open collaboration.

A key element of our system came from my IBM Research colleagues Cristiano Malossi, Costas Bekas, Yves Ineichen and Peter Staar, who have expertise in mathematics, physics, computer science and fluid dynamics.

A view of the Marianas fault from the Earth mantle and crust model.

Ours was an extremely difficult problem. We had to deal with an immense amount of data about conditions in different areas of the mantle and crust, including temperature, position, movement and viscosity. The result was a problem with 600 billion nonlinear equations, which can only be solved on the largest supercomputers. In addition, we had to adapt the resolution at which we were measuring and simulating conditions depending on the geometry of different areas of the mantle and crust. To handle all of this complexity and produce results at the precision we wanted, we had to approach the problem differently and rethink most of the algorithms.
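
To give a flavor of the adaptive-resolution idea, here is a hypothetical sketch in Python (not the project’s actual mesh-refinement method). It subdivides a 1-D interval only where a field such as viscosity changes sharply, the way real simulations concentrate resolution near plate boundaries:

```python
import math

def refine(cells, field, tol=0.5, max_depth=6):
    """Split an interval in half whenever `field` jumps across it by more
    than `tol`; recurse until the jump is small or max_depth is reached."""
    out = []
    for a, b, depth in cells:
        if depth < max_depth and abs(field(b) - field(a)) > tol:
            mid = 0.5 * (a + b)
            out.extend(refine([(a, mid, depth + 1), (mid, b, depth + 1)],
                              field, tol, max_depth))
        else:
            out.append((a, b, depth))
    return out

# Toy viscosity profile with a sharp transition at x = 0.5, standing in
# for a plate boundary.
viscosity = lambda x: math.tanh(50.0 * (x - 0.5))

mesh = refine([(0.0, 1.0, 0)], viscosity)
# Small cells cluster around x = 0.5; the rest of the domain stays coarse.
print(len(mesh), "cells; smallest width:", min(b - a for a, b, _ in mesh))
```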

Rather than using simple explicit schemes, which are easier to scale on large supercomputers but sacrifice accuracy, the team developed an innovative implicit solver, which allowed us to simulate mantle convection with an unprecedented level of accuracy. Indeed, that quality of resolution is essential when you’re studying the position and movement of the earth’s tectonic plates.
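
To make the explicit-versus-implicit trade-off concrete, here is a minimal Python sketch on a toy 1-D diffusion problem. It is purely illustrative, not the project’s solver: the explicit step is cheap per step but stable only for tiny time steps, while the implicit step pays for a linear solve and in exchange stays stable at any step size.

```python
import numpy as np

# Toy problem: du/dt = kappa * d2u/dx2 on (0, 1) with zero boundaries.
n = 100
dx = 1.0 / (n + 1)
kappa = 1.0
x = np.linspace(dx, 1.0 - dx, n)
u0 = np.sin(np.pi * x)  # initial "temperature" profile

# Second-difference (Laplacian) matrix with Dirichlet boundaries.
L = (np.diag(np.full(n - 1, 1.0), -1)
     - 2.0 * np.eye(n)
     + np.diag(np.full(n - 1, 1.0), 1)) / dx**2

def step_explicit(u, dt):
    """Forward Euler: cheap, but stable only if dt <= dx^2 / (2 * kappa)."""
    return u + dt * kappa * (L @ u)

def step_implicit(u, dt):
    """Backward Euler: solve (I - dt * kappa * L) u_new = u.
    Each step costs a linear solve, but any dt is stable."""
    return np.linalg.solve(np.eye(n) - dt * kappa * L, u)

T = 0.1  # simulate to time T
# The explicit scheme needs roughly T / (dx^2 / 2) ~ 2,000 tiny steps
# to stay stable, while the implicit scheme reaches T in 10 large steps.
u = u0.copy()
for _ in range(10):
    u = step_implicit(u, T / 10)
```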

Understanding mantle convection has been named one of the “10 Grand Research Questions in Earth Sciences” by the U.S. National Academies. We hope our invention provides earth scientists with a valuable new tool to help them pursue this grand quest.

Comments


Dario D'angelo

Fantastic work. Can the same approach be used to simulate the behavior of rapidly rotating neutron stars/pulsars (MSP) and maybe get closer to a model that would explain glitch and anti-glitch events? Is there any implication for the way we conceive of Magnetohydrodynamics (MHD)?


Cristiano Malossi

@Dario D’angelo, thanks for your question. The model was designed for mantle convection, so as it stands it cannot be used for pulsars. However, it may be possible to use it as a starting point for developing a fluid model to simulate them (probably with a lot of assumptions). The solver and algorithms might work well for the new model, too. If you are an expert in this area and have ideas, we would welcome the chance to explore this further.
