January 18, 2021 | Written by: Wang Zhou and Levente Klein
Categorized: AI | Physics | Science
Cats aren’t dogs. Even modern AI knows that.
But exactly how an AI distinguishes cat images from dog images is not clear. Standard neural networks are akin to a black box: even the people who program them often have little idea how they reach their decisions.
It’s not as critical when it’s just a picture of a cute puppy or a kitten. But it becomes important when an AI tries to interpret, say, a sequence of weather images showing a hurricane forming and propagating across the Atlantic. In this case, the AI might predict hurricanes and winds that have never been observed or measured, or that make no physical sense at all.
We’d like to change that.
Our team has developed Physics-Informed Neural Network (PINN) models, in which physics is integrated into the neural network’s learning process, dramatically boosting the AI’s ability to produce accurate results. Described in our recent paper presented at NeurIPS 2020, PINN models are built to respect the laws of physics, which constrain the results and keep the output realistic.
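To make the idea concrete, here is a rough sketch of a physics-informed loss (our own illustration in plain NumPy, not the code from the paper). The usual data-misfit term is combined with a penalty on the residual of a known physical law; as an assumed toy law we use the decay equation du/dt = -k·u. A candidate solution that fits the sparse observations but violates the physics between them receives a much larger loss:

```python
import numpy as np

k = 0.5                        # assumed decay constant for the toy law
t = np.linspace(0.0, 4.0, 81)
u_true = np.exp(-k * t)        # exact solution of du/dt = -k*u

def pinn_loss(u_pred, t, data_idx, u_data, lam=1.0):
    """Data misfit + physics residual: the core idea behind a PINN loss."""
    data_loss = np.mean((u_pred[data_idx] - u_data) ** 2)
    dudt = np.gradient(u_pred, t)           # finite-difference derivative
    physics_residual = dudt + k * u_pred    # ~0 wherever the law holds
    return data_loss + lam * np.mean(physics_residual ** 2)

# Sparse "observations" at a few time points
idx = np.array([0, 20, 40, 60, 80])
obs = u_true[idx]

# A physical candidate vs. one that matches the same sparse data points
# exactly but oscillates unphysically between them.
u_physical = u_true
u_unphysical = np.interp(t, t[idx], obs) + 0.3 * np.sin(8 * t)
u_unphysical[idx] = obs        # still fits every observation exactly

print(pinn_loss(u_physical, t, idx, obs))    # tiny: obeys the physics
print(pinn_loss(u_unphysical, t, idx, obs))  # large: physics term penalizes it
```

Note that the data term alone cannot distinguish the two candidates; only the physics residual does, which is exactly the constraint a PINN adds.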
After all, interpreting a hurricane’s movement relies on well-understood physics: we know how hurricanes slow down or accelerate, how water evaporation affects them, how ocean currents move, and how much the Sun is warming the Earth. Knowing the physics, it is possible to predict how the seed of a hurricane forms, how the hurricane moves across the ocean, and whether it will hit land or just fizzle out.
Our work shows that by integrating physics, it’s possible to improve the robustness of neural network models and take the results ‘out’ of the black box — making them explainable. This explainability stems from connecting the output of the model to the input variables by following the rules of physics.
For example, we can train models to predict the size of a lake from aerial observations, knowing that a lake shrinks under extended drought. Similarly, we know that warming oceans lead to more evaporation, triggering bigger hurricanes. In our paper we show how to integrate such physics knowledge directly into the neural network’s learning.
Saying goodbye to the black box
Many AI users are baffled by programmers’ inability to explain what’s happening in their AI’s digital brain. “Yes,” the users say, “your models are nice, but we don’t understand how and why you get these results.” Even when they see that a model works, they wonder what happens if it encounters data it was not trained on. How can we trust these models in critical situations?
That’s a very legitimate question, especially if these models are to be integrated into complex systems like mass transportation or a nuclear reactor operation, where complicated, unforeseen situations might occur at any time — and could result in a loss of lives.
Instead of relying purely on statistics, as traditional neural networks do (comparing new inputs to similar examples seen before), our models integrate the physics directly. Physics constrains the problem: if the neural network produces an unrealistic result, we flag it so that the system avoids such mistakes in the future.
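A minimal sketch of such a flag (our illustrative construction, not the production system) is a physics sanity check on a predicted field. For a smoke plume, two obvious checks are that concentrations can never be negative and that a passive plume should not gain total mass moving downwind from its source:

```python
import numpy as np

def flag_unphysical(concentration, tolerance=1e-6):
    """Return True if a predicted plume concentration field breaks
    basic physics. Rows are downwind slices; the checks below are
    illustrative, not exhaustive."""
    if np.any(concentration < -tolerance):
        return True                          # negative concentration is impossible
    totals = concentration.sum(axis=1)       # total mass per downwind slice
    if np.any(np.diff(totals) > tolerance):  # mass growing away from the source
        return True
    return False

# A plausible decaying plume: each downwind slice holds less total mass.
ok = np.array([[1.0, 0.5], [0.6, 0.3], [0.3, 0.1]])
# An unphysical prediction: mass grows downwind and one value is negative.
bad = np.array([[1.0, 0.5], [1.2, 0.9], [-0.2, 0.4]])

print(flag_unphysical(ok))   # False
print(flag_unphysical(bad))  # True
```

In a training loop, such a flag could trigger an extra penalty or simply mark the output as untrustworthy before it reaches a downstream system.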
Schematic of how physics is taught to, and guides the learning of, the neural network.
In our work, we integrated a simple physics model into a neural network: one describing a plume, which can be visualized as smoke carried by a breeze. We know where the source is, and we know that the smoke’s concentration decays as it moves away from the source, making the smoke harder and harder to detect. This process is simple enough to model, and all of its components are well understood.
First, we trained a standard neural network on the modeled data without any physics constraints. Then we did the same, but imposed the advection-diffusion equation on the neural network as a physics constraint. In the second case, the results were much more accurate: the plume concentration map reconstructed from coarse data was far more precise because it obeyed the governing physics.
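The constraint in an experiment like this can be sketched as the finite-difference residual of the 1-D advection-diffusion equation, ∂c/∂t + u ∂c/∂x = D ∂²c/∂x². The wind speed, diffusivity, and Gaussian test plume below are our own assumptions for illustration, not the paper’s setup; the point is that a field obeying the equation has a residual near zero, while a distorted field is heavily penalized:

```python
import numpy as np

u, D = 1.0, 0.1                      # assumed wind speed and diffusivity
x = np.linspace(-2.0, 6.0, 161)
t = np.linspace(0.5, 2.5, 41)
X, T = np.meshgrid(x, t, indexing="ij")

def gaussian_plume(X, T):
    """Exact advection-diffusion solution for a unit pulse released at x=0, t=0."""
    return np.exp(-(X - u * T) ** 2 / (4 * D * T)) / np.sqrt(4 * np.pi * D * T)

def residual(c, x, t):
    """Finite-difference residual of dc/dt + u*dc/dx - D*d2c/dx2."""
    dc_dt = np.gradient(c, t, axis=1)
    dc_dx = np.gradient(c, x, axis=0)
    d2c_dx2 = np.gradient(dc_dx, x, axis=0)
    return dc_dt + u * dc_dx - D * d2c_dx2

c_good = gaussian_plume(X, T)                               # obeys the physics
c_bad = gaussian_plume(X, T) * (1.0 + 0.2 * np.sin(5 * X))  # distorted plume

print(np.mean(residual(c_good, x, t) ** 2))  # small: finite-difference noise only
print(np.mean(residual(c_bad, x, t) ** 2))   # much larger
```

Averaging this squared residual over the domain and adding it to the data loss is one common way such a constraint enters training.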
Looking into the future: climate change modeling
We are now looking to apply the physics-constrained neural network to more complicated data such as that of the changing climate. We expect these network models to be functional over the next couple of years.
To do so, our team is studying global satellite-acquired datasets of different emission sources of methane and nitrogen dioxide. Methane is mainly released by oil and gas exploration, while nitrogen dioxide is emitted by cars and factories. Both gases contribute greatly to climate change. However, the satellite data has coarse resolution and is acquired only once a week, so it’s tricky to identify individual polluters and how much they emit.
To boost our understanding of the data, we are applying our physics-informed neural network method to sharpen the resolution of the satellite images. This work can help us identify pollution sources by integrating knowledge of how pollution is dispersed in the atmosphere and how the weather dissipates it. The results could help improve climate models and enforce regulations to reduce emissions.
We are also developing a general platform for cross-industry applications. We are using data from PAIRS (Physical Analytics Integrated Data Repository & Services), a geospatial platform that integrates petabytes of Earth satellite observations and weather and climate data. This information helps companies, for instance, to predict where to trim trees that may threaten power lines or where to improve irrigation in agriculture. By applying our physics-informed neural networks to these datasets, we are extracting insights for the oil and gas industry and for quantifying the carbon footprint of supply chains.
Slowly but surely, neural networks are learning physics — allowing us to peek inside their ‘black box’ artificial brains.