Our team has developed physics-informed neural network (PINN) models, in which physics is integrated into the neural network's learning process – dramatically boosting the AI's ability to produce accurate results. As described in our recent paper, PINN models are constrained to respect physical laws, which bound the results and keep the output realistic.
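The core idea can be sketched with a toy example: alongside any data-fitting term, the training loss includes a "physics residual" that penalizes candidate solutions violating a governing equation. The snippet below is a minimal illustration of that loss construction (not the model from the paper), assuming a simple ODE u'(t) + u(t) = 0 with u(0) = 1, a one-parameter candidate family u(t) = exp(a·t), and a coarse grid search in place of gradient-based training.

```python
import numpy as np

# Collocation points where the physics constraint is enforced
t = np.linspace(0.0, 1.0, 50)

def physics_loss(a):
    """Physics-informed loss for the candidate solution u(t) = exp(a*t)."""
    u = np.exp(a * t)                       # candidate solution
    du_dt = np.gradient(u, t)               # finite-difference derivative
    residual = du_dt + u                    # ODE residual: zero if u' + u = 0 holds
    boundary = (np.exp(a * 0.0) - 1.0) ** 2 # initial-condition penalty, u(0) = 1
    return np.mean(residual ** 2) + boundary

# Grid search over the single parameter `a` (a stand-in for network training)
candidates = np.linspace(-2.0, 0.0, 201)
best_a = min(candidates, key=physics_loss)
# The loss is minimized near a = -1, recovering the exact solution u(t) = exp(-t)
```

In a real PINN the candidate family is a neural network, the derivatives come from automatic differentiation, and the same residual-plus-boundary loss is minimized by stochastic gradient descent; the physics term steers the network toward solutions that obey the governing equations even where data is sparse.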
An IBM-Stanford team’s solution to a longstanding problem could greatly boost AI.
Today, we are announcing a challenge for the computer vision community to develop robust models for object recognition, demonstrating accurate predictions on ObjectNet images.
The competition, called NLC2CMD for ‘Natural Language to Command,’ ran as part of the NeurIPS 2020 program until December – and this Saturday, we’ll finally see what the winners have come up with.
Deep learning may have revolutionized AI – boosting progress in computer vision and natural language processing and impacting nearly every industry. But even deep learning isn’t immune to hacking.
Enter the microcontrollers of the future – the simplest and smallest of computers. They run on batteries for months or years and control the functions of the systems embedded in our home appliances and other electronics.
Our latest breakthrough in AI training, detailed in a paper presented at this year’s NeurIPS conference, is expected to dramatically cut AI training time and cost – so much so, in fact, that it could help erase the blurry border between cloud and edge, offering a key technological upgrade for hybrid cloud infrastructures.