Edited by: Anne Nicholson, Brand Strategy Lead, IBM A/NZ

Journalist Art Buchwald said “the best things in life are not things, they are moments” – moments of running, walking, splashing, resting, laughing, crying, jumping. Moments in the world can unfold at time scales from a second to minutes, occur in different places, and involve people, animals and objects, as well as natural phenomena such as rain, wind, or just silence.

Artificial intelligence has been able to capture and interpret moments in still images and language (both written and spoken), but interpreting videos remains a challenge. That’s something the MIT-IBM Watson AI Lab hopes to change with the launch of the Moments in Time Dataset, a collection of one million three-second video clips designed to help AI models recognise actions.

The data set is publicly available for non-commercial and educational use. “Our hope is that it will foster new research, addressing the challenges in video understanding and help to further unlock the promise of AI,” says Dan Gutfreund, Video Analytics Scientist, IBM Research AI.
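For readers who download the data set, the sketch below shows one way to enumerate clips and their action labels once the archive is extracted locally. The folder layout (one sub-directory per action category, with clips stored as .mp4 files) is an assumption for illustration only, not the official loader or directory structure.

```python
from pathlib import Path

# Assumed layout after extraction (hypothetical, for illustration):
#   moments_in_time/training/<action_label>/<clip_id>.mp4
ROOT = Path("moments_in_time/training")

def iter_clips(root: Path):
    """Yield (action_label, clip_path) pairs for every three-second clip."""
    for label_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        for clip in sorted(label_dir.glob("*.mp4")):
            yield label_dir.name, clip

if __name__ == "__main__":
    for label, clip in iter_clips(ROOT):
        print(f"{label}: {clip.name}")
```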

A lot can happen in a moment of time. Consider a woman walking her dog in the park on a sunny afternoon. While the human brain can quickly recognise what is happening, the number and complexity of actions unfolding with each step and tail wag make it difficult for a computer to process the scene as quickly. “For decades, researchers in the field of computer vision have been attempting to develop visual understanding models approaching human levels. Only in the last few years, due to breakthroughs in deep learning, have we started to see models that now reach human performance (although they are restricted to a handful of tasks and on certain datasets),” says Gutfreund.

For the past year, Dan and his team have been working closely with Dr. Aude Oliva and her team at MIT to tackle the specific challenge of action recognition. This has been an “important first step in helping computers understand activities which can ultimately be used to describe complex events (e.g. changing a tire, saving a goal, teaching a yoga pose)”.
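As a rough illustration of what action recognition on a short clip involves, the sketch below runs an off-the-shelf 3D convolutional network from torchvision (pretrained on the Kinetics-400 action data set, not on Moments in Time) over a three-second video file. It is a minimal example of the task, not the lab’s own model, and the file path is a placeholder.

```python
import torch
from torchvision.io import read_video
from torchvision.models.video import r3d_18, R3D_18_Weights

# A Kinetics-400 pretrained 3D ResNet as an off-the-shelf stand-in
# (not the MIT-IBM model), together with its preprocessing transform.
weights = R3D_18_Weights.KINETICS400_V1
model = r3d_18(weights=weights).eval()
preprocess = weights.transforms()

# Read a short clip (placeholder path); frames come back as (T, C, H, W) uint8.
frames, _, _ = read_video("example_clip.mp4", pts_unit="sec", output_format="TCHW")

# The transform resizes, crops and normalises, returning a (C, T, H, W) tensor.
batch = preprocess(frames).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    scores = model(batch).softmax(dim=1)

top = scores[0].argmax().item()
print(f"Predicted action: {weights.meta['categories'][top]}")
```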

“We predict that the number of applications will grow exponentially,” says Dan Gutfreund. The expectation is that the research could be applied to assisting the visually impaired, elderly care, automotive, media and entertainment, and many other fields. To learn more, you can visit Dan’s blog here or explore the Moments in Time videos here. To learn more about our partnership with MIT and the MIT-IBM Watson AI Lab, you can watch the video below.
