Edited by: Anne Nicholson, Brand Strategy Lead, IBM A/NZ
The journalist Art Buchwald said, “the best things in life are not things, they are moments” – of running, walking, splashing, resting, laughing, crying, jumping. Moments in the world can unfold at time scales from a second to minutes, occur in different places, and involve people, animals, and objects, as well as natural phenomena like rain, wind, or just silence.
Artificial intelligence has been able to capture and interpret moments in still images and language (both written and spoken), but interpreting videos remains a challenge. That’s something the MIT-IBM Watson AI Lab hopes to change with the launch of the Moments in Time Dataset, a collection of one million three-second video clips to help AI models identify actions.
The dataset is publicly available for non-commercial and educational use. “Our hope is that it will foster new research, addressing the challenges in video understanding and help to further unlock the promise of AI,” says Dan Gutfreund, Video Analytics Scientist, IBM Research AI.
A lot can happen in a moment of time. Consider a woman walking her dog in the park on a sunny afternoon. The human brain can quickly recognise what is happening, but the number and complexity of actions in each step and tail wag make it difficult for a computer to process the scene as quickly. “For decades, researchers in the field of computer vision have been attempting to develop visual understanding models approaching human levels. Only in the last few years, due to breakthroughs in deep learning, have we started to see models that reach human performance (although they are restricted to a handful of tasks on certain datasets).”
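Models trained on short clips like these typically don’t look at every frame; a common first step is to sample a fixed number of frames spread evenly across the clip before classifying the action. The sketch below illustrates that uniform temporal sampling for a three-second clip (the function name and frame rate are illustrative assumptions, not part of the dataset’s tooling):

```python
def sample_frame_indices(num_frames_in_clip, num_samples):
    """Pick `num_samples` frame indices spread uniformly across a clip.

    Each index is taken from the midpoint of one of `num_samples`
    equal-length segments, so the samples cover the whole clip.
    """
    if num_samples >= num_frames_in_clip:
        # Clip is shorter than the budget: just use every frame.
        return list(range(num_frames_in_clip))
    step = num_frames_in_clip / num_samples
    return [int(step * (i + 0.5)) for i in range(num_samples)]

# A three-second clip at an assumed 30 fps has 90 frames; sample 8 of them.
print(sample_frame_indices(90, 8))  # → [5, 16, 28, 39, 50, 61, 73, 84]
```

The sampled frames would then be stacked and fed to a video classifier; sampling keeps the input size fixed regardless of the clip’s frame rate.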
For the past year, Dan and his team have been working closely with Dr. Aude Oliva and her team from MIT, tackling the specific challenge of action recognition. This has been an “important first step in helping computers understand activities, which can ultimately be used to describe complex events (e.g. changing a tire, saving a goal, teaching a yoga pose)”.
“We predict that the number of applications will grow exponentially,” says Dan Gutfreund. The expectation is that the research could be applied in areas such as assisting the visually impaired, elderly care, automotive, and media and entertainment, among many others. To learn more, you can visit Dan’s blog here or explore the Moments in Time videos here. To learn more about our partnership with MIT and the MIT-IBM Watson AI Lab, you can watch the video below.