Turning visual recognition into video recognition
Ryan VanAlstine, CTO, and Robert Rios, Software Developer, of BlueChasm welcomed the audience. Who are BlueChasm? They’re a team of Austin-based developers who build platforms and APIs that integrate with existing infrastructure. At the Watson Developers Conference they demonstrated how Watson can identify what’s in video footage.
Robert kicked off with a demonstration using Node.js and ffmpeg to process images. The code pulls frames out of video footage at a frame rate that you specify and saves them to a folder. A ‘glob’ function then picks the images up from the folder and sends them to Watson’s image classifier.
Recognising what’s in a video
The tags that Watson’s image classifier returns tell you exactly what’s in each frame of video. It then summarises all of the tags to give you an idea of the spread of what’s in the footage. Robert’s example was a train running down some tracks, and Watson was able to pick out everything in the video, from the train and tracks through to the poles and buildings in the background.
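The summary step can be sketched as a simple tally of how often each tag appears across the whole clip. The input shape below is an assumption for illustration, not Watson’s exact response format:

```javascript
// Given the per-frame tag lists returned by the classifier, count how
// often each tag appears across the whole video. The dominant tags give
// you the "spread" of what the footage contains.
function summarizeTags(frameTags) {
  const counts = {};
  for (const tags of frameTags) {
    for (const tag of tags) {
      counts[tag] = (counts[tag] || 0) + 1;
    }
  }
  return counts;
}

// e.g. three frames from the train clip:
const summary = summarizeTags([
  ['train', 'tracks'],
  ['train', 'pole'],
  ['train', 'building']
]);
// "train" appears in every frame, so it dominates the summary
```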
Any processor can run the query, right down to the Raspberry Pi, which runs a little slower because of the Pi’s processor speed. But it shows that even very small pieces of technology can recognise what’s in a video.
When would you use this?
Think about home security: being able to understand what your security camera is looking at could be really useful. Should your home be empty, is there something in the video that suggests someone is there who shouldn’t be? If a holiday home springs a leak, using the capability to ‘see’ the water would be very useful, even more so when it’s teamed with an alert letting you know that Watson has ‘seen’ something in the video that you should be aware of.
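The alerting idea amounts to checking each frame’s tags against a watch list. A hypothetical sketch (the rule and names are assumptions, not part of the demo):

```javascript
// Return true if any tag from a frame matches the watch list, e.g. a
// 'person' in a supposedly empty home, or 'water' from a leak.
function shouldAlert(frameTags, watchList) {
  return frameTags.some(tag => watchList.includes(tag));
}

const watchList = ['person', 'water'];
shouldAlert(['sofa', 'water'], watchList); // water spotted: raise an alert
shouldAlert(['sofa', 'lamp'], watchList);  // nothing to report
```

In practice you would run this over the tag summary of recent footage and send a notification when it fires.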