IBM Research-China

IBM Research and Tencent Delight Basketball Fans with AI-Based Highlight Reel


At 11:45 p.m. on June 8 in the Quicken Loans Arena in Cleveland, Ohio, the Golden State Warriors finished a sweep of the Cleveland Cavaliers with a 108-85 blowout win. One of the most dominant players in the NBA, the Warriors’ Kevin Durant was named the 2018 NBA Finals MVP, his second Finals MVP award.

At the same time, on the other side of the Pacific Ocean, in a high-tech park in Beijing known as Asia’s “Silicon Valley”, the AI Vision team at IBM Research-China was hard at work applying AI technology to the game. Over the course of the three-hour game, our AI system analyzed and categorized each frame, live and in real time. The system identified players and tagged their expressions and movements along with key actions (such as shooting, rebounding, blocking, and falling on the floor), the ball’s trajectory, and the position of the basketball net.

Prior to the Finals, our team performed the same analysis on more than 200 hours of game footage featuring Kevin Durant (from the 2007 season to today). While the players were still celebrating a historic win, an editor clicked a button and in approximately 20 seconds, a highlight reel featuring some of his most dazzling performances was created. The first AI-edited basketball highlight reel made its debut via Tencent Sports’ online video streaming platform, where 143 million basketball fans have enjoyed it so far.

AI-based highlight reel for NBA Finals

Preparing the highlight reel

This was the first time AI had been used to edit sports videos in China. IBM partnered with Tencent Sports on the project. In preparation for this work, Tencent Sports organized an online poll inviting fans to select a description (e.g., accurate, powerful, wild, consistent) to match popular NBA stars. Fans selected “accurate” as the best descriptor for Stephen Curry and “powerful” for LeBron James. Based on these descriptions, a human editor wrote a “script” for a 1- to 2-minute highlight reel for each player, dictating what types of clips should appear at what point in time. For example, the script might call for a dunk by LeBron James from 2015 to appear after a clip of him passing the ball. The editor also pre-selected music to overlay the highlight reel.

Once our AI system receives such a script from a human editor, it gets to work. Based on the description voted on by fans, and using the script and soundtrack provided by the editor, our system selects the best clips from each player’s career and stitches them together in the format of a highlight reel.
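The article does not describe the script format in detail, but the idea of an editor-authored script that the system fills in with clips can be sketched as a simple data structure. The field names and example values below are hypothetical illustrations, not IBM's actual schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ScriptSlot:
    """One entry in a human editor's script: what kind of clip goes where."""
    start_sec: float                # position of the clip within the reel
    action: str                     # e.g. "dunk", "pass", "block"
    season: Optional[int] = None    # optional season constraint, e.g. 2015

@dataclass
class HighlightScript:
    """A 1- to 2-minute reel description for the system to fill in."""
    player: str
    theme: str                      # fan-voted descriptor, e.g. "powerful"
    soundtrack: str                 # music pre-selected by the editor
    slots: list = field(default_factory=list)

# Example script: a dunk from 2015 follows a passing clip.
script = HighlightScript(
    player="LeBron James",
    theme="powerful",
    soundtrack="reel_theme.mp3",
    slots=[
        ScriptSlot(start_sec=0.0, action="pass"),
        ScriptSlot(start_sec=6.0, action="dunk", season=2015),
    ],
)
```

Each slot is a constraint the system must satisfy by searching its clip database, so the human editor keeps control of the reel's narrative while the search itself is automated.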

AI at work behind the scenes

Our AI Vision system is multi-modal, meaning it is able to ingest both audio and visual data. The system is able to track and recognize players’ faces and facial expressions (frustration, sadness, happiness), identify objects (court, basketball, baskets, jersey numbers), and categorize actions (slam dunks, jump shots, alley-oops, lay-ups, dives for the ball, cheering).

Using deep learning techniques, the AI system creates a confidence ratio for each clip, reflecting how confident it is that a given action is occurring, such as a dunk or a blocked shot. The AI system then matches the clips in its database to the requirements in the script provided by the human editor. For example, if a “happy” facial expression is registered in a clip where the system is confident that a dunk is occurring, that clip would be considered more exciting and thus a good candidate for inclusion in a highlight reel.
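The matching step described above can be illustrated with a minimal sketch: rank clips by a score that combines the model's action confidence with expression cues, then pick the best clip satisfying a script requirement. The scoring weights, field names, and sample data here are hypothetical; the real system's ranking is not specified in the article.

```python
def excitement_score(clip):
    """Boost a clip's action-confidence score when a happy expression
    is also detected, mirroring the dunk-plus-smile example above."""
    score = clip["action_confidence"]
    if clip.get("expression") == "happy":
        score *= 1.2  # illustrative boost factor
    return score

def select_clip(clips, action, season=None):
    """Return the highest-scoring clip matching a script requirement."""
    candidates = [
        c for c in clips
        if c["action"] == action and (season is None or c["season"] == season)
    ]
    return max(candidates, key=excitement_score, default=None)

# Hypothetical per-clip analysis results produced by the vision model.
clips = [
    {"action": "dunk",  "season": 2015, "action_confidence": 0.91, "expression": "happy"},
    {"action": "dunk",  "season": 2015, "action_confidence": 0.95, "expression": None},
    {"action": "block", "season": 2017, "action_confidence": 0.88, "expression": "happy"},
]

best = select_clip(clips, "dunk", season=2015)
# The happy dunk wins (0.91 * 1.2 = 1.092) over the plain dunk (0.95).
```

Repeating this selection for every slot in the script, then concatenating the chosen clips over the editor's soundtrack, yields the finished reel.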

Basketball analysis is one of the most complicated tasks in computer vision. Fast-moving players, crowded frames, multiple camera angles, and rapid camera movement make basketball video editing extremely challenging. For a human, this would be a labor-intensive process that could take hours, but with AI it can be simplified and streamlined. A human editor still dictates the content, flow, and narrative of the highlight reel but is relieved of the tedious duty of manually searching through hours of footage to find the perfect clip. Our AI Vision system needs only 20 seconds to process a two- or three-hour game into a one-minute final cut, making editing more efficient and freeing human editors to focus on creating and innovating.
