Automated speech-recognition capabilities ingest the audio track of a video to generate captions, using a smart layout algorithm that segments caption cues at natural breaking points for greater readability.
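To illustrate the idea of segmenting cues at natural breaking points, here is a minimal sketch in Python. It is not IBM's algorithm; the 42-character limit and the break-at-sentence-then-space heuristic are assumptions chosen for illustration.

```python
import re

MAX_CUE_CHARS = 42  # a common per-line caption width; assumed, not IBM's value


def segment_cues(transcript: str) -> list[str]:
    """Split a transcript into caption cues, preferring natural breaks
    (sentence boundaries, then word boundaries) over hard character cuts."""
    # First split at sentence-ending punctuation.
    sentences = re.split(r"(?<=[.!?])\s+", transcript.strip())
    cues = []
    for part in sentences:
        # If a sentence is too long for one cue, break at the last
        # word boundary that fits within the limit.
        while len(part) > MAX_CUE_CHARS:
            cut = part[:MAX_CUE_CHARS].rfind(" ")
            if cut <= 0:
                cut = MAX_CUE_CHARS  # no space found: hard cut
            cues.append(part[:cut])
            part = part[cut:].strip()
        if part:
            cues.append(part)
    return cues
```

For example, `segment_cues("Hello world. This is a test.")` yields one cue per sentence, while an unpunctuated run of text is wrapped at word boundaries so no cue exceeds the limit.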
For on-demand content, captions can be reviewed and corrected in a web-based panel, with suggested edits reflected in near real time, and the AI learns from the corrections made to generated captions.
The AI functionality of the solution is self-learning, improving through edits made to generated captions, and can also be taught directly by expanding its vocabulary or uploading sample texts for reference.
Manage a calendar to schedule live sessions, and feed the AI scripts ahead of a broadcast, through services like iNEWS or MediaCentral, to provide greater context and expand its vocabulary.
IBM Watson Captioning supports numerous workflows, from broadcast TV to web distribution, with caption delivery in formats such as CEA-608 or WebVTT.
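As a point of reference for the WebVTT side of that delivery, the sketch below serializes timed cues into a minimal WebVTT document. The cue structure and timings are illustrative, not output from the service.

```python
def to_webvtt(cues: list[tuple[float, float, str]]) -> str:
    """Serialize (start_sec, end_sec, text) cues as a WebVTT document."""

    def ts(seconds: float) -> str:
        # WebVTT timestamps use HH:MM:SS.mmm
        h, rem = divmod(seconds, 3600)
        m, s = divmod(rem, 60)
        return f"{int(h):02d}:{int(m):02d}:{s:06.3f}"

    lines = ["WEBVTT", ""]  # required file header, then a blank line
    for i, (start, end, text) in enumerate(cues, 1):
        lines.append(str(i))                       # optional cue identifier
        lines.append(f"{ts(start)} --> {ts(end)}") # cue timing line
        lines.append(text)                         # cue payload
        lines.append("")                           # blank line ends the cue
    return "\n".join(lines)


print(to_webvtt([(0.0, 3.2, "Welcome to the broadcast.")]))
```

CEA-608, by contrast, is a binary standard embedded in the broadcast signal rather than a text file, which is why the two formats serve different ends of the broadcast-to-web workflow.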
Each instance of IBM Watson Captioning is isolated: any vocabulary or contextual training is exclusive to that instance and is never shared with other instances of the service.