5725-H94 IBM Intelligent Video Analytics V3.0: IBM United States Sales Manual
Revised: April 14, 2020
- IBM Intelligent Video Analytics V3.0.0 (5725-H94)
IBM Intelligent Video Analytics has been helping agencies and organizations worldwide analyze video captured by fixed cameras, such as those used for physical security, closed-circuit television (CCTV), and monitoring traffic, to extract key information from streaming video to uncover insights and patterns within untold hours of camera footage.
The Intelligent Video Analytics base installation includes these functions for live-streaming fixed cameras:
- Real-time alerts to call attention to events
- Rich content-based indexing to find critical images and patterns
- A standards-based open and extensible architecture
Optional capabilities extend the solution to include:
- Ingestion of pre-recorded videos from both fixed cameras and cameras in motion. With ingested video files, analysts can extract critical information and find relevant images faster, which may help accelerate investigations.
- Automated, intelligent redaction.
- Advanced facial recognition, which may improve lead generation and risk assessment. Matching faces on video to an agency's or organization's watch list may help them identify persons of interest and speed investigation.
- Advanced people search capabilities based on cognitive analytics that may provide more accurate results.
Moreover, Intelligent Video Analytics employs a continuous delivery and support model that enables organizations to obtain more rapid access to functional enhancements and fixes.
IBM Intelligent Video Analytics V3.0 introduces new features and functions to enhance already powerful video analytics capabilities that include:
- Full support for IBM POWER9 infrastructure
- Deeper integration with IBM PowerAI Vision modeling software to enable dynamic modeling without the need to engage developers or AI experts
- New object and classification models
- Container-based delivery for Smart Surveillance Engine (SSE) component, enabling all Intelligent Video Analytics components to be container based
- Support for 13 additional languages
IBM Intelligent Video Analytics V2.0 offers both traditional and optional cognitive analytics to help users analyze, catalog, and index video captured from both fixed cameras and cameras in motion. Finding the relevant images within the camera footage may help public safety agencies and security organizations uncover insights and patterns to help them disrupt threats and aid investigations.
Intelligent Video Analytics includes comprehensive security, intelligence, and investigative capabilities for live-streaming fixed cameras.
- Real-time alerts to call attention to events
- Rich content-based indexing and search
- A standards-based open and extensible architecture
Cognitive analytic options extend its capabilities.
- The ability to ingest video from cameras in motion and fixed cameras
- Automated redaction, which may assist users in addressing their compliance and privacy requirements
- Facial recognition and advanced people search functions based on cognitive analytics
These capabilities offer far more accurate and adaptive techniques to address various analytic needs when compared to traditional algorithm-based approaches.
Intelligent Video Analytics delivers deeper cognitive video analytic capabilities more rapidly and easily to help organizations build their own visual recognition models for their domain or industry. This new release also includes a broader set of standard object and classification models that help drive value within your organization. As before, Intelligent Video Analytics can be used with real-time or archived video, captured from both fixed cameras and cameras in motion. Analyzing the video rapidly and accurately helps organizations, across a variety of industries, uncover insights, anomalies, and patterns to help them identify and address operational risks that aid in the process of resolving potential issues.
IBM Intelligent Video Analytics V3.0
- Full support on the POWER9 platform for all components
- Improved ease of installation for Intelligent Video Analytics using containerization for both IBM Power Systems and x86 platform deployments to help enable faster install and upgrades.
- Deep-learning object detection model that supports a wide variety of object types, including cars, trucks, buses, motorcycles, bicycles, and persons. The new model is combined with the existing fine-grained person parts model that identifies various parts of the body, such as head, shoulder, torso, and legs. This facilitates increasingly accurate detection in many different deployment scenarios.
- AI-based vehicle color classification model that supports eight different colors (black, white, light gray, dark gray, red, green, blue, and yellow). Applications for this classification model can be extended to object types beyond vehicles with reasonable performance from the system.
- Basic model management capability to add and remove models on the deep learning engine (DLE), command line interface (CLI) only. This capability enables users to turn models on and off, and deploy them as needed. It gives organizations more control over how the models are deployed.
- Composite object detection and alerting. This capability lets users combine multiple models in parallel, so that a single image is fed to all models. The results from all models are combined into a single response as though all detections came from a single model.
For example, if you have a model that detects people's legs, shoulders, and heads, as well as a model that detects dogs, you can create a composition service. This composition service can run these models, and more, on a single image or video, instead of sending the image several times to a server for analysis, which makes the process far more efficient. Using this parallel approach, and depending on the number of models, users can see between 50% and 100% improvement in DLE response times.
Alternatively, multiple models can be combined in sequence, such that a primary model is run first on the input image, and then one or more secondary models are run on specific types detected by the primary model. For example, users can run a model to detect persons, and then run secondary models to detect specific types of safety gear that the persons are wearing. Using this method avoids having to run all the models continuously on all the images, since many images might not have a person in them. This capability boosts the efficiency of the solution while providing a flexible way for users to detect objects and generate alerts.
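The parallel and sequential composition patterns described above can be sketched as follows. This is an illustrative sketch only, not the actual DLE API; the model functions and the record layout are assumptions made for the example.

```python
# Illustrative sketch of composite object detection (not the actual DLE API).
# Each "model" is a function that maps an image (or region) to detections.

def person_model(image):
    # Hypothetical primary detector: returns labeled bounding boxes.
    return [{"label": "person", "box": (10, 10, 50, 120)}]

def safety_gear_model(image_region):
    # Hypothetical secondary detector, run only on person regions.
    return [{"label": "helmet", "box": (12, 10, 30, 25)}]

def parallel_composition(image, models):
    """Run all models on the same image and merge their detections,
    as though all results came from a single model."""
    merged = []
    for model in models:
        merged.extend(model(image))
    return merged

def sequential_composition(image, primary, secondary, trigger_label):
    """Run the primary model first; run the secondary model only on
    regions where the primary detected the trigger label."""
    results = []
    for det in primary(image):
        results.append(det)
        if det["label"] == trigger_label:
            results.extend(secondary(det["box"]))
    return results
```

In the sequential case the secondary model runs only when the primary model finds a matching object, which is how the approach avoids processing images that contain no person at all.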
Collectively, these new capabilities add even more flexibility and ease of use to Intelligent Video Analytics. In addition, Intelligent Video Analytics is now fully supported on the POWER9 platform, extending support beyond the current x86-based platform.
IBM Intelligent Video Analytics V2.0
Intelligent Video Analytics is designed to help security and public safety organizations develop comprehensive security, intelligence, and investigative capabilities using video. The solution supports analytics and alerts for real-time streaming fixed cameras, such as those used for physical security, closed-circuit television (CCTV), and monitoring traffic. Optional capabilities can be added to provide support for video file ingestion, which enables forensic search of video files from fixed cameras as well as cameras in motion, such as body-worn cameras. The broad capabilities of Intelligent Video Analytics can fully complement and enhance an existing security infrastructure to provide defensive as well as proactive understanding of security vulnerabilities.
Once video is captured, whether from cameras in motion or from fixed cameras, it may be necessary to share clips with the media, the public, and other security organizations. An optional capability for redaction is designed to assist agencies in their efforts to comply with privacy and criminal justice evidence handling laws for publicly shared video.
To use the redaction function, an agency or organization sets up the criteria to automatically blur out sensitive images that may have been captured by the camera lens. Manually performing such a task would be labor-intensive work. Automated redaction may significantly reduce the time and labor required to release video footage for shared use.
Intelligent Video Analytics supports the following redaction functions:
- Redact one or more faces for a specified video segment.
- Optionally redact all detected faces.
- Redact all pixels for a specified video segment.
- Redact image pixels that are outside the selected regions for a specified video segment.
- Manually select an object, such as computer screen, to be redacted from a video segment.
- Manually redact any content of a single video frame.
- Automatically redact selected objects with optional user adjustments.
- Redact specified video content with options of various masking techniques, including blurring, pixelation, blank mask, and gradient map.
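As a rough illustration of how one of those masking techniques might work, the sketch below pixelates a redaction region by averaging pixel values over fixed-size blocks. This is a simplified, hypothetical example on a grayscale frame, not IBM's implementation.

```python
def pixelate_region(frame, x0, y0, x1, y1, block=2):
    """Replace each block-by-block tile inside the region [x0:x1, y0:y1)
    with the average of its pixels, coarsening detail beyond recognition.
    `frame` is a list of rows of grayscale values; modified in place."""
    for by in range(y0, y1, block):
        for bx in range(x0, x1, block):
            # Gather the pixels of this tile, clipped to the region edge.
            tile = [frame[y][x]
                    for y in range(by, min(by + block, y1))
                    for x in range(bx, min(bx + block, x1))]
            avg = sum(tile) // len(tile)
            # Write the average back over the whole tile.
            for y in range(by, min(by + block, y1)):
                for x in range(bx, min(bx + block, x1)):
                    frame[y][x] = avg
    return frame
```

Larger block sizes redact more aggressively; a blank mask is the limiting case where the whole region becomes one tile.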
Face capture and recognition for lead generation
Faces captured by body-worn cameras or fixed cameras could prove helpful to security analysts and investigators when run through facial recognition tools. However, not all face capture images will yield effective results; the more images of a face that are captured, the greater the likelihood of a resulting match. The accuracy provided by optional cognitive analytics may help users identify good facial images for further recognition analysis, potentially saving time and personnel costs, and possibly identifying a person of interest who might otherwise have been missed.
To enable facial recognition, the agency enrolls or loads a set of facial images representing potential persons of interest into their Intelligent Video Analytics watch list. The analytic software automatically compares the faces identified in ingested video clips with the faces in the watch list. The facial recognition function detects high-quality matches and sends them to Intelligent Video Analytics as a recognition alert. This process can take a matter of minutes rather than the hours previously needed to review footage and manually cross-check with all the persons of interest.
Intelligent Video Analytics' basic detection, classification, and indexing algorithms can index hundreds of millions of events and alerts to create a full index for live-streaming fixed cameras that can be quickly searched, analyzed, and correlated in seconds. Users can data mine the index for a wide array of criteria, including searches of real-time and historical data for specific items, such as vehicles and objects.
To help search further, the data mining capabilities allow for searching by such attributes as color and size. Additionally, Intelligent Video Analytics provides a statistical analysis of events that can be sorted by date and time, or over an extended period of time, in order to perform a trend analysis. These figures are directly linked back to their particular video feed or ingested file for immediate viewing when a potential anomaly has to be reviewed promptly.
Rapid searching for persons of interest
Optional cognitive analytics in Intelligent Video Analytics may help make forensic searching for images of people within video files simpler and faster. The solution can search hundreds of hours of footage across multiple files from a variety of camera types.
The agency can set search criteria for characteristics such as hair color, baldness, facial hair, age range, glasses, skin tone, gender, clothing colors or patterns, and more. Once the agency sets the characteristics, searching for such attributes is done automatically. This capability may reduce the extensive time and effort that would otherwise be required of an analyst or officer to view all the footage manually.
Real-time alerts to call attention to events
Intelligent Video Analytics provides event-based security that analyzes streaming video feeds from fixed cameras to provide real-time alerts about user-predefined behaviors of people, vehicles, or objects. It can help identify perimeter or tripwire breaches, abandoned objects, objects removed, vehicles moving in unexpected directions, and more. In addition, a key differentiator from other analytics offerings is that Intelligent Video Analytics indexes these alerts along with other activities across cameras and sensors. It can index a whole set of attributes about each and every event.
Standards-based open and extensible architecture
The Intelligent Video Analytics architecture is designed specifically to facilitate interoperability with products from different vendors to broaden and enhance the overall security framework for the particular environment in which it is deployed. This approach to video analytics and security helps an organization to deliver preventive security controls, countermeasures, and safeguards, and to evolve, as necessary, by incorporating third-party products and services (such as specialized analytics, sensor data, and integration with transactional information technology systems). Intelligent Video Analytics interoperability contributes to its ease of implementation by leveraging and enhancing current technologies rather than isolating them. As a result, it provides the necessary framework to help implement effective security controls that can adapt to ever-changing and new threats.
The optional capabilities in Intelligent Video Analytics deliver cognitive analytics that may provide a higher degree of accuracy. Cognitive technology can:
- Interpret unstructured video and structured metadata tagging at high speeds and volumes.
- Prioritize recommendations, which may help humans make better decisions.
Intelligent Video Analytics uses IBM DB2, which offers increased data protection, scalability, and performance for all of the solution's database-intensive operations. It is designed to help manage data more effectively and efficiently. Greater availability is delivered through enhancements such as online, automated database reorganization. In addition, the increased scalability and the ability to leverage the latest server technology help deliver improved performance of backup and recovery processes.
IBM Intelligent Video Analytics V1.5
IBM Intelligent Video Analytics V1.5 helps increase the efficiency of your physical security and public safety staff in two ways:
- Alerting operators in real time based on predefined events, such as a person entering a restricted area
- Allowing operators to search previous activity for specific events, such as finding all yellow cars that drove by the building yesterday
IBM Intelligent Video Analytics examines each frame of video, extracts information about events in that video, and stores it in a database for future reference. This approach allows for real-time alerts as well as rapid searching of hundreds of millions of past events. In addition, it is able to incorporate data from other sources.
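The extract-and-index approach described above can be sketched as follows: analyze each frame, store one structured record per detection, and then answer searches from the index rather than from raw video. This is a simplified in-memory illustration; the class, method, and field names are assumptions, not the product's schema.

```python
from datetime import datetime

class EventIndex:
    """Minimal in-memory stand-in for the event database."""

    def __init__(self):
        self.events = []

    def ingest_frame(self, camera, timestamp, detections):
        # Store one structured record per detection in the frame.
        for det in detections:
            self.events.append({"camera": camera,
                                "time": timestamp,
                                "type": det["type"],
                                "color": det.get("color")})

    def search(self, **criteria):
        # Return past events matching every given attribute,
        # e.g. search(type="car", color="yellow").
        return [e for e in self.events
                if all(e.get(k) == v for k, v in criteria.items())]

index = EventIndex()
index.ingest_frame("cam1", datetime(2020, 4, 13, 9, 0),
                   [{"type": "car", "color": "yellow"},
                    {"type": "person"}])
# "Find all yellow cars that drove by yesterday" becomes an index query.
yellow_cars = index.search(type="car", color="yellow")
```

Because only metadata is stored per event, a query like the yellow-car example scans records rather than video, which is what makes searching hundreds of millions of past events feasible.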
Intelligent Video Analytics V1.5 enhancements:
- Upgrade to people search: It can now perform single-attribute and combined-attribute searches on a person's features or clothing, such as baldness, hat, glasses, sunglasses, hair color, and skin tone and texture, plus a combined search of up to three colors on the torso area using a 13-color palette.
- Provides a three-color combination search capability on upper and lower body areas with a 13-color palette for clothing.
- Provides enhancements to optimize detection rates and reduce false positives for critical and specialized video analytics for city counter-terrorism operations and crime investigation, such as abandoned object detection and face detection, which enhances face recognition integration.
- Provides enhancements to optimize detection rates and reduce false positives for critical and specialized video analytic alerts for rail and subway security and safety, such as rail crossings in undesignated areas, people or animals entering a tunnel, and objects or people on the tracks or close to the platform's edge.
- Provides enhanced analytics capability to handle crowded scenes and challenging environmental conditions, such as lighting, shadows, reflections, and clouds.
- Provides capability for event detection and searching based on actual world measurements (for example, object length, height, size, and speed).
- Enhanced ability to detect and search objects of interest (such as cars, buses, and trucks) with multiple colors (a larger selection and better colors).
- Improvements to analytic installation, configuration, and tuning in order to reduce deployment time while enhancing alert accuracy.
- Ability to run multiple frameworks on a single physical server for improved scalability.
- Includes an operator UI framework to improve operator usability by making the operator UI more flexible, which:
- Is customizable to meet the specific needs and operational processes of a given user environment, and to support unique requirements specific to forensic search and alert response
- Supports globalization and localization
- Provides integration with geographic information system (GIS) functionality to allow for event displays on maps
- Enables analytic functionality, such as people search, augmented color support, and support of worldwide coordinates
- Delivers native federated proxy capabilities where several Intelligent Video Analytics instances can be federated together into a seamless user experience. This allows the system administrator to enable all or a portion of the channels (cameras) for single sign-on, alert passing, forensic searching, and adjudication (alert dispositioning) between instances to specific or all operators.
IBM Intelligent Video Analytics V1.0
IBM Intelligent Video Analytics V1.0 is a combination of Smart Vision Suite Video Analytics V3.6.7 running on the IBM Intelligent Operations Center infrastructure.
IBM Intelligent Video Analytics is designed to facilitate comprehensive physical security to enable city or other infrastructure planners, administrators, and managers to quickly ascertain and intelligently react to current situations, improve their operational efficiencies, and effectively manage their operations. Line-of-business executives and chief security officers, tasked with finding a more effective way to process, store, and respond to surveillance data, can provide network-based video analytics that include real-time alerts, post-event searching capabilities, and remote access. Results can provide increased levels of security, faster incident response time, reduced risk of loss, and prevention of malicious events.
IBM Intelligent Video Analytics delivers real-time alerts for predefined behaviors such as people, vehicles, or objects crossing a tripwire or being in a secured area. In addition, a key differentiator from other analytics packages is that it indexes these alerts along with other activities across cameras and sensors. It can index a whole set of attributes about each and every event. As an example, for vehicles, categories can include the size, the color, the trajectory, and when the vehicle entered and exited the secure zone. For people, categories can be the color of jacket or shirt and the trajectory. For objects, categories can be the size, color, and time removed from the field of view.
You can index hundreds of millions of events to provide a full index that can be quickly searched, analyzed, and correlated in seconds.
Comprehensive security, intelligence, and investigative capabilities
IBM Intelligent Video Analytics provides event-based surveillance that analyzes video feeds in real time to provide real-time alerts for security personnel. Additionally, it supports activity search, cross-correlation, and trend analysis to allow for efficient analysis of video footage in both real-time and investigative circumstances. IBM Intelligent Video Analytics has the functionality to develop a comprehensive threat model customized to the environment in which it is implemented. It can identify perimeter breaches, as well as abandoned objects, objects removed, and people and vehicle activity. The IBM Intelligent Video Analytics framework can integrate specialized analytics such as license plate and face recognition. The broad capabilities of IBM Intelligent Video Analytics fully complement and enhance an existing security infrastructure to provide defense as well as proactive understanding of security vulnerabilities.
Rich content based indexing and search
IBM Intelligent Video Analytics advanced detection, classification, and indexing algorithms enable the user to data mine the index of events for a wide array of criteria. Users can search real time and historical data for specific items such as vehicles and objects. To narrow the search further, the data mining capabilities allow for searching by the color, size, or speed of a moving object such as a car or truck. Additionally, it provides a statistical analysis of the activity such as people entering a building. This can be sorted by date and time or over an extended period of time in order to perform a trend analysis. These figures are directly linked back to their particular video feed for immediate viewing when a potential anomaly has to be reviewed promptly.
Standards-based open and extensible architecture
IBM Intelligent Video Analytics architecture has been designed specifically to facilitate interoperability with products from different vendors to broaden and enhance the overall security framework for the particular environment in which it is deployed. This approach to video analytics and surveillance allows for an organization to deliver preventive security controls, countermeasures, and safeguards, and to evolve, as necessary, by incorporating third-party products and services (such as specialized analytics, sensor data, and integration with transactional information technology systems). IBM Intelligent Video Analytics interoperability contributes to its ease of implementation by leveraging and enhancing current technologies rather than isolating them. As a result, it provides the necessary framework to implement effective security controls, which can adapt to ever-changing and new threats.
Cities around the globe are faced with the common challenges of aging infrastructure, shrinking budgets, shifting populations, and increasing threats. Innovative city leaders understand that to address these challenges, they not only have to work harder, but smarter. This requires analyzing information to make better decisions, anticipating problems to resolve them, and coordinating resources to operate effectively.
Content based searches
IBM Intelligent Video Analytics supports a variety of content-based searches of surveillance events. The search can be based on the following attributes:
Object                           Description
------------------------------   --------------------------------
Object class                     Vehicle, person, and group
Object appearance                Color and size
Object location                  Position in the scene, as marked by users
                                 using a bounding box
Object movement                  Speed and duration of presence
Time range of event occurrence   Time ranges can be set either relative to
                                 the current time (for example, last 4 days)
                                 or within an absolute date/time range
                                 (Monday at 4 PM until Tuesday at 10 AM)
License plate number             Partial digit string from a license plate,
                                 or the entire license plate string
Table 1: Search attributes
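The relative and absolute time ranges from Table 1 can both be resolved to a concrete start/end pair before querying the index. The helper below is a hypothetical illustration, not part of the product:

```python
from datetime import datetime, timedelta

def resolve_time_range(now, relative=None, start=None, end=None):
    """Turn a relative window (e.g. "last 4 days") or an absolute
    start/end pair into concrete (start, end) datetimes."""
    if relative is not None:
        return now - relative, now
    return start, end

now = datetime(2020, 4, 14, 12, 0)

# Relative range: "last 4 days" counted back from the current time.
rel = resolve_time_range(now, relative=timedelta(days=4))

# Absolute range: Monday at 4 PM until Tuesday at 10 AM.
abs_rng = resolve_time_range(now,
                             start=datetime(2020, 4, 13, 16, 0),
                             end=datetime(2020, 4, 14, 10, 0))
```

Saved relative searches stay useful over time because the window is re-anchored to "now" each time the query runs, which matches the retrieved-search behavior described below.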
Intelligent Video Analytics supports unstructured text-based keyword, clustering, and semantic search. A user can search through the database of events in the following ways.
Search                           Description
------------------------------   --------------------------------
Composite region-based search    Allows a search within a composite
                                 sub-region of the scene; for example, find
                                 a red car of a certain size traveling in
                                 lane one.
Multi-camera search              Any search query can be applied to a list
                                 of one or more cameras selected by the
                                 user.
Saving searches                  Any search query can be saved and named
                                 for later use. Search queries saved by a
                                 particular user are accessible only to
                                 that user.
Applying retrieved searches      A user can retrieve any saved search and
                                 apply it to the same cameras at a later
                                 time, or to a different set of cameras. A
                                 relative time search (for instance, last 4
                                 hours) can be applied to the same or a
                                 different camera at a later time to get
                                 the most recent results.
Table 2: Database search functions
Pan-Tilt-Zoom (PTZ) adjustments in event recognition
Alerts can be enabled and disabled according to the preset positions of the camera. If a Pan-Tilt-Zoom (PTZ) camera moves off its home position, alerts that depend on observing a specific location can create false alarms. Intelligent Video Analytics can detect the change and disable alerts defined on the home position.
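The preset-position gating described above can be sketched as: alerts tied to a preset fire only while the camera reports that position. This is an illustrative sketch; the alert and preset names are invented for the example.

```python
def active_alerts(current_preset, alert_definitions):
    """Return only the alerts whose preset matches the camera's current
    position. Location-dependent alerts are suppressed whenever a PTZ
    camera moves off the preset they were defined on, which avoids
    false alarms while the camera is looking elsewhere."""
    return [a for a in alert_definitions if a["preset"] == current_preset]

# Hypothetical alert definitions for one PTZ camera.
alerts = [{"name": "tripwire-gate", "preset": "home"},
          {"name": "crowding-lobby", "preset": "home"},
          {"name": "door-zoom", "preset": "preset2"}]

home_alerts = active_alerts("home", alerts)
```

When the camera returns to its home position, the same lookup re-enables the home-position alerts with no reconfiguration.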
Function                         Description
------------------------------   --------------------------------
Crowd density analysis           Intelligent Video Analytics evaluates used
                                 versus available space and can alert if a
                                 certain threshold is met.
Queue management                 The user can define when a queue is too
                                 long or determine approximately how many
                                 people are currently in a queue.
Overcrowding alarms              Thresholds are defined to determine when a
                                 space becomes overcrowded.
Stationary people or object      With the proper camera view, abandoned
alarms                           object alerts can be defined to alert
                                 officials when an object or person is
                                 stationary in a camera view.
Congestion analysis in high      Using the crowd density analysis tool, an
traffic                          alert can be defined to notify the user of
                                 congestion in an area.
Counter flow                     In one-way paths, the user can define an
                                 alert that triggers when a person walks in
                                 the opposing direction. At an exit-only
                                 door, for example, an alert is triggered
                                 when an individual enters through this
                                 door.
Bi-directional counting          Using the directional motion alert, the
                                 system can count individuals moving in two
                                 different directions on the same camera
                                 view. These counts can be recorded in
                                 reports with time frames defined by the
                                 user.
Counting in a crowd              Count the number of people or vehicles in
                                 an area.
Real-time flow rates             Calculate the average speed of objects
                                 moving through the scene.
Aggregate values                 Statistical and graphical reports can be
                                 created with this data.
Table 3: Crowd management functionality
Intelligent Video Analytics has a high-accuracy people-counting profile. This profile is applicable to top-down cameras that are specifically placed for the purpose of people counting. The system can be tuned based on the expected size of a person and can differentiate between a single person and a group.
The people-counting module uses calibration information to estimate the number of people within each moving cluster.
The module is expected to be 90-95% accurate in a typical entrance environment. The current version is likely to operate at a higher error rate in the presence of children, shopping carts, heavy crowding and strong lighting artifacts.
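A crude version of calibration-based counting divides the pixel area of each moving cluster by the calibrated pixel area of a single person. This is an illustrative sketch of the idea only, not the product's algorithm; the areas and calibration value are invented.

```python
def estimate_people(cluster_areas, expected_person_area):
    """Estimate the number of people in each moving cluster by dividing
    cluster area by the calibrated single-person area, rounding to the
    nearest whole person (minimum of 1 per detected cluster)."""
    counts = []
    for area in cluster_areas:
        counts.append(max(1, round(area / expected_person_area)))
    return counts

# Hypothetical clusters: one lone walker, a pair, and a group of four,
# with a calibrated single-person area of 1000 pixels for this camera.
counts = estimate_people([950, 2100, 3900], expected_person_area=1000)
```

The failure modes noted above follow directly from this scheme: children and shopping carts distort the per-person area assumption, and heavy crowding merges people into clusters whose area no longer scales linearly with the true count.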
Intelligent Video Analytics can identify individuals with specific features and store these characteristics in the metadata repository. This gives Intelligent Video Analytics operators the ability to search for features of a person. For example, operators can search for individuals who are wearing yellow shirts. In a situation where police officials are searching for a person with these types of characteristics, this can significantly reduce the amount of time spent searching through hours and hours of video. This is a key feature for law enforcement scenarios (i2 Analyst Notebook and i2 COPLINK up-sell opportunities).
The following face analysis profiles are supported:
Face analysis                    Description
------------------------------   --------------------------------
Face tracking                    Supports the capture of one face image per
                                 person (the highest-resolution frontal
                                 image) as people approach a camera.
Sensitive face tracking          Supports the capture of one face image per
                                 person as people approach a camera, with
                                 additional indexing to capture a person
                                 image in case the face is not visible.
                                 This is a key feature for law enforcement
                                 scenarios (i2 Analyst Notebook and i2
                                 COPLINK up-sell opportunities).
Table 4: Face tracking functionality
Face tracking can detect the presence of a face in a given video frame, track the face in subsequent video frames, and produce a single key frame zoomed in to show the person's face, for every person who walks in front of a camera. The user can query Intelligent Video Analytics in the following ways when looking for a person.
Query                            Description
------------------------------   --------------------------------
                                 Find the last 100 faces approaching the
                                 camera
Range query                      Find all faces of people who entered the
                                 building between a specified start time
                                 and end time
Statistic view                   Show the statistics, via bar chart, of the
                                 number of faces captured over time,
                                 including drill-down capability
Table 5: Query descriptions
Traffic surveillance functionality provided by Intelligent Video Analytics:
Function                         Description
------------------------------   --------------------------------
Average speed                    Intelligent Video Analytics can determine
                                 the average speed of a vehicle moving
                                 through the camera view.
Individual speed                 Likewise, the application can determine
                                 the individual speed of a moving object.
Travel time                      Travel time can be calculated by measuring
                                 the flow of a vehicle between two points.
Vehicle counts                   Using the tripwire or directional motion
                                 alerts, Intelligent Video Analytics can
                                 count the number of vehicles, for example,
                                 the cars traveling past a tripwire.
Density calculations             Vehicle density can be determined by
                                 evaluating used space versus available
                                 space.
Accidents                        The system can help determine whether
                                 there is an accident if the flow of
                                 traffic has stopped.
Slow, fast, or stopped vehicles  Slow-moving, fast-moving, or stopped
                                 vehicles can be identified by the system.
Wrong direction                  Alerts can be defined for vehicles moving
                                 in the wrong direction on city streets.
Unauthorized vehicles            Vehicles parked in an unauthorized spot
                                 can be identified using the Smart Vision
                                 Suite.
Table 6: Traffic surveillance functionality
The following real-time alert attributes are fully supported by Intelligent Video Analytics.
Alert attribute                  Description
------------------------------   --------------------------------
Multi-zone                       All alerts in Intelligent Video Analytics
                                 are multi-zone; for instance, the user can
                                 set multiple alerts on a single camera
                                 field of view.
Region of un-interest            The user can indicate multiple regions of
                                 un-interest in a camera to mask out high
                                 false-alarm zones.
Scheduling alerts                All alerts set by a user can be scheduled
                                 to operate on a schedule of the user's
                                 choice (day of week, hour of day, and so
                                 forth).
Setting alerts                   Users with the right privilege can
                                 configure the alerts on any camera from
                                 any browser.
Storing and retrieving alert     Alerts that are active on a camera are
definitions                      archived in the database and become
                                 persistent for a given camera, even if the
                                 camera and engine are restarted.
Annotating and archiving alerts  Users can annotate an alert with
                                 additional information and save the alert
                                 for future reference.
Searching alerts                 Users can search through user-archived
                                 alerts or the current database for a text
                                 string, which can be the name of the alert
                                 or the comment entered by a user.
Alert priority                   Any alert can be assigned a different
                                 priority level. The priority level can be
                                 used to search and sort through alerts in
                                 forensic mode.
Alert auto-play                  Any alert that is defined as priority
                                 Urgent triggers an auto-play of the video.
                                 This feature is very useful in systems
                                 with large numbers of cameras. Critical
                                 alerts automatically start playing on the
                                 user interface as soon as the alert
                                 occurs.
Adjudication                     Users can intercept an incoming alert,
                                 annotate it, and determine whether the
                                 alert will display on a common dashboard
                                 such as a video wall.
Table 7: Real-time alert attribute descriptions
Real time alert types
The following real-time alert types are fully supported by Intelligent Video Analytics. This is a key feature for law enforcement scenarios (i2 Analyst Notebook and i2 COPLINK up-sell opportunities).
- Motion detection alert: Allows users to detect motion in a specified region of the camera image. Users specify minimum and maximum object sizes, the minimum number of frames that motion should last, and the minimum number of objects. Users can mark (set) object size with the mouse.
- Tripwire alert: Allows users to detect the directional crossing of a tripwire set up by the users. Users specify the multi-segment tripwire location, the direction of line crossing, and maximum and minimum object sizes. Users can optionally specify the color of the object, the type of object (vehicle or person), and the minimum and maximum speed of the object, and can mark object size with the mouse.
- Region alert: Allows users to detect various types of object behaviors within a zone, such as starting, stopping, entering, and leaving. Users specify the minimum and maximum object size and have a number of choices regarding how the moving object interacts with the region of interest. Users can mark object size with the mouse.
- Abandoned object: Allows users to detect objects that have been abandoned. Users specify the minimum object size, the maximum object size, and the wait time before triggering an alarm. This release includes significant improvements to the base algorithm, which have been tested in real urban environments.
- Directional motion: Allows users to detect movement of objects in a specified direction. Users specify minimum and maximum size and a tolerance range for direction. Users can optionally specify the color of the object, the type of object (vehicle or person), and the minimum and maximum speed of the object, and can mark object size with the mouse.
- Camera move - blind: Automatically detects changes in camera state, such as movement of the camera or blinding of the camera. Users specify the sensitivity to camera movement and the minimum pre-event recording duration.
- Camera movement stop: An alert is triggered when a Pan-Tilt-Zoom camera stops moving. Users specify how long the camera should be stopped before the alert triggers.
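The tripwire alert described above rests on a standard piece of computational geometry: testing whether an object's movement segment between two tracked positions intersects the user-drawn wire. A minimal, hypothetical sketch (this illustrates the concept only; it is not the product's algorithm or API):

```python
# Hypothetical sketch of tripwire crossing detection via a
# segment-segment intersection test. Illustrative only.

def cross(ox, oy, ax, ay, bx, by):
    # z-component of the cross product (a - o) x (b - o)
    return (ax - ox) * (by - oy) - (ay - oy) * (bx - ox)

def crosses_tripwire(p1, p2, w1, w2):
    """True if the object's path p1->p2 strictly intersects the wire w1->w2."""
    d1 = cross(*w1, *w2, *p1)   # which side of the wire p1 is on
    d2 = cross(*w1, *w2, *p2)   # which side of the wire p2 is on
    d3 = cross(*p1, *p2, *w1)   # which side of the path w1 is on
    d4 = cross(*p1, *p2, *w2)   # which side of the path w2 is on
    return (d1 * d2 < 0) and (d3 * d4 < 0)

# An object moving from (0, -1) to (0, 1) crosses a wire from (-1, 0) to (1, 0):
print(crosses_tripwire((0, -1), (0, 1), (-1, 0), (1, 0)))  # True
```

The sign of `d1` indicates which side of the wire the object started on, which is how a directional (one-way) crossing could be distinguished from the opposite direction.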
IBM Intelligent Video Analytics can support large-scale deployments of thousands of video camera channels through a federated and clustered architecture. For example, a small (20-camera) installation would consist of the following hardware components.
Smart Surveillance Engine (SSE) server: Either one IBM BladeCenter HS23 or one IBM System x3650 M4
- CPU: Dual quad-core 3 GHz, 4 GB memory
- Storage: Dual 73 GB 10K SAS hard drives
Middleware for Large-Scale Surveillance (MILS) server
- Either one IBM BladeCenter HS23 or one IBM System x3650 M4.
- CPU: Dual Quad-core 3 GHz, 4 GB memory (16 GB for clustering).
- Storage: Dual 146 GB 10K SAS hard drives. (Dual disk drives are used for mirroring the operating system and middleware and application components installed on the blade server.)
- Connectivity to an external storage system requires Fibre Channel connectivity components (for example, Qlogic 4 GB Fibre Channel Expansion card - 46 M6-065)
- High speed network connectivity (for example, two Cisco Catalyst Switch 3110X for IBM BladeCenter).
Analytics Engine capacity: The video analytics consist of the Smart Surveillance Engine (SSE) and the Middleware for Large-Scale Surveillance (MILS).
- SSE server capacity: An SSE server is able to process approximately 20 channels (cameras) of video, depending on view activity level.
- MILS server capacity: MILS can be deployed for small instances as a single node that supports up to 100 channels (cameras), or for larger installations in a clustered configuration that supports up to 500 channels (cameras) per instance.
- Multiple instances can be federated to enable single sign-on, alert passing, forensic searching, and adjudication (alert dispositioning) between instances.
- Video management server capacity: The Video Management Server capacity will vary depending on the selected partner product.
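Using the capacity figures quoted above (roughly 20 cameras per SSE server, up to 100 cameras for a single-node MILS, and up to 500 per clustered MILS instance), a deployment can be sized with simple ceiling division. A hypothetical back-of-the-envelope sketch, not an official sizing tool:

```python
# Hypothetical capacity-planning sketch based on the figures above.
# Real sizing depends on view activity level and the chosen VMS partner.
import math

def plan(cameras: int, sse_capacity: int = 20, mils_clustered: int = 500):
    """Return (SSE servers, clustered MILS instances) for a camera count."""
    sse_servers = math.ceil(cameras / sse_capacity)
    mils_instances = math.ceil(cameras / mils_clustered)
    return sse_servers, mils_instances

# A 1,200-camera deployment:
print(plan(1200))  # (60, 3)
```

The 20-camera example installation above corresponds to `plan(20)`, that is, one SSE server and one single-node MILS.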
IBM Intelligent Video Analytics receives video feeds through a DirectShow filter. IBM Intelligent Video Analytics is currently integrated with the following video management providers:
- Omnicast 4.7
- Omnicast 4.8
- XProtect Corporate 5.0
- XProtect Corporate 6
- ELSAG License Plate Recognition
- Microsoft Windows Media Player
Video can also be received for analysis from Internet streaming video, that is, video streams associated with your configured Video Management Server provider or another streaming source.
Supported desktop operating systems
- Microsoft Windows 7 Professional, Enterprise, or Ultimate x86 (32 bit) and x64 (64 bit) SP1
- Microsoft Windows Vista Business, Enterprise, or Ultimate x86 (32 bit) and x64 (64 bit) SP1, or later
Supported server operating systems
- SSE: Microsoft Windows Server 2008, 32-bit and 64-bit
- MILS: For small configurations, single-node environments are typically deployed with either Microsoft Windows Server 2008 or SUSE Linux Enterprise Server (SLES) 11 SP3 in a 32-bit-only OS environment. For larger configurations (above 100 cameras), a clustered environment running SLES 11 SP3 is recommended.
Supported web browsers
- Microsoft Internet Explorer 8 and 9
The customer is responsible for evaluation, selection, and implementation of security features, administrative procedures, and appropriate controls in application systems and communication facilities.
No printed publications are shipped with this product.
The IBM Publications Center is located at
The Publications Center is a worldwide central repository for IBM product publications and marketing material, with a catalog of 70,000 items and extensive search facilities. Orders can be paid by credit card (in the US) or by customer number in 20 countries. A large number of publications are available online in various file formats, and they can all be downloaded from any country.
IBM Intelligent Video Analytics uses the security and auditability features of the host software.
(R), (TM), * Trademark or registered trademark of International Business Machines Corporation.
** Company, product, or service name may be a trademark or service mark of others.
Windows is a trademark of Microsoft Corporation.
© IBM Corporation 2020.