IBM BigQuality is a data quality solution that provides a rich set of data profiling, cleansing and monitoring capabilities that execute on the data nodes of an Apache Hadoop cluster. IBM BigQuality helps ensure information quality and enables you to adapt quickly to strategic business changes through data stewardship and monitoring and the application of data quality rules to your Hadoop data.
Part of the IBM InfoSphere® Information Server product family built specifically to run on Hadoop clusters, BigQuality and IBM BigIntegrate offer end-to-end integration and governance capabilities for your Hadoop data.
Provides a massively scalable, shared-nothing, in-memory data integration and quality platform. Runs natively in a Hadoop cluster to help bring robust capabilities to the data lake.
Enables deep data profiling
Delivers a rich set of data profiling capabilities to understand the assets that are moved into Hadoop distributed data storage clusters.
Supports data privacy
Enables support for data privacy, data masking and test data management initiatives by identifying where personally identifiable information (PII), sensitive data and other classes of data are stored.
Improves time to value
Supports fast time to value by identifying data contained within a column using three dozen predefined, out-of-the-box data classes including credit cards, taxpayer IDs, US phone numbers and more.
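The data-class idea can be illustrated with a small sketch. This is not BigQuality's API or its shipped class definitions; the class names, patterns and threshold below are simplified assumptions. A classifier scans a column's values against predefined patterns and reports the class that most of the values match:

```python
import re

# Simplified stand-ins for predefined data classes (illustrative patterns,
# not the definitions BigQuality actually ships).
DATA_CLASSES = {
    "US_PHONE": re.compile(r"^\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}$"),
    "US_SSN": re.compile(r"^\d{3}-\d{2}-\d{4}$"),
    "CREDIT_CARD": re.compile(r"^\d{4}([ -]?\d{4}){3}$"),
}

def classify_column(values, threshold=0.8):
    """Return the data class matching at least `threshold` of the values."""
    non_empty = [v for v in values if v]
    for name, pattern in DATA_CLASSES.items():
        hits = sum(1 for v in non_empty if pattern.match(v))
        if non_empty and hits / len(non_empty) >= threshold:
            return name
    return None

print(classify_column(["212-555-0117", "(914) 555-0188", "646.555.0199"]))
# → US_PHONE
```

A real profiler would add many more classes, sample large columns rather than scan them exhaustively, and report match percentages per class instead of a single label.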
Provides powerful data tools
Enables data investigation, standardization, matching, survivorship and address verification support running directly inside a Hadoop cluster. Provides USAC and AVI address cleansing and validation.
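Of these capabilities, survivorship is perhaps the least self-explanatory: once duplicate records have been matched, a per-field rule decides which value "survives" into the single consolidated record. The sketch below is a generic illustration of that idea; the rule names and record layout are hypothetical, not BigQuality's survivorship syntax:

```python
from collections import Counter

# A minimal survivorship sketch: given a group of records already matched
# as duplicates, build one "survivor" record by applying a per-field rule.
# The rules below (most recent, longest, most frequent) are common choices.

def most_recent(records, field):
    return max(records, key=lambda r: r["updated"])[field]

def longest(records, field):
    return max((r[field] for r in records if r[field]), key=len, default="")

def most_frequent(records, field):
    return Counter(r[field] for r in records if r[field]).most_common(1)[0][0]

SURVIVE_RULES = {"name": longest, "email": most_recent, "city": most_frequent}

def survive(records):
    return {field: rule(records, field) for field, rule in SURVIVE_RULES.items()}

dupes = [
    {"name": "J. Smith", "email": "js@old.example", "city": "Austin", "updated": 1},
    {"name": "John Smith", "email": "john@new.example", "city": "Austin", "updated": 2},
]
print(survive(dupes))
# → {'name': 'John Smith', 'email': 'john@new.example', 'city': 'Austin'}
```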
Easy-to-use GUI with drag-and-drop feature
A graphical interface designed for ease of use helps organizations quickly transform information across the enterprise. Use with IBM BigIntegrate to create a feature-rich application palette that includes connectors to a wide range of data sources, including all major traditional databases and platforms (distributed and IBM z/OS®), file types, Oracle, Salesforce.com, SAP, Hadoop and more. Developers make these data sources available through simple drag-and-drop capabilities.
Accelerated development with built-in transformation
IBM BigQuality is a highly scalable data quality solution whose jobs run in parallel across the data nodes of the cluster. Native connectivity to common data sources is available through specific interfaces and is easily achieved with built-in support for IBM BigIntegrate. Hundreds of built-in transformation functions enable you to accelerate your development timeline. Reduce the time required for custom coding by using and reusing powerful out-of-the-box data quality and integration functionality.
Comprehensive and customizable data cleansing
Use comprehensive and customizable data cleansing functionality in batch and in real time to automate source data investigation and data classification. This automation enables the data steward team to manage data assets effectively and respond faster to business objectives with trusted data. Automation features scale as required to help improve your processing of the exploding amount of data now sent to Hadoop.
Automated surveys and classification for improved governance
As the community of data providers and consumers expands, so does the uncertainty over data sensitivity and how to comply with mandated regulatory requirements for rapidly growing volumes of data. IBM BigQuality helps to survey the various data sources, including Hadoop, and to ensure appropriate data location and usage according to predefined policies. IBM BigQuality helps standardize and match records in accordance with customizable business rules.
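Rule-based standardization and matching can be pictured with a small sketch. The field weights, standardization step and threshold here are hypothetical illustrations of the general technique, not BigQuality's match-specification language: each candidate pair of records is standardized, scored field by field, and flagged as a match when the weighted score clears a threshold.

```python
import difflib

def standardize(rec):
    # Normalize case and collapse whitespace before comparing.
    return {k: " ".join(str(v).upper().split()) for k, v in rec.items()}

def field_score(a, b):
    # Fuzzy similarity between two field values, in [0, 1].
    return difflib.SequenceMatcher(None, a, b).ratio()

# Hypothetical business rule: name counts more than postal code.
WEIGHTS = {"name": 0.6, "zip": 0.4}

def match_score(r1, r2):
    r1, r2 = standardize(r1), standardize(r2)
    return sum(w * field_score(r1[f], r2[f]) for f, w in WEIGHTS.items())

a = {"name": "Acme  Corp", "zip": "10001"}
b = {"name": "ACME CORP.", "zip": "10001"}
print(match_score(a, b) > 0.9)  # these two records likely match
```

In practice a matching engine also blocks candidate pairs to avoid comparing every record against every other, and distinguishes "match", "clerical review" and "non-match" bands rather than using a single cutoff.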
Improved data warehousing
Deploy with the power of IBM BigIntegrate to create a full-featured integrated data solution that can bring big data and analytics into any organization. Combine traditional data warehouse tools with current big data techniques and technologies, including Hadoop, stream computing, data exploration, advanced analytics, enterprise integration and IBM Watson® cognitive computing.
Use with IBM BigIntegrate to enable your organization to integrate and transform any Hadoop data. Leverage both existing and new data sources for big data initiatives. Enhance data with scalable enterprise-class monitoring, cleansing and other rich and robust data quality capabilities. Continuously transform Hadoop data into trusted and governed information.