IBM InfoSphere® Information Server Enterprise Edition is an industry-leading, end-to-end data platform that provides a complete suite of capabilities, including automated data discovery, policy-driven governance, self-service data preparation, data quality assessment and cleansing for data in flight and at rest, and advanced dynamic or batch data transformation and movement. It helps you deliver trusted, business-ready data to your key business initiatives such as big data, data lakes, data warehouse modernization and master data management, whether on premises, on private or public cloud, or on hyperconverged systems such as IBM Cloud Pak™ for Data.
Drive innovation with improved trust
Provide a unified platform that delivers trusted data across your organization, enabling collaboration and improving productivity for your IT and business users.
Business glossary capabilities
Reduce the risks and costs of maintaining your data lake by implementing comprehensive data governance, including end-to-end data lineage, for business users.
Modernize and consolidate your systems
Save costs by delivering clean, consistent and timely information for your data lakes, data warehouses or big data projects, consolidating applications, and retiring outdated databases.
Deployment and runtime flexibility
Build your jobs once and deploy anywhere: on-premises, public cloud, private cloud or AI-ready platforms such as IBM Cloud Pak for Data using containers.
Machine learning based in-line quality
Eliminate garbage in, garbage out reporting and analytics by implementing comprehensive and scalable data quality processing.
Fast time to value
Start small and scale your runtime without needing to change your design. Optimize your integration, transformation and data quality workloads based on data locality and resource availability.
See what's new in IBM InfoSphere Information Server v11.7.1
The DataStage Flow Designer features automatic schema propagation to speed up job generation, type-ahead search and backward compatibility, and lets you design once and execute anywhere. Create data integration flows and enforce data governance and quality rules with a cognitive design that recognizes and suggests usage patterns.
Classify unstructured data sources
Classify email messages, word-processing documents, audio or video files, collaboration software, or instant messages by integrating IBM InfoSphere Information Governance Catalog with IBM StoredIQ®.
Supports a wide range of connectors
Use a wide range of out-of-the-box native connectors, such as Google Cloud Storage, Azure, Cassandra, HBase, Hive, Kafka, Amazon S3, Cloudera and many more.
Integration with IBM Watson Knowledge Catalog
Use IBM Watson® Knowledge Catalog to let business users exploit data assets in a governed and secure way.
Supports data integration across multicloud environments
Collect, transform and distribute large volumes of data with built-in transformation functions that reduce development time, improve scalability and provide for flexible design. Deliver data in real time to your business applications through bulk data delivery using extract, transform and load (ETL), virtual data delivery, or incremental data delivery using change data capture.
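The difference between bulk and incremental delivery can be sketched generically. The tables, columns and timestamp-based change detection below are illustrative assumptions, not DataStage's actual connectors or API:

```python
import sqlite3

# Hypothetical source and target stores; a real deployment would use
# native connectors rather than raw SQL.
src = sqlite3.connect(":memory:")
tgt = sqlite3.connect(":memory:")

src.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL, updated_at INTEGER)")
src.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(1, 10.0, 100), (2, 25.5, 100), (3, 7.25, 105)])
tgt.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL, updated_at INTEGER)")

def bulk_etl():
    """Bulk delivery: extract everything, transform, load a full refresh."""
    rows = src.execute("SELECT id, amount, updated_at FROM orders").fetchall()
    transformed = [(i, round(a, 2), t) for i, a, t in rows]  # trivial transform step
    tgt.execute("DELETE FROM orders")                        # full refresh of the target
    tgt.executemany("INSERT INTO orders VALUES (?, ?, ?)", transformed)
    return len(transformed)

def incremental_cdc(last_sync):
    """Incremental delivery: move only rows changed since the last sync."""
    rows = src.execute(
        "SELECT id, amount, updated_at FROM orders WHERE updated_at > ?",
        (last_sync,)).fetchall()
    tgt.executemany("INSERT OR REPLACE INTO orders VALUES (?, ?, ?)", rows)
    return len(rows)

print(bulk_etl())            # 3 rows moved on the initial full load
print(incremental_cdc(100))  # 1 row (id 3) moved on the incremental pass
```

The trade-off the sketch shows: a full refresh is simple but moves every row, while change-data-capture delivery moves only what changed, at the cost of tracking a sync point.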
Business glossary and lineage for data governance
Improve visibility and information governance by enabling complete, authoritative views of information with proof of lineage and quality. Views can be made widely available and reusable as shared services, while rules are maintained centrally.
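As a rough illustration, end-to-end lineage can be modeled as a directed graph over assets and traced upstream to every contributing source. The asset names below are hypothetical, and a real catalog records far richer metadata per edge:

```python
# Toy lineage graph: each asset maps to the assets it is derived from.
lineage = {
    "report.revenue":       ["warehouse.sales_fact"],
    "warehouse.sales_fact": ["staging.orders", "staging.customers"],
    "staging.orders":       ["crm.orders_raw"],
    "staging.customers":    ["crm.customers_raw"],
}

def upstream_sources(asset, graph):
    """Walk the graph to collect every asset feeding the given one."""
    seen, stack = set(), [asset]
    while stack:
        node = stack.pop()
        for parent in graph.get(node, []):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return sorted(seen)

print(upstream_sources("report.revenue", lineage))
```

Answering "where did this report's numbers come from?" is exactly this kind of upstream traversal, which is why lineage views remain reusable as shared services while the graph itself is maintained centrally.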
Explore relationships between business assets
Find assets in your enterprise, explore their relationships and collaborate using Enterprise Search and IBM Information Governance Catalog.
Assess, analyze and monitor data quality
Load cleansed information into analytical views to enable you to monitor and maintain data quality. Reuse these views across the enterprise to establish data quality metrics that align with business objectives, allowing your organization to quickly uncover and fix data quality issues.
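A data quality metric of this kind boils down to the pass rate of a rule over a record set, compared against a target. The records, rule names and threshold below are invented for illustration, not InfoSphere's actual rule syntax:

```python
import re

# Sample records with deliberate quality problems.
records = [
    {"id": 1, "email": "ana@example.com", "age": 34},
    {"id": 2, "email": "not-an-email",    "age": 29},
    {"id": 3, "email": None,              "age": 210},
]

# Each rule is a predicate a record must satisfy.
rules = {
    "email_present": lambda r: r["email"] is not None,
    "email_valid":   lambda r: bool(r["email"])
                     and re.match(r"[^@]+@[^@]+\.[^@]+", r["email"]) is not None,
    "age_plausible": lambda r: 0 <= r["age"] <= 120,
}

def quality_metrics(records, rules):
    """Return the pass rate per rule: the basis for a reusable quality metric."""
    return {name: sum(rule(r) for r in records) / len(records)
            for name, rule in rules.items()}

metrics = quality_metrics(records, rules)
for name, rate in metrics.items():
    flag = "OK" if rate >= 0.9 else "REVIEW"  # illustrative threshold
    print(f"{name}: {rate:.0%} {flag}")
```

Tracking these pass rates over time is what lets a team see whether quality is drifting and tie alerts to business objectives rather than to one-off spot checks.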
Integrate with Hadoop
Run data integration, data cleansing, and data profiling and analysis workloads on the data nodes of a Hadoop cluster, where your big data is stored, to minimize data movement.