Make data ready for AI with ICP for Data on Power Systems
As artificial intelligence (AI) capabilities mature, enterprise leaders are continuously evaluating use cases that can transform their business. A key challenge slowing AI adoption is data that is abundant but untamed, and therefore not ready for AI. There is a strong correlation between companies that outperform in AI adoption and those with a robust data infrastructure aligned to their business architecture.
According to the 2018 IBM Institute for Business Value survey, Shifting Toward Enterprise-grade AI, 65 percent of outperformers surveyed capture, manage and access business, technology and operational information on key corporate data with a high degree of consistency across the organization, versus 52 percent of all others surveyed.
IBM recently introduced IBM Cloud Private for Data (ICP4D), a data and analytics platform, to help make your data estate ready for AI. It simplifies how you collect, organize and analyze your data in a cloud-native platform and feed your AI models the data they need. IBM Cloud Private for Data can now be deployed on IBM Power Systems™ – the server infrastructure purpose-built for data-intensive and AI workloads.
IBM Cloud Private for Data on Power Systems can help:
Collect, connect and access data
- Connect and discover content from multiple data sources across your organization
- Provision databases and virtualize data access
Build descriptive, predictive and prescriptive models
- Create machine learning, deep learning, optimization and other advanced mathematical models
- Design your models programmatically or visually with popular open source tooling and IBM frameworks
- Train at scale with support for distributed compute and GPUs
Govern, search and find data
- Find data and analytics assets in the Enterprise Catalog
Understand and prepare data for analysis
- Understand, cleanse and prepare your data by visually building data preparation pipelines
Manage and deploy models
- Manage your models across dev, test, staging, and prod
- Deploy your models and scale automatically for online, batch or streaming use cases with SLAs
- Monitor model performance and automatically trigger retraining and redeployment as rolling upgrades
Create analytics applications
- Incorporate trusted and governed models into applications, dashboards and operational systems
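The stages above – prepare data, build a model, train it, then deploy and score – can be sketched in miniature with generic open source tooling. This is an illustrative example only, using plain scikit-learn and a sample dataset; it is not the ICP4D API, which provides its own visual and programmatic tooling for each of these stages.

```python
# Illustrative only: a minimal prepare -> train -> score loop using
# open source scikit-learn, mirroring the pipeline stages listed above.
# These are generic library calls, not the ICP4D platform API.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# "Collect and access data": here, a bundled sample dataset stands in
# for data discovered and connected through the platform.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = Pipeline([
    ("prep", StandardScaler()),                  # data preparation stage
    ("clf", LogisticRegression(max_iter=200)),   # model build stage
])
model.fit(X_train, y_train)                      # training stage

# "Manage and deploy": in practice the fitted model would be versioned,
# deployed and monitored; here we simply score held-out data.
accuracy = model.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

In a platform deployment, the training step would run at scale across distributed compute and GPUs, and the fitted model would be promoted through dev, test, staging and production rather than scored in-process.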
The combination of ICP for Data and Power Systems delivers unprecedented value for clients looking to leverage all their data to infuse AI into every aspect of their business. POWER9 servers are designed for data-intensive, advanced analytics and AI workloads that help fuel critical insights to differentiate in and succeed in the AI era.
POWER9, with its advanced core and chip architecture, delivers 2X performance per core[1], 2.6X memory capacity per socket[2] and 1.8X memory bandwidth per socket[3] compared to x86 processors. Its state-of-the-art I/O subsystem technology, including next-generation NVIDIA NVLink, PCIe Gen4 and OpenCAPI, is unmatched in the industry.
In performance benchmarks, Power Systems has consistently performed significantly better than competitive platforms tested for data, analytics and AI workloads. For example, in a Db2 Warehouse benchmark, IBM was able to run 2.54x more queries per hour per core than comparable x86 servers at a 57 percent lower total solution cost. Similarly, we saw an impressive performance and price-performance advantage when running machine learning and deep learning model training on Power Systems. These results, combined with the optimizations for running container-based applications, position Power Systems as the ideal platform for deploying ICP for Data.
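As a quick sanity check, the per-core and per-socket ratios quoted here and in the endnotes can be reproduced directly from the raw measurements the endnotes give (operations or queries per hour divided by total core count, and the per-socket capacity and bandwidth figures):

```python
# Reproduce the ratios quoted in the endnotes from their raw figures.
# All numbers below are taken verbatim from the endnotes.

# Endnote 1 (1): Enterprise Database, 20-core POWER9 L922 vs 2x20-core Xeon Gold 6148
per_core_p9 = 1_039_365 / 20            # Ops/sec per core on POWER9
per_core_x86 = 932_273 / 40             # Ops/sec per core on x86
enterprise_db_ratio = per_core_p9 / per_core_x86
assert 2.21 < enterprise_db_ratio < 2.24    # quoted as 2.22X per core

# Endnote 1 (2): Db2 Warehouse, 20-core S922 vs 2x24-core Xeon Platinum 8168
db2_ratio = (3242 / 20) / (3203 / 48)       # QpH per core
assert 2.42 < db2_ratio < 2.44              # quoted as 2.43X per core

# Endnote 1 (3): DayTrader 7, 24-core S924 vs 2x28-core Xeon Platinum 8180
daytrader_ratio = (32221.4 / 24) / (23497.4 / 56)   # tps per core
assert 3.18 < daytrader_ratio < 3.21        # quoted as 3.19X per core

# Endnote 2: memory capacity per socket, 4 TB vs 1.5 TB
memory_ratio = 4 / 1.5
assert 2.6 < memory_ratio < 2.7             # quoted as 2.6X

# Endnote 3: memory bandwidth per socket, 230 GB/sec vs 128 GB/sec
bandwidth_ratio = 230 / 128
assert 1.79 < bandwidth_ratio < 1.81        # quoted as 1.8X

print(f"per-core ratios: {enterprise_db_ratio:.2f}, "
      f"{db2_ratio:.2f}, {daytrader_ratio:.2f}")
```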
A broad catalog of value-added components of ICP for Data will be made available on Power in the second quarter and second half of 2019[4]. Clients will have the flexibility to deploy ICP for Data on an existing ICP deployment or as a standalone solution. I am looking forward to another banner year for Power Systems as the team delivers innovative solutions that help drive significant client value!
1. 2X performance per core is based on IBM internal measurements as of 2/28/18 on various system configurations and workload environments, including (1) Enterprise Database (2.22X per core): 20c L922 (2×10-core/2.9 GHz/256 GB memory): 1,039,365 Ops/sec versus 2-socket Intel Xeon Skylake Gold 6148 (2×20-core/2.4 GHz/256 GB memory): 932,273 Ops/sec. (2) Db2 Warehouse (2.43X per core): 20c S922 (2×10-core/2.9 GHz/512 GB memory): 3242 QpH versus 2-socket Intel Xeon Skylake Platinum 8168 (2×24-core/2.7 GHz/512 GB memory): 3203 QpH. (3) DayTrader 7 (3.19X per core): 24c S924 (2×12-core/3.4 GHz/512 GB memory): 32221.4 tps versus 2-socket Intel Xeon Skylake Platinum 8180 (2×28-core/2.5 GHz/512 GB memory): 23497.4 tps.
2. 2.6X memory capacity per socket is based on 4 TB per socket for POWER9 versus 1.5 TB per socket for the Intel Xeon Scalable platform. Intel product brief: https://www.intel.com/content/dam/www/public/us/en/documents/product-briefs/xeon-scalable-platform-brief.pdf?asset=14606
3. 1.8X memory bandwidth per socket is based on 230 GB/sec per socket for POWER9 versus 128 GB/sec per socket for the Intel Xeon Scalable platform. Intel product brief: https://www.intel.com/content/dam/www/public/us/en/documents/product-briefs/xeon-scalable-platform-brief.pdf?asset=14606
4. IBM’s statements regarding its plans, directions, and intent are subject to change or withdrawal without notice and at IBM’s sole discretion. Information regarding potential future products is intended to outline our general product direction and it should not be relied on in making a purchasing decision.
The information mentioned regarding potential future products is not a commitment, promise, or legal obligation to deliver any material, code or functionality. Information about potential future products may not be incorporated into any contract. The development, release, and timing of any future features or functionality described for our products remains at our sole discretion. Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon many factors, including considerations such as the amount of multiprogramming in the user’s job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve results similar to those stated here.