Data Scientist Zone

Explore tools, resources, and demos to help you build and deploy machine learning and deep learning models on IBM Z and LinuxONE.

Overview

Take advantage of faster inferencing and higher throughput by deploying your machine learning and deep learning models directly on IBM Z and LinuxONE. Build and train models anywhere using open-source data science frameworks, then deliver increased business impact with lower latency, scalability, and production-ready insights at the transaction level.
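As a rough illustration of co-located, batched scoring, here is a minimal Python sketch using ONNX Runtime; the model file fraud_model.onnx, its input layout, and the feature count are hypothetical stand-ins rather than part of any IBM tooling.

    # Minimal sketch: batched local inference with ONNX Runtime.
    # Hypothetical assumptions: a model exported to "fraud_model.onnx"
    # whose single input accepts float32 batches of 32 features.
    import numpy as np
    import onnxruntime as ort

    session = ort.InferenceSession("fraud_model.onnx")
    input_name = session.get_inputs()[0].name

    # Score 128 transactions in one call, keeping inference co-located
    # with the application instead of calling a remote scoring service.
    batch = np.random.rand(128, 32).astype(np.float32)
    scores = session.run(None, {input_name: batch})[0]
    print(scores.shape)

Batching requests locally like this avoids the per-request network round trip that dominates latency when inference is sent off platform.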

Value

20x
Lower inferencing response time vs. sending the same inferencing operations off platform¹

19x
Higher throughput with inferencing vs. sending the same inferencing operations off platform¹

Footnotes

  • IBM z16 with z/OS delivers up to 20x lower response time and up to 19x higher throughput when co-locating applications and inferencing versus sending the same inferencing operations to a compared x86 cloud server with 60 ms average network latency. Disclaimer: Performance result is extrapolated from IBM internal tests running local inference operations in a z16 LPAR with 48 IFLs and 128 GB memory on Ubuntu 20.04 (SMT mode) using a synthetic credit card fraud detection model (https://github.com/IBM/ai-on-z-fraud-detection) exploiting the Integrated Accelerator for AI. The benchmark was run with 8 parallel threads, each pinned to the first core of a different chip. The lscpu command was used to identify the core-chip topology. A batch size of 128 inference operations was used. Results may vary.
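For readers reproducing a comparable setup, the following minimal sketch (our illustration, not the benchmark harness itself) shows how the core-chip topology can be inspected with lscpu and a process pinned to a chosen core; the CPU id used is a hypothetical example.

    # Minimal sketch (not the benchmark harness): inspect topology and
    # pin the current process to a chosen CPU before running inference.
    import os
    import subprocess

    # Show the CPU-to-socket/core mapping, as the footnote's benchmark
    # did with the lscpu command.
    print(subprocess.run(["lscpu", "-e=CPU,SOCKET,CORE"],
                         capture_output=True, text=True).stdout)

    # Hypothetical choice: pin this process to CPU 0. The benchmark
    # pinned 8 threads, one to the first core of each chip.
    os.sched_setaffinity(0, {0})
    print("affinity:", os.sched_getaffinity(0))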