Data Scientist zone
Take advantage of faster inferencing and higher throughput by easily deploying your machine learning and deep learning models on IBM Z and LinuxONE. Build and train models anywhere using open-source data science frameworks, and deliver increased business impact through lower latency, scalability, and transaction-level insights with production readiness.
Explore Use Cases and associated technical resources
Spotlight Data Scientist Resources
- IBM z16 with z/OS delivers up to 20x lower response time and up to 19x higher throughput when co-locating applications and inferencing, versus sending the same inferencing operations to a compared x86 cloud server with 60 ms average network latency.*

*Disclaimer: Performance result is extrapolated from IBM internal tests running local inference operations in a z16 LPAR with 48 IFLs and 128 GB memory on Ubuntu 20.04 (SMT mode) using a synthetic credit card fraud detection model (https://github.com/IBM/ai-on-z-fraud-detection) exploiting the Integrated Accelerator for AI. The benchmark was run with 8 parallel threads, each pinned to the first core of a different chip. The lscpu command was used to identify the core-chip topology. A batch size of 128 inference operations was used. Results may vary.
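The disclaimer's thread-pinning setup (one inference thread per chip, each bound to that chip's first core) can be sketched in Python. This is a minimal illustration, not IBM's benchmark harness: the topology helper is a placeholder for parsing `lscpu -e` output, the per-thread work is a dummy stand-in for a model inference call, and `os.sched_setaffinity` is Linux-only.

```python
import os
import threading

NUM_THREADS = 8   # one inference thread per chip, as in the benchmark description
BATCH_SIZE = 128  # inference operations per batch, as in the benchmark description

def first_core_per_chip(num_chips):
    # Placeholder topology: assumes chips own contiguous CPU ranges, so the
    # first core of chip i is CPU i * cores_per_chip. A real run would derive
    # the core-chip mapping from `lscpu -e` output instead.
    total = os.cpu_count() or 1
    cores_per_chip = max(1, total // num_chips)
    return [(i * cores_per_chip) % total for i in range(num_chips)]

def inference_worker(cpu_id, results, idx):
    # Pin the calling thread to a single CPU (Linux-only; pid 0 = this thread).
    os.sched_setaffinity(0, {cpu_id})
    # Dummy stand-in for scoring one batch; a real harness would invoke the
    # fraud-detection model here with BATCH_SIZE records.
    results[idx] = sum(range(BATCH_SIZE))

def run_pinned_threads():
    cpus = first_core_per_chip(NUM_THREADS)
    results = [None] * NUM_THREADS
    threads = [threading.Thread(target=inference_worker, args=(c, results, i))
               for i, c in enumerate(cpus)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

On systems with fewer CPUs than threads, the modulo in the topology helper simply wraps CPU IDs so the sketch still runs; the benchmark's point is that pinning avoids threads migrating between chips mid-run.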