The IBM® Telum® processor powers IBM LinuxONE 4 with AI acceleration, security and high performance for mission-critical workloads. Its successor, Telum II, is the foundation of IBM LinuxONE 5, enhancing performance, memory capacity and AI capabilities. Designed for scalability, these processors optimize AI, cloud and transactional applications across industries like banking, finance and healthcare while delivering speed, reliability and energy efficiency.
Telum enables on-chip AI for real-time fraud detection and risk analytics. Telum II extends on-chip AI on IBM LinuxONE with 4x the compute power and a high-speed data processing unit (DPU), enabling more complex deep learning models and faster inference.3
Both processors offer advanced security, end-to-end encryption and confidential computing. Telum II adds enhanced quantum-safe cryptography and hardware-based tamper detection, further securing sensitive workloads.4
Telum is designed for sustainability, reducing power consumption while maintaining performance. Telum II improves efficiency further with up to 15% core power reduction, optimizing operations without compromising speed.5
Designed for 99.999999% uptime, Telum ensures continuous operation with self-healing and automatic failover. Telum II enhances system redundancy, minimizing downtime for mission-critical environments.
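The eight-nines availability target can be made concrete with a quick back-of-the-envelope calculation (a sketch of what the percentage implies, not an IBM-published downtime figure):

```python
# Convert an availability percentage into implied downtime per year.
def downtime_seconds_per_year(availability: float) -> float:
    seconds_per_year = 365.25 * 24 * 3600  # Julian year
    return (1 - availability) * seconds_per_year

# Eight nines: roughly a third of a second of downtime per year.
print(downtime_seconds_per_year(0.99999999))
```

By comparison, the more familiar "five nines" (99.999%) allows a little over five minutes of downtime per year, which is why the extra three nines matter for mission-critical environments.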
Telum provides 8 cores, each with two threads (16 threads in total), for parallel processing at up to 5.5 GHz. Telum II expands on-chip cache, introduces a DPU for accelerated I/O and further reduces latency for data-intensive applications.6
Telum supports cloud-native workloads through Kubernetes, Red Hat® OpenShift® and major cloud providers. Telum II expands integration capabilities and optimizes workload portability across hybrid and multicloud environments.7
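In a mixed-architecture Kubernetes or OpenShift cluster, for example, a workload can be pinned to LinuxONE (s390x) nodes using the standard `kubernetes.io/arch` node label. The manifest below is a generic Kubernetes scheduling sketch (the workload name and image are hypothetical, not IBM-specific):

```yaml
# Schedule a deployment onto s390x (LinuxONE) nodes in a mixed-arch cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fraud-scoring              # hypothetical workload name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: fraud-scoring
  template:
    metadata:
      labels:
        app: fraud-scoring
    spec:
      nodeSelector:
        kubernetes.io/arch: s390x  # well-known label set by the kubelet
      containers:
      - name: scorer
        image: registry.example.com/fraud-scorer:latest  # hypothetical image
```

Because the kubelet populates `kubernetes.io/arch` automatically, no custom labeling is required; the same pattern works for multi-arch container images across hybrid clusters.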
The IBM Spyre™ Accelerator is a 75W PCIe Gen 5 AI accelerator with 128 GB of LPDDR5 memory, optimized for generative AI and multimodal LLMs.8 Featuring 32 (+2) cores with a 2 MB scratchpad per core and >55% core utilization, Spyre scales by card and drawer, enabling businesses to handle complex AI inferencing efficiently across enterprise applications.