Software-defined computing

Evolving high-performance computing to deal with mountains of data

Whether it’s the universe, the Earth’s population, your waistline, or data and compute capacity, coping with expansion is high on everyone’s agenda.

Data volumes are growing by leaps and bounds, while at the same time compute capabilities are being greatly enhanced by technologies such as hardware accelerators. Advances in storing and processing massive amounts of data have enabled new generations of very high-resolution models, bringing better-quality products to market faster than ever before. This is not without its challenges, however: managing and moving large data volumes has become a major pain point in computing.

To address these data management challenges, organizations with high-performance computing (HPC) environments are looking at how to minimize data movement and run workloads where the data already resides. These are complex questions, especially with multiple compute clusters scattered across the globe, all while competition and pressure to cut costs mount.

Expanding from traditional compute-centric HPC to a data-centric model is a natural way to meet these challenges. Here are five key considerations for a data-centric approach to high-performance computing:

  • Orchestration of storage and compute resources: moving and caching data only when it makes sense
  • Tailoring resources to workloads: considering workload type when selecting the storage medium and processors
  • Storage intelligence: directing applications to the closest copy of the data
  • Storage efficiency: ingesting data more quickly and speeding up operations on data
  • Data management, including long-term storage
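The first three considerations share a common idea: schedule work where the data already lives, and move only what is missing. As a rough illustration, the following sketch picks the compute cluster that minimizes the bytes that must be transferred for a job. All names here (`Cluster`, `place_job`) are hypothetical for illustration, not part of any real HPC scheduler's API.

```python
from dataclasses import dataclass, field

@dataclass
class Cluster:
    name: str
    # dataset name -> bytes of that dataset already resident locally
    local_data: dict = field(default_factory=dict)

def place_job(inputs: dict, clusters: list) -> tuple:
    """Pick the cluster that minimizes bytes to be moved in.

    inputs: dataset name -> total bytes the job needs.
    Returns (best_cluster, bytes_to_move).
    """
    best, best_cost = None, None
    for c in clusters:
        # Any input data not already resident here must be transferred.
        cost = sum(size - min(size, c.local_data.get(ds, 0))
                   for ds, size in inputs.items())
        if best_cost is None or cost < best_cost:
            best, best_cost = c, cost
    return best, best_cost

# Example: a 1 TB dataset mostly cached in the EU cluster.
eu = Cluster("eu-west", {"genomes": 900_000_000_000})
us = Cluster("us-east", {"genomes": 100_000_000_000})
cluster, moved = place_job({"genomes": 1_000_000_000_000}, [eu, us])
# The job lands on eu-west; only the missing 100 GB needs to move.
```

A real data-centric scheduler would also weigh network bandwidth, queue depth, and storage tier, but the core trade-off is the same: compute follows the data, not the other way around.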

Learn more about these five key considerations for implementing a data-centric HPC approach in the white paper here. Data-centric HPC solutions for storage and compute can deliver faster results and lower investment, while making your infrastructure more flexible.

As for the waistline reduction strategy, we’ll leave that up to you and your New Year’s resolutions!
