March 22, 2017 | Written by: IBM Academy of Technology
Systems Hardware Technical Leader
IBM Systems Hardware
The expression artificial intelligence (AI) appeared in 1956, with the objective of building systems that think and act like humans. Machine learning (ML) followed in the 1970s with a more pragmatic and humble approach: algorithms that accumulate knowledge and intelligence from experience, guided by their own learning rather than by explicit programming. But the technology's growth was hampered by a lack of data and computing power.
Today, data transforms industries and professions. Cognitive algorithms follow a classical loop: learn, transmit, and improve what needs improving. AI can learn from expertise and existing knowledge such as books, images, videos or scientific papers.
And data is hardly scarce today. It flows from every IoT device, replacing guessing and approximations with precise information (1).
Why IT infrastructure is key
Let's use an analogy to think about infrastructure. When we speak about transmitting, we speak about systems: systems able to process information, and systems tuned for cognitive computing. A well-known system that processes huge amounts of data and provides cognitive insights in real time is the human brain, with a memory capacity estimated at more than 2.5 million gigabytes, more than 80 billion neurons, and more than 100 trillion synapses. The brain uses only around 20 watts continuously, occupies around 1,450 cm³, and weighs an average of 1,300 g. Ideally, computing systems should process data with the efficiency and performance of the human brain.
What happens if the System doesn’t reach the expected speed and efficiency?
If the system lacks adequate capacity or efficiency, it will run out of memory and lose data, hit I/O bottlenecks, fail to store data in the right location, and fail to provide answers when they are needed. In short, the system won't be able to handle cognitive workloads.
Servers, storage and workload management need to be designed from the ground up for cognitive workloads. There are several critical requirements such as:
- rapid access to data (low latency and fast storage)
- faster time to insight (compute infrastructure designed for big data)
- accelerated performance for complex analytics and machine-learning algorithms (hardware acceleration)
- prevention of data-ingestion bottlenecks (unified access to block, file and object data).
Beyond caches, memory bandwidth and I/O bandwidth, another important element of server design is the use of new types of hardware accelerators, such as co-processors, accelerator units built into the processor, GPUs and FPGAs, to offload processor-intensive tasks to more optimized hardware units.
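To make the offload idea concrete, here is a minimal sketch (an illustrative example, not taken from the study): dense matrix multiplication is the core operation of deep-learning training and inference, and exactly the kind of processor-intensive task that GPUs and FPGAs are built to accelerate. The code below times one such multiply on the CPU with NumPy; frameworks such as TensorFlow or PyTorch dispatch this same operation to a GPU when one is present.

```python
import time
import numpy as np

# Dense matrix multiply: the workhorse of neural-network workloads,
# and the primary target for GPU/FPGA offload.
n = 512
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

start = time.perf_counter()
c = a @ b                      # runs on the CPU here
elapsed = time.perf_counter() - start

# An n x n multiply costs roughly 2*n^3 floating-point operations.
gflops = (2 * n**3) / elapsed / 1e9
print(f"{n}x{n} matmul: {elapsed * 1000:.2f} ms, ~{gflops:.1f} GFLOP/s")
```

Comparing the achieved GFLOP/s figure against an accelerator's peak throughput is a quick way to see why offloading such kernels pays off.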
Businesses today require cognitive systems that can gain insight from the structured and unstructured data flowing through their IT infrastructure. In our latest study (2), we describe cognitive workloads such as deep learning, machine learning and text mining, survey the main solutions in the market and the open source community, and explain why infrastructure is a key element.
Read the complete study:
(1) IBM point of view, 2015; https://www.ibm.com/it-infrastructure/us-en/
(2) Infrastructure Designed for Cognitive Workloads: Why Is It Crucial? Xavier Vasques, Laurent Vanel, Madeline Vega, Angshuman Roy, Gerd Franke, Jun Sawada, Raghava Reddy Kapu Veera, Shantan Kethireddy
Posted on behalf of Xavier Vasques.
These are the opinions of the author; while he is a distinguished member of our Academy and of IBM, all thoughts expressed are solely his own.