How 4 organizations went from here to AI: IBM podcast series
Dez Blanchfield speaks with business leaders about artificial intelligence and deep learning adoption in the “From Here to AI” podcast series from IBM Power Systems.
When you start to investigate artificial intelligence (AI), or branch out to buy a couple of AI servers to tinker with for your organization, the process of implementing a full AI solution can seem daunting.
With the help of four business executives and AI leaders, along with digital transformation expert and avid podcaster Dez Blanchfield, we set out to outline the natural progression of implementing AI in the data center. No matter what stage of the journey you are on, these podcasts should help you get “from here to AI.”
Below is a quick overview of each session. We’ve posted them as a series so you can binge-listen if you have the time, or you can tee them up separately and plug into the ones that interest you most. The series can also be found on iTunes, Google Play, SoundCloud, Stitcher and other major podcast platforms*.
From Here to AI podcast series’ episode list
- Episode 1—Build a Data Strategy
What is changing in the business landscape that is driving the need for AI? What are the most important factors to consider when building an AI data strategy? And how do you align them with your specific business objectives? Dez speaks with James Wade, Director of Application Hosting at healthcare provider Guidewell. Wade explains how the company’s commitment to open source and a “permission to fail” culture helped him build a data strategy around a very simple concept.
- Episode 2—Infrastructure for the AI Era
What are the overall trends regarding the use of GPUs for acceleration and artificial intelligence? What should you know about the data center challenge that’s underpinning AI? And where are the possible gains from traditional machine learning (ML) and deep learning (DL)? Dez speaks with Dave Salvator, Senior Product Marketing Manager at chipset provider NVIDIA, where accelerators and programmable GPUs are nothing new. Salvator talks about the current speed of ML and DL in neural networks, and the advantages of using GPU-equipped servers for high-performance computing (HPC) and AI.
- Episode 3—A Deep Dive on Deep Learning… and Beyond
What have we learned about deep learning in HPC? How can that help us within the enterprise? And what might the future of DL look like? Dez chats with Jack Wells, Director of Science at Oak Ridge National Laboratory (ORNL). ORNL just launched Summit, the world’s fastest supercomputer built on the new IBM POWER9™-based servers. Wells dives into the recent research performed on supercomputers like Summit, how the computer was designed and built, and where DL might be headed.
- Episode 4—How to Put Your AI Plan into Practice
How is AI impacting the costs of running a data center? What about the costs of implementation, scalability and training? At Think 2018, Dez sat down with Ari Juntunen, Chief Technical Officer of Elinar, a content and information company he started at his kitchen table. Juntunen talks about spotting the AI opportunity in his business, how Elinar grew an AI practice, and why seeing is believing.
For more information:
*Find “From Here to AI” using the following podcast applications and services: iTunes, Google Play, SoundCloud, Acast, Mixcloud, iVoox, Listen Notes, Stitcher, Player FM, Ustream, Podomatic, Myspace, YouTube, TuneIn, Ubook, MixLr, Tumblr, Podchaser, Pod Paradise, podbay.fm, Castbox, Castify, Anchor, and Subscribe On Android by BluBrry.