Q&A with Intel: What data virtualization means for the insight-driven enterprise

2 minute read | January 14, 2019

Data virtualization lets businesses handle big data and query multiple data sources quickly and simply without moving the data. As big data continues to grow, enterprises need to manage all of their data, regardless of where it lives, and they need a platform agile enough to query across data silos.
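To make the idea concrete, here is a minimal sketch of that "query without moving data" pattern, using Python's built-in sqlite3 module and ATTACH DATABASE as a stand-in for a virtualization layer. The table names and data are hypothetical; a real data virtualization platform federates heterogeneous sources, but the principle is the same: the join happens at query time, and only the result is assembled.

```python
import sqlite3, tempfile, os

tmp = tempfile.mkdtemp()
crm_path = os.path.join(tmp, "crm.db")
sales_path = os.path.join(tmp, "sales.db")

# Populate two separate databases, standing in for two data silos.
with sqlite3.connect(crm_path) as crm:
    crm.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
    crm.executemany("INSERT INTO customers VALUES (?, ?)",
                    [(1, "Acme"), (2, "Globex")])

with sqlite3.connect(sales_path) as sales:
    sales.execute("CREATE TABLE orders (customer_id INTEGER, amount REAL)")
    sales.executemany("INSERT INTO orders VALUES (?, ?)",
                      [(1, 100.0), (1, 40.0), (2, 250.0)])

# A single connection queries both sources in place: no data is copied
# into a central store; only the query result crosses the boundary.
hub = sqlite3.connect(crm_path)
hub.execute(f"ATTACH DATABASE '{sales_path}' AS sales")
rows = hub.execute("""
    SELECT c.name, SUM(o.amount) AS total
    FROM customers AS c
    JOIN sales.orders AS o ON o.customer_id = c.id
    GROUP BY c.name
    ORDER BY c.name
""").fetchall()
print(rows)  # [('Acme', 140.0), ('Globex', 250.0)]
```

The design point this illustrates is that the consumer writes one query against a single logical schema while the sources stay where they are.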

But what do the experts, working day to day with data, have to say about data virtualization?

The IBM Big Data and Analytics Hub recently interviewed Melvin Greer, chief data scientist of the public sector for the Americas at Intel, to discuss how data virtualization can help businesses achieve a truly data-centric strategy and accelerate data monetization.

Big Data and Analytics Hub: As chief data scientist of the public sector for the Americas at Intel, what are your main objectives and areas of focus?

Melvin Greer: I’m responsible for building Intel’s data science platform through graph analytics, machine learning and cognitive computing, to help accelerate transformation of data into a strategic asset for public sectors and commercial enterprises. As Intel continues to mature into a data-centric company, I help our customers harness the power of their data.

BDAH: What are the top needs of companies as they look to make use of data as a strategic asset?

Greer: Understanding data science and AI is top of mind for C-level executives and agency leaders. They want to develop a data-centric workforce that can execute an AI strategy that works. Many senior leaders need help developing a data strategy that complements their hybrid cloud strategy. They want data governance capabilities and inventories of composable application development kits for rapid development and implementation.

BDAH: How can data virtualization help companies accelerate their ability to use data strategically and to drive outcomes?

Greer: The ability to autodiscover data sources and metadata helps identify the appropriate data for analysis and provides traceability of that data. Data integration and interoperability are therefore enhanced, providing a single point of administration for enterprise data.
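A toy sketch of what that autodiscovery step looks like, again using sqlite3 as a hypothetical source: the virtualization layer introspects a source's catalog to collect table and column metadata, which it can later use to route queries and trace where data came from. The schema here is invented for illustration.

```python
import sqlite3

# A hypothetical source with a couple of tables.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER, name TEXT, dob TEXT)")
conn.execute("CREATE TABLE visits (patient_id INTEGER, visit_date TEXT)")

# "Discover" the source's metadata by reading its own catalog.
catalog = {}
for (table,) in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"):
    cols = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
    catalog[table] = cols

print(catalog)
# {'patients': ['id', 'name', 'dob'], 'visits': ['patient_id', 'visit_date']}
```

In a production virtualization platform this catalog would span many heterogeneous sources and feed the single point of administration Greer describes.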

BDAH: What makes IBM Cloud Pak for Data (formerly IBM Cloud Private for Data), the IBM data platform with data virtualization, unique in the industry?

Greer: Speed. Speed of access to distributed data sources, and speed of system optimization via machine learning and adaptive algorithms. This means that organizations can realize the benefits of data analytics faster, and more data no longer means more work.

BDAH: What use cases will benefit the most from Cloud Pak for Data?

Greer: Health and life sciences (immunotherapy), retail (hyper-personalization) and security (cyber intelligence). These industries will benefit from the ability to establish a single view of enterprise data. These complex use cases will leverage support for a wide range of programming languages and frictionless integration across multiple enterprise and cloud environments.

BDAH: What other technology advancements do you expect will have a strong impact on speeding up how companies monetize their data?

Greer: Code and workload optimizations will continue to increase performance. Algorithms and models optimized for deep learning and neural networks will expand the appropriate use cases and drive developer and data scientist adoption.

Interested in more about data virtualization and Cloud Pak for Data? Register for the on-demand webinar, “Accelerating AI Innovation with Data Virtualization” to hear directly from IBM and Intel on how you can benefit from this technology.