January 14, 2019 By Lauren Frazier 2 min read

Data virtualization gives businesses the power to handle big data, making queries across multiple data sources fast and simple without moving the data itself. As big data continues to grow, enterprises need to manage all of their data, regardless of where it lives, and they need a platform with the agility to query across data silos.
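To make the core idea concrete, here is a minimal sketch of federated querying using only Python's standard-library SQLite. The two database files stand in for separate data silos, and SQLite's `ATTACH` plays the role of the virtualization layer: one SQL statement joins across both sources without copying data between them. This is an illustrative analogy, not how any particular data virtualization product is implemented.

```python
import os
import sqlite3
import tempfile

# Create two separate "silos": an orders database and a customer database.
tmp = tempfile.mkdtemp()
sales_path = os.path.join(tmp, "sales.db")
crm_path = os.path.join(tmp, "crm.db")

with sqlite3.connect(sales_path) as db:
    db.execute("CREATE TABLE orders (customer_id INTEGER, amount REAL)")
    db.executemany("INSERT INTO orders VALUES (?, ?)",
                   [(1, 120.0), (2, 75.5), (1, 30.0)])

with sqlite3.connect(crm_path) as db:
    db.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
    db.executemany("INSERT INTO customers VALUES (?, ?)",
                   [(1, "Acme Corp"), (2, "Globex")])

# The "virtualization layer": one connection attaches both silos and
# answers a single federated query -- no data is moved between files.
conn = sqlite3.connect(sales_path)
conn.execute(f"ATTACH DATABASE '{crm_path}' AS crm")
rows = conn.execute("""
    SELECT c.name, SUM(o.amount) AS total
    FROM orders o
    JOIN crm.customers c ON c.id = o.customer_id
    GROUP BY c.name
    ORDER BY c.name
""").fetchall()
print(rows)  # [('Acme Corp', 150.0), ('Globex', 75.5)]
```

Real data virtualization platforms generalize this pattern across heterogeneous engines (relational, NoSQL, object storage) and add governance, metadata discovery, and query optimization on top.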

But what do the experts, working day to day with data, have to say about data virtualization?

The IBM Big Data and Analytics Hub recently interviewed Melvin Greer, chief data scientist of the public sector for the Americas for Intel, to discuss how data virtualization can help businesses achieve a truly data-centric strategy and accelerate data monetization.

Big Data and Analytics Hub: As chief data scientist of the public sector for the Americas at Intel, what are your main objectives and areas of focus?

Melvin Greer: I’m responsible for building Intel’s data science platform through graph analytics, machine learning and cognitive computing, to help accelerate the transformation of data into a strategic asset for the public sector and commercial enterprises. As Intel continues to mature into a data-centric company, I help our customers harness the power of their data.

BDAH: What are the top needs of companies as they look to make use of data as a strategic asset?

Greer: Understanding data science and AI is top of mind for C-level executives and agency leaders. They want to develop a data-centric workforce that can execute an AI strategy that works. Many senior leaders need help developing a data strategy that complements their hybrid cloud strategy. They want data governance capabilities and inventories of composable application development kits for rapid development and implementation.

BDAH: How can data virtualization help companies accelerate their ability to use data strategically and to drive outcomes?

Greer: The ability to autodiscover data sources and metadata helps identify the appropriate data for analysis and provides traceability of that data. Data integration and interoperability are therefore enhanced, providing a single point of administration for enterprise data.

BDAH: What makes IBM Cloud Pak for Data (formerly IBM Cloud Private for Data), the IBM data platform with data virtualization, unique in the industry?

Greer: Speed. Speed of access to distributed data sources, and speed of system optimization via machine learning and adaptive algorithms. This means that organizations can realize the benefits of data analytics faster, and more data no longer means more work.

BDAH: What use cases will benefit the most from Cloud Pak for Data?

Greer: Health and life sciences (immunotherapy), retail (hyper-personalization) and security (cyber intelligence). These industries will benefit from the ability to establish a single view of enterprise data. These complex use cases will leverage support for a wide range of programming languages and frictionless integration across multiple enterprise and cloud environments.

BDAH: What other technology advancements do you expect will have a strong impact on speeding up how companies monetize their data?

Greer: Code and workload optimizations will continue to increase performance. Algorithms and models optimized for deep learning and neural networks will expand the appropriate use cases and drive developer and data scientist adoption.

Interested in more about data virtualization and Cloud Pak for Data? Register for the on-demand webinar, “Accelerating AI Innovation with Data Virtualization” to hear directly from IBM and Intel on how you can benefit from this technology.
