
Empowering the New Data Developer


After years of frustration with the trucking industry’s slow and inconsistent processes for loading and unloading cargo, Malcolm McLean watched in 1956 as his SS Ideal-X left port in New Jersey loaded with 58 of the world’s first intermodal shipping containers – a design he invented and patented.

The defining feature of his container was its simplicity: he designed it to be easy to load and unload, with no unique assembly required. It is estimated that this seemingly simple concept reduced loading costs by more than 90% (from $5.86 per ton to $0.16 per ton), which in turn led to global standardization. With cargo capacity expanding virtually without limit on the basis of that standardization, global commerce began to accelerate.

The technical world has sought standardization throughout its history as well. Some of the more recent advances include TCP/IP, the broad adoption of Linux, and now a new era with Kubernetes. The benefit of standardization in this realm is flexibility and portability: engineers build on a standard, and in the case of Kubernetes their work is fully portable, with no need to understand the underlying infrastructure. Like McLean’s intermodal shipping container, the benefits are reuse, flexibility and efficiency.

With shipping containers, the expansion of cargo drove a revolution in commerce. The cargo was the purpose, and the container the mechanism. In the current technology landscape, the cargo is data: data that is put to work by the new data developers, and that holds the insights determining competitive advantage in every industry.

Most of the advances in IT over the past few years have focused on making life easier for application developers. But no one has unleashed the data developer. Every enterprise is on the road to AI, but AI requires machine learning, which requires analytics, which in turn requires the right data and information architecture. When integrated, these building blocks essential for AI provide a clear business benefit: 6% higher productivity, according to a recent MIT Sloan study.

When enterprise intelligence is enhanced, productivity increases and standards can emerge. The only drawback is the assembly required: all systems need to talk to each other, and the data architecture must be normalized. What if an organization could establish the building blocks of AI with no assembly required?

Announced today, IBM Cloud Private for Data is an engineered solution for data science, data engineering and application building, with no assembly required. An aspiring data scientist can find relevant data, do ad-hoc analysis, build models, and deploy them into production, all within a single integrated experience.
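The product delivers that workflow through its own integrated experience; purely as a loose illustration of the four steps (not the product’s actual API), here is a minimal sketch using the open-source stand-ins pandas and scikit-learn, where the file name and column names are hypothetical:

```python
# Minimal sketch of the find -> analyze -> build -> deploy loop described
# above. pandas/scikit-learn are open-source stand-ins, not the product's
# interfaces; the dataset and column names below are hypothetical.
import pandas as pd
import joblib
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# 1. Find relevant data (here, a hypothetical local extract).
df = pd.read_csv("customer_churn.csv")

# 2. Ad-hoc analysis: a quick look at the shape and class balance.
print(df.describe())
print(df["churned"].value_counts(normalize=True))

# 3. Build a model, using the remaining columns as features.
X = df.drop(columns=["churned"])
y = df["churned"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Holdout accuracy: {model.score(X_test, y_test):.2f}")

# 4. "Deploy": persist the trained model for a serving layer to load.
joblib.dump(model, "churn_model.joblib")
```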

For the first time ever, data has superpowers. Consider the following, which only IBM Cloud Private for Data provides: seamless access to data across on-premises environments and all clouds; a cloud-native data architecture behind the firewall; and data ingestion rates of up to 250 billion events per day.

What Kubernetes solved for application developers (dependency management, portability, and so on), IBM Cloud Private for Data will solve for data developers, speeding their journey to AI. Much as McLean’s container did for commerce, it will unleash data for competitive advantage. Now is the time to make your data ready for AI.


General Manager, IBM Analytics
