
Introducing a Universal Translator for Big Data and Machine Learning


Anybody who travels to a foreign country or reads a book or newspaper written in a language they don’t speak understands the value of a good translation. Yet, in the realm of Big Data, application developers face huge challenges when combining information from different sources and when deploying data-heavy applications to different types of computers. What they need is a good translator.

That’s why IBM has donated SystemML to the open source community: a universal translator for Big Data and the machine learning algorithms that are becoming essential to processing it. SystemML enables developers who don’t have expertise in machine learning to embed it in their applications once and use it in industry-specific scenarios on a wide variety of computing platforms, from mainframes to smartphones.

Today, we’re announcing that the Apache Software Foundation, one of the leading open source organizations in the world, has accepted SystemML as an official Apache Incubator project—giving it the name Apache SystemML.

We open sourced SystemML in June when we threw our weight behind the Apache Spark project—which enables developers and data scientists to more easily integrate Big Data analytics into applications.

We believe that Apache Spark is the most important new open source project in a decade. We’re embedding Spark into our Analytics and Commerce platforms, offering Spark as a service on IBM Cloud, and putting more than 3,500 IBM researchers and developers to work on Spark-related projects.

Apache SystemML is an essential element of the Spark ecosystem of technologies. Think of Spark as the analytics operating system for any application that taps into huge volumes of streaming data. MLlib, the machine learning library for Spark, provides developers with a rich set of machine learning algorithms. And SystemML enables developers to translate those algorithms so they can easily digest different kinds of data and run on different kinds of computers.

SystemML allows a developer to write a single machine learning algorithm and automatically scale it up using Spark or Hadoop, another popular open source data analytics tool, saving highly skilled developers significant time. While other tech companies have open sourced machine learning technologies as well, most of those are specialized tools for training neural networks. They are important but niche, and the ability to ease the use of machine learning within Spark or Hadoop will be critical for machine learning to become truly ubiquitous in the long run.
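To make the “write once, scale anywhere” idea concrete, here is a minimal sketch of batch gradient descent for linear regression in DML, SystemML’s R-like declarative language. The file arguments, iteration count, and step size are illustrative assumptions, not from the original post; SystemML’s cost-based optimizer compiles a script like this into single-node, Spark, or Hadoop execution plans without changes to the code:

```dml
# Illustrative DML sketch: linear regression via batch gradient descent.
# $X, $y, and $model are command-line file arguments (assumed names).
X = read($X)                          # n x m feature matrix
y = read($y)                          # n x 1 label vector
w = matrix(0, rows=ncol(X), cols=1)   # initialize weights to zero
for (i in 1:100) {
  grad = t(X) %*% (X %*% w - y)       # gradient of the squared loss
  w = w - 0.000001 * grad             # fixed step size (illustrative)
}
write(w, $model)                      # persist the learned weights
```

The author writes only the linear-algebra logic; how the matrix operations are partitioned and distributed is decided by the SystemML optimizer, not hand-coded for each platform.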

In the coming years, all businesses and, indeed, society in general, will come to rely on computing systems that learn—what we call cognitive systems. This kind of computer learning is critical because the flood of Big Data makes it impossible for organizations to manually train and program computers to handle complex situations and problems—especially as they morph over time. Computing systems must learn from their interactions with data.

The Apache SystemML project has achieved a number of early milestones to date, including:

–More than 320 patches, including APIs, data ingestion, optimizations, language and runtime operators, additional algorithms, testing, and documentation.

–More than 90 contributions to the Apache Spark project, spanning various components of Spark, from more than 25 engineers at the IBM Spark Technology Center in San Francisco, making machine learning accessible to the fastest-growing community of data science professionals.

–More than 15 contributors from a number of organizations enhancing the capabilities of the core SystemML engine.

One of the Apache SystemML committers, D.B. Tsai, had this to say about it: “SystemML not only scales for big data analytics with high performance optimizer technology, but also empowers users to write customized machine learning algorithms using simple domain specific language without learning complicated distributed programming. It is a great extensible complement framework of Spark MLlib. I’m looking forward to seeing this become part of Apache Spark ecosystem.”

We are excited too. We believe that open source software will be an essential element of big data analytics and cognitive computing, just as it has been critical to the advances that have come in the Internet and cloud computing. The more tech companies and developers share resources and combine their efforts, the faster information technology will transform business and society.

General Manager, IBM Analytics
