Apache Hadoop
An open-source software platform for the distributed processing of massive data sets across clusters of computers
Apache Hadoop® is an open source software framework that provides highly reliable distributed processing of large data sets using simple programming models. Hadoop, known for its scalability, is built on clusters of commodity computers, providing a cost-effective solution for storing and processing massive amounts of structured, semi-structured and unstructured data with no format requirements.
A data lake architecture including Hadoop can offer a flexible data management solution for your big data analytics initiatives. Because Hadoop is an open source software project and follows a distributed computing model, it can offer a lower total cost of ownership for a big data software and storage solution.
Hadoop can also be installed on cloud servers to better manage the compute and storage resources required for big data. Leading cloud vendors such as Amazon Web Services (AWS) and Microsoft Azure offer managed Hadoop services (for example, Amazon EMR and Azure HDInsight). Cloudera supports Hadoop workloads both on-premises and in the cloud, including options for one or more public cloud environments from multiple vendors.
The Hadoop framework, built by the Apache Software Foundation, includes four core modules:
Hadoop Common: The common utilities and libraries that support the other Hadoop modules
Hadoop Distributed File System (HDFS): A distributed file system that provides high-throughput access to application data (see the WebHDFS sketch after this list)
Hadoop YARN: A framework for job scheduling and cluster resource management
Hadoop MapReduce: A YARN-based system for parallel processing of large data sets
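Beyond its native Java client, HDFS exposes a REST gateway called WebHDFS, so the file system can be inspected from any language. Below is a minimal sketch in Python using only the standard library; the NameNode host, the port (9870 is the Hadoop 3 default) and the /data path are illustrative assumptions, not values from this article:

# list_hdfs.py: list an HDFS directory over the WebHDFS REST gateway.
import json
from urllib.request import urlopen

# LISTSTATUS returns the directory listing as JSON; host, port and
# path here are placeholders for a real cluster's values.
url = "http://namenode.example.com:9870/webhdfs/v1/data?op=LISTSTATUS"
with urlopen(url) as resp:
    statuses = json.load(resp)["FileStatuses"]["FileStatus"]

for entry in statuses:
    # entry["type"] is "FILE" or "DIRECTORY"; length is in bytes.
    print(entry["type"], entry["pathSuffix"], entry["length"])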
Enhance Hadoop with additional open source software projects
Ambari: A web-based tool for provisioning, managing and monitoring Hadoop clusters
Cassandra: A scalable, NoSQL database designed to have no single point of failure
Chukwa: A data collection system for monitoring large distributed systems; built on top of HDFS and MapReduce
Flume: A service for collecting, aggregating and moving large amounts of streaming data into HDFS
HBase: A scalable, non-relational distributed database that supports structured data storage for very large tables
Hive: A data warehouse infrastructure for data query and analysis in a SQL-like interface (see the sketch after this list)
Mahout: A scalable machine learning and data mining library
Oozie: A Java-based workflow scheduler to manage Hadoop jobs
Pig: A high-level data flow language and execution framework for parallel computation
Sqoop: A tool for efficiently transferring data between Hadoop and structured data stores such as relational databases
Submarine: A unified AI platform for running machine learning and deep learning workloads in a distributed cluster
Tez: A generalized data flow programming framework, built on YARN; being adopted within the Hadoop ecosystem to replace MapReduce
ZooKeeper: A high-performance coordination service for distributed applications
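To make Hive's SQL-like interface concrete, here is a minimal sketch that runs a HiveQL query against HiveServer2 using the third-party PyHive client (pip install "pyhive[hive]"); the server host, table and columns are illustrative assumptions:

# hive_query.py: run a HiveQL query through HiveServer2 via PyHive.
from pyhive import hive

# Host, port (10000 is the HiveServer2 default), table and columns
# are placeholders for illustration.
conn = hive.connect(host="hive-server.example.com", port=10000)
cursor = conn.cursor()
cursor.execute("SELECT page, COUNT(*) AS hits FROM weblogs GROUP BY page")
for page, hits in cursor.fetchall():
    print(page, hits)
conn.close()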
Apache Hadoop is written in Java, but depending on the big data project, developers can program in their choice of language, such as Python, R or Scala. The included Hadoop Streaming utility allows developers to create and execute MapReduce jobs with any script or executable as the mapper or the reducer.
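For example, here is a minimal word-count job for Hadoop Streaming with both the mapper and the reducer written in Python. The input and output paths are illustrative, and the exact location of the streaming jar varies by installation:

#!/usr/bin/env python3
# mapper.py: emit "word<TAB>1" for every word read from stdin.
import sys

for line in sys.stdin:
    for word in line.split():
        print(f"{word}\t1")

#!/usr/bin/env python3
# reducer.py: sum the counts for each word. Hadoop sorts mapper output
# by key, so all lines for a given word arrive consecutively.
import sys

current_word, count = None, 0
for line in sys.stdin:
    word, _, value = line.rstrip("\n").partition("\t")
    if word == current_word:
        count += int(value)
    else:
        if current_word is not None:
            print(f"{current_word}\t{count}")
        current_word, count = word, int(value)
if current_word is not None:
    print(f"{current_word}\t{count}")

The job is then submitted with something like the following (the jar path and HDFS paths are placeholders):

hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar \
    -files mapper.py,reducer.py \
    -input /data/input -output /data/output \
    -mapper mapper.py -reducer reducer.py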
Apache Spark is often compared to Hadoop because it is also an open source framework for big data processing. In fact, Spark was initially built to improve the processing performance and extend the types of computations possible with Hadoop MapReduce. Spark processes data in memory, avoiding the disk reads and writes that MapReduce performs between stages, which makes it dramatically faster for many workloads.
While Hadoop is best for batch processing of huge volumes of data, Spark supports both batch and real-time data processing and is ideal for streaming data and graph computations. Both Hadoop and Spark have machine learning libraries, but again, because of the in-memory processing, Spark’s machine learning is much faster.
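For comparison, here is the same word count expressed against Spark's RDD API in PySpark; intermediate results stay in memory rather than being written to disk between stages, which is where the speed advantage comes from. The HDFS paths are illustrative assumptions:

# wordcount_spark.py: word count on Spark's RDD API.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("wordcount").getOrCreate()
counts = (
    spark.sparkContext.textFile("hdfs:///data/input")  # placeholder path
    .flatMap(lambda line: line.split())                # one record per word
    .map(lambda word: (word, 1))                       # pair each word with 1
    .reduceByKey(lambda a, b: a + b)                   # sum counts per word
)
counts.saveAsTextFile("hdfs:///data/output")           # placeholder path
spark.stop()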
Drive real-time, self-service access to data for your data scientists, line-of-business (LOB) owners and developers. Hadoop can fuel data science, an interdisciplinary field that uses data, algorithms, machine learning and AI for advanced analysis to reveal patterns and build predictions.
Streamline costs in your enterprise data warehouse by moving “cold” data not currently in use to a Hadoop-based distribution for storage. Or consolidate data across the organization to increase accessibility and decrease costs.
Support predictive and prescriptive analytics for today’s AI. Combine Cloudera’s enterprise-grade Hadoop distribution with a single ecosystem of integrated products and services from IBM and Cloudera to improve data discovery, testing, and ad hoc and near real-time querying.
Use an enterprise-grade, hybrid, ANSI-compliant SQL-on-Hadoop engine to deliver massively parallel processing (MPP) and advanced data queries.
Replicate data as it streams in so files don’t need to be fully written or closed before transfer.
Capitalize more cost effectively on big data with open source databases from leading vendors such as MongoDB and EDB.