The goal of Hadoop is to use commonly available servers in a very large cluster, where each server has a set of inexpensive internal disk drives.
For higher performance, MapReduce tries to assign workloads to the servers where the data to be processed is stored. This is known as data locality, and it's because of this principle that using a storage area network (SAN) or network-attached storage (NAS) in a Hadoop environment is not recommended.
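To make data locality a little more concrete, here is a minimal sketch (not taken from the text) that asks HDFS which hosts hold each block of a file; this is the same placement metadata a locality-aware scheduler consults when deciding where to run a map task. The path /data/sample.txt and the class name are hypothetical, and the sketch assumes a reachable HDFS whose settings are on the classpath.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DataLocalityProbe {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();   // picks up core-site.xml / hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);

        // Hypothetical file used only for illustration
        FileStatus status = fs.getFileStatus(new Path("/data/sample.txt"));

        // One BlockLocation per block: the hosts listed here are where a
        // locality-aware scheduler would prefer to run the map task.
        BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
        for (BlockLocation block : blocks) {
            System.out.printf("offset=%d length=%d hosts=%s%n",
                    block.getOffset(), block.getLength(),
                    String.join(",", block.getHosts()));
        }
        fs.close();
    }
}
```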
For Hadoop deployments using a SAN or NAS, the extra network communication overhead can cause performance bottlenecks, especially for larger clusters. Now take a moment and think of a 1,000-machine cluster, where each machine has three internal disk drives; then consider the failure rate of a cluster composed of 3,000 inexpensive drives and 1,000 inexpensive servers!
We’re likely already on the same page here: the component mean time to failure (MTTF) you’re going to experience in a Hadoop cluster is likely analogous to a zipper on your kid’s jacket: it’s going to fail (and poetically enough, zippers seem to fail only when you really need them). The cool thing about Hadoop is that the reality of the MTTF rates associated with inexpensive hardware is actually well understood (a design point, if you will), and part of the strength of Hadoop is its built-in fault tolerance and fault compensation capabilities.
The same holds for HDFS: data is divided into blocks, and copies of these blocks are stored on other servers in the Hadoop cluster. That is, an individual file is actually stored as smaller blocks that are replicated across multiple servers throughout the cluster.
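As a rough illustration of that block-and-replica model, the sketch below reads a file's block size and replication factor and then asks HDFS to keep one additional copy of each block. It reuses the same hypothetical path and assumptions as the earlier sketch; it is an illustrative example, not a prescribed procedure.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationCheck {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path file = new Path("/data/sample.txt");   // hypothetical path

        FileStatus status = fs.getFileStatus(file);
        System.out.println("block size:  " + status.getBlockSize());   // size of each block in bytes
        System.out.println("replication: " + status.getReplication()); // copies kept of each block

        // Request one more replica of every block of this file; the NameNode
        // schedules the extra copies on other servers in the background.
        fs.setReplication(file, (short) (status.getReplication() + 1));
        fs.close();
    }
}
```

Because each block lives on several servers, losing a drive (or a whole machine) leaves other intact copies available, which is how Hadoop turns the expected failures of inexpensive hardware into an ordinary, recoverable event.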