This week's developerWorks newsletter intro: Linux, Hadoop, and more data than you can possibly imagine
John Swanson
Sign up for your customized newsletter today!
I don't need to tell you that the Internet is big... but just how big? No one knows for sure, but some estimates include words like exabytes and zettabytes. And the volume of data out there just keeps growing -- which means the tools we use to manipulate that data had better be robust. Tools like Apache Hadoop. This Linux-based computing framework provides reliable, scalable, and efficient distributed processing of large amounts of data. Our Linux zone hosts an excellent introduction to Hadoop, and this week we're launching a series that shows you how it's used in the real world: Part 1 of "Distributed data processing with Hadoop" explores the basics of the framework and shows you how to install and configure a single-node Hadoop cluster, as well as monitor and manage it through its core Web interface. See for yourself why this open source technology is now being used for much more than search engines.
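If you want a taste before diving into the article: a single-node (pseudo-distributed) setup in the Hadoop 0.20-era layout typically begins by pointing Hadoop's default filesystem at a local HDFS instance in conf/core-site.xml. A minimal sketch, assuming a local install with HDFS listening on port 9000 (the port is a common convention, not a requirement):

```xml
<!-- conf/core-site.xml: tell Hadoop to use a local HDFS instance
     as its default filesystem (hypothetical local setup) -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```

From there, the usual steps are to format the namenode (bin/hadoop namenode -format) and start the daemons (bin/start-all.sh); the NameNode's Web interface is then typically served on port 50070. The article walks through all of this in detail.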
And for the record, a zettabyte is about 1,000 exabytes.
Until next week,
John Swanson and the developerWorks editorial team
This week's top features on developerWorks: