Optimizing Java* and Apache Hadoop* for Intel® Architecture
With the ability to analyze virtually unlimited amounts of unstructured and semi-structured data, Apache Hadoop* has the potential to transform data analytics. Yet because Apache Hadoop has only recently become a mainstream technology, benchmarking and deployment tools are still catching up, which can make realizing the full potential of a performance-optimized Apache Hadoop cluster challenging.
Like many new technologies, Apache Hadoop has matured from an interesting concept into an established technology. And as technologies mature, they become easier to optimize. For example, rail companies, whose technologies and processes are well understood, realize the value of optimization. From the performance of the locomotive engines to the routes that mile-long trains take, optimization is a key factor in lowering the cost per pound of cargo and getting that cargo to its destination efficiently. The same principle applies to choosing a computing platform optimized to squeeze the last bit of performance out of Apache Hadoop. A non-optimized Apache Hadoop cluster might still get the job done, but data analytics tasks will take longer and run less efficiently.
Read the full Optimizing Java* and Apache Hadoop* for Intel® Architecture White Paper.