Intel IT Best Practices for Implementing Apache Hadoop* Software

In an age when organizations such as Intel are rich in data, the true value of that data lies in the ability to collect, sort, and analyze it to derive actionable business intelligence (BI). Recognizing the need to add big data capabilities to our BI efforts, Intel IT formed a team to evaluate several Apache Hadoop* distributions and consider implementation options. Our goal was to deliver a production platform in 10 weeks or less.

Intel IT's "start small" strategy enabled us to take an iterative, agile approach. We worked with Intel IT BI teams and other groups to design and implement a 16-server, 192-core Hadoop platform, including all software and data integration solutions, in just five weeks.

Intel's first internal big data compute-intensive production platform, built on the Intel Distribution of Hadoop, launched at the end of 2012. It is already delivering value in our first three use cases, helping us identify new opportunities, reduce IT costs, and enable new product offerings.

Read the full Intel IT Best Practices for Implementing Apache Hadoop* Software white paper.
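The ability to "collect, sort, and analyze" data at scale rests on the MapReduce programming model that Hadoop implements. As an illustration only, here is a minimal sketch of that model simulated in pure Python; the function names (`map_phase`, `shuffle`, `reduce_phase`) are our own, not part of any Hadoop API.

```python
from collections import defaultdict

def map_phase(records):
    """Emit (word, 1) pairs for each record, like a Hadoop mapper."""
    for line in records:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    """Group emitted values by key, like Hadoop's shuffle/sort step."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Aggregate the values for each key, like a Hadoop reducer."""
    return {key: sum(values) for key, values in groups.items()}

if __name__ == "__main__":
    # A stand-in for log records that a real cluster would read from HDFS.
    logs = ["big data drives BI", "big data scales"]
    counts = reduce_phase(shuffle(map_phase(logs)))
    print(counts)  # word frequencies aggregated across all records
```

On a real Hadoop cluster the map and reduce steps run in parallel across many nodes, and the shuffle moves intermediate data over the network; this sketch only shows the data flow of the model.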
Related resources:
- Introducing an automation tool for rapidly preparing data for analysis, so scientists can speed up mining.
- How businesses can use Hadoop's versatility and scalability to mine answers through object relationships.
- The Intel® Distribution for Apache Hadoop* Software
- Apache Hive* overview
- Linda Feldt highlights big data research (video)
- Apache HDFS* overview