Overview of this book
Apache Hadoop is a widely used distributed data platform. It enables large datasets to be processed efficiently across a cluster of machines rather than storing and processing all the data on one large computer. This book will get you started with the Hadoop ecosystem and introduce you to its main technical topics, including MapReduce, YARN, and HDFS. The book begins with an overview of big data and Apache Hadoop. You will then set up a pseudo-distributed Hadoop development environment and a multi-node enterprise Hadoop cluster, and see how a parallel programming model such as MapReduce can solve many complex data processing problems. The book also covers the important aspects of the big data software development lifecycle, including quality assurance and control, performance, administration, and monitoring. You will then explore the Hadoop ecosystem and tools such as Kafka, Sqoop, Flume, Pig, Hive, and HBase. Finally, you will look at advanced topics, including real-time streaming with Apache Storm and data analytics with Apache Spark. By the end of the book, you will be well versed with the different configurations of a Hadoop 3 cluster.
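If the MapReduce programming model mentioned above is new to you, the sketch below shows the classic word-count job written against Hadoop's Java MapReduce API: the mapper emits a count of 1 for every word it sees, and the reducer sums those counts per word. It is a minimal illustration only; the class name and the input/output paths passed on the command line are placeholders, and the book itself builds up such applications step by step in the "Developing MapReduce Applications" chapter.

```java
// Minimal word-count example using the Hadoop MapReduce Java API.
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: emit (word, 1) for every token in the input split.
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: sum the counts emitted for each word.
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // local pre-aggregation on the map side
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input directory (placeholder)
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output directory (placeholder)
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

Packaged as a JAR, a job like this is submitted to the cluster with the standard `hadoop jar` command, passing the input and output directories as arguments.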
Curriculum
- 1 Section
- 10 Lessons
- Lifetime access
- Hadoop 3 Quick Start Guide
- 00 Preface
- 01 Hadoop 3.0 – Background and Introduction
- 02 Planning and Setting Up Hadoop Clusters
- 03 Deep Dive into the Hadoop Distributed File System
- 04 Developing MapReduce Applications
- 05 Building Rich YARN Applications
- 06 Monitoring and Administration of a Hadoop Cluster
- 07 Demystifying Hadoop Ecosystem Components
- 08 Advanced Topics in Apache Hadoop
- 09 Other Books You May Enjoy