This is an inactive course webpage. Find the one for your current semester.
This course provides a comprehensive introduction to applied parallel computing using the MapReduce programming model, which facilitates large-scale data management and processing. There will be an emphasis on hands-on experience with the Hadoop architecture, an open-source software framework written in Java for the distributed storage and processing of very large data sets on computer clusters. Further, we will derive and discuss algorithms for big data applications and use related analysis tools from the Hadoop ecosystem, such as Pig, Hive, Impala, and Apache Spark, to solve problems faced by enterprises today. Check the Roadmap for more detailed information.
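To give a flavor of the MapReduce model before the course starts, here is a tiny, Hadoop-free word-count sketch in plain Python. The function names and example documents are illustrative only; in Hadoop you would implement the mapper and reducer in Java, and the framework performs the shuffle/sort step for you.

```python
from collections import defaultdict

def map_phase(doc):
    # Mapper: emit a (word, 1) pair for every word in the input record.
    for word in doc.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle/sort: group all emitted values by key.
    # Hadoop does this automatically between the map and reduce phases.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    # Reducer: combine all values for one key, here by summing counts.
    return (key, sum(values))

# Illustrative input "records" standing in for lines of a large file.
docs = ["big data big ideas", "data pipelines"]
pairs = [p for doc in docs for p in map_phase(doc)]
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(counts)  # {'big': 2, 'data': 2, 'ideas': 1, 'pipelines': 1}
```

The key idea is that the mapper and reducer only ever see one record or one key at a time, so the framework can run many copies of each in parallel across a cluster.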
Prerequisites: CSE 247, CSE 131 (or a solid background in programming with Java), and CSE 330 (or basic knowledge of relational database systems (RDBMS) and SQL).
This class counts towards the Certificate in Data Mining and Machine Learning as an applications course.
The content of this class is derived largely from the Cloudera Developer Training for MapReduce, the Cloudera Data Analyst Training: Using Pig, Hive, and Impala with Hadoop, and the Cloudera Developer Training for Apache Spark, which are made available to Washington University through the Cloudera Academic Partnership program. Further materials are adapted from the “Mining of Massive Data Sets” book and class taught at Stanford by Jure Leskovec.
Instructor: Marion Neumann
Office: Jolley Hall Room 222
Office Hours: MON 4-5pm
TA Office Hours:
MON 11am-1pm Grace in Jolley 224
TUE 11am-1pm Weijian in Jolley 224
THU 4pm-6pm Yu in Jolley 431
FRI 10am-12pm Krushna in Jolley 431
Lectures: MON/WED 2:30-4pm in Louderman / 458.
Lab sessions will occasionally replace lectures and take place in Eads / 016. All lab sessions will be announced in advance in class and on the course calendar.
Grades on BB
Resources and HowTos