
Assignment 5
 Hadoop and Spark, both developed by the Apache Software Foundation, are widely used open-source frameworks for big data architectures. Both Hadoop and Spark enable a big data processing job to be split into smaller tasks; the small tasks are distributed across the cluster and performed in parallel using an algorithm (i.e., MapReduce).
 Spark tends to perform faster than Hadoop because it uses random access memory (RAM) to cache and process data instead of the file system that Hadoop relies on. This enables Spark to handle use cases that Hadoop cannot.
 In this assignment, you will run both Hadoop and Spark on your own computer:
 Task 1: preprocess an input dataset using Hadoop
 Task 2 and Task 3: analyze the preprocessed dataset (the output of Task 1) using Spark

Setup Hadoop
 Because Hadoop is open source, you can download and install it (see the
Hadoop webpage) on your own computer!
 Hadoop Single Node Installation Reference:
https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html

The conf/slaves file (renamed etc/hadoop/workers in Hadoop 3.x) specifies the hostnames or IP addresses of all the worker nodes. By default, it only contains localhost.
 Run the example WordCount application:
https://hadoop.apache.org/docs/stable/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html#Source_Code

Exercise (Hadoop)
 Task 1: Preprocess data. Process the provided user query logs (search_data.sample). Use Hadoop to strip the clickUrls in the query log so that only a specific part (the url before the first '/') of each clickUrl remains.
 Example input: google.com/docs/about/
 Example output: google.com
 You can start by modifying the WordCount application; a sketch follows below.
 The preprocessed search_data.sample is used as the input for the following two tasks.
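 A minimal sketch of such a mapper, adapted from WordCount, is shown here. It assumes the log fields are whitespace-separated with the clickUrl as the last field, and that the job runs map-only (job.setNumReduceTasks(0)) so each record passes through with only its clickUrl truncated; verify both assumptions against the actual search_data.sample layout.

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Map-only job: emits each log line with the clickUrl (assumed to be the
// last whitespace-separated field) truncated at its first '/'.
public class StripClickUrlMapper
        extends Mapper<LongWritable, Text, Text, NullWritable> {
    private final Text out = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String[] fields = value.toString().split("\\s+");
        if (fields.length == 0) {
            return; // skip empty lines
        }
        String url = fields[fields.length - 1];
        int slash = url.indexOf('/');
        if (slash >= 0) {
            // e.g., google.com/docs/about/ -> google.com
            fields[fields.length - 1] = url.substring(0, slash);
        }
        out.set(String.join("\t", fields));
        context.write(out, NullWritable.get());
    }
}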
Setup Spark
 Apache Spark is an open-source unified analytics engine for large-scale
data processing. Spark provides an interface for programming clusters
with implicit data parallelism and fault tolerance.
 Download Spark: https://spark.apache.org/downloads.html
 Learn more about Spark: https://spark.apache.org/examples.html
 You need to analyze the user query logs of a search engine. Complete the
following two tasks:
 Task 2: Rank the tokens (e.g., blog and www) that appear most often in the queried urls.
 Task 3: Rank the time periods (by minute) with the most queries.
Setup pseudo-distributed Spark
 Run a Spark cluster on your machine
 Start the master node and one worker node using Spark Standalone Mode.
 After starting the master node, you can check out the master's web UI at http://localhost:8080 to see the current setup.
 Run the example application with Spark
https://spark.apache.org/docs/latest/submitting-applications.html
Exercise (Spark)
 Task 2: Rank the tokens that appear most often in the queried urls. Tokenize the clickUrls in the query log, then rank the tokens by the number of times they appear. The output should be the top ten tokens and the number of times each appears; a sketch follows the example output.
 Example output: (www,4566) (question,743) (bbs,729) (blog,390)
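 A possible Spark implementation of Task 2 is sketched below in Java. It assumes the preprocessed clickUrl is the last whitespace-separated field of each line and that a "token" is a dot-separated component of the url (so blog.example.com yields blog, example, and com); both assumptions are inferred from the example output and should be checked against the data.

import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

// Ranks the ten most frequent url tokens; args[0] is the output of Task 1.
public class TopTokens {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("TopTokens");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            sc.textFile(args[0])
              // keep only the clickUrl (assumed last field)
              .map(line -> {
                  String[] f = line.split("\\s+");
                  return f[f.length - 1];
              })
              // split the url into dot-separated tokens
              .flatMap(url -> Arrays.asList(url.split("\\.")).iterator())
              .mapToPair(tok -> new Tuple2<>(tok, 1))
              .reduceByKey(Integer::sum)
              // swap to (count, token) so sortByKey orders by frequency
              .mapToPair(t -> t.swap())
              .sortByKey(false)
              .take(10)
              .forEach(p -> System.out.println("(" + p._2 + "," + p._1 + ")"));
        }
    }
}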
 Task 3: Rank the time periods (by minute) with the most queries. Count the number of queries in each minute, then rank the minutes in descending order. The output should be the top ten time periods (by minute) with the most queries and the number of queries during each period; a sketch follows the example output.
 Example output: (00:01,1045) (00:00,1043) (00:06,1033)
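 A matching sketch for Task 3, assuming the timestamp is the first field of each line in HH:MM:SS form, so the first five characters identify the minute; the field position and format are assumptions to verify against the data.

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

// Ranks the ten busiest minutes; args[0] is the output of Task 1.
public class TopMinutes {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("TopMinutes");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            sc.textFile(args[0])
              // keep HH:MM from an assumed leading HH:MM:SS timestamp
              .map(line -> line.split("\\s+")[0].substring(0, 5))
              .mapToPair(minute -> new Tuple2<>(minute, 1))
              .reduceByKey(Integer::sum)
              // swap to (count, minute) so sortByKey orders by query volume
              .mapToPair(t -> t.swap())
              .sortByKey(false)
              .take(10)
              .forEach(p -> System.out.println("(" + p._2 + "," + p._1 + ")"));
        }
    }
}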
Submission
 Submit all your source file(s) and a document. The document should
contain the screenshots of the running program and the output results.
