Introduction to PySpark | Distributed Computing with Apache Spark
Last Updated : 29 Apr, 2022
Datasets are becoming huge. In fact, data is growing faster than processing speeds. Therefore, algorithms involving large datasets and heavy computation are often run on a distributed computing system. A distributed computing system involves nodes (networked computers) that run processes in parallel and communicate with each other if necessary.
MapReduce - The programming model used for distributed computing is known as MapReduce. The MapReduce model involves two stages, Map and Reduce.
- Map - The mapper processes each line of the input data (it is in the form of a file) and produces key-value pairs.
Input data → Mapper → list([key, value])
- Reduce - The reducer processes the list of key-value pairs (after the mapper's function) and outputs a new set of key-value pairs.
list([key, value]) → Reducer → list([key, list(values)])
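As a minimal sketch of the model (plain Python, no Spark; the sample lines are hypothetical), here is a word count expressed as a Map stage followed by a Reduce stage:
Python
from collections import defaultdict

lines = ["spark is fast", "spark is distributed"]  # hypothetical input

# Map stage: each line -> list of (key, value) pairs
mapped = [(word, 1) for line in lines for word in line.split()]

# Shuffle/group: collect all values that share the same key
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# Reduce stage: each (key, list(values)) -> (key, aggregated value)
counts = {key: sum(values) for key, values in groups.items()}
print(counts)  # {'spark': 2, 'is': 2, 'fast': 1, 'distributed': 1}
In Spark, the grouping step between the two stages happens automatically; you only supply the two functions.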
RDD transformations - First, a SparkContext object is created. Then, we will create RDDs and see some transformations on them.
Python
from pyspark import SparkConf, SparkContext

conf = SparkConf().setMaster("local").setAppName("Test")
# setMaster("local") - we are doing tasks on a single machine
sc = SparkContext(conf=conf)

# create an RDD called lines from 'file_name.txt'
# (the second argument sets a minimum number of partitions)
lines = sc.textFile("file_name.txt", 2)

# collect() returns the whole RDD as a list
print(lines.collect())

One major advantage of using Spark is that it does not load the dataset into memory; lines is a pointer to the 'file_name.txt' file.
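To illustrate this, the sketch below (assuming the sc created above; the file name and the 80-character threshold are arbitrary) chains two transformations and triggers them with a single action:
Python
# transformations only record a lineage; the file is not read yet
lines = sc.textFile("file_name.txt")
long_lines = lines.filter(lambda line: len(line) > 80)
lengths = long_lines.map(lambda line: len(line))

# an action such as count(), collect() or take() starts the actual work
print(lengths.count())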
A simple PySpark app to count the degree of each vertex for a given graph -
Python
from pyspark import SparkConf, SparkContext

conf = SparkConf().setMaster("local").setAppName("Test")
# setMaster("local") - we are doing tasks on a single machine
sc = SparkContext(conf=conf)

def conv(line):
    # '1 2' -> (1, [2]): the key is the source vertex,
    # the value is a one-element list of neighbours
    line = line.split()
    return (int(line[0]), [int(line[1])])

def numNeighbours(x, y):
    # reduceByKey may apply this repeatedly for one key, so both
    # arguments and the result must stay lists of neighbours
    return x + y

lines = sc.textFile('graph.txt')
edges = lines.map(conv)
# concatenate each vertex's neighbour lists, then take their lengths
Adj_list = edges.reduceByKey(numNeighbours).mapValues(len)
print(Adj_list.collect())

Understanding the above code -
- Our text file is in the following format (each line represents an edge of a directed graph):
1 2
1 3
2 3
3 4
. . .
- Large datasets may contain millions of nodes and edges.
- The first few lines set up the SparkContext. We create an RDD lines from the input file.
- Then, we transform the lines RDD to the edges RDD. The function conv acts on each line, and key-value pairs of the form (1, [2]), (1, [3]), (2, [3]), (3, [4]), ... are stored in the edges RDD.
- After this, reduceByKey aggregates all the pairs corresponding to a particular key: numNeighbours concatenates each vertex's neighbour lists, and mapValues(len) turns every list into that vertex's degree. The result is a separate RDD Adj_list of the form (1, 2), (2, 1), (3, 1), ..., as traced in the sketch below.
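Here is a plain-Python trace of the same pipeline on the four sample edges above (no Spark involved; the values shown are what each RDD would hold):
Python
from collections import defaultdict

raw = ["1 2", "1 3", "2 3", "3 4"]  # contents of graph.txt

# what edges holds after lines.map(conv)
edges = [(int(a), [int(b)]) for a, b in (line.split() for line in raw)]
# [(1, [2]), (1, [3]), (2, [3]), (3, [4])]

# what reduceByKey(numNeighbours) does: concatenate lists per key
adj = defaultdict(list)
for key, neighbours in edges:
    adj[key] += neighbours
# {1: [2, 3], 2: [3], 3: [4]}

# what mapValues(len) does: list of neighbours -> degree
degrees = {key: len(neighbours) for key, neighbours in adj.items()}
print(degrees)  # {1: 2, 2: 1, 3: 1}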
The above code can be run by the following commands -
$ cd /home/arik/Downloads/spark-1.6.0/
$ ./bin/spark-submit degree.py
- Substitute your own Spark installation path in the first command.