Apache Spark™ is an open-source, fast, and general engine for large-scale data processing. Spark provides an interface for programming entire clusters with implicit data parallelism and fault tolerance.
Apache Spark™ provides programmers with an API centered on a data structure called the Resilient Distributed Dataset (RDD), a read-only multiset of data items distributed over a cluster of machines and maintained in a fault-tolerant way. It was developed in response to limitations in the MapReduce cluster computing paradigm, which forces a particular linear dataflow structure on distributed programs: MapReduce programs read input data from disk, map a function across the data, reduce the results of the map, and store the reduction results on disk. Spark's RDDs function as a working set for distributed programs that offers a (deliberately) restricted form of distributed shared memory.
Spark™ Accelerates MapReduce
The availability of RDDs facilitates the implementation of both iterative algorithms, that visit their dataset multiple times in a loop, and interactive/exploratory data analysis, i.e., the repeated database-style querying of data. The latency of such applications may be reduced by several orders of magnitude compared to a MapReduce implementation (as was common in Apache Hadoop stacks). Among the class of iterative algorithms are the training algorithms for machine learning systems, which formed the initial impetus for developing Apache Spark.
Shuffling is the process of redistributing data across partitions (also known as repartitioning) between stages of a computation. Shuffling is costly and is avoided whenever possible.
In Hadoop, the shuffle writes intermediate files to disk; these files are then pulled by the next step/stage.
With Spark's shuffle, RDDs are kept in memory, which keeps data close at hand; when working with a cluster, however, fetching data blocks consumes network resources and adds to overall execution time.
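To make the repartitioning step concrete, it can be modeled in a few lines. The sketch below is an illustrative simplification, not Spark's actual partitioner: each record is routed to a target partition by hashing its key, so all records sharing a key land in the same partition for the next stage.

```python
# Minimal model of shuffle-style repartitioning: route each (key, value)
# record to a target partition by hashing the key. Illustrative only --
# Spark's real shuffle machinery is far more involved.

def hash_partition(records, num_partitions):
    partitions = [[] for _ in range(num_partitions)]
    for key, value in records:
        # Simple deterministic hash keeps the example reproducible.
        target = sum(key.encode()) % num_partitions
        partitions[target].append((key, value))
    return partitions

records = [("a", 1), ("b", 2), ("a", 3), ("c", 4), ("b", 5)]
parts = hash_partition(records, num_partitions=2)
# All records for a given key end up in the same partition.
```

In a cluster, each of these target partitions may live on a different machine, which is exactly where the network fetch, and hence the shuffle cost, comes from.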
Accelerating the network fetch of data blocks with RDMA (RDMA over InfiniBand, or RoCE when using Ethernet) reduces CPU usage and overall execution time. This is the motivation behind the SparkRDMA plugin.
SparkRDMA is a high-performance, scalable, and efficient open-source ShuffleManager plugin for Apache Spark.
It utilizes RDMA/RoCE technology to reduce the CPU cycles needed for shuffle data transfers, and it lowers memory usage by reusing memory for transfers instead of copying data multiple times, as the traditional TCP stack does.
RDMA is supported on various types of networks, such as traditional Ethernet with RoCE (RDMA over Converged Ethernet), InfiniBand, and more.
The SparkRDMA plugin is built to provide the best performance out of the box. Moreover, for those who wish to squeeze the most out of their installation, multiple configuration properties are available to precisely tune SparkRDMA on a per-job basis.
Lab tests show a 1.45x reduction in overall execution time on TeraSort and a 1.39x reduction on GroupByKey.
SparkRDMA plugin Benefits
- Provides improved performance
  - Lower block transfer times
  - Lower memory consumption
  - Lower CPU utilization
- Easy to deploy
  - Single JAR file
  - Enabled with a simple configuration handle
  - Finer tuning available
- Can be deployed incrementally
  - Can be limited to Shuffle-intensive jobs
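Enabling the plugin for a single job illustrates the incremental deployment path. The fragment below is a sketch: the JAR path is a placeholder and the `RdmaShuffleManager` class name should be verified against the SparkRDMA release you install; `spark.shuffle.manager` and the `extraClassPath` properties themselves are standard Spark configuration.

```shell
# Enable SparkRDMA for one Shuffle-intensive job only, via spark-submit.
# JAR path and manager class are illustrative -- verify against your install.
spark-submit \
  --conf spark.driver.extraClassPath=/path/to/spark-rdma.jar \
  --conf spark.executor.extraClassPath=/path/to/spark-rdma.jar \
  --conf spark.shuffle.manager=org.apache.spark.shuffle.rdma.RdmaShuffleManager \
  --class com.example.MyShuffleHeavyJob \
  my-application.jar
```

Because the shuffle manager is selected per job, jobs submitted without these properties continue to use Spark's default shuffle, which is what makes incremental rollout possible.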
For more information on how to tune a system, please refer to the guides offered in this wiki:
- Reference Deployment Guide for RDMA over Converged Ethernet (RoCE) accelerated Apache Spark 2.2.0 over Mellanox 100GbE Network
- Description of SparkRDMA's configuration properties
- Recommended performance tuning steps for Mellanox Network Adapters
- General performance tips
- Troubleshooting guide
- How to configure RDMA / RoCE