Struggling to conquer Apache Spark?

Learning is hard enough as it is, but when you bring in distributed computing frameworks built in sophisticated programming languages, things don't get any easier. While self-study can certainly help, without a good guide things are always more difficult than they should be. That's why I created Spark Tutorials: to make it easier to learn and use Apache Spark. The site provides simple, easy-to-follow tutorials to help you get up and running quickly. You'll learn the foundational abstractions in Apache Spark, from RDDs to DataFrames and MLlib. Start off with some of the articles below.

Building Spark for your Cluster to Support Hive SQL and YARN

This article walks you through building Apache Spark with support for the Hive SQL execution engine as well as YARN. Once built, Spark should be ready to get up and running on your Hadoop cluster.

Visit Article »

Getting Started with Apache Spark DataFrames in Python and Scala

In this easy-to-follow tutorial, learn the basics of Spark DataFrames: how they're built on top of RDDs and what they let you do in Python and Scala. They're an abstraction similar to pandas DataFrames or R data frames.

Visit Article »

The Simplest Explanation of and Approaches to Optimizing Spark Shuffles

This post dives into the details of the Spark shuffle and what it means for you when using Apache Spark to perform data analysis on a cluster.

Visit Article »