Struggling to conquer Apache Spark?

Learning is hard enough as it is, but when you bring in distributed computing frameworks written in sophisticated programming languages, things don't get any easier. While self-study can certainly help, without a good guide things are always more difficult than they should be. That's why I created Spark Tutorials: to make it easier to learn and use Apache Spark. It's here to provide simple, easy-to-follow tutorials to help you get up and running quickly. You'll learn the foundational abstractions in Apache Spark, from RDDs to DataFrames and MLlib. Start off with some of the articles below.

Setup Your Zeppelin Notebook For Data Science in Apache Spark

Notebooks are quickly becoming the go-to way of running and developing code in data science. While Zeppelin is not the only option, it's certainly popular and is an Apache Incubating Project. In this tutorial, we'll walk through how to get a Zeppelin notebook set up on your machine or cluster for data science development.

Visit Article »
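As a quick preview of the kind of setup the article walks through, pointing Zeppelin at an existing Spark installation usually comes down to a few environment variables in its config file. This is a minimal sketch, not the article's exact steps, and the paths are assumptions you'd adjust for your machine:

```sh
# conf/zeppelin-env.sh — point Zeppelin at an existing Spark install
# (illustrative paths and values; adjust for your environment)
export SPARK_HOME=/opt/spark     # assumed location of your Spark installation
export MASTER=local[*]           # run Spark locally using all available cores
export ZEPPELIN_PORT=8080        # port for the Zeppelin web UI
```

After editing the file, restarting the Zeppelin daemon picks up the new settings.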

Spark Will Not Start: "Address already in use"

This article will walk you through how to resolve the somewhat common "Address already in use" exception that can occur when you're trying to start Spark.

Visit Article »
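As a taste of the kind of fix the article covers: this error typically means the port Spark wants (for example, the web UI's default 4040) is already taken, and one common remedy is to let Spark retry more ports or bind to an explicit address. The values below are an illustrative sketch, not necessarily the article's exact fix:

```sh
# conf/spark-defaults.conf — let Spark try more ports before giving up
# spark.port.maxRetries  32

# conf/spark-env.sh — or bind explicitly to one interface
# (127.0.0.1 is an illustrative choice for a single-machine setup)
# export SPARK_LOCAL_IP=127.0.0.1
```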

Spark Clusters on AWS EC2 - Reading and Writing S3 Data - Predicting Flight Delays with Spark Part 1

In this tutorial we're going to set up a complete predictive modeling pipeline in Spark using DataFrames, Pipelines, and MLlib. The first part of this tutorial will explain some of the basic concepts we'll need to build this model, walk you through how to download the data we'll use, and lastly create our Spark cluster on AWS EC2 and read from and write to AWS S3!

Visit Article »
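As a preview of the S3 portion, reading and writing S3 data from Spark generally means supplying AWS credentials and using `s3` URIs for your paths. The sketch below is a generic configuration-file example with placeholder keys, not the article's exact setup; on EC2, an attached IAM role is usually preferable to hard-coded keys:

```sh
# conf/spark-defaults.conf — illustrative S3 access settings
# (YOUR_ACCESS_KEY / YOUR_SECRET_KEY are placeholders, not real values)
# spark.hadoop.fs.s3a.access.key   YOUR_ACCESS_KEY
# spark.hadoop.fs.s3a.secret.key   YOUR_SECRET_KEY

# With credentials in place, jobs can then read and write paths like:
#   s3a://your-bucket/flight-data.csv
```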