Flexiant Concerto

We recently looked at how you can use Flexiant Concerto to get going with Kubernetes in three simple steps, avoiding the time and effort a manual setup would otherwise require. In that post, we used the well-known guestbook example as our use case.

Today we will look at how to do the same for another, very valuable use case: Apache Spark, the open source big data processing framework that can run workloads up to 100x faster than Hadoop MapReduce.

NB: It is assumed that the following is already in place:

  • A Flexiant Concerto account
  • Cloud credentials for those providers you’d like to deploy the example on
  • Beta features for your Flexiant Concerto account enabled (Under Settings -> Account)

Step 1 – Create the Kubernetes cluster

[Screenshot: Create the Kubernetes cluster]

Step 2 – Add nodes

  • Add a master node with at least 1GB of RAM:

[Screenshot: Add a master node]

  • Add one or more slave nodes, each with at least 2GB RAM:

[Screenshot: Add one or more slave nodes]

Step 3 – Deploy the Spark example

  • Download the Kubernetes Spark example JSON files
  • Upload the Spark files in the order given by the example’s instructions:
    1. Upload spark-master.json
    2. Upload spark-master-service.json
    3. Upload spark-worker-controller.json and wait for the pods to reach the Running state

[Screenshot: Upload the Spark files following the example’s instructions]

  • Access the Spark cluster
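For readers who drive Kubernetes directly rather than through the Concerto UI, the upload sequence above corresponds to a series of `kubectl create` calls. Here is a minimal sketch, assuming the three JSON files from the example sit in the current directory; the file names come from the example itself, while the `apply_spark_example` helper is ours:

```shell
#!/bin/sh
# Sketch only: Concerto's upload UI performs the equivalent API calls for you.

apply_spark_example() {
  # Order matters: create the master first, then its service, then the workers.
  for f in spark-master.json spark-master-service.json spark-worker-controller.json; do
    kubectl create -f "$f" || return 1
  done
}

if command -v kubectl >/dev/null 2>&1; then
  apply_spark_example
  # Check on the pods; repeat until the workers report Running.
  kubectl get pods
else
  echo "kubectl not found; run this on a machine with access to the cluster"
fi
```

The ordering mirrors steps 1–3 above: the workers can only register once the master and its service exist.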

Sign up today to try this out for yourself.
