We recently looked at how you can use Flexiant Concerto to get going with Kubernetes in three simple steps, bypassing the time and effort that a manual setup would otherwise require. In that post, we took the commonly used guestbook example as our use case.
Today we will look at how to do this for another, very valuable use case: Apache Spark, the open-source big data processing framework that can run workloads up to 100x faster than Hadoop MapReduce.
NB: It is assumed that the following is already in place:
- A Flexiant Concerto account
- Cloud credentials for those providers you’d like to deploy the example on
- Beta features for your Flexiant Concerto account enabled (Under Settings -> Account)
Step 1 – Create the Kubernetes cluster
Step 2 – Add nodes
- Add a master node with at least 1GB of RAM:
- Add one or more slave nodes, each with at least 2GB of RAM:
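Before deploying anything, it is worth confirming that the cluster actually sees the nodes you just added. A minimal sketch, assuming kubectl is installed locally and configured against the new cluster (the helper function `ready_nodes` is ours, not part of Concerto or Kubernetes):

```shell
# ready_nodes prints how many cluster nodes report a STATUS of "Ready".
# Assumes kubectl is pointed at the Concerto-created cluster.
ready_nodes() {
  kubectl get nodes --no-headers | grep -cw Ready
}

# Usage (against a live cluster): expect 1 master + N slaves, e.g.
#   ready_nodes
```

If the count is lower than expected, wait for the nodes to finish provisioning before moving on to Step 3.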
Step 3 – Deploy the Spark example
- Download the Kubernetes Spark example JSON files
- Upload the Spark files following the example’s instructions:
- Upload spark-master.json
- Upload spark-master-service.json
- Upload spark-worker-controller.json and wait for the worker pods to reach the Running state
- Access the Spark cluster
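The upload steps above can also be sketched from the command line. This is a hypothetical kubectl-based equivalent of the Concerto upload flow (the function name `deploy_spark_example` is ours), assuming the example's JSON files are in the current directory and kubectl is configured for the cluster:

```shell
# deploy_spark_example creates the Spark master pod, its service, and the
# worker controller, in the order the example's instructions give them.
deploy_spark_example() {
  kubectl create -f spark-master.json &&
  kubectl create -f spark-master-service.json &&
  kubectl create -f spark-worker-controller.json
}

# Usage (against a live cluster):
#   deploy_spark_example
#   kubectl get pods   # repeat until the spark-worker pods show "Running"
```

The ordering matters: the master pod and its service should exist before the worker controller starts spinning up workers that need to find the master.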
Sign up today to try this out for yourself.