Webinar Series: Deploying Stateful Services in Kubernetes

This text supplements a webinar series on deploying and managing containerized workloads in the cloud. The series covers the essentials of containers, including managing container lifecycles, deploying multi-container applications, scaling workloads, and working with Kubernetes. It also highlights best practices for running stateful applications.

This tutorial includes the concepts and commands in the fifth session of the series, Deploying Stateful Services in Kubernetes.


Kubernetes is an open-source container orchestration tool for managing containerized applications. In the earlier parts of this series, you learned the building blocks of Kubernetes and packaged containers as Kubernetes ReplicaSets. While ReplicaSets ensure the availability of stateless Pods, they cannot be used with stateful workloads such as database clusters.

While it may be easy to package, deploy, manage, and scale modern cloud-native applications in Kubernetes, deploying and managing traditional workloads such as databases and content management systems in a containerized environment requires a different approach. StatefulSets bring the flexibility of Kubernetes ReplicaSets to stateful workloads.
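To make the contrast concrete, here is a minimal, hypothetical StatefulSet skeleton; the names and sizes below are illustrative placeholders, not from this tutorial. Unlike a ReplicaSet, a StatefulSet can declare per-Pod persistent storage through volumeClaimTemplates and gives each Pod a stable, ordinal-based identity:

```yaml
# Illustrative StatefulSet skeleton; all names and sizes are placeholders.
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db          # headless Service that gives each Pod a stable DNS name
  replicas: 3
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: mongo:3.6
  volumeClaimTemplates:    # one PersistentVolumeClaim is created per Pod
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```

Because each Pod gets its own claim, a replaced Pod reattaches to the same volume rather than starting from empty storage.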

In the final installment of this tutorial series, you'll deploy a highly available MongoDB ReplicaSet in Kubernetes as a StatefulSet using Helm, a popular open-source package manager for Kubernetes.


To complete this tutorial, you will need:

Step 1 – Installing the Helm Client on the Development Machine

With Helm, administrators can deploy complex Kubernetes applications with a single command. Applications are packaged as Charts that define, install, and upgrade Kubernetes applications. Charts provide an abstraction over Kubernetes objects such as Pods, Deployments, and Services.
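As a reference point (not specific to the chart used later), a Chart is simply a directory containing a descriptor, default values, and templated manifests. A minimal, hypothetical layout might look like this:

```
mychart/
  Chart.yaml          # chart name, version, and description
  values.yaml         # default configuration values (overridable at install time)
  templates/          # Kubernetes manifests with Go templating
    deployment.yaml
    service.yaml
```

The values.yaml file you will edit in Step 2 is exactly this kind of defaults file for the MongoDB ReplicaSet chart.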

Helm has two components: the server and the client. The server side of Helm runs in Kubernetes as a Service called Tiller. The client is a command-line tool that interacts with Tiller.

Since you're going to deploy a MongoDB ReplicaSet Helm Chart, you need the CLI that talks to Tiller, the server-side component of Helm. StackPointCloud, which you used to set up Kubernetes on DigitalOcean, comes with Tiller preinstalled.

Note: These instructions are for macOS. If you are using another operating system, please refer to the Helm installation guide.

Assuming you have Homebrew installed and configured on your Mac, run the following command to install Helm:

  • brew install kubernetes-helm


==> Downloading https://homebrew.bintray.com/bottles/kubernetes-helm-2.8.2.high_sierra.bottle.tar.gz
...
==> Summary
🍺  /usr/local/Cellar/kubernetes-helm/2.8.2: 50 files, 121.7MB

Once Helm is installed, verify that you can run it by checking its current version:

  • helm version


Client: &version.Version{SemVer:"v2.7.2", GitCommit:"8478fb4fc723885b155c924d1c8c410b7a9444e6", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.8.2", GitCommit:"a80231648a1473929271764b920a8e346f6de844", GitTreeState:"clean"}

This confirms that the client is installed properly and is able to talk to Tiller.

In the next step, we'll use Helm to deploy the MongoDB ReplicaSet in Kubernetes.

Step 2 – Deploying the MongoDB ReplicaSet in Kubernetes

A StorageClass in Kubernetes provides a way for administrators to describe the "classes" of storage they offer. For example, when users request a storage volume, the StorageClass determines what class of storage backend is provisioned for them. The classes may include a standard HDD and a faster SSD. Behind the scenes, the StorageClass interacts with the underlying infrastructure, such as a cloud provider's API, to provision storage.
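For illustration, a StorageClass manifest is short; the class name, provisioner, and parameters below are hypothetical placeholders, not the DigitalOcean driver used later in this step:

```yaml
# Hypothetical StorageClass describing a faster storage tier.
# The provisioner string depends entirely on which driver is installed.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: example.com/ssd-provisioner
parameters:
  type: ssd
```

Users never reference the provisioner directly; they only name the class (here, fast-ssd) in their volume claims.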

Since you need persistent storage to store MongoDB data, you may want to attach a DigitalOcean Block Storage volume to a worker node, and point the MongoDB Pod to use the storage volume for persistence.

In this case, the StorageClass acts as the interface between the Pod and the DigitalOcean block storage service. When you request a volume of block storage, the StorageClass talks to the preconfigured driver that knows how to allocate a block storage volume.
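As a sketch of what that request looks like, a PersistentVolumeClaim against the DigitalOcean StorageClass might read as follows; the claim name is a placeholder, and in this tutorial the Helm chart generates equivalent claims for you:

```yaml
# Illustrative PersistentVolumeClaim; the chart creates one of these
# per MongoDB Pod, so you do not write it by hand here.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  storageClassName: digitalocean   # hands the request to the DO driver
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```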

StackPointCloud installs the DigitalOcean storage driver and registers the StorageClass with Kubernetes during setup. This saves us from the steps involved in installing and configuring the driver and the StorageClass.

Before we deploy the MongoDB cluster, let's make sure that the StorageClass for DigitalOcean volumes is configured:

  • kubectl get storageclass

The output confirms that the StorageClass is configured and ready.

NAME                     PROVISIONER                            AGE
digitalocean (default)   digitalocean/flex-volume-provisioner   1d

Next, you'll configure and deploy the MongoDB ReplicaSet based on the DigitalOcean StorageClass.

Create a new directory for your project and switch to it:

  • mkdir ~/mongo-rs
  • cd ~/mongo-rs

Clone the Helm Chart repository from GitHub:

  • git clone https://github.com/kubernetes/charts.git

Navigate to the MongoDB ReplicaSet directory (charts/stable/mongodb-replicaset/) and verify that the file values.yaml exists.

  • cd charts/stable/mongodb-replicaset/
  • ls values.yaml



This file contains the parameters and configuration for the chart. You need to modify this file to configure the MongoDB ReplicaSet to use the DigitalOcean StorageClass.

Edit values.yaml:

Find and uncomment the following section:


# storageClass: "-" 

Replace "-" with "digitalocean", like this:


storageClass: "digitalocean"

Save the file and exit your editor.

Now navigate back to the ~/mongo-rs folder:

  • cd ~/mongo-rs

You are now ready to deploy the MongoDB ReplicaSet to your Kubernetes cluster, backed by DigitalOcean block storage. Run the following command to launch the database cluster.

  • helm install --name=todo -f charts/stable/mongodb-replicaset/values.yaml stable/mongodb-replicaset

In the preceding command, --name sets the name of the Helm release. The -f switch points to the configuration settings stored in values.yaml.

You will immediately see output confirming that the chart creation has started.


NAME:   todo
LAST DEPLOYED: Sat Mar 31 10:37:06 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME                     TYPE       CLUSTER-IP  EXTERNAL-IP  PORT(S)    AGE
todo-mongodb-replicaset  ClusterIP  None        <none>       27017/TCP  1s

==> v1beta1/StatefulSet
NAME                     DESIRED  CURRENT  AGE
todo-mongodb-replicaset  3        1        0s

==> v1/Pod(related)
NAME                       READY  STATUS    RESTARTS  AGE
todo-mongodb-replicaset-0  0/1    Init:0/2  0         0s

==> v1/ConfigMap
NAME                           DATA  AGE
todo-mongodb-replicaset        1     1s
todo-mongodb-replicaset-tests  1     1s

NOTES:
1. After the statefulset is created completely, one can check which instance is primary by running:

    $ for ((i = 0; i < 3; ++i)); do kubectl exec --namespace default todo-mongodb-replicaset-$i -- sh -c 'mongo --eval="printjson(rs.isMaster())"'; done

2. One can insert a key into the primary instance of the mongodb replica set by running the following:
   MASTER_POD_NAME must be replaced with the name of the master found from the previous step.

    $ kubectl exec --namespace default MASTER_POD_NAME -- mongo --eval="printjson(db.test.insert({key1: 'value1'}))"

3. One can fetch the keys stored in the primary or any of the slave nodes in the following manner.
   POD_NAME must be replaced by the name of the pod being queried.

    $ kubectl exec --namespace default POD_NAME -- mongo --eval="rs.slaveOk(); db.test.find().forEach(printjson)"

Let's now run a series of commands to track the status of the cluster.

First, check the StatefulSet:

  • kubectl get statefulset

This command confirms that the MongoDB ReplicaSet was created as a Kubernetes StatefulSet.


NAME                      DESIRED   CURRENT   AGE
todo-mongodb-replicaset   3         2         2m

Now check the Pods:

  • kubectl get pods

The number of Pods and their naming convention indicate that the MongoDB ReplicaSet is successfully configured:


NAME                        READY     STATUS    RESTARTS   AGE
todo-mongodb-replicaset-0   1/1       Running   0          3m
todo-mongodb-replicaset-1   1/1       Running   0          1m
todo-mongodb-replicaset-2   1/1       Running   0          54s

Notice that each Pod has a suffix that ends with a sequential number, which is a distinguishing feature of a StatefulSet.
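That sequential naming gives each Pod a stable DNS identity of the form &lt;statefulset&gt;-&lt;ordinal&gt;.&lt;headless-service&gt;.&lt;namespace&gt;.svc.cluster.local. As a quick sketch (plain JavaScript, no cluster required; the helper name is ours, not part of any API), the member addresses for this deployment can be derived like so:

```javascript
// Build the stable DNS names Kubernetes assigns to StatefulSet Pods.
// Pattern: <statefulset>-<ordinal>.<headless-service>.<namespace>.svc.cluster.local:<port>
function podHosts(statefulSet, service, namespace, replicas, port) {
  const hosts = [];
  for (let i = 0; i < replicas; i++) {
    // Ordinals start at 0 and are assigned in order, matching the Pod list above.
    hosts.push(`${statefulSet}-${i}.${service}.${namespace}.svc.cluster.local:${port}`);
  }
  return hosts;
}

console.log(podHosts('todo-mongodb-replicaset', 'todo-mongodb-replicaset', 'default', 3, 27017));
```

These are the same host names you will see later in the ReplicaSet configuration output.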

Let's now check whether the MongoDB instances are talking to each other. We'll do this by running a command in the MongoDB shell inside one of the Pods.

Use kubectl to launch the mongo console on one of the hosts:

  • kubectl exec -it todo-mongodb-replicaset-0 mongo

After connecting, you will find yourself in the MongoDB shell:


MongoDB shell version v3.6.3
connecting to: mongodb://
MongoDB server version: 3.6.3
Welcome to the MongoDB shell.
For interactive help, type "help".
...
2018-03-31T05:08:20.239+0000 I CONTROL  [initandlisten]

Check the ReplicaSet's configuration with the following command:

  • rs.conf()

The output confirms that there are three instances of MongoDB running as a ReplicaSet.


{
    "_id" : "rs0",
    "version" : 3,
    "protocolVersion" : NumberLong(1),
    "members" : [
        {
            "_id" : 0,
            "host" : "todo-mongodb-replicaset-0.todo-mongodb-replicaset.default.svc.cluster.local:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : { },
            "slaveDelay" : NumberLong(0),
            "votes" : 1
        },
        {
            "_id" : 1,
            "host" : "todo-mongodb-replicaset-1.todo-mongodb-replicaset.default.svc.cluster.local:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : { },
            "slaveDelay" : NumberLong(0),
            "votes" : 1
        },
        {
            "_id" : 2,
            "host" : "todo-mongodb-replicaset-2.todo-mongodb-replicaset.default.svc.cluster.local:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : { },
            "slaveDelay" : NumberLong(0),
            "votes" : 1
        }
    ],
    "settings" : {
        "chainingAllowed" : true,
        "heartbeatIntervalMillis" : 2000,
        "heartbeatTimeoutSecs" : 10,
        "electionTimeoutMillis" : 10000,
        "catchUpTimeoutMillis" : -1,
        "catchUpTakeoverDelayMillis" : 30000,
        "getLastErrorModes" : { },
        "getLastErrorDefaults" : {
            "w" : 1,
            "wtimeout" : 0
        },
        "replicaSetId" : ObjectId("5abdb4f61d952afc4b0b8218")
    }
}

Exit the MongoDB console:

  • exit

This will disconnect you from the remote host as well.

Let's switch gears and check the DigitalOcean control panel for the block storage volumes associated with the cluster. Log in to your DigitalOcean account and select the Volumes tab:

Dashboard showing volumes

You will see that three volumes of 10GB each are attached to the Kubernetes worker nodes. Each Pod of the MongoDB StatefulSet stores its data in one of the block storage volumes. The size of 10GB is defined in values.yaml under the persistentVolume section.


persistentVolume:
  enabled: true
  ## mongodb-replicaset data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  storageClass: digitalocean
  accessModes:
    - ReadWriteOnce
  size: 10Gi
  annotations: {}

You have successfully configured a highly available MongoDB ReplicaSet running in Kubernetes.

Now let's deploy the web application that talks to the MongoDB cluster.

Step 3 – Deploying and Scaling the Web Application in Kubernetes

Let's extend the ToDo Node.js application we used in earlier parts of this tutorial series to take advantage of the MongoDB cluster.

Note: You can also build the container image from the source code or use the YAML files in the Kubernetes directory directly. Refer to the tutorial Deploying and Scaling Microservices in Kubernetes for steps on building the image and deploying the application to Kubernetes.

Start by creating a new working directory:

  • mkdir ~/web-app
  • cd ~/web-app

Then clone the ToDo application's repository, which contains the code and Kubernetes artifacts:

  • git clone https://github.com/janakiramm/todo.git

Switch to the todo-app/kubernetes directory, which contains the Kubernetes configuration files.

Open the file web-rs-ss.yaml in your editor.

Find the env section in the YAML file.


      - name: web
        image: janakiramm/todo
        env:
          - name: "DBHOST"
            value: "mongodb://todo-mongodb-replicaset-0.todo-mongodb-replicaset,todo-mongodb-replicaset-1.todo-mongodb-replicaset,todo-mongodb-replicaset-2.todo-mongodb-replicaset:27017"
        ports:
        - containerPort: 3000

This passes the database connection string to the application at runtime as an environment variable. Instead of pointing the application to a single MongoDB Pod, this version of the app uses the StatefulSet you created. Each entry in the value section refers to one of the Pods of the MongoDB StatefulSet.
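The ToDo app's source isn't shown here, but as a minimal sketch of how a Node.js app might consume that variable (the function name and fallback constant are illustrative, not from the actual repository):

```javascript
// Minimal sketch: read the connection string from the environment, as the
// Pod spec above injects it. The fallback mirrors the DBHOST value in
// web-rs-ss.yaml so the app can also run outside the cluster for testing.
const DEFAULT_DBHOST =
  'mongodb://todo-mongodb-replicaset-0.todo-mongodb-replicaset,' +
  'todo-mongodb-replicaset-1.todo-mongodb-replicaset,' +
  'todo-mongodb-replicaset-2.todo-mongodb-replicaset:27017';

function getDbHost(env) {
  // Prefer the injected environment variable; fall back to the default.
  return env.DBHOST || DEFAULT_DBHOST;
}

console.log(getDbHost(process.env));
```

Listing all three Pod hosts in one string lets the MongoDB driver fail over to another member if the current primary goes away.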

Use kubectl to deploy the web ReplicaSet along with the web Service:

  • kubectl create -f web-rs-ss.yaml -f web-service.yaml

You will see that both are created:


replicaset "web" created
service "web" created

List the Pods again:

  • kubectl get pods

You now see all the Pods belonging to MongoDB and the web app.


NAME                        READY     STATUS    RESTARTS   AGE
todo-mongodb-replicaset-0   1/1       Running   0          26m
todo-mongodb-replicaset-1   1/1       Running   0          24m
todo-mongodb-replicaset-2   1/1       Running   0          23m
web-t5zzk                   1/1       Running   0          17s
web-x6dh8                   1/1       Running   0          17s

Let's check out the Kubernetes Services:

  • kubectl get svc


NAME                      TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
kubernetes                ClusterIP                <none>        443/TCP          1d
todo-mongodb-replicaset   ClusterIP   None         <none>        27017/TCP        27m
web                       NodePort                 <none>        3000:31201/TCP   14s

The web Pods talk to the MongoDB cluster through the todo-mongodb-replicaset Service. The web application is available through the web Service on NodePort 31201.

Accessing this port on any of the worker nodes shows the web application.

The live Todo list app

You can scale the web application by increasing the number of Pods in the ReplicaSet:

  • kubectl scale rs/web --replicas=10


replicaset "web" scaled

You can then scale the application back to two Pods:

  • kubectl scale rs/web --replicas=2


replicaset "web" scaled

Now let's run some tests for availability.

Step 4 – Testing the MongoDB ReplicaSet for High Availability

One of the advantages of running a StatefulSet is the high availability of workloads. Let's test this by deleting one of the Pods in the MongoDB StatefulSet:

  • kubectl delete pod todo-mongodb-replicaset-2


pod "todo-mongodb-replicaset-2" deleted

Check the number of Pods:

  • kubectl get pods

You will see that todo-mongodb-replicaset-2 is terminating:


NAME                        READY     STATUS        RESTARTS   AGE
todo-mongodb-replicaset-0   1/1       Running       0          33m
todo-mongodb-replicaset-1   1/1       Running       0          32m
todo-mongodb-replicaset-2   0/1       Terminating   0          31m
web-t5zzk                   1/1       Running       0          8m
web-x6dh8                   1/1       Running       0          8m

Within a few minutes, you will see that Kubernetes initializes another Pod to replace the deleted one.

You will see todo-mongodb-replicaset-2 initializing:

NAME                        READY     STATUS     RESTARTS   AGE
todo-mongodb-replicaset-0   1/1       Running    0          34m
todo-mongodb-replicaset-1   1/1       Running    0          33m
todo-mongodb-replicaset-2   0/1       Init:0/2   0          29s
web-t5zzk                   1/1       Running    0          8m
web-x6dh8                   1/1       Running    0          8m

Now that everything works, you can clean things up.

Delete all of the objects created during this tutorial with the following commands:

  • kubectl delete -f web-rs-ss.yaml -f web-service.yaml


replicaset "web" deleted
service "web" deleted

To delete the Kubernetes cluster itself, go to StackPointCloud and do so through their control panel.


In this tutorial, you deployed a durable, persistent, highly available MongoDB ReplicaSet as a Kubernetes StatefulSet. You also learned how to access the StatefulSet from other applications deployed in the same Kubernetes cluster.
