I had to set up Kubernetes on Google Cloud in a hurry. This is a step-by-step setup that you can do on your own machine to understand some of the concepts.

First of all, install kubectl and minikube.

Create a cluster

The first step is to create a cluster.

Usually you create the cluster outside the kubectl command and then configure kubectl to access that cluster. If you are using Google Cloud, I recommend having gcloud installed as well. For this example we will use minikube to create the cluster; it also configures kubectl for us.

You can find a lot of definitions of what a cluster is in Kubernetes, but I will give you mine:

A cluster is just a group of machines working together. (Most of the time they are identical and can easily be destroyed.)

You need a cluster even if you are going to use only one machine. And in the case of minikube it is just a VirtualBox VM.

Create the cluster:

$ minikube start
🎉  minikube 1.7.3 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.7.3
💡  To disable this notice, run: 'minikube config set WantUpdateNotification false'

🙄  minikube v1.7.2 on Arch rolling
✨  Automatically selected the virtualbox driver. Other choices: none, docker (experimental)
💿  Downloading VM boot image ...
    > minikube-v1.7.0.iso.sha256: 65 B / 65 B [--------------] 100.00% ? p/s 0s
    > minikube-v1.7.0.iso: 166.68 MiB / 166.68 MiB [-] 100.00% 29.20 MiB p/s 6s
🔥  Creating virtualbox VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.17.2 on Docker 19.03.5 ...
💾  Downloading kubectl v1.17.2
💾  Downloading kubelet v1.17.2
💾  Downloading kubeadm v1.17.2
🚀  Launching Kubernetes ...
🌟  Enabling addons: default-storageclass, storage-provisioner
⌛  Waiting for cluster to come online ...
🏄  Done! kubectl is now configured to use "minikube"
minikube start  13.39s user 17.70s system 23% cpu 2:09.66 total

After it starts, run this command in another terminal and leave it open (it creates a network route so that LoadBalancer services can get an external IP; we will need this later):

$ minikube tunnel

Now you can start using kubectl!
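
For example, list the machines in the cluster; with minikube you should see a single node, something like this:

$ kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
minikube   Ready    master   1m    v1.17.2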

Kubernetes concepts

  • Cluster: machines grouped together to run Docker containers (in general you don't say which machine will run which container)
  • Node: a machine inside the cluster, for example a Compute Engine VM (Google Cloud) or a Droplet (DigitalOcean)
  • Pod: the documentation says this is the basic unit of Kubernetes; I prefer to say that a pod is a running container (think docker run)

Deploying the first container

For this tutorial I will not connect to a database; instead I will run a stateless container.

The only thing this container will get from the outside world is an ENV var. (And if you have done any deployment recently, you know that you can connect to a database using an ENV var 😉.)

Building the image

NOTE: If you don't want to build your own image you can skip this step and use mine (dmitryrck/hello-target).

I am using code similar to the example at https://cloud.google.com/kubernetes-engine/docs/quickstarts/deploying-a-language-specific-app

First we have to build our image; remember that Kubernetes runs containers.

  1. Create a directory and inside it place this Dockerfile:
from ruby

run gem install sinatra
copy app.rb /app.rb
# entrypoint (instead of cmd) lets the deployment pass the script and extra flags as args
entrypoint ["ruby"]
  2. Create this file app.rb:
require "sinatra"

# bind to all interfaces so the app is reachable from outside the container
set :bind, "0.0.0.0"
set :port, ENV["PORT"] || "8080"

get "/" do
  target = ENV["TARGET"] || "World"
  "Hello #{target}!\n"
end
  3. Build the image:
$ docker build -t hello-target .
Sending build context to Docker daemon  3.072kB
Step 1/4 : from ruby
 ---> 2ff4e698f315
Step 2/4 : run gem install sinatra
 ---> Running in 7d64263bd742
Successfully installed rack-2.2.2
Successfully installed tilt-2.0.10
Successfully installed rack-protection-2.0.8.1
Successfully installed ruby2_keywords-0.0.2
Successfully installed mustermann-1.1.1
Successfully installed sinatra-2.0.8.1
6 gems installed
Removing intermediate container 7d64263bd742
 ---> 4b5946a34e1d
Step 3/4 : copy app.rb /app.rb
 ---> 4b84ef5972c6
Step 4/4 : entrypoint ["ruby"]
 ---> Running in 827bc270be8a
Removing intermediate container 827bc270be8a
 ---> e12e46b470d6
Successfully built e12e46b470d6
Successfully tagged hello-target:latest
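
Before pushing the image anywhere you can give it a quick local test (just a sanity check; hello-test is a throwaway container name I picked, and since the entrypoint is ruby we pass /app.rb as its argument):

$ docker run --rm -d -p 8080:8080 --name hello-test hello-target /app.rb
$ curl http://localhost:8080/
Hello World!
$ docker stop hello-test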

Pushing the image

NOTE: Again if you don't want to push your own image use mine 😉.

Kubernetes downloads the image from a registry (each node pulls the images it needs to run).

For this step I am going to push the image to my Docker Hub account and leave it public. In the real world you have to deal with authentication.

Tagging and pushing the image:

$ docker tag hello-target dmitryrck/hello-target
$ docker push dmitryrck/hello-target
The push refers to repository [docker.io/dmitryrck/hello-target]
746b10eafa9d: Pushed
31bde95e4b34: Pushed
3432f61a06d4: Mounted from dmitryrck/ruby
38a0b0e0037c: Mounted from dmitryrck/ruby
2dba91c4f4b7: Mounted from dmitryrck/ruby
9437609235f0: Mounted from dmitryrck/ruby
bee1c15bf7e8: Mounted from dmitryrck/ruby
423d63eb4a27: Mounted from dmitryrck/ruby
7f9bf938b053: Mounted from dmitryrck/ruby
f2b4f0674ba3: Mounted from dmitryrck/ruby
latest: digest: sha256:a8fc5d52012f61807e44ded0e18cfb8054662548926f51d4df709d351ca496f6 size: 2420

ProTip: You can build your image using docker build -t YOURUSERNAME/hello-target . to avoid the step of tagging the image again 😊.

Setting up the ENV vars

Let's first create the configuration for the ENV vars.

Create this file configmap.yml:

apiVersion: v1
kind: ConfigMap

metadata:
  name: my-app-configmap
  namespace: default

data:
  TARGET: "you!"
  PORT: "3000"

And apply that configuration:

$ kubectl apply -f configmap.yml
configmap/my-app-configmap created

When you apply, you basically tell the master node of your cluster to make that change.
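
You can also ask the master node to print the configmap back, to confirm the data is there (the data section should list PORT and TARGET):

$ kubectl get configmap my-app-configmap -o yaml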

Setting up the deployment

This is the most complex step/file in this tutorial. There is a reason for that: this file is the one I use as a starting point for all my other deployments. Its main features are:

  • Uses the ENV vars from that configmap (this way you can gitignore your configmap and include only a sample if you store things in git)
  • Forces Kubernetes to always fetch the image when deploying (for images tagged latest this is already the default behavior)
  • Adds arguments in case you use the same image for more than one kind of deployment
  • Exposes one port; remember not to do this if your container does not need to be exposed to the outside world
  • Two replicas of the deployment
  • Health check
  • And, because we have a health check, zero-downtime deployments (even if you have only one replica of your deployment)

Create the file puma-deploy.yml:

apiVersion: apps/v1
kind: Deployment

metadata:
  name: puma-deploy

spec:
  replicas: 2
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  selector:
    matchLabels:
      app: puma

  template:
    metadata:
      labels:
        app: puma

    spec:
      containers:
      - name: puma
        image: dmitryrck/hello-target
        imagePullPolicy: Always
        args: ["app.rb", "-e", "production"]
        envFrom:
        - configMapRef:
            name: my-app-configmap
        ports:
        - containerPort: 3000
        readinessProbe:
          initialDelaySeconds: 5
          periodSeconds: 30
          httpGet:
            port: 3000
            path: /

And apply the deployment:

$ kubectl apply -f puma-deploy.yml
deployment.apps/puma-deploy created

If you notice, this command finishes extremely quickly. The reason for that is:

kubectl does not do the work of deploying; instead it tells the master node to deploy. If any error happens, you will only see it when you ask the master node.

One of the ways to ask the master node if your deploy was a success is this:

$ kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
puma-deploy-7ccfbcd57c-8fcd4   1/1     Running   0          74s
puma-deploy-7ccfbcd57c-zm7w2   1/1     Running   0          74s

There you see that both pods are Running.
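
Another way is to ask about the rollout itself; this command waits until the rollout finishes and is handy in deploy scripts:

$ kubectl rollout status deployment/puma-deploy
deployment "puma-deploy" successfully rolled out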

Expose your deployment with a Load Balancer

Your container is running, but the only one that knows where it is or how to reach it is the master node.

To expose it to the world we will create a service.

Create this file load-balance.yml:

apiVersion: v1
kind: Service

metadata:
  name: puma-lb
  labels:
    app: puma

spec:
  type: LoadBalancer
  selector:
    app: puma
  ports:
  - port: 80
    targetPort: 3000
    protocol: TCP

And start your service:

$ kubectl apply -f load-balance.yml
service/puma-lb created

Once more this command finishes extremely quickly, for the same reason as before: kubectl only sends the information to the master node to create that service.

If you get the list of services you will see it is pending:

$ kubectl get services
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP        23m
puma-lb      LoadBalancer   10.97.214.203   <pending>     80:30783/TCP   1s

After your provider (in our case minikube, via the tunnel we left running) finishes creating the service, you can get the IP address with that same command:

$ kubectl get services
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1       <none>          443/TCP        8h
puma-lb      LoadBalancer   10.97.214.203   10.97.214.203   80:30783/TCP   7h57m
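
Now you can reach the app through the load balancer. Use the EXTERNAL-IP from your own output; the double exclamation mark below is expected, since our TARGET value already ends with one:

$ curl http://10.97.214.203/
Hello you!!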

Cleaning up

If you are just testing, these steps will remove everything:

  • Press control+c in the terminal running minikube tunnel and call: minikube tunnel --cleanup
  • Press control+c in the terminal running minikube dashboard, if you started it
  • Call these two commands:
$ minikube stop
✋  Stopping "minikube" in virtualbox ...
🛑  "minikube" stopped.
$ minikube delete
🔥  Deleting "minikube" in virtualbox ...
💀  Removed all traces of the "minikube" cluster.

Protips

  1. If you update the ENV vars in your configmap OR update your image, you have to deploy again. But Kubernetes only deploys again if your YAML file changes. One way to force a new rollout without touching the file (available since kubectl 1.15):
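$ kubectl rollout restart deployment/puma-deploy
deployment.apps/puma-deploy restarted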
  2. Use secrets for passwords that you need to use as ENV vars; they plug into the deployment the same way, via a secretRef entry under envFrom. Creating one from the command line looks like this (my-app-secrets and SECRET_TOKEN are just example names):
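$ kubectl create secret generic my-app-secrets --from-literal=SECRET_TOKEN=changeme
secret/my-app-secrets created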
  3. Run this command to access a nice dashboard with minikube (leave the terminal open and cancel it with control+c):
$ minikube dashboard --url=true
🤔  Verifying dashboard health ...
🚀  Launching proxy ...
🤔  Verifying proxy health ...
http://127.0.0.1:34175/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/