This post is part 5 in the series “Hashing out a docker workflow”. I have resurrected this series from over a year ago, but if you want to check out the previous posts, you can find the first post here. The beginning of this blog series pre-dates Docker Machine, Docker for Mac, and Docker for Windows, but the Docker concepts still apply; we just won’t be using Docker with Vagrant any more. Instead, check out the Docker Toolbox. There isn’t a need to use Vagrant any longer.
We are going to take the Drupal image that I created in my last post, “Creating a deployable Docker image with Jenkins”, and deploy it. You can find the image we created last time on Docker Hub, where we pushed it. You have several options for deploying Docker images to production, whether that be manually, using a service like AWS ECS or OpenShift, etc. Today, I’m going to walk you through a deployment process using Kubernetes, also known simply as k8s.
Why use Kubernetes?
There is an abundance of options out there to deploy Docker containers to the cloud easily. Most of them provide a nice UI with a form wizard that takes you through deploying your containers. So why use k8s? The biggest advantage, in my opinion, is that Kubernetes is agnostic of the cloud that you are deploying on. This means that if/when you decide you no longer want to host your application on AWS, or whatever cloud you happen to be on, and instead want to move to Google Cloud or Azure, you can pick up your entire cluster configuration and move it very easily to another cloud provider.
Obviously there is the trade-off of needing to learn yet another technology (Kubernetes) to get your app deployed, but you also won’t have vendor lock-in when it is time to move your application to a different cloud. Some of the other benefits of K8s worth mentioning are the large community, all the add-ons, and the ability to keep all of your cluster/deployment configuration in code. I don’t want to turn this post into the benefits of Kubernetes over others, so let’s jump into some hands-on and start setting things up.
Set up a local cluster
Instead of spinning up servers with a cloud provider and paying for them while we explore k8s, we are going to set up a cluster locally and configure Kubernetes without paying a dime out of pocket. Setting up a local cluster is super simple with a tool called Minikube. Head over to the Kubernetes website and get that installed. Once you have Minikube installed, boot it up by typing
minikube start. You should see something similar to what is shown below:
$ minikube start
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Downloading Minikube ISO
 160.27 MB / 160.27 MB [============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Downloading kubeadm v1.10.0
Downloading kubelet v1.10.0
Finished Downloading kubelet v1.10.0
Finished Downloading kubeadm v1.10.0
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.
This command set up a virtual machine on your computer, likely using VirtualBox. If you want to double check, pop open the VirtualBox UI to see the new VM created there. This virtual machine has all the necessary components loaded on it to run a Kubernetes cluster. In K8s speak, each virtual machine is called a node. If you want to log in to the node to explore a bit, type
minikube ssh. Below I have ssh’d into the machine and run
docker ps. You’ll notice that this VM has quite a few Docker containers running to make up this cluster.
$ minikube ssh
                         _             _
            _         _ ( )           ( )
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

$ docker ps
CONTAINER ID   IMAGE                                      COMMAND                  CREATED         STATUS         NAMES
aa766ccc69e2   k8s.gcr.io/k8s-dns-sidecar-amd64           "/sidecar --v=2 --lo…"   5 minutes ago   Up 5 minutes   k8s_sidecar_kube-dns-86f4d74b45-kb2tz_kube-system_3a21f134-a637-11e8-894d-0800273ca679_0
6dc978b31b0d   k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64     "/dnsmasq-nanny -v=2…"   5 minutes ago   Up 5 minutes   k8s_dnsmasq_kube-dns-86f4d74b45-kb2tz_kube-system_3a21f134-a637-11e8-894d-0800273ca679_0
0c08805e8068   k8s.gcr.io/kubernetes-dashboard-amd64      "/dashboard --insecu…"   5 minutes ago   Up 5 minutes   k8s_kubernetes-dashboard_kubernetes-dashboard-5498ccf677-hvt4f_kube-system_3abef591-a637-11e8-894d-0800273ca679_0
f5d725b1c96a   gcr.io/k8s-minikube/storage-provisioner    "/storage-provisioner"   6 minutes ago   Up 6 minutes   k8s_storage-provisioner_storage-provisioner_kube-system_3acd2f39-a637-11e8-894d-0800273ca679_0
3bab9f953f14   k8s.gcr.io/k8s-dns-kube-dns-amd64          "/kube-dns --domain=…"   6 minutes ago   Up 6 minutes   k8s_kubedns_kube-dns-86f4d74b45-kb2tz_kube-system_3a21f134-a637-11e8-894d-0800273ca679_0
9b8306dbaab7   k8s.gcr.io/kube-proxy-amd64                "/usr/local/bin/kube…"   6 minutes ago   Up 6 minutes   k8s_kube-proxy_kube-proxy-dwhn6_kube-system_3a0fa9b2-a637-11e8-894d-0800273ca679_0
5446ddd71cf5   k8s.gcr.io/pause-amd64:3.1                 "/pause"                 7 minutes ago   Up 7 minutes   k8s_POD_storage-provisioner_kube-system_3acd2f39-a637-11e8-894d-0800273ca679_0
17907c340c66   k8s.gcr.io/pause-amd64:3.1                 "/pause"                 7 minutes ago   Up 7 minutes   k8s_POD_kubernetes-dashboard-5498ccf677-hvt4f_kube-system_3abef591-a637-11e8-894d-0800273ca679_0
71ed3f405944   k8s.gcr.io/pause-amd64:3.1                 "/pause"                 7 minutes ago   Up 7 minutes   k8s_POD_kube-dns-86f4d74b45-kb2tz_kube-system_3a21f134-a637-11e8-894d-0800273ca679_0
daf1cac5a9a5   k8s.gcr.io/pause-amd64:3.1                 "/pause"                 7 minutes ago   Up 7 minutes   k8s_POD_kube-proxy-dwhn6_kube-system_3a0fa9b2-a637-11e8-894d-0800273ca679_0
9d00a680eac4   k8s.gcr.io/kube-scheduler-amd64            "kube-scheduler --ad…"   7 minutes ago   Up 7 minutes   k8s_kube-scheduler_kube-scheduler-minikube_kube-system_31cf0ccbee286239d451edb6fb511513_0
4d545d0f4298   k8s.gcr.io/kube-apiserver-amd64            "kube-apiserver --ad…"   7 minutes ago   Up 7 minutes   k8s_kube-apiserver_kube-apiserver-minikube_kube-system_2057c3a47cba59c001b9ca29375936fb_0
66589606f12d   k8s.gcr.io/kube-controller-manager-amd64   "kube-controller-man…"   8 minutes ago   Up 8 minutes   k8s_kube-controller-manager_kube-controller-manager-minikube_kube-system_ee3fd35687a14a83a0373a2bd98be6c5_0
1054b57bf3bf   k8s.gcr.io/etcd-amd64                      "etcd --data-dir=/da…"   8 minutes ago   Up 8 minutes   k8s_etcd_etcd-minikube_kube-system_a5f05205ed5e6b681272a52d0c8d887b_0
bb5a121078e8   k8s.gcr.io/kube-addon-manager              "/opt/kube-addons.sh"    9 minutes ago   Up 9 minutes   k8s_kube-addon-manager_kube-addon-manager-minikube_kube-system_3afaf06535cc3b85be93c31632b765da_0
04e262a1f675   k8s.gcr.io/pause-amd64:3.1                 "/pause"                 9 minutes ago   Up 9 minutes   k8s_POD_kube-apiserver-minikube_kube-system_2057c3a47cba59c001b9ca29375936fb_0
25a86a334555   k8s.gcr.io/pause-amd64:3.1                 "/pause"                 9 minutes ago   Up 9 minutes   k8s_POD_kube-scheduler-minikube_kube-system_31cf0ccbee286239d451edb6fb511513_0
e1f0bd797091   k8s.gcr.io/pause-amd64:3.1                 "/pause"                 9 minutes ago   Up 9 minutes   k8s_POD_kube-controller-manager-minikube_kube-system_ee3fd35687a14a83a0373a2bd98be6c5_0
0db163f8c68d   k8s.gcr.io/pause-amd64:3.1                 "/pause"                 9 minutes ago   Up 9 minutes   k8s_POD_etcd-minikube_kube-system_a5f05205ed5e6b681272a52d0c8d887b_0
4badf1309a58   k8s.gcr.io/pause-amd64:3.1                 "/pause"                 9 minutes ago   Up 9 minutes   k8s_POD_kube-addon-manager-minikube_kube-system_3afaf06535cc3b85be93c31632b765da_0
When you’re done snooping around inside the node, log out of the session by typing
Ctrl+D. This should take you back to a session on your local machine.
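While you experiment, a few other Minikube subcommands are handy for managing the VM itself. These are standard Minikube commands, though the exact output varies by version:

```shell
# Check whether the VM and the Kubernetes components are running
minikube status

# Stop the VM when you are done for the day (cluster state is preserved)
minikube stop

# Tear the VM down entirely to reclaim disk space
minikube delete
```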
Interacting with the cluster
Kubernetes is managed via a REST API; however, you will find yourself interacting with the cluster mainly through a CLI tool called
kubectl. We will issue commands to it, and the tool will generate the necessary Create, Read, Update, and Delete requests for us and execute those requests against the API. It’s time to install the CLI tool; go check out the docs here to install it on your OS.
Once you have the command-line tool installed, it should automatically be configured to interface with the cluster that you just set up with Minikube. To verify, run a command to see all of the nodes in the cluster:
kubectl get nodes.
$ kubectl get nodes
NAME       STATUS    ROLES     AGE       VERSION
minikube   Ready     master    6m        v1.10.0
We have one node in the cluster! Let’s deploy our app using the Docker image that we created last time.
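If kubectl ever seems to be talking to the wrong cluster, you can check which context it is using. These are standard kubectl subcommands; with a fresh Minikube install the current context should be minikube:

```shell
# Show which cluster context kubectl commands will be sent to
kubectl config current-context

# Show the address of the API server and cluster services
kubectl cluster-info
```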
Writing Config Files
Using the kubectl CLI tool, you can define all of your Kubernetes objects directly, but I like to create config files that I can commit to a repository and use to manage changes as the cluster expands. For this deployment, I’ll take you through creating 3 different K8s objects. We will explicitly create a Deployment object, which will implicitly create a Pod object, and we will create a Service object. For details on what these 3 objects are, check out the Kubernetes docs.
In a nutshell, a Pod is a wrapper around a Docker container, and a Service is a way to expose a Pod, or several Pods, on a specific port to the outside world. Pods are only accessible inside the Kubernetes cluster; the only way to reach anything running in a Pod is to expose it with a Service. A Deployment is an object that manages Pods and ensures that they are healthy and up. If you configure a Deployment to have 2 replicas, the Deployment will ensure 2 Pods are always up, and if one crashes, Kubernetes will spin up another Pod to match the Deployment definition.
Head over to the API reference and grab the example config file: https://v1-10.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#deployment-v1-apps. We will modify the config file from the docs to fit our needs. Change the template to look like the below (I changed the image, app, and name properties in the yml):
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  # Unique key of the Deployment instance
  name: deployment-example
spec:
  # 3 Pods should exist at all times.
  replicas: 3
  template:
    metadata:
      labels:
        # Apply this label to pods and default
        # the Deployment label selector to this value
        app: drupal
    spec:
      containers:
      - name: drupal
        # Run this image
        image: tomfriedhof/docker_blog_post
Now it’s time to feed that config file into the Kubernetes API; we will use the CLI tool for this:
$ kubectl create -f deployment.yml
You can check the status of that deployment by asking k8s for all Pod and Deployment objects:
$ kubectl get deploy,po
Once everything is up and running you should see something like this:
$ kubectl get deploy,po
NAME                        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/deployment-example   3         3         3            3           3m

NAME                                     READY     STATUS    RESTARTS   AGE
po/deployment-example-fc5d69475-dfkx2    1/1       Running   0          3m
po/deployment-example-fc5d69475-t5w2j    1/1       Running   0          3m
po/deployment-example-fc5d69475-xw9m6    1/1       Running   0          3m
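To see the Deployment’s self-healing in action, try deleting one of the Pods and watch Kubernetes immediately schedule a replacement to get back to 3 replicas. The Pod name below comes from my output above; yours will differ:

```shell
# Delete one of the Pods managed by the Deployment
kubectl delete pod deployment-example-fc5d69475-dfkx2

# The Deployment notices only 2 of 3 replicas exist and creates a
# new Pod; a fresh Pod name shows up with a very young AGE
kubectl get po

# You can also scale the Deployment up or down on the fly
kubectl scale deployment deployment-example --replicas=5
```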
At this point we have no way of accessing any of those Pods in the deployment. We need to expose the Pods using a Kubernetes Service. To do this, grab the example file from the docs again (https://v1-10.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#service-v1-core) and change it to the following:
kind: Service
apiVersion: v1
metadata:
  # Unique key of the Service instance
  name: service-example
spec:
  ports:
    # Accept traffic sent to port 80
    - name: http
      port: 80
      targetPort: 80
  selector:
    # Loadbalance traffic across Pods matching
    # this label selector
    app: drupal
  # Create an HA proxy in the cloud provider
  # with an External IP address - *Only supported
  # by some cloud providers*
  type: LoadBalancer
Create this service object using the CLI tool again:
$ kubectl create -f service.yml
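After creating the Service, you can inspect what it actually wired up, such as which Pods it selected and which node port it was assigned, with a few standard kubectl commands:

```shell
# Show the Service, its cluster IP, and the port mapping
kubectl get svc service-example

# List the Pod IPs the Service load-balances across
kubectl get endpoints service-example

# Full detail, including the label selector and recent events
kubectl describe svc service-example
```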
You can now ask Kubernetes to show you all 3 objects that you created by typing the following:
$ kubectl get deploy,po,svc
NAME                        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/deployment-example   3         3         3            3           7m

NAME                                     READY     STATUS    RESTARTS   AGE
po/deployment-example-fc5d69475-dfkx2    1/1       Running   0          7m
po/deployment-example-fc5d69475-t5w2j    1/1       Running   0          7m
po/deployment-example-fc5d69475-xw9m6    1/1       Running   0          7m

NAME                  TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
svc/kubernetes        ClusterIP      10.96.0.1       <none>        443/TCP        1h
svc/service-example   LoadBalancer   10.96.176.233   <pending>     80:31337/TCP   13s
You can see under the services at the bottom that port 31337 on the node was mapped to port 80 on the Pods. Now if we hit any node in the cluster (in our case it’s just the one VM) on port 31337, we should see the Drupal app that we built from the Docker image we created in the last post. Since we are using Minikube, there is a command to open a browser pointed at the specific port of the service; type
minikube service <name-of-the-service>:
$ minikube service service-example
This should open up a browser window and you should see the Installation screen for Drupal. You have successfully deployed the Docker image that we created to a production-like environment.
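If you would rather hit the service directly instead of letting Minikube open a browser, you can combine minikube ip with the node port from the output earlier (31337 in my run; yours may differ):

```shell
# Print the IP of the Minikube VM, the single node in our cluster
minikube ip

# Request the Drupal app on the node port the Service was assigned
curl -I http://$(minikube ip):31337
```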
What is next?
We have just barely scratched the surface of what is possible with Kubernetes. I showed you the bare minimum to get a Docker image deployed on Kubernetes. The next step is to deploy your cluster to an actual cloud provider. For further reading on how to do that, definitely check out the kops project.
If you have any questions, feel free to leave a comment below. If you want to see a demo of everything that I wrote about on the ActiveLAMP YouTube channel, let us know in the comments as well.