Create an account on Microsoft Azure
We need a cluster where we want to run our application.
You can create the cluster either in the Portal view in your browser or by using the Azure command line tool.
We will use the Portal.
Name your cluster cv-cluster. Remember the name of your resource group and cluster name for later.
In order to explore the Kubernetes cluster on Azure Kubernetes Service, you need to install the Azure command line tool.
Install the Azure CLI from here.
az aks install-cli
This installs kubectl for you. kubectl is used with both Google Cloud Platform and Microsoft Azure, and operates Kubernetes clusters regardless of where they are hosted.
To be able to view your components, you need to log in:
az login
Then we want to connect to our cluster by fetching its credentials:
az aks get-credentials --resource-group [INSERT RESOURCE GROUP FROM SETUP] --name [INSERT CLUSTER NAME FROM SETUP]
This writes credentials to the file ~/.kube/config. You can take a look at that file to see what is written to it.
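kubectl can also print the merged configuration for you:
kubectl config view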
If you have multiple subscriptions, you will have to set default subscription to view your clusters:
az account set --subscription [SUBSCRIPTION NAME]
kubectl get nodes
If the status of your nodes is Ready, you are ready for the next step! Otherwise, try setting some default config for your project so that you do not have to pass --resource-group in our commands every time: `az configure --defaults group=[RESOURCE GROUP NAME]`
Extra task: If you want bash autocompletion for kubectl, follow these steps.
Now that we are authenticated, we can look at the components in our cluster by using the kubectl command.
kubectl get nodes
kubectl describe nodes <INSERT_NODE_NAME>
A node is a worker machine in Kubernetes. A node may be a VM or physical machine, depending on the cluster.
git clone [REPO NAME]
It is possible to complete the workshop without cloning the repo to your laptop, by making the changes directly in GitHub and applying the files in the terminal.
To create a deployment on Kubernetes, you need to specify at least one container for your application.
On a deploy, Kubernetes will pull the specified image and create pods running this container.
Docker is the most commonly used container technology in Kubernetes.
In this repository you will find code for both applications in the backend and frontend directories.
Each of these folders also has its own Dockerfile.
Take a look at the Dockerfiles to see how they are built up:
Notice the .dockerignore
files inside both the frontend directory and the backend directory as well.
This file tells the Docker daemon which files and directories to ignore, for example the node_modules
directory.
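A .dockerignore file is just a list of patterns, one per line. For a Node.js application it might look something like this (only an illustration, the actual files in the repo may list other entries):
node_modules
npm-debug.log
.git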
One way to create Docker images is to manually create and build images on your own computer with the Docker daemon. Instead, we are going to build images automatically using Azure Pipelines.
The container registry is where we are going to push our Docker images.
Go to Azure Container Registry (or search for Container Registry in the Azure Portal).
We want to automatically build our code, ready for deploy, with Azure Pipelines.
Select GitHub as the location of your code.
Configure your pipeline for the frontend application.
If you made your repository private and have problems configuring your pipeline build, change it to public for now. This just means less configuration 😊
Now, do the same thing for the backend application.
Remember to change the path to your Dockerfile ($(Build.SourcesDirectory)/backend/Dockerfile), and give the image a new name, e.g. cvbackend.
Click on pipelines and have a look at your builds. Verify that they go green.
When we created the Azure Pipelines, we added a YAML file for each of our projects to GitHub with a commit. Notice that when we added the second pipeline, the first pipeline started building again. The reason for this is in the file azure-pipelines.yml (pull the new changes to look at the file):
trigger:
- master
To run builds based only on some branches, simply change or add branches to the trigger, for example as sketched below.
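For example, a trigger section that builds the master branch and all feature branches could look like this (the branch names here are just placeholders):
trigger:
  branches:
    include:
    - master
    - feature/*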
We want to change the code to see if it triggers a new build.
Open the file backend/data.js and edit the JSON responses to your name, workplace and education.
If you want, you can also change the background color in frontend/index.css.
You can either change the code in an editor or directly in GitHub. Commit and push your change.
Then go back to Azure Pipelines and click on your Recent pipelines to see whether the build starts. Notice that you can follow the build log if you want to see what's going on while the image is being built.
It's time to deploy the frontend and backend to your cluster!
The preferred way to configure Kubernetes resources is to specify them in YAML files.
In the folder yaml/ you find the YAML files specifying what resources Kubernetes should create.
There are two files for specifying services and two files for specifying deployments. One for the backend application (backend-service.yaml) and one for the frontend application (frontend-service.yaml).
Same for the deployments.
In spec.template.spec.containers.image, insert the full name of your backend Docker image created in the previous step. The name should be on the form [CONTAINER REGISTRY NAME].azurecr.io/[IMAGE NAME]:VERSION.
You can find the correct path of your image by going to the Azure Portal and searching for Container Registry. Select your registry, then select Repositories. The latest version (tag) is listed under the repository.
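As an illustration, the containers list under spec.template.spec could then look roughly like this, assuming your registry is called cvregistry and you named the image cvbackend (use your own registry name, image name and tag):
containers:
- name: backend
  image: cvregistry.azurecr.io/cvbackend:latest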
There are a few things to notice in the deployment file:
The label app: backend is defined in three places:
- metadata is used by the Service, which we will look at later
- spec.selector.matchLabels is how the Deployment knows which Pods to manage
- spec.template.metadata is the label added to the Pods
- spec.template.spec.containers.image is the image the Pods will run, which you set above

Add ACR_NAME (the name of your Azure Container Registry) and SERVICE_PRINCIPAL_NAME (must be unique within your AD tenant) to the script located in yaml/create-service-principal.sh.
Run the script:
sh yaml/create-service-principal.sh
Store the variables printed from the script and generate a secret for accessing your Azure Container Registry:
kubectl create secret docker-registry <secret-name> \
--namespace <namespace> \
--docker-server=https://<container-registry-name>.azurecr.io \
--docker-username=<service-principal-ID> \
--docker-password=<service-principal-password>
Let secret-name be acr-docker-secret, and let service-principal-ID and service-principal-password be the ones you got by running the script above.
Verify that you now have a secret:
kubectl get secret
A secret is only available for resources within the cluster and is a great way to store passwords and tokens.
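To let the Deployments pull images from your private registry, the pod specification must reference the secret under imagePullSecrets. A rough sketch of the relevant part of a deployment, assuming the secret name acr-docker-secret from above (the files in yaml/ may already have this in place, so check them):
spec:
  template:
    spec:
      imagePullSecrets:
      - name: acr-docker-secret
      containers:
      - name: backend
        image: cvregistry.azurecr.io/cvbackend:latest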
Did it not work? You can find alternative ways of accessing your built images here.
kubectl apply -f ./yaml/backend-deployment.yaml
kubectl apply -f ./yaml/frontend-deployment.yaml
watch kubectl get pods
If you don't have watch
installed, you can use this command instead:
kubectl get pods -w
When all pods are running, quit with ctrl + q (or ctrl + c when on Windows).
Pods are Kubernetes resources that mostly just contain one or more containers, along with some Kubernetes network stuff and specifications on how to run the container(s). All of our pods contain only one container. There are several use cases where you might want to specify several containers in one pod, for instance if you need a proxy in front of your application. A sketch of what that could look like follows below.
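Here is a Pod with two containers, an application and a proxy in front of it (the image names are just placeholders for the illustration):
apiVersion: v1
kind: Pod
metadata:
  name: app-with-proxy
spec:
  containers:
  - name: app
    image: cvregistry.azurecr.io/cvbackend:latest
  - name: proxy
    image: nginx:1.25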
The Pods were created when you applied the specification of the type Deployment, which is a controller resource. The Deployment specification contains a desired state, and the Deployment controller changes the actual state to achieve it. When you create a Deployment, it will create a ReplicaSet, which it owns. The ReplicaSet will then create the desired number of pods, and recreate them if the Deployment specification changes, e.g. if you want a different number of pods running or if you update the Docker image to use. It will do so in a rolling-update manner, which we will explore soon.
The Pods are running on the cluster nodes.
Did you notice that the pod names were prefixed with the deployment name and two hashes? The first hash is the hash of the ReplicaSet, the second is unique to the Pod.
kubectl get deployments
Here you can see the age of the Deployment, how many Pods are desired in the configuration specification, the number of running Pods, the number of Pods that are up to date and how many that are available.
kubectl get replicaset
The statuses are similar to those of the Deployments, except that the ReplicaSet has no concept of updates. If you run an update to a Deployment, it will create a new ReplicaSet with the updated specification and tell the old ReplicaSet to scale the number of pods down to zero.
Now that our applications are running, we would like to route traffic to them.
Notice in the service files that:
- targetPort is specified if the port on the Pods is different from the port that receives the incoming traffic
- app: backend defines that the Service should route requests to our Deployment with the same label

kubectl apply -f ./yaml/backend-service.yaml
kubectl apply -f ./yaml/frontend-service.yaml
kubectl get service
As you can see, both services have defined internal IPs, CLUSTER-IP
. These internal IPs are only available inside the cluster. But we want our frontend application to be available from the internet. In order to do so, we must expose an external IP.
Ok, so now what? With the previous command, we saw that we had two services, one for our frontend and one for our backend. But they both had internal IPs, no external. We want to be able to browse our application from our browser.
Let's look at another way. The Service resource can have a different type: it can be set as a LoadBalancer.
In frontend-service.yaml, set type to be LoadBalancer and save (see the sketch below). Then apply the changes:
kubectl apply -f ./yaml/frontend-service.yaml
watch kubectl get service frontend
or
kubectl get service frontend -w
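For reference, a frontend service of type LoadBalancer could look roughly like this (only a sketch; check yaml/frontend-service.yaml for the actual labels and ports):
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 8001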
kubectl delete replicaset -l app=frontend
By doing this, the Deployment will create a new ReplicaSet which will again create new Pods. As you read earlier, Kubernetes can update your application without downtime by using a rolling-update strategy.
You will now update the background color of the frontend application, see that the build trigger creates a new image and
update the deployment to use this in your web application.
Change the background-color in frontend/index.css to your favourite color (or the color you hate the most?).
In one terminal window, run:
watch kubectl get pods
Don't close this window. In another terminal, apply the updated frontend deployment:
kubectl apply -f ./yaml/frontend-deployment.yaml
Watch how the Pods are terminated and created in the other terminal window.
Notice that there is always at least one Pod running and that the last of the old Pods is not terminated until one of the new ones has the status Running.
Ok, everything looks good!
But what if you need to inspect the logs and states of your applications?
Kubernetes has a built-in log feature.
Let's take a look at our backend application, and see what information we can retrieve.
kubectl get pods -l app=backend
The flag -l is used to filter by pods with the label app=backend.
kubectl logs <INSERT_THE_NAME_OF_A_POD>
kubectl logs -l app=backend
kubectl exec -it <INSERT_THE_NAME_OF_A_POD> -- printenv
Here you can see that we have IP addresses and ports for our frontend service.
These IP addresses are internal, not reachable from outside the cluster.
You can set your own environment variables in the deployment specification.
They will show up in this list as well.
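For example, an environment variable could be added to the backend container like this (GREETING is just a made-up name for the illustration):
containers:
- name: backend
  image: cvregistry.azurecr.io/cvbackend:latest
  env:
  - name: GREETING
    value: "Hello from the deployment spec"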
kubectl describe deployment backend
Notice that StrategyType: RollingUpdate, which we saw in action when we applied an updated frontend, is set by default.
A cool thing in Kubernetes is the Kubernetes DNS.
Inside the cluster, Pods and Services have their own DNS record.
For example, our backend service is reachable on backend.<NAMESPACE>.svc.cluster.local. If you are sending the request from the same namespace, you can also reach it on just backend.
We will take a look at this.
kubectl config view | grep namespace:
If there is no output, your namespace is default.
kubectl get pods -l app=frontend
We will curl from one of our frontend containers to see that we can reach our backend internally on http://backend.<NAMESPACE>.svc.cluster.local:5000:
kubectl exec -it INSERT_FRONTEND_POD_NAME -- curl -v http://backend.<NAMESPACE>.svc.cluster.local:5000
The HTTP status should be 200 along with the message "Hello, I'm alive"
Now curl from the same container to see that we can reach our backend internally on the short name http://backend:5000 as well:
kubectl exec -it INSERT_FRONTEND_POD_NAME -- curl -v http://backend:5000
The output should be the same as above.
Right now we have exposed our frontend service by setting the service type to LoadBalancer.
Another option would be to use an ingress.
An ingress is a resource that will allow traffic from outside the cluster to your services. We will now create such a resource to get an external IP and to allow requests to our frontend service.
The ingress will route requests to the frontend service on port 8001. Apply it:
kubectl apply -f ./yaml/ingress.yaml
watch kubectl get ingress cv-ingress
or
kubectl get ingress cv-ingress -w
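If you are curious, an ingress that sends all traffic to the frontend service on port 8001 could look roughly like this (a sketch using the current networking.k8s.io/v1 API; the file in yaml/ may use an older apiVersion):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cv-ingress
spec:
  defaultBackend:
    service:
      name: frontend
      port:
        number: 8001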
It may take a few minutes for the cloud provider to allocate an external IP address and set up forwarding rules before the load balancer is ready to serve your application. In the meanwhile, you may get errors such as HTTP 404 or HTTP 500 until the load balancer configuration has propagated.
The LoadBalancer type is dependent on your cloud provider. Both Google Cloud Platform and Microsoft Azure support it, but other providers might not.
Another way to expose our app is with the service type NodePort. If we look at our frontend service, we can see that it is already defined as this type. So we are good to go then? No, not yet.
Set type to be NodePort and save. Apply the changes:
kubectl apply -f ./yaml/frontend-service.yaml
kubectl get service frontend
We see that our service doesn't have an external IP. But what it does have is two ports, port 80 and a port in the range 30000-32767. The last port was set by the Kubernetes master when we created our service. This is the port we will use together with an external IP.
kubectl get nodes -o wide
curl -v <EXTERNAL_IP>:<NODE_PORT>
This will output Connection failed. This is because we haven't opened up requests on this port. Let's create a firewall rule that allows traffic on this port. Replace NODE_PORT with the node port of your service. On Google Cloud Platform this is done with:
gcloud compute firewall-rules create cv-frontend --allow tcp:NODE_PORT
On AKS, you instead add an inbound rule to the network security group of your nodes, as sketched below.
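A rough AKS equivalent, assuming the network security group for your nodes lives in the node resource group (usually named MC_...); the resource group and NSG names here are placeholders:
az network nsg rule create \
  --resource-group [NODE RESOURCE GROUP] \
  --nsg-name [NSG NAME] \
  --name cv-frontend \
  --priority 1000 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges NODE_PORT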
Run the curl command from step 6 again. How does this work? The nodes all have external IPs, so we can curl them. By default, neither services nor pods in the cluster are exposed to the internet, but Kubernetes will open the port of NodePort services on all the nodes so that those services are available on <EXTERNAL_IP>:<NODE_PORT>.
The Ingress resource is also dependent on your cloud provider. Google Cloud Platform provides an ingress controller out of the box, but on other providers you might have to install one in the cluster yourself.
Kubernetes uses health (liveness) checks and readiness checks to figure out the state of the pods.
If the health check responds with an error status code, Kubernetes will assume the container is unhealthy and restart it. Similarly, if the readiness check is unsuccessful, Kubernetes will assume it is not ready yet, and wait before sending traffic to it.
You can define your own.
The first way to define a health check is to define which endpoint the check should use. Our backend application contains the endpoint /healthz. Go ahead and define this as the health endpoint in the backend deployment file, under the backend container in the list spec.containers:
livenessProbe:
  httpGet:
    path: /healthz
    port: 8001
    httpHeaders:
    - name: X-Custom-Header
      value: Awesome
  initialDelaySeconds: 3
  periodSeconds: 3
When applying the new deployment file, run kubectl get pods to see that the deployment has created a new Pod. Describe it to see the new specification.
We can also specify a command to execute in the container. Let's do this for the frontend application:
livenessProbe:
  exec:
    command:
    - ls
    - /
  initialDelaySeconds: 5
  periodSeconds: 5
The command can be any command available in the container. The commands available in your container depend on the base image and how you build your image.
E.g. if your container has curl installed, we could define that the probe should curl the /healthz endpoint from inside the container. This wouldn't make much sense, though...
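Readiness checks are defined in the same way, just with readinessProbe instead of livenessProbe. A minimal sketch, reusing the /healthz endpoint and port from the liveness probe above:
readinessProbe:
  httpGet:
    path: /healthz
    port: 8001
  initialDelaySeconds: 3
  periodSeconds: 3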
You can always look at the pricing for resources here and your remaining credits by searching for Free trial in the portal.
Delete your cluster
Be careful and only delete the cluster we have made during the workshop 😉 This may take some time
az aks delete --name [CLUSTER NAME] --resource-group [RESOURCE GROUP NAME]
Close your billing account
Follow the steps here to close your Azure Subscription.
Verify that everything is closed on https://dev.azure.com/ as well.
And that's it! ⎈
And you are done and your credit card will not be charged.
Contact us on @linemoseng or @ingridguren.