Getting started with gitkube
This time we will see how to get started with Gitkube. It’s a young project, but it seems to work fine and it has an interesting approach compared to other alternatives, since it only relies on git and kubectl. Other than that it’s just a CRD and a controller, so you end up with two pods in kube-system: one for the controller and the other for gitkubed, which is in charge of cloning your repos and building the Docker images. The idea behind gitkube seems to be daily use in a dev/test environment where you need to try your changes quickly and without hassle. You can find more examples here, and be sure to check their page and documentation if you like it or want to learn more.
In the examples I will be using minikube (you can also check out this repo, which has a good overview of minikube). Once installed, running minikube start will download and configure the local environment. If you have been following the previous posts you already have minikube installed and working, but for this post be sure to run minikube tunnel if you configure gitkube with a load balancer (or if you expose any service as type LoadBalancer).
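As a quick reference, a minimal session could look like this (assuming minikube is already installed; the tunnel is only needed while you want the LoadBalancer IP to be reachable):
$ minikube start
$ # in a separate terminal, keep it running while you need the LoadBalancer:
$ minikube tunnel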
Let’s get started
We’re going to deploy (or re-deploy) our echo bot one more time, but this time using gitkube. You can find the chat bot article here, and the repo here.
First of all we need to install the CRD and controller in our Kubernetes cluster, and then the gitkube binary on our machine:
$ kubectl create -f https://storage.googleapis.com/gitkube/gitkube-setup-stable.yaml
customresourcedefinition.apiextensions.k8s.io "remotes.gitkube.sh" created
serviceaccount "gitkube" created
clusterrolebinding.rbac.authorization.k8s.io "gitkube" created
configmap "gitkube-ci-conf" created
deployment.extensions "gitkubed" created
deployment.extensions "gitkube-controller" created
$ kubectl --namespace kube-system expose deployment gitkubed --type=LoadBalancer --name=gitkubed
service "gitkubed" exposed
Note that there are two ways to install gitkube into our cluster: using the manifests as shown above, or using the gitkube binary and running gitkube install.
To install the gitkube binary, the easiest way is to run:
curl https://raw.githubusercontent.com/hasura/gitkube/master/gimme.sh | sudo bash
This will download and copy the binary into /usr/local/bin. As a general rule, I recommend reading whatever you are going to pipe into bash in your terminal, to avoid the potential dangers of the internet.
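For example, one way to review the script first and only run it afterwards (same URL as above, just split into separate steps):
$ curl -o gimme.sh https://raw.githubusercontent.com/hasura/gitkube/master/gimme.sh
$ less gimme.sh        # read what it is about to do
$ sudo bash gimme.sh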
Then we need to generate (and later create in the cluster) a file called remote.yaml (or any name you like); it’s necessary in order to tell gitkube how to deploy our application once we git push to it:
$ gitkube remote generate -f remote.yaml
Remote name: minikube
namespace: default
SSH public key file: ~/.ssh/id_rsa.pub
Initialisation: K8S YAML Manifests
Manifests/Chart directory: Enter
Choose docker registry: docker.io/kainlite
Deployment name: echobot
Container name: echobot
Dockerfile path: Dockerfile
Build context path: ./
Add another container? [y/N] Enter
Add another deployment? [y/N] Enter
This will yield the following remote.yaml file, which we then need to create in our cluster. As it is a custom resource, it might look a bit different from the default Kubernetes resources.
The actual remote.yaml file:
apiVersion: gitkube.sh/v1alpha1
kind: Remote
metadata:
  creationTimestamp: null
  name: minikube
  namespace: default
spec:
  authorizedKeys:
  - |
    ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA8jvVVtDSVe25p2U2tDGQyVrnv3YcWjJc6AXTUMc0YNi+QDm6s+hMTwkf2wDRD7b6Y3kmgNSqLEE0EEgOkA69c8PgypM7AwbKZ51V9XcdPd7NyLabpomNiftpUwi01DGfBr25lJV9h2MHwsI/6w1izDvQyN7fAl+aTFgx+VGg1p4FygXWeBqm0n0DfHmBI7PDXxGbuFTJHUmRVS+HPd5Bi31S9Kq6eoodBWtV2MlVnZkpF67FWt2Xo2rFKVf4pZR4N1yjZKRsvIaI5i14LvtOoOqNQ+/tPMAFAif3AhldOW06fgnddYGi/iF+CatVttwNDWmClSOek9LO72UzR4s0xQ== gabriel@kainlite
  deployments:
  - containers:
    - dockerfile: Dockerfile
      name: echobot
      path: ./
    name: echobot
  manifests:
    helm: {}
    path: ""
  registry:
    credentials:
      secretKeyRef:
        key: ""
      secretRef: minikube-regsecret
    url: docker.io/kainlite
status:
  remoteUrl: ""
  remoteUrlDesc: ""
There are a few details to keep in mind here: the deployment name, because gitkube expects a deployment with that name to already be present in order to update/upgrade it; the path to the Dockerfile or helm chart; and credentials for the registry, if any. I’m using a public image, so we don’t need any of that. The wizard will let you choose and customize a few options for your deployment.
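In case the echobot deployment doesn’t exist in your cluster yet, a minimal sketch of creating it could look like this (the image below is just a placeholder, replace it with whatever image of the bot you already publish):
$ # placeholder image, use your own
$ kubectl create deployment echobot --image=docker.io/kainlite/echobot:latest
$ kubectl rollout status deployment/echobot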
The last step would be to finally create the resource:
$ gitkube remote create -f remote.yaml
INFO[0000] remote minikube created
INFO[0000] waiting for remote url
INFO[0000] remote url: ssh://default-minikube@10.98.213.202/~/git/default-minikube
# add the remote to your git repo and push:
git remote add minikube ssh://default-minikube@10.98.213.202/~/git/default-minikube
git push minikube master
After adding the new remote called minikube, we have everything ready to go, so let’s test it and see what happens:
$ git push minikube master
Enumerating objects: 10, done.
Counting objects: 100% (10/10), done.
Delta compression using up to 8 threads
Compressing objects: 100% (10/10), done.
Writing objects: 100% (10/10), 1.92 KiB | 1.92 MiB/s, done.
Total 10 (delta 2), reused 0 (delta 0)
remote: Gitkube build system : Tue Jan 1 23:50:58 UTC 2019: Initialising
remote:
remote: Creating the build directory
remote: Checking out 'master:a0265bc5d0229dce0cffc985ca22ebe28532ee95' to '/home/default-minikube/build/default-minikube'
remote:
remote: 1 deployment(s) found in this repo
remote: Trying to build them...
remote:
remote: Building Docker image for : echobot
remote:
remote: Building Docker image : docker.io/kainlite/default-minikube-default.echobot-echobot:a0265bc5d0229dce0cffc985ca22ebe28532ee95
remote: Sending build context to Docker daemon 7.68kB
remote: Step 1/12 : FROM golang:1.11.2-alpine as builder
remote: ---> 57915f96905a
remote: Step 2/12 : WORKDIR /app
remote: ---> Using cache
remote: ---> 997342e65c61
remote: Step 3/12 : RUN adduser -D -g 'app' app && chown -R app:app /app && apk add git && apk add gcc musl-dev
remote: ---> Using cache
remote: ---> 7c6d8b9d1137
remote: Step 4/12 : ADD . /app/
remote: ---> Using cache
remote: ---> ca751c2678c4
remote: Step 5/12 : RUN go get -d -v ./... && go build -o main . && chown -R app:app /app /home/app
remote: ---> Using cache
remote: ---> 16e44978b140
remote: Step 6/12 : FROM golang:1.11.2-alpine
remote: ---> 57915f96905a
remote: Step 7/12 : WORKDIR /app
remote: ---> Using cache
remote: ---> 997342e65c61
remote: Step 8/12 : RUN adduser -D -g 'app' app && chown -R app:app /app
remote: ---> Using cache
remote: ---> 55f48da0f9ac
remote: Step 9/12 : COPY --from=builder --chown=app /app/health_check.sh /app/health_check.sh
remote: ---> Using cache
remote: ---> 139250fd6c77
remote: Step 10/12 : COPY --from=builder --chown=app /app/main /app/main
remote: ---> Using cache
remote: ---> 2f1eb9f16e9f
remote: Step 11/12 : USER app
remote: ---> Using cache
remote: ---> a72f27dccff2
remote: Step 12/12 : CMD ["/app/main"]
remote: ---> Using cache
remote: ---> 034275449e08
remote: Successfully built 034275449e08
remote: Successfully tagged kainlite/default-minikube-default.echobot-echobot:a0265bc5d0229dce0cffc985ca22ebe28532ee95
remote: pushing docker.io/kainlite/default-minikube-default.echobot-echobot:a0265bc5d0229dce0cffc985ca22ebe28532ee95 to registry
remote: The push refers to repository [docker.io/kainlite/default-minikube-default.echobot-echobot]
remote: bba61bf193fe: Preparing
remote: 3f0355bbea40: Preparing
remote: 2ebcdc9e5e8f: Preparing
remote: 6f1324339fd4: Preparing
remote: 93391cb9fd4b: Preparing
remote: cb9d0f9550f6: Preparing
remote: 93448d8c2605: Preparing
remote: c54f8a17910a: Preparing
remote: df64d3292fd6: Preparing
remote: cb9d0f9550f6: Waiting
remote: 93448d8c2605: Waiting
remote: c54f8a17910a: Waiting
remote: df64d3292fd6: Waiting
remote: 2ebcdc9e5e8f: Layer already exists
remote: 6f1324339fd4: Layer already exists
remote: 3f0355bbea40: Layer already exists
remote: bba61bf193fe: Layer already exists
remote: 93391cb9fd4b: Layer already exists
remote: 93448d8c2605: Layer already exists
remote: cb9d0f9550f6: Layer already exists
remote: df64d3292fd6: Layer already exists
remote: c54f8a17910a: Layer already exists
remote: a0265bc5d0229dce0cffc985ca22ebe28532ee95: digest: sha256:3046c989fe1b1c4f700aaad875658c73ef571028f731546df38fb404ac22a9c9 size: 2198
remote:
remote: Updating Kubernetes deployment: echobot
remote: deployment "echobot" image updated
remote: deployment "echobot" successfully rolled out
remote: NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
remote: echobot 1 1 1 1 31s
remote:
remote: Removing build directory
remote:
remote: Gitkube build system : Tue Jan 1 23:51:16 UTC 2019: Finished build
remote:
remote:
To ssh://10.98.213.202/~/git/default-minikube
* [new branch] master -> master
Quite a lot happened there. First of all gitkubed checked out the commit that we pushed into /home/default-minikube/build/default-minikube, then built and tagged the Docker image with the corresponding commit SHA. After that it pushed the image to Docker Hub and updated the deployment that we already had in there for the echo bot.
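Before inspecting the pod itself, you can also confirm the rollout from your own terminal; something like this should be enough (the app=echobot label matches the one shown in the pod description below):
$ kubectl rollout status deployment/echobot
$ kubectl get pods -l app=echobot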
The last step would be to verify that the pod was actually updated, so we can inspect the pod configuration with kubectl describe pod echobot-654cdbfb99-g4bwv:
$ kubectl describe pod echobot-654cdbfb99-g4bwv
Name: echobot-654cdbfb99-g4bwv
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: minikube/10.0.2.15
Start Time: Tue, 01 Jan 2019 20:51:10 -0300
Labels: app=echobot
pod-template-hash=654cdbfb99
Annotations: <none>
Status: Running
IP: 172.17.0.9
Controlled By: ReplicaSet/echobot-654cdbfb99
Containers:
echobot:
Container ID: docker://fe26ba9be6e2840c0d43a4fcbb4d79af38a00aa3a16411dee5e4af3823d44664
Image: docker.io/kainlite/default-minikube-default.echobot-echobot:a0265bc5d0229dce0cffc985ca22ebe28532ee95
Image ID: docker-pullable://kainlite/default-minikube-default.echobot-echobot@sha256:3046c989fe1b1c4f700aaad875658c73ef571028f731546df38fb404ac22a9c9
Port: <none>
Host Port: <none>
State: Running
Started: Tue, 01 Jan 2019 20:51:11 -0300
Ready: True
Restart Count: 0
Liveness: exec [/bin/sh -c /app/health_check.sh] delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
SLACK_API_TOKEN: really_long_token
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-ks4jx (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-ks4jx:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-ks4jx
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 39m default-scheduler Successfully assigned default/echobot-654cdbfb99-g4bwv to minikube
Normal Pulled 39m kubelet, minikube Container image "docker.io/kainlite/default-minikube-default.echobot-echobot:a0265bc5d0229dce0cffc985ca22ebe28532ee95" already present on machine
Normal Created 39m kubelet, minikube Created container
Normal Started 39m kubelet, minikube Started container
As we can see, the image is the one that got built from our git push, and everything is working as expected.
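If you only care about the image and not the whole description, a one-liner along these lines also works (the pod name is the one from above and will differ in your cluster):
$ kubectl get pod echobot-654cdbfb99-g4bwv -o jsonpath='{.spec.containers[0].image}'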
And that’s it for now. I think this tool has a lot of potential: it’s simple, nice, and fast.
Errata
If you spot any error or have any suggestion, please send me a message so it gets fixed.
Also, you can check the source code and the changes in the generated code and the sources here.