Zero cost preview environments on Kubernetes with Jenkins X and Osiris

Step by step integration of Osiris in Jenkins X with Helm, to enable an automatic “scaling down to 0” setup for your Jenkins X Preview Environments.

Vincent Behar
7 min read · Feb 5, 2019

One of Jenkins X’s killer features is Preview Environments: deploying a Pull Request’s code in its own isolated environment, using Helm. It’s so great that you can quickly end up with a lot of them: a few open PRs in each of your repositories, and before you know it you need a bigger Kubernetes cluster for Jenkins X, to host both your preview environments and your build pods.

Sure, there are plenty of ways to handle it. You could limit the number of open PRs, or request fewer resources for your preview environments. But we’ll try something way cooler: we will automatically scale your preview environments down to 0 when they are not used, and of course bring them back up when they are needed. You will usually need your Preview Environment to run your integration tests — as part of your CI/CD pipeline — but most of the time it will just sit there, eating resources, waiting for someone to come.

Osiris

Osiris is a Kubernetes controller that manages some of your deployments and scales them down to 0 if they haven’t received any requests recently — thus deleting your application’s pods and freeing resources for other pods. Of course, if you then send a request to your application’s endpoint, Osiris takes care of scaling the deployment back up, starting a new pod to answer the request. You can read its README for a detailed explanation of how it works internally, and how to install it. For the moment, we’ll focus on how to use it in the context of a Jenkins X Preview Environment. Per the Osiris documentation, we just need to add specific annotations to the deployment and service manifests. Sounds easy, right?
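To make this concrete, here is a minimal sketch of the annotations involved; myapp and myapp.example.com are placeholders, and we’ll go through each annotation step by step below:

# Sketch: the deployment opts in to Osiris, and the service is annotated so
# Osiris can link it to its deployment and to the ingress hostname.
---
kind: Deployment
metadata:
  name: myapp                                    # placeholder name
  annotations:
    osiris.deislabs.io/enabled: "true"
---
kind: Service
metadata:
  name: myapp                                    # placeholder name
  annotations:
    osiris.deislabs.io/enabled: "true"
    osiris.deislabs.io/deployment: myapp         # must match the deployment name
    osiris.deislabs.io/ingressHostname: myapp.example.com   # placeholder hostname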

Jenkins X, Helm and Preview Environments

To adjust our manifests, we first need to understand how Jenkins X uses Helm to deploy a Preview Environment. If you check out a build pack, you can see that there are 2 charts:

  • the “real” one, which contains the templates used to define the Kubernetes manifests
  • an umbrella “preview” chart, which doesn’t contain any templates, but has dependencies — including one on the “real” application chart.

Jenkins X deploys the preview chart to create the preview environment. The advantage over installing the application’s chart directly is an extra indirection layer, which allows us to customise the preview environment easily: we just need to change the preview chart.

Deployment Manifest

Our first step will be to add the following annotation to our deployment’s manifest, so that Osiris will know that it needs to manage our deployment.

osiris.deislabs.io/enabled: "true"

To do that, we need to start by making sure our deployment’s template supports adding annotations:

kind: Deployment
metadata:
{{- with .Values.deployment.annotations }}
  annotations:
{{ toYaml . | indent 4 }}
{{- end }}

and then we’ll update the preview chart’s values.yaml file to add our annotation:

preview:
  deployment:
    annotations:
      osiris.deislabs.io/enabled: "true"

If you check the preview chart’s requirements.yaml file, you will see that the application’s chart is imported using the preview alias. That’s why we define the annotation under the preview section of the values. Perhaps it would have been better to alias it as app to avoid some confusion… But that’s not something we can easily change, because Jenkins X depends on this preview alias to generate an extra values.yaml file with the right container image repository and tag.
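For reference, a typical preview chart’s requirements.yaml looks roughly like this (a sketch: myapp, the repository URL and the version are placeholders, so your file may differ slightly):

dependencies:
  - alias: expose
    name: exposecontroller
    repository: http://chartmuseum.jenkins-x.io
    version: 2.3.85
  - alias: preview
    name: myapp
    repository: file://../myapp

The expose entry is the exposecontroller we’ll come back to later in this article.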

Service Manifest

Now, let’s have a look at the requirements for the service manifest. It starts exactly like the deployment one, with the same activation flag, defined as an annotation:

osiris.deislabs.io/enabled: "true"

and the good news is that the service template that comes with the build pack by default already supports adding custom annotations. So, nothing new to do here.

The second requirement is more complex: Osiris requires an annotation with the deployment name, to link a deployment to its service.

osiris.deislabs.io/deployment: DEPLOYMENT_NAME

The issue for us is that the deployment name is dynamic, so we can’t just hardcode it. It’s defined as follows in our deployment’s template:

kind: Deployment
metadata:
  name: {{ include "fullname" . }}

The fullname “named template” is usually defined in the _helpers.tpl file of your chart, and is based on the name of the release — amongst other things.
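For reference, a typical fullname helper looks something like this (a sketch of the usual _helpers.tpl definition; your build pack may differ slightly):

{{/* Expands to <release name>-<chart name>, truncated to fit Kubernetes name length limits */}}
{{- define "fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}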

Side note: one of Helm’s best practices is to prefix “named templates” with your chart name, to avoid collisions when adding dependencies. It’s not the case in the “build packs” for the moment, but it’s easy to change if you’d like to follow Helm’s best practices.

So, we need to use {{ include "fullname" . }} as the value for our annotation, and write something like the following in our preview chart’s values.yaml file:

preview:
  service:
    annotations:
      osiris.deislabs.io/deployment: '{{ include "fullname" . }}'

But this can’t work, because the values are used as-is, and are not processed by the rendering engine — see this Helm issue for more information. So, we need to find a way to process these values — using the tpl Helm function, as explained in the very good Art of the Helm Chart: Patterns from the Official Kubernetes Charts article. We’ll change our template as follows:

kind: Service
metadata:
{{- with .Values.service.annotations }}
  annotations:
{{ tpl (toYaml .) $ | indent 4 }}
{{- end }}

Notice the introduction of the tpl function, used to process our annotations before indenting them. We could have just used tpl .Values.service.annotations $, but tpl expects the annotations to be a string, which would force us to write our values as follows:

preview:
  service:
    annotations: |
      osiris.deislabs.io/deployment: '{{ include "fullname" . }}'

Notice the extra | that is now needed, to say that the annotations value is a string and not a map. Who said writing YAML is simple?

But we can fix that, using the toYaml function in the template to convert our map to its YAML string representation before passing it to the tpl function. The template gets a little more complex, but at least people using your chart won’t have to fight the YAML syntax to convert their annotations to a string.
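To make the result concrete, here is roughly what the rendered service metadata ends up looking like, for a hypothetical release named mypr of a chart named myapp (both names are placeholders):

# Sketch of the rendered output: tpl has expanded the include before indenting
kind: Service
metadata:
  annotations:
    osiris.deislabs.io/enabled: "true"
    osiris.deislabs.io/deployment: mypr-myapp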

Ingress Hostname

The last annotation we need to add to our service manifest is the hostname of our ingress — so that Osiris can map an incoming request to a service.

osiris.deislabs.io/ingressHostname: xxx.example.com

It sounds easy enough, but once again, for our preview environments, hostnames are dynamically generated, by a tool called exposecontroller. The exposecontroller tool runs as a post-install and post-upgrade Helm hook — which means after your deployment and service have been created from the manifests — and automatically creates an ingress resource for your service. You can see it in the values.yaml file of your preview chart.
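If you’re curious, the relevant section of the preview chart’s values.yaml looks roughly like this (a sketch; the exact keys can vary between build pack versions, but the Helm hook annotations are what make the exposecontroller run after each install or upgrade):

expose:
  Annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: hook-succeeded
  config:
    exposer: Ingress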

At that point, we have 2 solutions:

  • stop using the exposecontroller, and generate the ingress resource ourselves, so that we can control the hostname. But if we do that, we’ll need to add knowledge of our domain name in all our preview charts in all our repositories.
  • or we can just hack the exposecontroller, so that it can inject the hostname as an annotation on the service itself. This way, we keep the benefit of the automatic ingress creation, including the central configuration of the domain name — in a config map in the Kubernetes cluster.

Of course we’re going to select the second solution, and after this little PR — more tests and documentation than code — we have a new release v2.3.85 of the exposecontroller that supports the injection of the hostname in a service annotation, just by adding yet another annotation on our service:

preview:
  service:
    annotations:
      fabric8.io/exposeHostNameAs: osiris.deislabs.io/ingressHostname

This annotation instructs the exposecontroller to update the service’s annotations as follows:

kind: Service
metadata:
  annotations:
    osiris.deislabs.io/ingressHostname: xxx.example.com

Before using it, don’t forget to ensure that you are using a recent enough version of the exposecontroller, by checking the requirements.yaml file of your preview chart: it should use at least version 2.3.85.

Binding it all together

Let’s review all the changes. In the application’s chart, you need to write the deployment and service templates as follows:

---
kind: Deployment
metadata:
  name: {{ include "myapp.fullname" . }}
{{- with .Values.deployment.annotations }}
  annotations:
{{ tpl (toYaml .) $ | trim | indent 4 }}
{{- end }}
---
kind: Service
metadata:
  name: {{ include "myapp.fullname" . }}
{{- with .Values.service.annotations }}
  annotations:
{{ tpl (toYaml .) $ | trim | indent 4 }}
{{- end }}

And in the preview chart, you need to write the values.yaml file as follows:

preview:
  deployment:
    annotations:
      osiris.deislabs.io/enabled: "true"
  service:
    annotations:
      osiris.deislabs.io/enabled: "true"
      osiris.deislabs.io/deployment: '{{ include "myapp.fullname" . }}'
      fabric8.io/exposeHostNameAs: osiris.deislabs.io/ingressHostname

If you open a Pull Request with these changes, Jenkins X will deploy a Preview Environment as usual, but if you don’t make any HTTP request to the ingress endpoint, after a few minutes Osiris will scale your deployment down to 0 replicas. The ingress, service and deployment will still be there, just no pod — and so no resources used. And when you make a new HTTP request to the ingress endpoint, Osiris will scale your deployment back up, creating a pod to answer your request.

Conclusion

This is a very good example of why I like working with Kubernetes: it’s easy to extend and to combine tools, to build a workflow that matches your needs. And of course, it wouldn’t be possible without open-source components. Thank you Kubernetes, Jenkins X, Helm and Osiris!
