Continuous Integration is now a common practice in software engineering. But not yet in the more recent field of “GitOps”. Let’s see why, and how we can improve this situation.

A “modern” definition of Continuous Integration in 2021 is to ensure that the changes pushed to the main branch are “valid”. We usually do that by proposing changes through a Pull Request (or Merge Request, depending on your git provider) and configuring one or more pipelines to build the project, execute various tests, and run static analyzers and security scanners. Any failure would be caught early in the process and fixed before merging the changes into the main branch. …


GitOps is often defined as “operations by pull request”. But how do you create these pull requests? With Octopilot!

When we started practicing GitOps at work, we used updatebot to automate the creation of pull requests on the git repositories that represent our environments. Why updatebot? Because it was part of the Jenkins X project, and we’re using Jenkins X for our Continuous Delivery platform.

But very soon we wanted to use our successful GitOps workflow for more projects, and we felt limited by the tool. So we started writing our own CLI tool to automate the creation of pull requests: Octopilot. That was more than one year ago. Although the codebase was open-source from day one…


This is a step-by-step guide to enable correlation between metrics and traces, using Prometheus, Grafana, Tempo, OpenTelemetry, and OpenMetrics.

Exemplars are coming! But what are exemplars? A few highlighted values in a time series, with a set of key-value pairs (labels). The main use-case is to include the request’s trace ID in the labels so that we can jump from a metric time series to the interesting traces directly. This is also known as correlation between metrics and traces.

Screenshot of a Grafana time series, with Exemplars linking to Tempo to visualize related traces

Enabling Exemplars in an already instrumented Go application requires:
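Broadly, that means a metrics client that can record exemplars, exposing the metrics in the OpenMetrics format, and having a trace ID at hand when recording an observation. Here is a minimal sketch of what that can look like with prometheus/client_golang and the OpenTelemetry SDK; the metric name, the “trace_id” exemplar label, and the handler wiring are illustrative assumptions, and a tracer provider is assumed to be configured elsewhere.

```go
package main

import (
	"log"
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
	"go.opentelemetry.io/otel/trace"
)

// requestDuration is an illustrative histogram; the name and label are assumptions.
var requestDuration = promauto.NewHistogramVec(prometheus.HistogramOpts{
	Name: "http_request_duration_seconds",
	Help: "Duration of HTTP requests.",
}, []string{"path"})

func handler(w http.ResponseWriter, r *http.Request) {
	start := time.Now()
	defer func() {
		// Attach the current trace ID as an exemplar, so that Grafana can jump
		// from this time series to the matching trace in Tempo.
		traceID := trace.SpanFromContext(r.Context()).SpanContext().TraceID().String()
		requestDuration.WithLabelValues(r.URL.Path).(prometheus.ExemplarObserver).
			ObserveWithExemplar(time.Since(start).Seconds(), prometheus.Labels{"trace_id": traceID})
	}()
	w.Write([]byte("ok"))
}

func main() {
	// The handler is wrapped with otelhttp so that each request gets a span
	// (this assumes an OpenTelemetry tracer provider has been set up elsewhere).
	http.Handle("/", otelhttp.NewHandler(http.HandlerFunc(handler), "hello"))
	// Exemplars are only exposed in the OpenMetrics format, so it must be enabled explicitly.
	http.Handle("/metrics", promhttp.HandlerFor(prometheus.DefaultGatherer, promhttp.HandlerOpts{
		EnableOpenMetrics: true,
	}))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

On top of the application side, Prometheus needs to scrape with exemplar storage enabled (the --enable-feature=exemplar-storage flag), and Grafana needs the Prometheus data source configured with an exemplar link pointing to Tempo.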


Design and implementation of Cloud-Native CI/CD Pipelines on Kubernetes for the enterprise, using Jenkins X.

Photo by Victor Garcia on Unsplash

The cloud-native era has opened up a whole new level for CI/CD, and it’s impacting the design of our pipelines. The days of the big monolithic pipeline that only one person can decipher are gone. Let’s see how we can design and implement cloud-native CI/CD pipelines on Kubernetes, for the enterprise, using Jenkins X.

Let’s review our requirements first. I said “for the enterprise”, because writing pipelines for a single open-source project is not the same as writing them for hundreds or thousands of enterprise projects.

The first requirement is the conventions or standards you might have in your company around code quality, testing, packaging…


Dailymotion’s journey from Jenkins to Jenkins X

Part of the Dailymotion AdTech Team, with the “Most Innovative Jenkins X Innovation” Award

One year ago, we wrote about our journey from Jenkins to Jenkins X. It’s time to take a step back and see where we are now and how this journey has impacted the way we write and deliver software at Dailymotion.

We’ve been using Jenkins X for more than one year now to handle the build and delivery of our ad-tech platform at Dailymotion, with great results. Applying the practices described in the Accelerate book and implemented in Jenkins X allowed us to break down the silos, “shift left” some responsibilities, and move faster. …


In this blog post, we’ll see how easy it is to “protect” a web app behind Okta, using Nginx as a reverse proxy in front of it, in a Kubernetes environment.

We’re using both Kubernetes to deploy our applications and Okta as our company SSO. We’re also big fans of Jenkins X. Jenkins X comes with a few UIs, which unfortunately don’t have a native authentication/authorization implementation yet:

Jenkins X relies on Nginx for its ingress controller, and it uses the basic auth feature to protect its UI by default. The issue with this solution is that you either need to manually manage all your users (and passwords), or give them a shared set of credentials.

As we’re…


This is the fourth part of a series of blog posts on the internals of the Jenkins X Pipelines. We’ll focus on the “steps” that compose a pipeline, and we’ll see how they are implemented using Tekton.

In the previous part of this series, we’ve walked through the internals of the stages that compose a pipeline. Now we’ll see how the Jenkins X Steps — that compose a stage — are implemented.

First, a few pointers:

These steps — we’ll call them Jenkins X Steps — are converted by the meta pipeline into Tekton Steps. And a Tekton Step is just a Kubernetes Container, as you can see in the source code definition of a Tekton Step.
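As a reference point, here is a simplified sketch of that definition. Depending on the Tekton API version, a Step is either a corev1.Container directly or a thin wrapper around one; the shape below is trimmed from the v1beta1 Go API, keeping only the essentials, so field details may differ from the exact version used by Jenkins X at the time.

```go
package v1beta1

import (
	corev1 "k8s.io/api/core/v1"
)

// Step is (in essence) a Kubernetes Container with a few Tekton-specific
// additions. Simplified here: the real type carries more fields.
type Step struct {
	// The embedded Container gives a Step everything a Kubernetes container has:
	// image, command, args, env, volume mounts, resources, etc.
	corev1.Container `json:",inline"`

	// Script is an optional script to run, as an alternative to Command/Args.
	Script string `json:"script,omitempty"`
}
```

This is why a step can specify an image, a command, environment variables, and so on, just like any container.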

So:

  • Jenkins X Stages are converted into…


This is the third part of a series of blog posts on the internals of the Jenkins X Pipelines. We’ll focus on the “stages” that compose a pipeline, and we’ll see how they are implemented using Tekton.

In the first part of this series, we’ve walked through everything that happened in the cluster, from the incoming GitHub WebHook event to a running Tekton pipeline. In the second part, we’ve talked about the “Meta Pipeline”, which is responsible for converting your Jenkins X Pipeline into a Tekton Pipeline. Now it’s time to dive into the internals of the Tekton pipelines, and we’ll start with an investigation of how the Jenkins X Stages — that compose a pipeline — are implemented.

So a pipeline has stages, and each stage has steps. But why do we need stages? Well, for…


This is the second part of a series of blog posts on the internals of the Jenkins X Pipelines. We’ll dive into the “Meta Pipeline”: the part of the workflow responsible for retrieving the right pipeline to execute and translating it into a format that Tekton can understand.

In the first part of this series, we’ve walked through everything that happened in the cluster, from the incoming GitHub WebHook event to a running Tekton pipeline. But pipelines can be complex, and built from multiple levels of inheritance. This is the case with the Jenkins X Pipelines, which support Build Packs to abstract away most of the complexity for the developers. The downside is that building the whole pipeline is now more complex, because it requires retrieving parts of it from multiple git repositories, before combining them all together. Doing all that work is too much for the…


This is a series of blog posts on the internals of the Jenkins X Pipelines, implemented using Prow and Tekton.

As I am starting to migrate from the Jenkins Declarative Pipelines to the (new) Jenkins X Pipelines, defined in YAML format and implemented using Prow and Tekton, I am diving into the internals of the system, because I like to understand how things work.

So I am now sharing my findings here, as a series of blog posts. I’ll add links to source code so you can dive deeper into a specific topic if you want.

  • The first part is a walk-through of a GitHub WebHook event coming to the cluster until it results in a running Tekton…

Vincent Behar

I’m a developer, and I love it ;-) My buzzwords of the moment are Go, Kubernetes, Jenkins X, CI/CD, GitOps, Open-source, …
