Git Ops: git with operations powers

I’ve always held the philosophy that everything needed to run a project should be in its repository. A developer must be able to deliver their release, with all reliability steps (CI/CD) performing every operation, because the project is self-contained.

This philosophy is very similar to a feature some projects call zero configuration. Projects have dependencies; notice that when you have a package.json you don’t have to configure each dependency manually, you just run npm install and you have everything you need in your repository.

What about CI/CD? Well, today we can define the pipelines through files in the project; this works for Jenkins, GitLab, Travis and others. But you still rely on the external tool being available and configured properly to run the pipelines. This creates one of the biggest drawbacks of today’s projects: lock-in.

If I configure .gitlab-ci.yml, how do I run the same pipeline if my project needs to go through Jenkins? Worse, if I want to test my pipeline locally, do I need a GitLab runner on my machine? I’m not saying these tools are bad for this reason; it’s a matter of choosing whether or not to have a self-contained project.
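To make the lock-in concrete, here is what a minimal pipeline file of this kind looks like (an illustrative .gitlab-ci.yml sketch; the stage and job names are my own): everything below only means something to GitLab, so moving to Jenkins means rewriting it in a different dialect.

```yaml
stages:
  - test
  - build

test:
  stage: test
  script:
    - npm ci
    - npm test

build:
  stage: build
  script:
    - npm run build
```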

How I solved this problem

Anyone who has experience with continuous integration, deployment, and delivery knows what it takes to run a pipeline that ensures your tests, builds, releases, deployments, and so on. In all these steps we deal with various configurations, variables, constants, and sensitive data (think SSH keys, certificates, passwords, API keys).

So I listed some common attributes to create a self-contained CI/CD tool in the project:

  • Security: Sensitive data or dependencies can be generated from or used by pipelines; for that I used the KDBX format.
  • Versioned database: Each pipeline execution can generate data that must be stored and versioned in the repository itself (think changelogs); once again, the KDBX file can live in the repository itself.
  • Ease of management: No one wants something complicated, or a huge list of commands to memorize, just to create their pipelines, so I created the “ops pack”, a way to build the KDBX from a project that describes how the database file should be composed.
  • Scripting: Of course, this is the heart of the pipelines; you need to be able to write the scripts that perform the reliability steps, so I added support for shell scripting and JavaScript (Node.js 10).
  • Friendly: For painless adoption by developers, DevOps engineers and SREs, it should be something very friendly, so I built this software as a git subcommand.

What is the result?

Check out the git-ops project: following the project README, in 2 minutes you will have all of this in your project without getting stuck with any tool.

Unleash Redis's full potential

I think most people don’t use Redis’s full potential. The same goes for Elasticsearch, but I’ll cover that in another post. In the current market I have observed many teams and companies using broad-scope tools to solve small parts of their problems. It is very easy to adopt a tool that solves a specific problem without thinking of all the burden of maintaining it and keeping it sustainable in the long run.

How do you know Redis?

If you were to list keywords about it, what would come first? In my experience, most people would say: “cache”, “key-value”, “in-memory database”.

That isn’t wrong; those are features of Redis. Now I’ll tell you how I would describe Redis: an application for storing and managing machine state cycles.

Stateless applications are not useful without inputs and outputs; these can be thought of as external machine state handled by your application. Note: being stateless is almost a commandment for me when we are talking about SOA.

Now I’ll show you a little more of what Redis can do.

Have you read the Redis in Action ebook? You should; it will broaden your horizons.

Take scheduled jobs, for example, which you can easily build with Redis Sorted Sets. Tip: use the job’s scheduled timestamp as the score that ranks it.
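A minimal sketch of the pattern, modeled in memory so you can run it without a server (the job names are made up). In Redis the same idea maps to ZADD (schedule a job with its run-at timestamp as the score) and ZRANGEBYSCORE (fetch everything due up to “now”):

```javascript
// In Redis this would be:
//   ZADD jobs <runAtTimestamp> <jobId>
//   ZRANGEBYSCORE jobs 0 <now>
const jobs = new Map(); // jobId -> score (run-at timestamp, in ms)

function schedule(jobId, runAt) {
  jobs.set(jobId, runAt); // ZADD jobs runAt jobId
}

function dueJobs(now) {
  // ZRANGEBYSCORE jobs 0 now: members with score <= now, ordered by score
  return [...jobs.entries()]
    .filter(([, score]) => score <= now)
    .sort((a, b) => a[1] - b[1])
    .map(([jobId]) => jobId);
}

// One job already due, one scheduled a minute in the future
schedule('send-email', Date.now() - 1000);
schedule('cleanup', Date.now() + 60000);
console.log(dueJobs(Date.now())); // → [ 'send-email' ]
```

A worker then polls for due jobs, processes them, and removes them (ZREM) when done; the sorted set keeps everything ordered by execution time for free.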

Now you’re probably ready to get to know KeyDB if you want to achieve stratospheric levels of performance.

One of the questions I am asked most often is about persistence: “I am not ok with losing my state stored in Redis, so I think it is only useful for caching.” What do you think?

Strongly disagree. Have you read about AOF and RDB? I imagine you need both performance and reliability, so how do you solve this if AOF guarantees persistence but destroys performance, and RDB is just the opposite? Simple: create two Redis (or KeyDB) deployment tiers, one for high performance (RDB) and one for high reliability (AOF), so your application connects to whichever tier gives it the feature it needs.
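As a sketch, the two tiers come down to a few redis.conf directives (the thresholds here are illustrative; tune them to your workload):

```conf
# Performance tier: RDB snapshots only, nothing on the write path
save 900 1            # snapshot if at least 1 key changed in 15 minutes
appendonly no

# Reliability tier: AOF with an fsync on every write
save ""               # disable RDB snapshots
appendonly yes
appendfsync always    # trade throughput for durability
```

Caches and ephemeral state go to the first tier; anything you cannot afford to lose goes to the second.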

Do you use Kafka or RabbitMQ?

Remember that what I am going to say now is my own bias: you probably don’t need them, or you would get a better result with Redis Streams (introduced in Redis 5.0). Think about the possibilities for microservices, SOA, event sourcing, CQRS, and SCS.
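For a feel of how close Streams get to a message broker, here is the basic producer/consumer-group flow with redis-cli (assumes a running Redis ≥ 5.0; stream, group and consumer names are made up, and the ID in the last command stands for whatever ID XADD returned):

```
# Producer appends an event to a stream (* auto-generates the entry ID)
redis-cli XADD orders '*' event order.created id 42

# Create a consumer group that starts reading from the beginning
redis-cli XGROUP CREATE orders billing 0

# A consumer in the group reads entries not yet delivered to the group
redis-cli XREADGROUP GROUP billing worker-1 COUNT 10 STREAMS orders '>'

# Acknowledge after processing so the entry leaves the pending list
redis-cli XACK orders billing 1526919030474-0
```

Consumer groups give you delivery tracking and pending lists, which covers a large share of what many teams deploy Kafka or RabbitMQ for.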

On this last point, to exploit that potential I created HFXBus, which will help you build a battle-ready SOA environment.

Yes, I am a big fan of Redis/KeyDB.

I have no problem with alternatives or other implementations; everything I am talking about depends a lot on the architecture of your environment. I know Redis has weaknesses, such as the Redis Cluster implementation, which could have a better architecture (I’ll also comment on this in another post).

I hope I’ve helped you think outside the box with Redis, going way beyond the cache.