When writing software, we use types to represent the information we manipulate. Values are expressed through primitive types like bool, int, or string. We build more complex data representations by composing these types, usually by defining our own types with classes or tuples.
Even though there are several ways to store and represent the same information, these ways are not all equivalent. Some are too permissive and allow states that should be considered illegal with regard to our business rules.
As developers, we have to choose between several architectures when building software. Our choice must be based on various constraints: the type of problem we're trying to solve, but also the expected load, the targeted level of reliability and resiliency, and the skills available in the team.
In this blog post, I want to walk through several back-end architectures I've encountered and used during my career.
A few months ago, I read this article: How We Built a Self-Healing System to Survive a Terrifying Concurrency Bug At Netflix.
What I loved is how unconventional the solution was. However, unconventional doesn't mean irrelevant: their solution kept the software running through the weekend, without any human intervention. The solution wasn't perfect, but it was "good enough" and, even better, respectful of people's time.
The article concludes with a concept that somehow inspired me: "technological adulthood".
Recently, Antoine Caron published a blog post about his AI usage. Thanks to him, I discovered the /ai 'manifesto', and I want to do the same here. Maybe you've reached this post by typing /ai in the URL.
MY POSTS
All the posts on this blog are written by myself; I don't use any generative AI to produce content.
Creating a blog post is an interesting activity: it requires me to challenge and organize my thoughts before writing on a topic.
In early 2020, I read the book Programming Elixir 1.6. At that time I had one goal: to get an introduction to the actor model with a language that supports it by design, in this case Elixir. It was a good read and I achieved my goal, even though I didn't feel able to design a complete system using this pattern.
However, I've realized that I've been using some actor model concepts for a few years now.
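To make the core idea concrete, here is a minimal actor sketch (my own illustration, not Elixir and not from the book): private state, a mailbox, and a loop that processes one message at a time, so the state needs no locks.

```python
import queue
import threading

class CounterActor:
    """A tiny actor: private state, a mailbox, and a single loop
    processing one message at a time."""
    def __init__(self) -> None:
        self._mailbox: queue.Queue = queue.Queue()
        self._count = 0  # private state, only touched by the actor loop
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self) -> None:
        while True:
            msg, reply = self._mailbox.get()
            if msg == "increment":
                self._count += 1
            elif msg == "get":
                reply.put(self._count)  # answer through a reply channel
            elif msg == "stop":
                break

    def send(self, msg: str) -> None:
        """Fire-and-forget message."""
        self._mailbox.put((msg, None))

    def ask(self, msg: str):
        """Send a message and block until the actor replies."""
        reply: queue.Queue = queue.Queue()
        self._mailbox.put((msg, reply))
        return reply.get()

actor = CounterActor()
actor.send("increment")
actor.send("increment")
print(actor.ask("get"))  # prints 2: messages are processed in order
```

Languages like Elixir give you this model natively (with supervision, distribution, and cheap processes), but the mailbox-plus-loop shape is reusable almost anywhere.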
If you've already developed software using the event sourcing pattern, you've probably faced this difficulty: how to design good events? What is a good event granularity?
Indeed, it's difficult to produce good events that won't harm our design. Even as a developer seasoned in event sourcing, I'm still struggling with this, although I've developed several heuristics over time.
In this blog post, I will share these heuristics with you. But keep in mind they are not some kind of best practices.
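To illustrate what granularity means here (the event names below are hypothetical examples of mine, not heuristics from the post), compare a coarse "something changed" event with events that name the business fact that occurred:

```python
from dataclasses import dataclass

# Too coarse: the event says *something* changed but loses the business
# intent; every consumer has to diff the payload to guess what happened.
@dataclass(frozen=True)
class CustomerUpdated:
    customer_id: str
    new_state: dict

# Finer-grained: each event captures one business fact by name.
@dataclass(frozen=True)
class CustomerEmailChanged:
    customer_id: str
    new_email: str

@dataclass(frozen=True)
class CustomerRelocated:
    customer_id: str
    new_address: str
```

The finer-grained events are easier to react to and to replay, but going too fine has its own costs, which is exactly why this is a matter of heuristics rather than rules.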
Nowadays, most of the services we use are online and available 24/7. If, like me, you're working in a company that provides this kind of service, you're probably aiming for such availability. As I've already highlighted, this has a huge influence on how you should code and deploy your software. Indeed, to maximize availability, you're probably aiming for zero-downtime deployment.
Zero-downtime deployment covers several topics. Today I want to focus on how to achieve a database migration without service interruption.
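One widely used technique for this is the expand/contract pattern (I'm naming it here as an illustration, not claiming it is the one covered in the post). The sketch below, using an in-memory SQLite database and made-up column names, splits a `full_name` column into `first_name`/`last_name` without ever leaving the schema in a state the running code can't handle:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, full_name TEXT)")
conn.execute("INSERT INTO users (full_name) VALUES ('Ada Lovelace')")

# 1. Expand: add the new columns as nullable, so code still reading
#    and writing full_name keeps working unchanged.
conn.execute("ALTER TABLE users ADD COLUMN first_name TEXT")
conn.execute("ALTER TABLE users ADD COLUMN last_name TEXT")

# 2. Backfill: migrate existing rows in the background, while newly
#    deployed code writes both representations.
for row_id, full_name in conn.execute("SELECT id, full_name FROM users"):
    first, _, last = full_name.partition(" ")
    conn.execute(
        "UPDATE users SET first_name = ?, last_name = ? WHERE id = ?",
        (first, last, row_id),
    )

# 3. Contract: once every reader uses the new columns, a later deploy
#    can drop full_name (ALTER TABLE ... DROP COLUMN).
print(conn.execute("SELECT first_name, last_name FROM users").fetchone())
# prints ('Ada', 'Lovelace')
```

The key point is that every intermediate schema is compatible with both the old and the new version of the application, so the deployment never requires stopping the service.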
I recently gave a talk with my friend Aurélien about the heuristics we've developed after using CQRS/ES for several years.
After our talk, we had a chat with some developers. We concluded that choosing a state-based approach (like CRUD) seems to be the default solution, and such a choice seems to remain unchallenged. On the opposite side, choosing an event-based system (event sourced or event driven) will very often be heavily challenged.
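For readers unfamiliar with the distinction, here is a deliberately tiny contrast (my own example, not from the talk): a state-based model keeps only the latest value, while an event-based model keeps the history and derives the state from it.

```python
from dataclasses import dataclass

# State-based (CRUD-like): only the latest value survives; the fact
# that a deposit of 100 ever happened is gone after the update.
account_row = {"id": "acc-1", "balance": 0}
account_row["balance"] += 100
account_row["balance"] -= 30

# Event-based: the history is the source of truth; current state is
# a fold over the events, and past facts remain queryable.
@dataclass(frozen=True)
class Deposited:
    amount: int

@dataclass(frozen=True)
class Withdrawn:
    amount: int

events = [Deposited(100), Withdrawn(30)]

def balance(history) -> int:
    total = 0
    for e in history:
        total += e.amount if isinstance(e, Deposited) else -e.amount
    return total
```

Both end up with a balance of 70, but only the second can answer "what happened?", which is precisely the trade-off worth challenging in either direction.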
I had the chance to work for a few months at Agicap, a company producing a cash-flow management SaaS for businesses.
It was a great mission; my team worked in a way that I consider to be, so far, the most efficient and pleasant of my career. We managed to produce value at a constant speed while keeping full control of our code, not allowing any kind of quality degradation over time.