In many Belgian public administrations, deployment pipeline modernization is still treated as a one-off event rather than a continuous practice. Going to production remains a risky operation that mobilizes several teams, requires maintenance windows, and sometimes blocks a service while everything settles back into place. Developers ship their code, then they wait.
This wall between development and production has a real cost. Not only in delays, but in confidence: the more you fear each release, the fewer releases you make, and the harder it becomes to deliver continuously.
We worked with a large Belgian federal public institution on a complete modernization of its deployment pipeline. The outcome: 98% of applications migrated to the new platform, zero downtime in production, and development teams that are genuinely autonomous from code to prod. This post walks through what we did, and more importantly the choices that allowed us to get there without a Big Bang.
Source for the figures cited: 5th floor, client case “From code to production, in full autonomy”, 2026. For more information, contact us via sales@5thfloor.be.
The starting point: silos, manual deployments, JBoss applications
The institution had several thousand staff and dozens of business applications. On paper, a solid IT organization. In practice, five structural blockers came up at every release:
- A strict separation between developers and the team in charge of production deployments. Code passed from one to the other, with different timelines and priorities at each step.
- Manual deployments, slow and prone to errors.
- No guarantee that what had been validated in test was exactly what shipped to production.
- An aging JBoss base, hard to evolve.
- No safety net when something went wrong: rolling back was an operation in itself, with no automation.
We see this pattern regularly in the public sector. Not because teams lack skills, but because the organization and tooling haven’t kept up with the growing complexity of the application portfolio.
Why we didn’t rewrite everything
The temptation, faced with a situation like this, is the full rebuild project. Freeze the existing system, run a big parallel migration, then switch over all at once. The famous Big Bang.
On paper, it’s appealing. In practice, it often fails. During the EU AI Week 2026 webinar hosted by our co-founder Gilles Stragier on March 19, 2026, the figures shared on Big Bang rewrite projects were telling: around 70% go over budget or schedule, 17% are cancelled mid-flight.
Transparency note: these orders of magnitude were cited during our internal webinar. For external publication, I recommend tracing back to the primary source (Standish Group / CHAOS Report or equivalent) before quoting them more precisely.
We made a different choice: migrate application by application, build the new pipeline on top of the tools the teams were already using, and keep production live throughout. A transition, not a transformation.
The platform: containerization, OpenShift, Helm
Here is what was put in place:
- Docker containerization of applications. An image built once, deployed identically everywhere. No more drift between test and production environments.
- Orchestration via OpenShift, chosen for its alignment with public sector security requirements and its reliability at scale.
- Custom Helm Charts. Every new application automatically inherits monitoring (Grafana), centralized logging (Kibana), and security (Keycloak). No more manual reconfiguration for each project.
- A zero-downtime mechanism. The new version comes up in the background, and the switchover only happens once the deployment is validated. If something goes wrong, the current version stays active. No user sees anything.
- A library of reusable scripts integrated into Bamboo and Bitbucket, the tools the teams already knew. Pipeline templates, image promotion across environments, automated deployment.
- Ephemeral test environments. Each automated test session runs in its own environment, created on the fly and destroyed after use, with no interference between teams. What the tests validate is exactly what ships to production.
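As an illustration of how that inheritance can be wired, shared behavior is often factored into a Helm library chart that every application chart declares as a dependency. This is a sketch, not the institution's actual setup: the chart names, versions, and repository URL below are hypothetical.

```yaml
# Chart.yaml of a hypothetical application chart.
# Names, versions, and the repository URL are illustrative.
apiVersion: v2
name: my-business-app
version: 1.0.0
dependencies:
  - name: platform-common   # shared chart: monitoring, logging, Keycloak wiring
    version: ">=2.0.0 <3.0.0"
    repository: "https://charts.example.internal/platform"
```

With a pattern like this, bumping the shared chart version propagates monitoring, logging, and security defaults to every application on its next deployment, instead of reconfiguring each project by hand.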
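The zero-downtime switchover described above maps onto a standard Kubernetes/OpenShift rolling update: the new version comes up alongside the old one, and traffic only moves once a readiness check passes. A minimal sketch of the relevant Deployment fragment, with an illustrative application name, image, and probe endpoint:

```yaml
# Fragment of a Deployment manifest; names, image, and probe path are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-business-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-business-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # the current version stays fully available
      maxSurge: 1         # the new version comes up in the background
  template:
    metadata:
      labels:
        app: my-business-app
    spec:
      containers:
        - name: app
          image: registry.example.internal/my-business-app:1.2.0
          readinessProbe:   # traffic only switches once the new pod is healthy
            httpGet:
              path: /health
              port: 8080
```

If the new pods never become ready, the old version keeps serving traffic, which is exactly the "if something goes wrong, the current version stays active" behavior described above.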
And a point that often gets overlooked: database migrations and security configurations are applied automatically from the application itself, with no human intervention.
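One common way to implement this, offered here as a sketch rather than the institution's exact mechanism, is a Helm pre-upgrade hook: a Job that runs the database migration before the new version rolls out. The image, command, and secret name below are hypothetical.

```yaml
# Helm hook Job: runs database migrations before each install/upgrade.
# Image, command, and secret name are illustrative.
apiVersion: batch/v1
kind: Job
metadata:
  name: my-business-app-db-migrate
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: registry.example.internal/my-business-app:1.2.0
          command: ["./migrate", "up"]
          envFrom:
            - secretRef:
                name: my-business-app-db-credentials
```

Because the hook runs inside the same release, the migration is versioned and deployed with the application itself, with no human intervention in between.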
The numbers after migration
On the scope covered by this platform, here is what it produces today:

- 98% of applications run on the new platform, including legacy JBoss applications, containerized in a second phase.
- 100% of applications automatically benefit from monitoring, security, and database migrations through the Helm Charts.
- Zero downtime during production deployments.
- Automatic rollback in case of failure, with no manual intervention.
These results belong to a specific project, in a specific context. Replicating them elsewhere requires assessing the context of each organization. See the “Who is this for?” section below.
What really made the difference (and it wasn’t the tech)
If I had to keep three things from this project, none of them would be technical.
First, the choice to build on what existed rather than impose a new stack. Bamboo and Bitbucket were already in place. Rather than replacing them, we extended them. Developers didn’t have to relearn everything, which avoided the classic resistance of transformation projects.
Then, co-construction. Every brick of the platform (pipelines, Helm Charts, deployment scripts) was built with the development teams, not for them. The difference shows in daily use: teams take ownership of what they helped design.
Finally, autonomy as a daily objective, not a final deliverable. By the end of the project, developers test, build, and deploy themselves. The automatic safeguards (tests, rollback, monitoring) take over from human checks. The wall between dev and prod is gone, and no one lost their role. The team historically in charge of deployments saw their work shift toward platform expertise and coaching, rather than repetitive execution.
Who is this for?
This approach works when certain conditions are in place. An internal IT team large enough to take ownership of the platform. Management willingness to invest in team autonomy, not just to declare it. Existing tools that can be evolved rather than thrown away.
If the context is very different (minimal IT team, heavy dependency on a single integrator, application portfolio scattered across multiple vendors with no shared governance), the same recipe does not apply as is. You have to adapt, sometimes start with other workstreams (governance, team structuring) before the technical modernization.
That is also why we talk about transition rather than transformation: a gradual movement, anchored in the reality of each organization.
Going further
This project is part of our IT Mission approach, one of three pillars at 5th floor along with L4F (application portfolio analysis and steering through AI) and organizational and cultural transformation.
5th floor has been a partner in the digital transformation of Belgian public institutions since 2017, and B Corp certified since 2025 (score 89.5, public assessment available).
If you recognize your situation in this post (a wall between dev and prod, deployments that scare people, or a wish to modernize without knowing where to start), get in touch. A one-hour conversation is often enough to map out the first leads.
