Legacy System Modernization: Can You Afford to Keep Waiting?
Interview with Haris Muharemović, Senior Engineer
Everyone knows it needs to happen. Few know where to start, and even fewer have the capacity to try. Legacy modernization is one of those permanent fixtures of the engineering backlog. Klika's Senior Engineer, Haris Muharemović, breaks down how AI-assisted delivery is changing the picture, from risk and documentation management to building a business case that actually lands with company leadership. A worthwhile read for teams caught between keeping the lights on and shipping what matters.

How do you even begin a conversation about legacy modernization when there's no breathing room in the sprint cycle? Is there any trigger teams should look out for?
Engineering teams often spend more time keeping the lights on than actually shipping. There's a stat that stuck with me: the average developer spends something like 17 hours a week on maintenance. That's nearly half the workweek gone before you even open a feature ticket.
The real conversation starts when someone finally does the math out loud. "Why are we patching this same module for three sprints?" That's an opening. Constant patching is a real threat to a company's competitive advantage, agility, and speed, and at some point you need to address the root cause rather than the symptoms.

There's a common fear around touching "working" systems. How does AI-assisted delivery change what teams are willing to touch in that kind of environment?
This fear is real, and honestly, it's rational. Organizations develop a culture of avoiding the "working" system because documentation gets lost over time, and the developers who originally built it have often moved on, taking the business domain knowledge with them.
I've seen this firsthand. Codebases where people walk on eggshells around certain files. Nobody wants to be the one who broke the thing that was technically fine. But that's not how reliable systems should work. Reliable systems should be testable and verifiable, not just untouched.
Pre-AI, the decision to refactor legacy code was heavy. Long planning cycles, hunting down whoever still remembered how it worked, slow and careful execution. The unknown was the risk.
What AI enables is reducing the unknown before you touch anything. AI code agents can navigate the codebase, surface hidden dependencies, and document business logic that was never written down. And once you're ready to migrate, you can run the modernized code in parallel alongside the legacy system and compare outputs. That changes the risk profile completely.
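The parallel run described here can be sketched in a few lines. Everything below is a hypothetical stand-in, not a real migration harness: `legacy_calc` and `modern_calc` represent the old and rewritten code paths, and the input shape is invented for illustration.

```python
# Shadow run: feed identical inputs to the legacy and modernized
# implementations and collect every case where their outputs disagree.

def legacy_calc(order):
    # Stand-in for the legacy pricing logic.
    return round(order["qty"] * order["price"] * 1.2, 2)

def modern_calc(order):
    # Stand-in for the rewritten logic that must match it.
    return round(order["qty"] * order["price"] * 1.2, 2)

def shadow_run(orders):
    """Return the inputs on which the two implementations disagree."""
    mismatches = []
    for order in orders:
        old, new = legacy_calc(order), modern_calc(order)
        if old != new:
            mismatches.append({"input": order, "legacy": old, "modern": new})
    return mismatches

orders = [{"qty": 3, "price": 9.99}, {"qty": 1, "price": 120.00}]
assert shadow_run(orders) == []  # empty list = safe to cut this path over
```

In practice the "orders" would be replayed production traffic, and a non-empty mismatch list becomes the work queue for the migration team rather than a production incident.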
There's a well-documented industry pattern: legacy systems often have no documentation, or what exists is incomplete and outdated, with business logic buried in code that even the team sometimes doesn't fully understand. How do you handle that gap? Does AI help create the missing specs, or does it require some baseline to already be in place?
This is probably the most underrated capability in this whole AI space. The playbook we had before was: find whoever wrote this, hope they remember, document as you go. This doesn't scale, and half the time the person who wrote it is gone.
AI can do actual code review and documentation with a very high success rate. It reads through the system flow, maps data and dependencies, and documents its findings. It doesn't need a spec to start; it derives the spec from the code itself.
While the success rate is high, we are still in a phase where we need human validation. AI can surface what the code does, but sometimes it might go off the rails, and we need to validate the output.
Many organizations are already using AI in fragmented, individual ways. What's the difference between a developer using an AI coding tool on their own versus establishing a governed, repeatable AI-first delivery workflow across the whole engineering team?
Individual usage is very valuable, but it's essentially an optimization for a single engineer. That engineer might move faster and their PR quality might improve, but it doesn't scale to the entire organization.
In engineering, we want to establish a playbook that works for everyone and optimizes everyone's workflow, ultimately making the whole organization faster and more agile.
These playbooks should contain patterns for different cases in engineering, such as:
This is our testing checklist
This is the pattern for decomposing a monolithic service
This is how we migrate legacy code into modern, well-structured code
While AI can't fully replace an engineer, it becomes part of the delivery process.
If a technical leader wanted to build a business case for legacy system modernization internally, what data actually lands with executives, and which data points tend to get dismissed as AI hype?
Executives respond mainly to costs and risks, so those should be the focus when you bring this topic up.
What really lands:
Long-term maintenance costs vs. the cost of modernization
Engineering hours spent on maintenance and incidents
Potential security risks
Time-to-market for new features
Features you couldn't ship because of legacy limitations, but your competitors did
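The first two points above boil down to back-of-envelope arithmetic. Every number in this sketch is illustrative, not data from the interview; the only input taken from the text is the 17-hours-a-week maintenance figure.

```python
# Back-of-envelope: yearly maintenance burn vs. a one-off modernization cost.
engineers = 10                            # assumed team size
maintenance_hours_per_week = 17           # per engineer, per the stat above
hourly_cost = 60                          # assumed fully loaded rate
weeks_per_year = 48

yearly_maintenance_cost = (engineers * maintenance_hours_per_week
                           * hourly_cost * weeks_per_year)
# 10 * 17 * 60 * 48 = 489,600 per year

modernization_cost = 400_000              # assumed one-off project cost
post_modernization_factor = 0.4           # assume maintenance drops to 40%
yearly_savings = yearly_maintenance_cost * (1 - post_modernization_factor)

payback_years = modernization_cost / yearly_savings
print(f"Yearly maintenance: {yearly_maintenance_cost:,.0f}")
print(f"Payback period: {payback_years:.1f} years")
```

A payback period measured in a year or two, expressed in the company's own numbers, is the kind of figure that survives an executive meeting; "AI makes us faster" is not.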
AI is not the reason to modernize, and it shouldn't be the main focus of the pitch. AI is a tool that makes modernization faster and more efficient than was previously possible, and it should be presented as such.
What gets dismissed is anything that sounds like AI hype: "AI will make our developers 10x faster", "AI will automate all of our processes". These claims are hard to believe.
Where in the modernization journey does AI-assisted delivery have the most immediate impact: early in planning and assessment, or later in actual migration and testing?
Both. AI is a tool that affects the whole modernization journey. Early on, during planning and assessment, you can quickly analyze the project and the code, which has a big impact on confidence and clarity, especially around the unknowns. It helps us understand what we're actually dealing with.
Later in the process, it has the same impact on a lot of repetitive work: translating patterns, migrating a huge class into multiple well-structured classes, and validating outputs. AI handles the mechanical parts, and the engineers can focus on the judgment calls.
What does "human-in-the-loop" actually mean in practice when AI agents are doing things like analyzing a million lines of code, identifying service boundaries, or generating test suites from specs?
In practice, it means the engineer is reviewing proposals rather than writing code. The temptation is to send one big prompt: "modernize this legacy system to be the best one out there" and let it run. You'll often end up with a total mess and regret that prompt. The AI was confident the whole way through, but confident isn't the same as correct.
The correct way to do it is to:
Analyze with AI, then validate, and do the sanity check
Plan with AI, then approve and review the strategy
Execute with AI, and keep it on track; AI agents can drift
Verify with AI, then validate yourself; passing tests don't necessarily mean they're testing the right thing
This is what "human-in-the-loop" actually means in practice.
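That last point about verification is worth a concrete illustration. Below, `apply_discount` is a hypothetical function under test; the "weak" test is the kind an AI agent can generate when left unchecked, because it re-derives the expected value with the same formula it is supposed to check, and so passes no matter what the formula does.

```python
def apply_discount(price, pct):
    # Hypothetical function under test.
    return price * (1 - pct / 100)

def test_weak():
    # Tautological: recomputes the expectation with the same formula,
    # so it passes even if the formula itself is wrong.
    price, pct = 100.0, 20
    assert apply_discount(price, pct) == price * (1 - pct / 100)

def test_strong():
    # Meaningful: pins the expected value independently of the code.
    assert apply_discount(100.0, 20) == 80.0

test_weak()
test_strong()
```

Spotting the difference between these two is exactly the judgment call that stays with the human reviewer, even when the AI wrote both the code and the tests.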

Choosing not to modernize is often framed as "playing it safe," but what are the real consequences of postponing it? And how do you make that argument land with leadership that has other priorities?
Engineering teams are always under pressure to ship the next feature. Modernizing the codebase rarely makes it to the top of the list, but that's not just a technical debt problem; it's a competitive one.
Every year you wait, the system gets harder to change, and the ongoing costs grow. Engineers who know how to quickly fix things move on, and the gap between what you can ship and what the market expects gets wider.
Put the numbers in front of leadership: hours per week lost to maintenance, features you couldn't ship because the legacy system wouldn't support them, and how hard it is to hire engineers willing to work on the stack. Once you frame it that way, the risk calculation flips. The question stops being "can we afford to modernize" and becomes "can we afford to keep waiting."
Once a team comes out the other side of a successful modernization, what actually changes in their day-to-day? What does a good outcome feel like in practice?
The thing people notice first is that the fear goes away. The situation where something breaks and nobody wants to be the one to touch it stops being a crisis and becomes just normal work.
Releasing new updates stops being a stressful, carefully planned event. New team members can get up to speed on their own, rather than spending months learning from whoever built the original system.
But the biggest win is that the knowledge is finally written down. You now have a system that's easy to understand and well-documented. You can ship new features faster, fix bugs faster, and deliver things that weren't possible before.









