You can’t really treat a patient without a diagnosis. Your doctor can brush you off with aspirin or antibiotics a few times, but if the symptoms persist, it’s time for blood work and scans.
Legacy systems are like that. We'll add a quick fix here and there. We'll patch a bug and hope not to introduce another one. Slowly it dawns on us that there is something wrong with the patient. She's sickly, but we can't pinpoint exactly what it is.
In a lot of cases, we'll ask our best developers to look into it. Can you open up this ancient codebase and draw up an architecture document that shows the AS-IS situation?
This is a flawed approach. We’ll timebox the investigation because we need those devs. It becomes a side-project. We’ll end up with a high-level overview that never contains the complete story. This architectural helicopter view will tell us which databases are used but neglects to inform us that there is a mounted filesystem somewhere that will come back to bite us in the future. It’s impossible to analyse such a system top-down because it’s impossible to scope. When do we know we’ve covered everything?
The answer lies in virtualisation. It’s as close to a silver bullet as we’re going to get.
Develop a provisioning script that builds a copy of the legacy system from scratch.
My technology of choice would be Docker, but if you're unable to use that because of licensing, for example, any virtual machine with provisioning (using tools like Ansible) will work.
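To make that concrete: a first cut of such a script could be a Dockerfile along the lines of the sketch below. Everything in it is a placeholder (I'm assuming a legacy Java web app on Tomcat here); the point is that every version, configuration file and dependency ends up written down in one place.

```dockerfile
# Hypothetical example: a legacy Java web app served by Tomcat.
# Every version is pinned so the build is reproducible.
FROM tomcat:8.5-jdk8

# System packages the application shells out to (placeholder list).
RUN apt-get update && \
    apt-get install -y --no-install-recommends imagemagick ghostscript && \
    rm -rf /var/lib/apt/lists/*

# Configuration recovered from the production server, kept in the repo.
COPY config/server.xml /usr/local/tomcat/conf/server.xml
COPY config/app.properties /usr/local/tomcat/conf/app.properties

# The legacy application itself.
COPY build/legacy-app.war /usr/local/tomcat/webapps/ROOT.war

EXPOSE 8080
CMD ["catalina.sh", "run"]
```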
Ask a developer to write such a script and give them the time to do it well. While this might sound like a lot of work, you're achieving multiple goals at the same time:
Thorough analysis
No more high-level architectural helicopter view that omits the expensive surprises. If it works in the Docker container, you've covered everything, and you know exactly when you're done.
User testable
There’s a difference between a developer feeling they’ve covered everything and a group of users claiming the same. Get real users to test the copy and you’ll figure out all the dependencies you’ve missed.
A good sense of the pain
If it takes a week to write that script, your system is under control. If after a month your devs are still struggling to get it up and running, the complexity becomes visible. That pain is already there, but better the devil you know…
Modern version control
Those Docker or Ansible scripts need a place to live, so why not use what your devs are currently using: Git. And since you're using it for the provisioning, why not add the source code? Copying the legacy code to Git opens up a bunch of modern-day tools: branching strategies, CI/CD, build pipelines, automated e2e tests, GitOps…
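To give an idea, a bare-bones build pipeline could look like the sketch below. I'm assuming GitHub Actions purely as an example; GitLab CI or Jenkins would do just as well.

```yaml
# .github/workflows/build.yml -- a hypothetical pipeline for the legacy repo
name: build-legacy-image
on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Check out the legacy source plus the provisioning script.
      - uses: actions/checkout@v4

      # The whole legacy build is now one repeatable, automated step.
      - run: docker build -t legacy-app:${{ github.sha }} .

      # Next steps would push the image to a registry and run
      # automated end-to-end tests against a running container.
```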
All dependencies
Older systems rely on a lot of dependencies that the internet forgot. Finding the exact version of that one ancient Perl script can be detective work. We can’t rely on package managers to still have everything on tap. In the course of writing the script, we’ll identify all the necessary dependencies. It’s often a good idea to add the rarest ones to the Git repo.
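In Dockerfile terms, that usually boils down to pinning exact versions and installing the irreplaceable bits from the repo itself. A sketch, with a made-up module name and assuming a base image that ships cpanm (the official perl image does):

```dockerfile
# Pin the exact interpreter version the legacy code was written against.
FROM perl:5.26

# A module that has vanished from the usual mirrors: keep the tarball in
# the Git repo (vendor/) and install it from the local copy instead of
# relying on a package manager that may have forgotten it.
COPY vendor/Legacy-Report-Module-1.07.tar.gz /tmp/
RUN cpanm /tmp/Legacy-Report-Module-1.07.tar.gz
```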
A development environment
Instead of risky trial-and-error in production, your developers can now check out the Git repo, start the container or VM locally and reproduce, test and fix the bug. While we can often just add the database to a docker-compose file, we might have to mock some of the external dependencies.
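A minimal docker-compose sketch of what such a local environment could look like, assuming a database we can run locally and one external billing service we stub out with a mock. All image names, tags and settings here are illustrative:

```yaml
# docker-compose.yml -- a local copy of the legacy system
services:
  app:
    build: .              # the provisioning Dockerfile from earlier
    ports:
      - "8080:8080"
    environment:
      DB_HOST: db         # hypothetical setting the app reads at startup
    depends_on:
      - db
      - billing-mock

  db:
    image: postgres:9.6   # pin the same major version as production
    environment:
      POSTGRES_PASSWORD: localdev

  # An external billing service we can't run locally, replaced by a stub.
  billing-mock:
    image: wiremock/wiremock:2.35.0
    volumes:
      - ./mocks/billing:/home/wiremock
```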
Technical documentation
There is no more complete technical documentation than a provisioning script. It contains all versions, all technologies and all configurations.
Reliable and pain-free staging environments
Need another UAT environment? Deploy another VM.
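With Docker Compose, for instance, that's one command per extra environment, as long as each one gets its own project name (and its own host ports, if the compose file publishes any):

```sh
# Spin up a second, isolated UAT environment from the same definition.
docker compose -p uat2 up -d

# And tear it down again once the test round is over.
docker compose -p uat2 down
```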
Perfectly outsourceable
Don’t want to pluck your best developers from their current projects? No problem! Dockerising an existing application is one of those things that can be easily outsourced since there is no need for any in-depth functional knowledge. An experienced freelancer is a perfect match for this.
No interference with the real thing
We’re building a copy, so our end users are never-ever impacted. From a risk management perspective, that’s a great bonus.
The first step towards being Cloud-ready
Planning to Lift & Shift? This Dockerfile has done most of the heavy lifting already.
Virtualising a legacy application can be time-consuming, but it’s the best way to define the AS-IS. When you’re able to spin up a local version of your legacy app, you’ve got it under control.
Now you can start thinking about the TO-BE situation.