Here’s the workflow I prefer these days when implementing new work, be it a change to code, a change to process or a design for a system:
- Specify your intent – You need to explain what you are building to yourself and others. Doing so with an executable specification framework will also allow you to test that your code meets your expectations, given your assumptions, even as you make changes. A basic layout (in comments) or a set of tasks may also help you (and others) record assumptions and confirm that the specs were met.
- Make it work – well enough to meet your existing functional requirements. Do not invent requirements or obsess over clarity, details, or performance. Just meet the spec.
- Make it good – Refactor your code until it’s clear enough that you can pass off maintenance or come back to repair and extend the functionality in a year’s time. If it’s for a customer, make it handsome enough that they will want to use it. If it’s for a service, make it succeed (and fail) in ways that are auditable and consistent.
- Make it fast – fast enough to meet your expected capacity needs. Test it! With modern hardware prices, serial performance is rarely a concern and a spot check is good enough.
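As a sketch of the "specify your intent" and "make it work" phases together, here is a minimal executable specification using Python's standard-library `unittest`. The `slugify` function and its behaviors are hypothetical examples invented for illustration, not anything from the workflow above; the point is that the tests record intent before the implementation gets clever:

```python
import re
import unittest


def slugify(title):
    # "Make it work": the simplest implementation that meets the spec below.
    # No clever optimizations yet; clarity and speed come in later phases.
    slug = title.strip().lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)
    return slug.strip("-")


class SlugifySpec(unittest.TestCase):
    """Executable specification: records intent and assumptions up front."""

    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_punctuation(self):
        self.assertEqual(slugify("Make it work!"), "make-it-work")

    def test_collapses_whitespace(self):
        self.assertEqual(slugify("  spaced   out  "), "spaced-out")


if __name__ == "__main__":
    unittest.main()
```

Because the spec is executable, the later "make it good" and "make it fast" passes can reshape the implementation freely: the same tests confirm the spec is still met after every change.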
I don’t know about you, but the ordering of these phases is almost precisely the opposite of how I worked when I was junior — to the extent that they were discrete phases at all. I’d almost always start with the view that the implementation of the feature (still hazy in my mind, it’ll materialize as I build it) must be optimally and provably fast, complete with short-circuit logic and extensive variable re-use. Then I’d focus on building as much flexibility into the functionality as I could — more is better, more features for re-use! Finally, as the deadline whizzed past, I’d wire up the feature before dropping it on QA’s doorstep like a cat with a dead bird.
The danger of producing software in this manner is that the activity that provides the highest value — testing the feature — arrives late in the process with little direction and receives the least amount of attention. The activities that matter less — implementation choices — consume the majority of the software budget. Code designed to be clever and efficient is now a more complex system to debug, and fixes are likely to remove either or both of these qualities. The end result is frequently a fast, extensible piece of shit.
Inverting the pattern is sort of like eating your vegetables first and your dessert last: you ensure health (a proof of correctness) without sacrificing enjoyment (the satisfying feeling of legible, optimized code). But the real key to this workflow is keeping the phases separate and actually investing time in each phase. Knowing what you are doing, and that you can always clean up and optimize later, is the key to focusing on an accurate implementation. If you always make it good and if necessary make it fast before calling a cycle complete, you should carry less technical debt from your implementation choices.
Now, there are times when the entire reason for a change is to optimize a quality such as API clarity or system performance. The same workflow applies there, but under these circumstances the specification should describe the improvement you’re hoping to make (such as “improve request throughput by 10x from baseline”), and the out-of-phase optimizations to avoid are improvements that are unrelated to the spec or that contribute only marginally to it.
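A performance spec like the one above can be made executable with a simple spot check against a recorded baseline. This is a hedged sketch: `handle_request` is a hypothetical stand-in for the code path under optimization, and `BASELINE_RPS` is a made-up recorded number, not a real measurement:

```python
import time


def handle_request(payload):
    # Hypothetical stand-in for the request path being optimized.
    return sum(ord(c) for c in payload)


def requests_per_second(func, payload, duration=0.2):
    """Spot-check throughput by counting calls completed in a fixed window."""
    count = 0
    deadline = time.perf_counter() + duration
    while time.perf_counter() < deadline:
        func(payload)
        count += 1
    return count / duration


BASELINE_RPS = 1000  # hypothetical throughput recorded before the change

if __name__ == "__main__":
    measured = requests_per_second(handle_request, "example payload")
    # The spec: "improve request throughput by 10x from baseline".
    print(f"{measured:.0f} req/s (target: {BASELINE_RPS * 10} req/s)")
```

Comparing the measurement against a target derived from the baseline keeps the optimization work pointed at the spec, rather than at whatever happens to look slow.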
Finally: when working with VERY old code, I tend to add an additional phase before specification: clean it up. A modern IDE will offer a number of safe non-functional refactoring operations (such as identifying and removing dead branches, removing stub documentation, renaming poorly named methods or adjusting scope) that take little time to apply and make functional rework more successful.
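As a hypothetical illustration of that cleanup phase, consider a function with a branch that an IDE's unreachable-code inspection would flag. Everything here is invented for the example, including the assumption that no caller ever passes `legacy_mode=True`:

```python
def discount(price, is_member, legacy_mode=False):
    # Cleanup phase: an IDE inspection would flag the branch below as dead
    # (assume no call site passes legacy_mode=True), so it can be removed
    # safely -- a non-functional refactoring -- before any rework begins.
    if legacy_mode:
        return price * 0.8  # dead branch: never taken under the assumption above
    if is_member:
        return price * 0.9
    return price
```

Removing the dead branch (and the now-unused parameter) shrinks the surface the subsequent specification has to describe, which is exactly why the cleanup pays for itself.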