Maximizing simplicity is the only guaranteed way to minimize software maintenance. Other techniques exist, but are situational. No complex system will be cheaper to maintain than a simple one that meets the same goals.
‘Simple’, pedantically, means ‘not composed of parts’. However! Whatever system you are working on may already be a part of a whole. Your output should reduce the number and size of parts overall, not just in your own project domain.
I’ve started asking myself, “does this add the least amount of new code?” A system in isolation may be quite simple, but if it duplicates existing functionality, it has increased complexity. The ideal change is subtractive, reducing the total amount of code: by collapsing features together, removing configuration, or merging overlapping components.
Better to put your configuration in version control you already understand, than introduce a remote discovery server. Better to use the crufty RPC library you already have, than introduce a new one with a handy feature—unless you entirely replace the old one.
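To make the first point concrete, here’s a minimal sketch of the version-control option, assuming a config.json committed next to the code (file name and key are invented for illustration):

```python
import json
import pathlib

# The entire "configuration system": one file committed with the code.
# Its change history is `git log config.json`; there is no discovery
# server to run, monitor, or secure. (File name and key are invented.)
config = json.loads(pathlib.Path("config.json").read_text())
database_url = config["database_url"]
```

The simple option fits in a handful of lines, which is rather the point.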
Beware the daughter that aspires not to the throne of her mother.
I think maximizing overall simplicity is the best way to minimize maintenance only if you can keep it below the threshold of human comprehension. Otherwise, you have to minimize maximum local complexity for every viewpoint, and low coupling becomes more important than DRY.
I don’t agree that a maximally simple system is antagonistic to loose coupling among its necessary parts.
This is a lot more subtle than “don’t repeat yourself”, where complexity is falsely hidden behind a macro, or increasing your (non-domain) method surface area is considered better than writing the same damn line twice.
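A small Python caricature of that failure mode (all names and paths invented):

```python
import os

base_dir = "/var/myapp"  # hypothetical path, for illustration

# Before: two plain lines that merely look alike.
log_path = os.path.join(base_dir, "logs", "app.log")
tmp_path = os.path.join(base_dir, "tmp", "app.tmp")

# After reflexive deduplication: one more name to learn and four
# positional arguments to keep straight; nothing is actually simpler.
def make_path(base, kind, stem, ext):
    return os.path.join(base, kind, f"{stem}.{ext}")

log_path = make_path(base_dir, "logs", "app", "log")
tmp_path = make_path(base_dir, "tmp", "app", "tmp")
```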
What I don’t like is the “merging overlapping components” part.
Say you need components X and Y to do tasks A, B and C. If you merge X and Y into one component because the tasks overlap, you end up depending on the code for A and C even when you only need B. A good example of this is the “I can do everything” libraries, e.g. OpenCV or Boost.
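Concretely, a Python sketch with invented names: to construct the merged thing at all, a caller who only wants B must supply everything A and C need.

```python
class EverythingToolkit:
    """X and Y merged into one component that does A, B, and C."""

    def __init__(self, image_dir, matrix_backend):
        # Construction demands what A and C need, even from callers
        # who will only ever use do_b().
        self.image_dir = image_dir
        self.matrix_backend = matrix_backend

    def do_b(self, text):
        # B itself needs none of the state above.
        return text.strip().lower()

# A caller that only needs B still pays for A's and C's requirements:
toolkit = EverythingToolkit(image_dir="/tmp/images", matrix_backend=object())
print(toolkit.do_b("  Hello "))  # -> "hello"
```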
I think that if you’re working on a project with a reasonable number of moving parts, then fine: reduce duplication as much as possible and make it possible for somebody to understand how the whole system works. But if the system becomes larger, focus on simple, well-defined APIs and work with black boxes. If that means you have to duplicate code (e.g. in the server and its client(s) for a web service), then that’s the price to pay.
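A sketch of that duplication, for a hypothetical service with invented names: both sides check the same rule, and neither imports the other.

```python
# Server side: validates for itself, trusting no client.
def handle_create_user(payload: dict) -> dict:
    if "@" not in payload.get("email", ""):
        return {"status": 400, "error": "invalid email"}
    return {"status": 201}

# Client side: repeats the check so it can fail fast locally.
# Duplicated logic, yes, but no shared library coupling the two.
def create_user(email: str) -> None:
    if "@" not in email:
        raise ValueError("invalid email")
    # ... build and send the HTTP request here ...
```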
I think we agree on this: paradoxically, simplicity matters are complicated.
Boost consolidates the overhead of package management, linking, and documentation. Those are secondary concerns, not primary ones, and ideally a package manager would solve them, but there’s a reason people use Boost.
I think you should be concerned more about multiplicative complexity than whether the coupling is loose or tight. Software is a multidimensional world where nice API boundaries are often insufficient for productivity. If your system is above the threshold of human comprehension, hope that your early encapsulation choices were right, because now you’re stuck with them forever.
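A back-of-the-envelope sketch of what I mean by multiplicative (the component names and state counts are invented): if parts can be reasoned about in isolation their cases add, but once their behaviors interact the cases multiply.

```python
from math import prod

# Invented per-component state counts for a hypothetical system.
states = {"cache": 3, "retry_policy": 4, "auth": 2, "feature_flags": 5}

print(sum(states.values()))   # 14 cases to consider if parts are independent
print(prod(states.values()))  # 120 combined cases once behaviors interact
```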
I have inflicted some awesome performance improvements on the world by tightly coupling two components into a single component that serves simultaneous, orthogonal goals.
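The usual shape of that trade, as a Python sketch: two logically independent computations fused into one pass over the data, halving traversals at the cost of entangling the two goals.

```python
def stats_two_pass(xs):
    # Loosely coupled: each goal is its own component, two traversals.
    return sum(xs), max(xs)

def stats_fused(xs):
    # Tightly coupled: one loop serves both orthogonal goals at once.
    total, biggest = 0, float("-inf")
    for x in xs:
        total += x
        if x > biggest:
            biggest = x
    return total, biggest

assert stats_two_pass([3, 1, 4, 1, 5]) == stats_fused([3, 1, 4, 1, 5])
```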
I get your point about multiplicative complexity. As for improving performance through increased coupling, I’ve had to do that more than once myself.
It’s the same kind of discussion as Tanenbaum vs. Torvalds (micro-kernels vs. monolithic kernels). History has proved Torvalds right so far. I still think Tanenbaum’s vision is theoretically better, but sometimes we have to trade off local simplicity, elegance and low coupling for performance, productivity or global simplicity.
I’m originally a network engineer, so the best example I have for this is the TCP/IP stack vs. the ISO/OSI stack. TCP/IP won because it broke design rules and increased coupling, resulting in a simpler architecture overall. Yet it’s still a layered architecture at its core.
Ultimately I think you’re right: it’s obviously better to re-use what you have than to re-invent the wheel all the time or duplicate things. We just have to be careful not to end up having all our blocks depending on all the other blocks just because it allows optimal re-use.
To sum up, I’d say overall complexity arises from three main things: the number of parts, the size of each part, and the number of links (dependencies) between the parts. Because you don’t have to think about the whole system all the time, what really matters is global complexity (number of parts + total number of links) and maximum local complexity (for each part, its size plus its incoming and outgoing links). It’s up to the system designer to assign weights to those things and make trade-offs.
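Those definitions are concrete enough to compute. A toy sketch using exactly those metrics, with the graph, sizes and equal weights all invented:

```python
# Toy dependency graph and part sizes, invented for illustration.
deps = {            # part -> the parts it depends on (outgoing links)
    "ui":   {"api"},
    "api":  {"db", "auth"},
    "auth": {"db"},
    "db":   set(),
}
size = {"ui": 40, "api": 70, "auth": 25, "db": 55}  # e.g. in functions

total_links = sum(len(out) for out in deps.values())
global_complexity = len(deps) + total_links          # parts + links

def local_complexity(part):
    incoming = sum(part in out for out in deps.values())
    return size[part] + len(deps[part]) + incoming   # size + out + in

worst = max(deps, key=local_complexity)
print(global_complexity)               # 8
print(worst, local_complexity(worst))  # api 73
```

With equal weights, the part with the worst local complexity is the one a maintainer should worry about first.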