Common advice for software developers is that composing your application from loosely coupled parts is a good thing
"Prefer composition over inheritance" was one of the things I understood least at the start of my career. People were happy to repeat it to me, and were happy to tell me that inheritance produced messy and hard to maintain software, but as a fledgling developer, I found it a very difficult concept to grasp.
Wikipedia's definition is fairly succinct:
"Composition over inheritance (or Composite Reuse Principle) in object-oriented programming is a technique by which classes may achieve polymorphic behavior and code reuse by containing other classes that implement the desired functionality instead of through inheritance."
But this is hard for new programmers to understand - people only really get it after feeling the pain
Which is a fairly academic way of saying "make your software by gathering together classes that each do one little thing, into a bigger thing that does what you want". Which seems like reasonable advice, but I still never really *got* what was wrong with inheritance. People would say things like "HAS-A is better than IS-A" and I'd nod blindly. I honestly think it's quite a difficult concept to understand until you've felt the pain of maintaining a large system with lots of inheritance that was starting to atrophy and becoming difficult to change. Until you've had to change a class halfway down an inheritance chain, and then validate that nothing depending on it has been broken. Until you've updated a common component only to discover the behaviour of your code has inexplicably changed, you just don't really feel the negative impact of inheritance on your software.
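To make the IS-A versus HAS-A distinction concrete, here's a minimal sketch in TypeScript. The class names (`Report`, `EmailSender` and friends) are invented purely for illustration:

```typescript
// IS-A: EmailReport inherits from Report, so it is permanently tied to
// everything Report does, and to every future change made to Report.
class Report {
  generate(): string {
    return "report body";
  }
}

class EmailReport extends Report {
  send(): void {
    console.log(`emailing: ${this.generate()}`);
  }
}

// HAS-A: the same behaviour composed from small parts. EmailSender depends
// only on the narrow Formatter behaviour it actually needs.
interface Formatter {
  format(): string;
}

class PlainTextFormatter implements Formatter {
  format(): string {
    return "report body";
  }
}

class EmailSender {
  constructor(private readonly formatter: Formatter) {}

  send(): void {
    console.log(`emailing: ${this.formatter.format()}`);
  }
}

// Changing behaviour is now a matter of passing in a different Formatter,
// not of changing a class halfway down an inheritance chain.
new EmailSender(new PlainTextFormatter()).send();
```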
In practice...
You build your software against interfaces that describe very tight responsibilities, often only single things, and then your application code orchestrates calls to whatever implementation of that interface you have to hand. It's great for test-driven development: it helps drive out behaviour and keeps your code focused. You tend to avoid a tangled set of dependencies between components, and you can compose new functionality simply by making use of these defined behaviours, rather than adding a method somewhere in a tree of inherited classes and hoping you've implemented the right things at the right times. It focuses your software around behaviour.
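As a sketch of what this looks like, the interfaces below each describe a single responsibility, and the application class orchestrates whichever implementations it's handed - which is also what makes it cheap to substitute fakes in a test. All of the names here (`PriceFeed`, `Notifier`, `PriceAlert`) are hypothetical:

```typescript
// Each interface describes one tight responsibility.
interface PriceFeed {
  currentPrice(symbol: string): number;
}

interface Notifier {
  notify(message: string): void;
}

// The application code orchestrates whatever implementations it is given.
class PriceAlert {
  constructor(
    private readonly feed: PriceFeed,
    private readonly notifier: Notifier,
    private readonly threshold: number
  ) {}

  check(symbol: string): void {
    if (this.feed.currentPrice(symbol) > this.threshold) {
      this.notifier.notify(`${symbol} is above ${this.threshold}`);
    }
  }
}

// In a test, hand it cheap fakes instead of real infrastructure.
const fakeFeed: PriceFeed = { currentPrice: () => 120 };
const sent: string[] = [];
const fakeNotifier: Notifier = { notify: (message) => { sent.push(message); } };

new PriceAlert(fakeFeed, fakeNotifier, 100).check("ACME");
console.log(sent); // ["ACME is above 100"]
```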
This doesn't just apply to your code - you can apply this approach to how you structure your application and its dependent libraries
Modern software tends to be made up of code dealing with many different concerns and types of functionality. Just as you construct features in your classes by depending on a series of interfaces that each expose specific functionality, if you raise the level of abstraction you construct applications from your own code plus libraries, both internal and external, chained together to produce features.
When dealing with external dependencies, it's preferable to describe them by their behaviour and compose your application from them accordingly, giving you the flexibility to test, explore and replace these dependencies at will. When you're writing code for self-contained, non-core aspects of your system (a small SDK for some API, a discrete set of code for dealing with a common scenario), it's best to split these dependencies out into versioned packages of their own.
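One way to describe a dependency by its behaviour is to put your own interface in front of it and keep the real library behind a thin adapter. A minimal sketch, assuming a hypothetical storage SDK (the `BlobStore` interface and the vendor client are both invented for illustration):

```typescript
// The application owns this description of the behaviour it needs.
interface BlobStore {
  put(key: string, data: Uint8Array): Promise<void>;
  get(key: string): Promise<Uint8Array | undefined>;
}

// A thin adapter is the only place that knows about the vendor's SDK.
// The vendor client and its API are placeholders for this sketch.
class VendorBlobStore implements BlobStore {
  async put(key: string, data: Uint8Array): Promise<void> {
    // delegate to the vendor client here
  }
  async get(key: string): Promise<Uint8Array | undefined> {
    // delegate to the vendor client here
    return undefined;
  }
}

// An in-memory implementation is enough for tests and exploration,
// and replacing the vendor later only touches the adapter.
class InMemoryBlobStore implements BlobStore {
  private readonly blobs = new Map<string, Uint8Array>();
  async put(key: string, data: Uint8Array): Promise<void> {
    this.blobs.set(key, data);
  }
  async get(key: string): Promise<Uint8Array | undefined> {
    return this.blobs.get(key);
  }
}
```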
Look to open source for guidance
A huge amount of open source software is built in this way. Over the last two decades, as open source has risen to become a dominant software philosophy, open source applications have frequently been composed of code that the authors neither wrote nor understand the internals of. This is a good thing, as it lets everyone focus on getting things done and shipping software, rather than deep-diving into the detail of minor functionality. It's the free-outsourcing-that-works of the software world, and it's good for everybody involved.
As a result, people working with large amounts of open source software started developing package management tools to rationalise the growing chain of dependencies in their applications. These tools also served the purpose of isolating applications from change, with new versions of dependencies only ever picked up at a time the application developer chooses.
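In practice that isolation shows up in the package manifest: dependencies are declared against explicit versions, and nothing moves until someone deliberately bumps them. The example below is an illustrative npm-style manifest with made-up package names - the same idea applies equally to NuGet, Maven, Cargo and the rest:

```json
{
  "name": "my-application",
  "version": "2.3.0",
  "dependencies": {
    "payments-sdk": "1.4.2",
    "logging-adapter": "0.9.1"
  }
}
```

Combined with a lockfile, the application only ever picks up a new version of a dependency when its own developers choose to upgrade.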
Conversely, in enterprise and internal software there's a trend to push dependencies down towards core libraries in order to share code between projects and teams. This is a bad thing. This code, "owned" by the organisation, is much more likely both to be core to the business function and to be depended on directly by application code.
Shared source code and base classes seem like the easy way to have a technology asset shared by a team of people, but in reality they end up as "glue" keeping unrelated applications coupled together due to some subtle correlation in feature requirements.
These dependencies are frequently incoherent "Core" projects that end up as dependency magnets. This code rots, and people are fearful of changing or cleaning it because they simply don't know where it's used. As your organisation or codebase grows, this rotten code, tying your applications together, becomes a liability. Conversely, if you version these common dependencies as smaller libraries of their own, consumed by the application rather than sitting at its foundations, this coupling can be avoided.
I'll deal with the topic in a later chapter, but there's little reason to ever share code between applications that isn't provided in explicitly supplied packages. Shared code file references will couple your applications and restrict their growth. Shared source "core" or "common" projects are an antipattern.
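At the level of a single import, the difference looks something like the sketch below (paths and package names invented). The first form ties every consumer into the same shared source tree; the second consumes a versioned package through behaviour the application itself declares:

```typescript
// "Built with": a shared-source Core project compiled into every application.
// Any change to Core ripples, at build time, into every app that references it.
//   import { BaseMessageHandler } from "../../Core/src/BaseMessageHandler";

// "Built using": a versioned package, consumed through behaviour the
// application owns, and updated only when this application chooses to.
//   import { QueueClient } from "acme-queue-client"; // hypothetical package

interface MessageQueue {
  publish(topic: string, body: string): Promise<void>;
}

// A small adapter over the packaged client; the rest of the application
// sees only the MessageQueue behaviour it declared for itself.
class PackagedQueue implements MessageQueue {
  async publish(topic: string, body: string): Promise<void> {
    // delegate to the packaged QueueClient here
  }
}
```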
The best software is the software not set in stone: the software that can most appropriately react to change, and can most easily be modified. The perfect architecture is the one that makes change safe, painless and fast. Low-level common dependencies impede rapid change.
Why is this similar to inheritance?
An application is "built with" these core business components, whereas open source applications tend to be "built using" external components. This is similar to a class being "inherited from" another class, versus a class being "composed of" some functionality.
The application built with common classes is inextricably coupled to them, and to any other application that is coupled to them. An application "built using" a component relies only on the described functionality of that component, and this description is owned by the application, not the component.
In all practicality, the software is still built using external libraries, packages or DLLs, but the way people treat them is different. External libraries...
- tend to be distributed as binaries rather than as source
- are linked to, rather than built as part of a deployment
- are updated on their own schedule
- don't change due to a change in another consuming system
- that appear volatile are wrapped and pushed to the edges of the system using adapters
- are most frequently used, rather than inherited from, in code
- tend to be smaller and singular in purpose, and as a result, easier to manage and understand

This relationship parallels composition: open source projects are composed of their own application code and a group of packaged, known modules. These qualities are beneficial, as they reduce the friction in making changes to your software.
When you're developing, prefer composing your application "top down" from packages, rather than "bottom up" from shared base classes, and you'll keep your code clean and the intent of your shared libraries distinct.