
SOLID principles: are they enough for OO?

01.02.2013

Developers have been fighting for a long time over the right way to develop software. A more useful exercise can be to try to define one approach without passing moral judgment on its correctness.

Carlo Pescio tried to do so with his Ask not what an object is, but... article, which sparks interesting thoughts about how to make decisions while designing an object-oriented structure. Again, this is not about pushing objects as the only ethical way to make a living; it is more about defining a common understanding of OOP based on an underlying model of software, instead of on advocacy and hype for a technology, language, or paradigm.

Background

In Carlo Pescio's model of software development, there are several spaces (in the mathematical sense of the term) where entities live:

  1. A decision space, where design decisions about the function (features) and form of the software are made.
  2. An artifact space, where the necessary classes and functions are created during implementation.
  3. A runtime space, where memory is instantiated, processes are started, and CPUs get some work.

Moreover, it's important to consider the concept of entanglement between spaces, a more pervasive definition of coupling:

Two clusters of information are entangled when performing a change on one immediately requires a change on the other.

For example, an encoder and decoder for an MPEG format exhibit very low coupling: they do not share code at all, and may even be executed on different types of processors. Yet they are entangled, because a change in one always forces a change in the other; in this case, there is a standard in place to avoid maintenance hell. For more on the concept, and on the dampening strategies for the kinds of entanglement that bring high risk, read the original post.

For the purpose of this post, it suffices to say that entanglement can come in various forms: Create, Read, Update, and Delete. If a new feature (C) in the decision space makes you modify a procedure (U) in the artifact space, we say there is a C/U entanglement between the decision and artifact spaces.

The definition of an OO structure is that of a structure that minimizes the weighted (by probability) sum of */U between the decision and the artifact space; */C and */D entanglements are favored. -- Original
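
To make the quoted definition concrete, here is a minimal Java sketch (the domain and all names are hypothetical, not taken from Pescio's article) contrasting the two shapes of entanglement: in the first design a new export format is a C/U entanglement, in the second it is a C/C.

    // Hypothetical example: a new export format decided by the business (C)
    // forces an Update of this method, the C/U entanglement we want to minimize.
    class SwitchingExporter {
        String export(String title, String format) {
            switch (format) {
                case "csv":  return title + ";csv";
                case "json": return "{\"title\":\"" + title + "\"}";
                default:     throw new IllegalArgumentException(format);
            }
        }
    }

    // The same decision mapped to a C/C entanglement: a new format is a new
    // class, and the existing artifacts are created once and never touched again.
    interface ReportFormat {
        String render(String title);
    }
    class CsvFormat implements ReportFormat {
        public String render(String title) { return title + ";csv"; }
    }
    class JsonFormat implements ReportFormat {
        public String render(String title) { return "{\"title\":\"" + title + "\"}"; }
    }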

Some point out that this definition is very similar to the OCP, and we'll see a comparison with all the SOLID principles later. Others point out that this definition favors extensibility (or maintainability) over other non-functional properties, but Pescio himself recognized that OO may be at odds with, or simply ignore, some properties of your desired system. When was the last time you said, "I'm glad we used OO here, because without it the security/scalability/usability of this application would have been much worse"?

Exercise: SOLID

How do the SOLID principles conform to or disagree with this definition? Depending on who you talk to, these principles are either fundamental, astronautical, nice-to-have, or obsolete.

The SRP states: a class must have a single reason to change. This is in line with minimizing updates to the artifact space, confining a change in features or a refactoring to a single source file.
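
As a minimal illustration (a hypothetical domain of my own, not from the article), each class below owns exactly one reason to change, so an update in the decision space lands in a single source file:

    // Reason to change #1: the tax rules.
    class TaxCalculator {
        double taxFor(double amount) {
            return amount * 0.22; // assumed flat rate, purely illustrative
        }
    }

    // Reason to change #2: how invoices are persisted.
    class InvoiceArchive {
        void store(double amount, double tax) {
            // persistence details live here and only here
            System.out.printf("archiving %.2f (+%.2f tax)%n", amount, tax);
        }
    }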

The OCP is very much consistent with this definition. However, the original version of the principle talks about closing down a class after it has been developed and reusing or extending it only through inheritance; it was later redefined in terms of interfaces and new implementations.

The definition takes the principle to the level of the system as a whole, not just classes, interfaces, and the other materials used to build the application. Would you talk about bricks (or walls, for a higher level of abstraction) to an architect of skyscrapers? Sure, they are necessary and part of his knowledge, but not the most important thing to discuss. Talking about single objects does not scale to large structures: you end up building pyramids by piling up bricks with brute force instead of gothic cathedrals by architecting a structure, as Alan Kay put it.

The OCP is also weighted by the probability of changes in this definition: you have to choose strategic closures, as you can't protect against every possible update at once. The goal is weighted minimization, not total elimination. It's usually perfectly fine to modify a single class for a new feature, and it's preferable to a wave of changes that ripples throughout the codebase.
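
Here is a hedged sketch of what a strategic closure could look like, assuming a hypothetical domain where I judge new discount policies probable and rounding rules stable:

    // We bet that new discount policies are probable: that axis is closed
    // behind an interface, so a new policy is a new class (*/C).
    interface DiscountPolicy {
        double apply(double price);
    }

    class Checkout {
        private final DiscountPolicy policy;
        Checkout(DiscountPolicy policy) { this.policy = policy; }

        double total(double price) {
            return roundToCents(policy.apply(price));
        }

        // We bet rounding rules are stable: if they ever change, we accept
        // a direct modification here, a low-probability */U.
        private double roundToCents(double value) {
            return Math.round(value * 100) / 100.0;
        }
    }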

The LSP says that objects in a program should be replaceable with instances of their subtypes without altering correctness. Again, we are dampening changes to a single object by saying that subclasses or new implementations should only replace that single object, lowering entanglement with other parts of the system.
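
A small sketch of that substitutability, with hypothetical Shape types: totalArea() stays correct for every current and future subtype that honors the contract, so introducing one replaces a single object and nothing else.

    import java.util.List;

    interface Shape {
        double area(); // contract: returns a non-negative value, no side effects
    }

    class Rectangle implements Shape {
        private final double width, height;
        Rectangle(double width, double height) { this.width = width; this.height = height; }
        public double area() { return width * height; }
    }

    class Circle implements Shape {
        private final double radius;
        Circle(double radius) { this.radius = radius; }
        public double area() { return Math.PI * radius * radius; }
    }

    class Measurements {
        // Correct for any Shape that honors the contract, present or future.
        static double totalArea(List<Shape> shapes) {
            return shapes.stream().mapToDouble(Shape::area).sum();
        }
    }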

The ISP says that it is bad to depend on fat interfaces, and that role interfaces containing just the methods that other classes depend on lower coupling. What lowers coupling also lowers entanglement (though not as much as we would like), so the ISP may be necessary but not sufficient.
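
For example (hypothetical names), role interfaces keep each client entangled only with the role it actually uses:

    // Fat interface avoided: each role is its own small interface.
    interface Prints { void print(String document); }
    interface Scans  { String scan(); }

    // A multifunction device plays both roles...
    class OfficeMachine implements Prints, Scans {
        public void print(String document) { /* spool the document */ }
        public String scan() { return "scanned page"; }
    }

    // ...but this client depends only on Prints: a change
    // to Scans never forces a change here.
    class ReportPrinter {
        private final Prints device;
        ReportPrinter(Prints device) { this.device = device; }
        void run(String report) { device.print(report); }
    }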

Finally, the DIP says that classes should depend on abstractions instead of on other concrete classes, so that changes in an implementation hopefully stop at the nearest interface instead of propagating through the object graph.
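
A minimal sketch (hypothetical names, not a prescribed implementation): OrderService depends on the Notifier abstraction, so swapping email for another channel is a change that stops at the interface instead of travelling into the service.

    interface Notifier {
        void send(String message);
    }

    class EmailNotifier implements Notifier {
        public void send(String message) { /* SMTP details hidden here */ }
    }

    class OrderService {
        private final Notifier notifier;
        OrderService(Notifier notifier) { this.notifier = notifier; }

        void place(String order) {
            // business logic would go here
            notifier.send("order placed: " + order);
        }
    }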

Thus dampening change is a common property of these principles; however, they should never be followed mechanistically, as they can lower coupling metrics while having no effect on entanglement. The principles minimize some of the roads along which changes can travel: abstractions act like customs checkpoints at the borders. But if you choose abstractions that are not in line with the most probable changes, you're going to continuously add methods to, and remove methods from, your existing interfaces: this has happened to all of us.

The wrong design

I'm forcing myself never to say the word wrong here; I'm just talking about abstractions and structures not aligned with likely updates to the decision space. Often a religious application of the SOLID principles makes someone criticize your wrong design, just because you and your colleague want to protect against different changes. So you introduce a Value Object and instantiate it pervasively because you deem this implementation never to change, while your colleague would inject it to allow for a different class to be used (swap you and your colleague anywhere in this paragraph; both positions are sketched below.)
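
A sketch of the two positions, with a hypothetical Money type; neither is wrong, they just bet on different changes:

    // The Value Object both designs share.
    class Money {
        final long cents;
        Money(long cents) { this.cents = cents; }
    }

    // Position one: instantiate pervasively, betting the class never changes.
    class CartA {
        Money total() { return new Money(4200); }
    }

    // Position two: inject a factory, leaving room for a different
    // class (here, a Money subtype) to be used later.
    interface MoneyFactory { Money of(long cents); }

    class CartB {
        private final MoneyFactory money;
        CartB(MoneyFactory money) { this.money = money; }
        Money total() { return money.of(4200); }
    }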

Incidentally, this is also why I don't like to perform katas for their own sake. If you don't cultivate an objective or set a constraint beforehand, a kata leaves you with no non-functional requirements to satisfy, and with no idea about likely changes such as exist in the real world. Should I protect myself against new rules of bowling, just against the number of rolls changing, or against entirely new games being introduced? This can only be decided by opinion, as there is no context and no client to talk to.

If I got you to read Carlo, this article was worth the time spent on it. :) His is one of those blogs whose new articles go directly onto my Kindle for calm and thoughtful reading.
