Tony Marston's Blog - About software development, PHP and OOP

From Oop to Poop, from Excellent to Excrement

from Gold to Garbage, from Glitter to Shitter, from Magic to Tragic

Posted on 1st February 2022 by Tony Marston

Amended on 1st July 2022

Introduction
What is OOP and what are the benefits?
What are "Best Practices"?
What is my definition of "Excellent"?
What is my definition of "Excrement"?
Ideas which score highly on the Faecal Scale (Shit List)
Confusion over what the term "Object Oriented Programming" actually means.
OOP requires a totally different thought process
OO programming is totally different from Procedural programming
What is the meaning of Encapsulation?
What is the meaning of Inheritance?
What is the meaning of Polymorphism?
Databases are not Object Oriented
Confusion about Object-Relation Mappers (ORM)
Confusion about the word "type"
Confusion about the word "state"
Confusion about the word "interface"
Confusion about the words "use", "overuse", "misuse" and "abuse"
Confusion about the words "format", "transform" and "convert"
Confusion about the words "can" and "should"
Confusion about Design Patterns.
Confusion about the words "Information Hiding"
Confusion about the words "responsibility" and "concern"
Confusion about the Single Responsibility Principle (SRP)
Confusion about Separation of Concerns (SoC)
Confusion about the meaning of "God Object"
Confusion about Dependency Inversion and Dependency Injection
Confusion about Inversion of Control (IoC)
Confusion about Simplicity
Using OOP has no benefits until the first rewrite
Conclusion
References
Amendment History
Comments

Introduction

The purpose of this document is to answer my critics who insist on claiming that my methods of implementing the principles of OOP, as shown in my RADICORE framework, are inferior, wrong, impure and "not proper OOP" despite the fact that I can prove that my results are superior to theirs. They claim that I am not following "best practices" when what they actually mean is that I am not following the same set of practices as them. By constantly promoting their questionable versions of "best practices" and their miscomprehension of basic programming principles, by constantly criticising me for having the audacity to voice an opinion which is different from theirs, by constantly promoting dogmatism over pragmatism, these people are steering the next generation of computer programmers down the wrong path. This will, in my humble opinion, prevent them from creating efficient and cost-effective software, and so can have nothing but a detrimental effect on the whole of our industry.

I did not create my RADICORE framework in a single moment of inspiration; it evolved over a period of several decades as I built one database application after another while being part of different teams in different organisations using several different programming paradigms. I was exposed to several different variations of "best practice", but by concentrating on those ideas which produced the best results with the fewest problems I graduated from creating libraries of reusable code to creating frameworks which increased the productivity of all the programming teams which I led. You can read the full story in Evolution of the RADICORE framework.

What is OOP and what are the benefits?

All of today's novice programmers are taught that object-oriented programming languages are better than non-OO languages, but is this true? While they are different (as explained in What is the difference between Procedural and OO programming?) their use does not guarantee that the results will automatically be superior. Only someone who has spent time developing software in a non-OO language and then switched to developing the same type of software in an OO language is actually qualified to make that kind of judgement. I spent the first 20 years of my software career in the development of database applications using non-OO languages such as COBOL and UNIFACE, and the last 20 years developing the same type of application using PHP with its OO capabilities. I know from personal experience that simply using a language that has certain capabilities does not guarantee that you will automatically produce good software; it is how you make use of those capabilities which counts. It is possible for a crap programmer to write crap code in any language, just as it is possible for a good programmer to write good code in the same languages. It is not what you use that counts, it is the way that you use it.

It is simply because I have been developing database applications for 40 years using different languages and different paradigms that I feel qualified to judge whether one language is better than another. Because each new language has new capabilities it should be possible to use those capabilities to be more productive. If this is not the case then you must be doing something wrong.

Switching from one programming paradigm and/or language to another should not be undertaken lightly. There should be some discernible benefits, otherwise you will be taking a step backwards. In some cases your employer may want to switch to a new language as the current one may not provide the facilities that make your applications more attractive to customers. This is why in the 1990s my employer switched from COBOL, with its simple green screen technology, to UNIFACE which provided a GUI that supported additional controls such as radio buttons, checkboxes and dropdown lists as well as being able to access quite easily a variety of relational databases. I switched from UNIFACE to PHP in 2002 after I saw that the future lay in web-based applications, and a particular project convinced me that UNIFACE was too clunky to be of practical use. I looked for a replacement language and chose PHP as it was better suited to the development of web-based applications. I was also very impressed with how easy it was to produce results with simple code.

Using a language with OO capabilities is no good unless you learn how to make proper use of those capabilities. I had heard about this new paradigm called Object-Oriented Programming, but I didn't know exactly what it was nor why it was supposed to be better. I read various descriptions, as shown in What is Object Oriented Programming (OOP)?, but none of them seemed to hit the nail on the head as much as the following:

Object Oriented Programming is programming which is oriented around objects, thus taking advantage of Encapsulation, Polymorphism, and Inheritance to increase code reuse and decrease code maintenance.

Using a new paradigm just because it is different is one thing, but this new object-oriented approach was claimed to be better because it supposedly provided benefits such as:

The power of object-oriented systems lies in their promise of code reuse which will increase productivity, reduce costs and improve software quality.
OOP is easier to learn for those new to computer programming than previous approaches, and its approach is often simpler to develop and to maintain, lending itself to more direct analysis, coding, and understanding of complex situations and procedures than other programming methods.

So much for the promises, but what about the reality? Could OOP live up to all this hype? How easy would it be to write programs which are Object Oriented? Would I actually be able to achieve the objectives which were promised?

I did not go on any training courses to learn about such things as Object-Oriented Programming (OOP), Object-Oriented Design (OOD), Object-Oriented Analysis and Design (OOAD) and Domain Driven Design (DDD). None of these pages existed at that time, so all I had to go on was the PHP manual plus some simple examples of OO code which I found on the internet. This taught me how to create classes and how to use inheritance, but adequate descriptions of polymorphism were nowhere to be found. I eventually found this wikipedia page which identified different flavours such as Ad hoc polymorphism, Parametric polymorphism and Subtyping, but these all confused me as they were concerned with the provision of a single interface to entities of different types. In a database application the business layer does not contain objects of different "types" as they are all of the same type - they are all database tables. Because of this I have a single abstract table class which is then inherited by every one of my 400+ concrete table classes. It was not until I came across the description same interface, different implementation and the description of the Dependency Inversion Principle (DIP) that I realised that my framework contained huge amounts of polymorphism which I use to Inject a Model into a Controller.

My objective was to produce as many reusable components as possible, thus being able to achieve results by having to write less code. This I have done, as shown by the Levels of Reusability which are provided in my framework. However, my critics (of whom there are many) insist that my results are invalid simply because I am not following their interpretations of "best practices". I am results-oriented, not rules-oriented, and my customers pay me for the results I achieve, not the rules which I follow. I refuse to follow their rules for the simple reason that it would degrade the quality of my work.

What are "Best Practices"?

The idea that there exists a single set of "best practices" which all programmers are required to follow is complete and utter nonsense. Different groups have their own ideas on what is best for them, so I choose to follow those ideas which are best for me and the types of application which I write, ideas which have been tried and tested over several decades using several programming languages. This is in keeping with the following definitions:

Best practices are a set of guidelines, ethics, or ideas that represent the most efficient or prudent course of action in a given business situation.

Best practices may be established by authorities, such as regulators, self-regulatory organizations (SROs), or other governing bodies, or they may be internally decreed by a company's management team.

Investopedia

Notice here that it uses the term guidelines and not rules which are set in concrete and must be followed by everybody. Note also the phrase represent the most efficient or prudent course of action in a given business situation - if my business situation is different to yours then why should I follow your practices? I also do not recognise any governing bodies who have the authority to dictate how software should be written. There is no such thing as a "one size fits all" style as each team or individual is free to use whatever style fits them best.

A best practice is a method or technique that has been generally accepted as superior to any alternatives because it produces results that are superior to those achieved by other means or because it has become a standard way of doing things, e.g., a standard way of complying with legal or ethical requirements.

Wikipedia

When it comes to software development the only "requirement" that I recognise is the production of the most cost-effective results for the benefit of the paying customer. The term "cost-effective" includes the ability both to create software in a timely manner and to maintain it afterwards. I have seen code which was written to conform to some of these weird ideas of "best practices", and I can immediately see how convoluted that code is and how far below mine the resulting productivity levels are, so I feel confident in saying that my results are superior, which means that my development methodology, my personal programming style, is also superior to theirs. My personal style is the product of reviewing many different styles over my many decades of experience, and I have elected to adopt those styles which produce the best results and to ignore those which don't.

My reply to these critics can be summed up as follows:

If I were to follow the same methods as you then my results would be no better than yours, and I'm afraid that your results are simply not good enough. The only way to become better is to innovate, not imitate, and the first step in innovation is to try something different, to throw out the old rules and start from an unbiased perspective. Progress is not made by doing the same thing in the same way, it requires a new way, a different way.

If my development methodology and practices allow me to be twice as productive as you, then that must surely mean that my practices are better than yours. If my practices are better then how can you possibly say that your practices are best?

What is my definition of "Excellent"?

Any methodology which helps a programmer to achieve high rates of productivity should be considered a good methodology. This can be done by providing large amounts of reusable code, either in the form of libraries, frameworks or tools, which the programmer can utilise without having to take the time to write his own copy. It should be well understood that the less code you have to write, the less code you have to test and the less time it will take. Being able to produce cost-effective software in less time than your competitors means that your Time to Market (TTM) is quicker and, as time is money, your costs will be lower. Both of these factors will be appreciated by your paying customers much more than the "purity" of your development methodology.

So how much reusable code do I have? Take a look at a simplified diagram of my application structure in Figure 1:

Figure 1 - A simplified view of my application structure


As you should be able to see this is a combination of the 3-Tier Architecture and the Model-View-Controller (MVC) Design Pattern. A more detailed diagram, with explanations, can be found in RADICORE - A Development Infrastructure for PHP.

The ability to write code to create classes and objects would be totally wasted unless you create classes for the right things. It would appear that my original choices were correct because, quite by accident, I created objects which matched the categories identified in the article How to write testable code:

The components in the RADICORE framework fall into the following categories:

It should also be noted that:

This arrangement helps me to provide these levels of reusability. This means that after creating a new table in my database I can do the following simply by pressing buttons on a screen:

Note that the whole procedure can be completed in just 5 minutes without having to write a single line of code - no PHP, no HTML, no SQL. If you cannot match THAT level of productivity then any criticisms that my methods are wrong will always fall on deaf ears.

Note that some people claim that my framework can only be used for simple CRUD applications. These people have obviously not studied my full list of Transaction Patterns which provides over 40 different patterns which cover different combinations of structure and behaviour. I have used these patterns to produce a large ERP application which currently contains over 4,000 user transactions which service over 400 database tables in over 20 subsystems. While some of these user transactions are quite simple there are plenty of others which are quite complex.

What is my definition of "Excrement"?

I have often been told that you shouldn't be doing it that way, you should be doing it this way. When I look at what they are proposing and see immediately that it would have a negative impact on either the elegance of my code and/or my levels of productivity then I simply refuse to follow their "advice". I have yet to see any suggestion which would improve my code, but I have seen many that would degrade it beyond all recognition. Rather than saying "let's run it up the flagpole and see who salutes it" these ideas fall into the category of "let's drop it in the toilet bowl and see who flushes it".

What is the cause of these bad ideas? Ever since I started perusing the internet, reading other people's ideas and posting ideas of my own on various forums, I have come to the conclusion that there are two contributing factors - poor communication and poor comprehension.

These problems are exacerbated by peculiarities in the English language:

This means that when read by a clueless newbie the meaning of a sentence can be ambiguous instead of exact, and if there is any room for ambiguity then some people will always attach the wrong meaning to a word and therefore end up with a corrupted meaning for the sentence which contains that word. If that same person uses different words to explain that corrupted meaning to a third party then it can become even more corrupt. The more people through whom the message passes the more likely it is to become more and more corrupted, just as it does in the children's game called Chinese Whispers.

There are also some other problems which can add to the confusion:


Ideas which score highly on the Faecal Scale (Shit List)

Listed below are some of the stupid ideas, or misinterpretations of good ideas, which I believe can result in crap code:

Confusion over what the term "Object Oriented Programming" actually means.

If you ask 10 different programmers what OOP really means you will get 10 different answers, and almost all of these will be so far off the mark that I am truly amazed their authors can get away with calling themselves "professional OO programmers". I have highlighted some of these goofy ideas in What OOP is NOT.

The only simple description I have ever found for OOP goes like this:

Object Oriented Programming is programming which is oriented around objects, thus taking advantage of Encapsulation, Polymorphism, and Inheritance to increase code reuse and decrease code maintenance.

Note here that I am only describing those three features which differentiate an OO language from a Procedural language. Some people seem to think that OO has four parts, with the fourth being abstraction. I do not agree. If the language manual does not show the keyword(s) which implement one of these parts then that part cannot be a feature of the language, it is only a design concept.

Many other descriptions of the "requirements" of OOP refer to additions which were made at a later date to different OO languages. I do not regard these additions as being part of the founding principles, so I ignore them if I don't like them.

Anybody who takes the time to study my framework in detail, either by reading the copious amounts of documentation or by examining and running the sample application or the full framework which can be downloaded, should very quickly see that the amount of reusable code which is provided means that new user transactions can be created very quickly. More reusable code equates to higher levels of productivity, and methodologies which directly contribute to higher productivity should be regarded as being superior to those which don't.

When I am told that I should be following the same set of rules and practices as other "proper" OO programmers just to "be consistent" and not "rock the boat" I refuse as, in my humble opinion, the dogmatic adherence to a series of bad rules will do nothing but make me "consistently bad". Further thoughts on this topic can be found at:


OOP requires a totally different thought process

I disagree. OO programming and Procedural programming are exactly the same except that the former supports encapsulation, inheritance and polymorphism while the latter does not. They are both concerned with writing imperative statements which are executed in a linear fashion, the only difference being the way in which code can be packaged - one uses plain functions while the other uses classes and objects. Further thoughts can be found at:


OO programming is totally different from Procedural programming

I disagree. A person called Yegor Bugayenko made the following dubious statements in various blog articles:

Inheritance is bad because it is a procedural technique for code reuse. .... That's why it doesn't fit into object-oriented programming.

Inheritance Is a Procedural Technique for Code Reuse

I disagree. Inheritance does not exist in procedural languages, therefore it cannot be "a procedural technique for code reuse". Inheritance is one of the 3 pillars of OOP, so to claim that "it doesn't fit into object-oriented programming" is ridiculous beyond words.

Code is procedural when it is all about how the goal should be achieved instead of what the goal is.

Are You Still Debugging?

I disagree. You are confusing imperative programming, which identifies the steps needed to perform a function, with declarative programming, which expresses the rules that a function should follow without defining the steps which implement those rules.
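To illustrate the difference, here is a minimal sketch in PHP (the data, table name and column names are invented for this example): the loop spells out HOW the result is to be obtained, step by step, while the SQL string merely declares WHAT result is wanted and leaves the "how" to the database engine.

<?php
// sample data for the demonstration
$customers = [
    ['name' => 'Acme',   'status' => 'active'],
    ['name' => 'Bloggs', 'status' => 'inactive'],
];

// Imperative: spells out HOW to obtain the result, step by step.
$active = [];
foreach ($customers as $customer) {
    if ($customer['status'] === 'active') {
        $active[] = $customer;
    }
}

// Declarative: states WHAT result is wanted; the steps are left to
// the database engine.
$sql = "SELECT * FROM customer WHERE status = 'active'";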

A method is procedural if the name is centered around a verb, but OO if it is centered around a noun.

Are You Still Debugging?

I disagree. Classes in the business/domain layer, which represent the Model in MVC, represent entities, and entities are nouns. Methods represent the operations that can be performed on an entity, and operations (functions which can be performed) are always verbs.

Having used both procedural and OO languages for nearly 20 years apiece I have observed the following:

OO programming is exactly the same as procedural programming except for the addition of encapsulation, inheritance and polymorphism.

In his article All evidence points to OOP being bullshit John Barker says the following:

Procedural programming languages are designed around the idea of enumerating the steps required to complete a task. OOP languages are the same in that they are imperative - they are still essentially about giving the computer a sequence of commands to execute. What OOP introduces are abstractions that attempt to improve code sharing and security. In many ways it is still essentially procedural code.

In his paper Encapsulation as a First Principle of Object-Oriented Design (PDF) Scott L. Bain wrote the following:

Object Orientation (OO) addresses as its primary concern those things which influence the rate of success for the developer or team of developers: how easy is it to understand and implement a design, how extensible (and understandable) an existing code set is, how much pain one has to go through to find and fix a bug, add a new feature, change an existing feature, and so forth. Beyond simple "buzzword compliance", most end users and stakeholders are not concerned with whether or not a system is designed in an OO language or using good OO techniques. They are concerned with the end result of the process - it is the development team that enjoys the direct benefits that come from using OO.

This should not surprise us, since OO is rooted in those best-practice principles that arose from the wise dons of procedural programming. The three pillars of "good code", namely strong cohesion, loose coupling and the elimination of redundancies, were not discovered by the inventors of OO, but were rather inherited by them (no pun intended).

What is the meaning of Encapsulation?

I have seen many descriptions of encapsulation which insist on including data hiding which, in my humble opinion, is derived from a misunderstanding of the term implementation hiding. Encapsulation, when originally conceived, did not specify data hiding as a requirement, and when PHP4 was released with its OO capabilities it did not include property and method visibility, so I never used it. Even though it was added in later versions of PHP I still don't use it. Why not? Simply because my code works without it, and adding it would take time and effort with absolutely no benefit.

The true definition of encapsulation is as described in What is Encapsulation? as follows:

In object-oriented computer programming languages, the notion of encapsulation (or OOP Encapsulation) refers to the bundling of data, along with the methods that operate on that data, into a single unit. Many programming languages use encapsulation frequently in the form of classes. A class is a program-code-template that allows developers to create an object that has both variables (data) and behaviors (functions or methods). A class is an example of encapsulation in computer science in that it consists of data and methods that have been bundled into a single unit.

Encapsulation may also refer to a mechanism of restricting the direct access to some components of an object.

Note that the second paragraph includes the word may, so the ideas of object visibility are optional, not a requirement.

My own personal definition of encapsulation is concise and precise:

The act of placing data and the operations that are performed on that data in the same class. The class then becomes the 'capsule' or container for the data and operations. This binds together the data and the functions that manipulate the data.

Note that this definition states that ALL the data and ALL the methods which operate on that data should be defined in the SAME class. Some clueless newbies out there use a warped definition of the Single Responsibility Principle to say that a class should not have more than a certain number of properties and methods where that number differs depending on whose definition you are reading.
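A minimal sketch of this definition in PHP would look like the following (the class, property and method names are invented for illustration; the visibility keywords are omitted from the methods as, in my view, they are optional):

<?php
// ALL the data for this entity and ALL the operations which manipulate
// that data are bundled into a single unit - the class is the "capsule".
class Product
{
    var $fieldarray = [];               // the entity's data

    function setData(array $rowdata)
    {
        $this->fieldarray = $rowdata;   // load the data into the capsule
    }

    function applyDiscount(float $percent)
    {
        // an operation defined in the SAME class as the data it works on
        $this->fieldarray['price'] *= (1 - $percent / 100);
    }

    function getData(): array
    {
        return $this->fieldarray;
    }
}

$product = new Product();
$product->setData(['product_id' => 'ABC1', 'price' => 100.00]);
$product->applyDiscount(10);            // price becomes 90.00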


What is the meaning of Inheritance?

This has a fairly simple definition:

The reuse of base classes (superclasses) to form derived classes (subclasses). Methods and properties defined in the superclass are automatically shared by any subclass. A subclass may override any of the methods in the superclass, or may introduce new methods of its own.

Note that I am referring to implementation inheritance (which uses the "extends" keyword) and not interface inheritance (which uses the "implements" keyword).
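To make that distinction concrete, here is a minimal sketch (with invented names): "extends" pulls in an existing implementation which the subclass may override or augment, whereas "implements" shares nothing but method signatures.

<?php
// Implementation inheritance ("extends"): the subclass REUSES the code
// defined in the superclass, and may override it or add to it.
class DatabaseTable
{
    function getData(string $where): array
    {
        return ["rows matching: $where"];   // shared implementation
    }
}

class Customer extends DatabaseTable
{
    // getData() is inherited as-is; nothing need be written here
}

// Interface inheritance ("implements"): only the method signatures are
// shared - no code at all is reused.
interface Exportable
{
    function export(): string;
}

class Invoice implements Exportable
{
    function export(): string
    {
        return 'every implementer must supply its own body';
    }
}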

While it is possible to create deep inheritance hierarchies this is a practice which should be avoided due to the problems noted in Issues and Alternatives. People who encounter problems say that the fault lies with the concept of inheritance itself when in fact it lies with their implementation. This is why they Favour Composition over Inheritance. In Object Composition vs. Inheritance I found the following statements:

Most designers overuse inheritance, resulting in large inheritance hierarchies that can become hard to deal with. Object composition is a different method of reusing functionality. Objects are composed to achieve more complex functionality. The disadvantage of object composition is that the behavior of the system may be harder to understand just by looking at the source code. A system using object composition may be very dynamic in nature so it may require running the system to get a deeper understanding of how the different objects cooperate.
[....]
However, inheritance is still necessary. You cannot always get all the necessary functionality by assembling existing components.
[....]
The disadvantage of class inheritance is that the subclass becomes dependent on the parent class implementation. This makes it harder to reuse the subclass, especially if part of the inherited implementation is no longer desirable. ... One way around this problem is to only inherit from abstract classes.

This tells me that the practice of inheriting from one concrete class to create a new concrete class, of overusing inheritance to create deep class hierarchies, is a totally bad idea. This is why in MY framework I don't do this and only ever inherit from an abstract class. While inheritance itself is a good technique for sharing code among subclasses, the use of an abstract class opens up the possibility of being able to use the Template Method Pattern, which is described in the Gang of Four book as follows:

Template methods are a fundamental technique for code reuse. They are particularly important in class libraries because they are the means for factoring out common behaviour.

Template methods lead to an inverted control structure that's sometimes referred to as "The Hollywood Principle", that is, "Don't call us, we'll call you". This refers to how a parent class calls the operations of a subclass and not the other way around.

Not only do I *NOT* have any problems with inheritance, but by using it wisely I enjoy both the benefit of sharing the code contained in an abstract class and the added benefit of increasing the amount of code I can reuse by implementing the Template Method Pattern for every method which a Controller calls on a Model.
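By way of illustration, here is a minimal sketch of the pattern in PHP. The class and method names are invented for this example and are not taken from the RADICORE source: the invariant steps live in the abstract class, while empty "hook" methods allow each subclass to insert its own custom behaviour.

<?php
// A fixed skeleton ("template method") defined once in the abstract
// class, with empty "hook" methods which subclasses may override.
// This is the Hollywood Principle: the parent calls the child.
abstract class AbstractTable
{
    function insertRecord(array $rowdata): array
    {
        $rowdata = $this->preInsert($rowdata);    // hook
        // ... validate $rowdata and write it to the database here ...
        $this->postInsert($rowdata);              // hook
        return $rowdata;
    }

    // default hooks do nothing; they exist only to be overridden
    function preInsert(array $rowdata): array { return $rowdata; }
    function postInsert(array $rowdata) {}
}

class Order extends AbstractTable
{
    // only the variant behaviour need be written in the subclass
    function preInsert(array $rowdata): array
    {
        $rowdata['order_date'] = date('Y-m-d');
        return $rowdata;
    }
}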

If I am following the advice of experts by only inheriting from an abstract class and making extensive use of the Template Method Pattern, both of which provide a great deal of reusable code, and you are not, then how can you possibly say that your practices are better?

Some clueless newbies claim that inheritance is now out of date and that I am not following the latest "best practices" by still using it. I dismissed these ridiculous claims in Your code uses Inheritance. Other ridiculous claims can be found at the following:

Inheritance breaks encapsulation.

I read this statement in several places but immediately dismissed it as hogwash as there was no explanation as to how inheritance breaks encapsulation. The author obviously had the wrong end of the stick regarding either one or both of these terms. I eventually found this description in wikipedia:

The authors of Design Patterns discuss the tension between inheritance and encapsulation at length and state that in their experience, designers overuse inheritance. They claim that inheritance often breaks encapsulation, given that inheritance exposes a subclass to the details of its parent's implementation. As described by the yo-yo problem, overuse of inheritance and therefore encapsulation, can become too complicated and hard to debug.

This description immediately tells me that inheritance itself is not the problem; it is the overuse of inheritance, especially when it results in hierarchies which go down many levels, which is the problem. The notion that "inheritance exposes a subclass to the details of its parent's implementation" also strikes me as being hogwash. If you have class "B" which inherits from class "A" then all the methods and properties defined in class "A" are automatically available in class "B". That is precisely how it is supposed to work, so where exactly is the problem? If you encounter problems in code which uses inheritance then I'm afraid the fault lies in your code and not in the concept of inheritance itself.

Inheritance produces tight coupling.

A clueless newbie called TomB criticised my use of inheritance with the following statement:

Your abstract class has methods which are reusable only to classes which extend it. A proper object is reusable anywhere in the system by any other class. It's back to tight/loose coupling. Inheritance is always tight coupling.

I debunked his claim in this article.

Favour Composition over Inheritance.

The first time I came across the Composite Reuse Principle (CRP) several questions jumped into my mind:

I could see no answers to any of these questions, so I dismissed this principle as being unsubstantiated.

This is supposed to be a solution to the problem that Inheritance breaks encapsulation. As that problem does not exist in my framework I have no need for that solution. Object composition would not give me anywhere near the same amount of reusable code with so little effort, and as one of the principal aims of OOP is to increase the amount of reusable code I fail to see why I should employ a method which does the exact opposite.

My main gripe about object composition is that it plays against one of the primary aims of OOP, which is to decrease code maintenance by increasing code reuse. Directly related to this is the amount of code you have to write in order to reuse some other code - if you have to write lots of code in order to take an empty object and include its composite parts, then how is this better than using inheritance, which requires nothing but the single word extends?
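As a rough sketch of the difference in effort (using invented names): inheritance requires nothing but the word "extends", whereas composition requires the containing class to construct each part and to write a forwarding method for every operation it wishes to expose.

<?php
class BaseTable
{
    function getData(string $where): array
    {
        return ["rows matching: $where"];
    }
}

// Reuse via inheritance: the single word "extends" makes every method
// of BaseTable available in Customer.
class Customer extends BaseTable {}

// Reuse via composition: the container must construct its part and then
// write a forwarding method for EVERY operation it wishes to expose.
class CustomerComposed
{
    var $table;

    function __construct()
    {
        $this->table = new BaseTable();
    }

    function getData(string $where): array
    {
        return $this->table->getData($where);   // boilerplate delegation
    }
    // ...and another such wrapper for each additional method...
}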

Another advantage to inheritance is that inheriting from an abstract class enables you to use the Template Method Pattern which the Gang of Four describe as:

Template methods are a fundamental technique for code reuse. They are particularly important in class libraries because they are the means for factoring out common behaviour.

As this powerful pattern cannot be provided with object composition then, for me at least, object composition is a useless idea.


What is the meaning of Polymorphism?

When attempting to discover a meaningful definition of this concept which showed how it could be used to provide reusable code I came across the following statement in this wikipedia article:

In programming languages and type theory, polymorphism is the provision of a single interface to entities of different types or the use of a single symbol to represent multiple different types.

I did not find this description useful at all due to my confusion with the words interface and type. I was even more confused by the different flavours of polymorphism. Which flavours could I use in PHP? Which flavours should I use in PHP? Which flavour provided the most benefits? Instead of attempting to implement polymorphism according to a published definition (which I could not find) I just went ahead and implemented the concepts of encapsulation and inheritance to the best of my ability and hoped that opportunities for polymorphism would appear somewhere along the line.

It just happened that after building my first Page Controller which operated on a particular table class, where the name of that table class was hard-coded into that Controller, I wanted to use the same Controller on a different table class but without duplicating all that code. I quickly discovered a method whereby, instead of hard-coding the table's name, I could inject the name of the table class by using a separate component script which then passes control to one of the standard Page Controllers which are provided by the framework, one for each Transaction Pattern.
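By way of illustration, such a component script need contain nothing more than the identity of the Model (and View) before passing control to a shared Page Controller. The variable and file names below are hypothetical, shown only to illustrate the mechanism:

<?php
// A component script: all it does is identify the Model and the View,
// then pass control to one of the standard Page Controllers.
// (names are hypothetical, for illustration only)
$table_id = 'person';                    // the Model (a table class)
$screen   = 'person.detail.screen.inc';  // the View
require 'std.enquire1.inc';              // the shared Page Controller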

Eventually I came across a less confusing description (I forget where) which simply defined polymorphism as:

Same interface, different implementation

This enabled me to expand it into the following:

The ability to substitute one class for another. This means that different classes may contain the same method signature, but the result which is returned by calling that method on a different object will be different as the code behind that method (the implementation) is different in each object.

This meant that, because of the architecture which I had designed, and the way in which I implemented it, I had stumbled across, by accident and not by design, an implementation of polymorphism which provided me with enormous amounts of reusable components. The mechanics of the implementation are as follows:

In this way I satisfy the "same interface" requirement by having the same methods available in every concrete table class via inheritance from an abstract table class.

I satisfy the "different implementation" requirement by having the constructor in each concrete table class load in the specifications of its associated database table, with various "hook" methods being available in every subclass to provide any custom processing.
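Put together, the mechanics can be sketched as follows (simplified, with invented names; a real table class would load its specifications and query the database):

<?php
// "Same interface": every concrete table class inherits the same
// methods from a single abstract table class.
abstract class AbstractTable
{
    var $tablename;

    function getData(string $where): array
    {
        // a real implementation would query the database
        return ["SELECT * FROM {$this->tablename} WHERE $where"];
    }
}

// "Different implementation": each subclass identifies its own table.
class Person extends AbstractTable
{
    function __construct() { $this->tablename = 'person'; }
}

class Invoice extends AbstractTable
{
    function __construct() { $this->tablename = 'invoice'; }
}

// One Page Controller can be given ANY of the table classes.
function enquireController(AbstractTable $model, string $where): array
{
    return $model->getData($where);      // the polymorphic call
}

print_r(enquireController(new Person(),  "person_id = 'AB123'"));
print_r(enquireController(new Invoice(), "invoice_id = 42"));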

This means that if I have 45 Page Controllers which can be used with any of my 400 table classes then I have 18,000 (yes, EIGHTEEN THOUSAND) opportunities for polymorphism. How many do you "experts" have?

Yet there are still some clueless newbies out there whose understanding of polymorphism is so warped that they have the audacity to criticise my implementation. If you take a look at What Polymorphism is not you will see false arguments such as:


Databases are not Object Oriented

Many years ago I remember someone saying in a newsgroup posting that Object Oriented Programming (OOP) was not suitable for database applications. As "evidence" he pointed out the fact that relational databases were not object oriented, that it was not possible to store objects in a database, and that it required the intervention of an Object Relational Mapper (ORM) to deal with the differences between the software structure and the database structure. I disagreed totally with this opinion for the simple reason that I had been using OOP to build database applications for several years, and I had encountered no such problems. As far as I was concerned it was the blind following of OO theory which was the root cause of this problem. OO theory states that, in a database application, it is the design of the software which takes precedence and that the design of the database should be left until last as it is nothing more than "an implementation detail". You then end up with the situation that, after using two different design methodologies, you end up with two parts of your application which are supposed to communicate seamlessly with one another yet cannot because they are incompatible. This incompatibility is so common that it has been given the name Object-Relational Impedance Mismatch, for which the only cure is to employ that abomination called an Object-Relational Mapper (ORM).

As far as I was concerned it was not that OOP itself was not suitable for writing database applications, it was his lack of understanding of how databases work coupled with his questionable method of implementing OO concepts in his software which was causing the problem. You have to understand how databases work before you can build software that works with a database. I had the distinct impression that all these "rules" for OOP were written by people who had never written large numbers of components in a database application and therefore had no clue as to how to achieve the best result.

I was designing and building database applications for 20 years before I switched to an OO language, so I knew how to design databases by following the rules of data normalisation. I also learned, after attending a course on Jackson Structured Programming (JSP), that it was far better to design the database first, then design the software to fit that database structure, than it was to design the software by completely disregarding the database structure. This echoes the words of Eric S. Raymond, the author of The Cathedral and the Bazaar, who put it like this:

Smart data structures and dumb code works a lot better than the other way around.

The incompatibilities between OO design and database design manifest themselves in the different ways they deal with such things as associations, aggregations and compositions which are described in UML modeling technique as follows:

How you identify and then deal with the classes which are affected by these relationships is then covered by the following:

The wrong way to deal with IS-A relationships

This topic is described in this wikipedia article. It comes into play when you recognise the situation where something IS-A something else, in which case you are supposed to create an inheritance hierarchy where something else is the superclass and something is the subclass. Below are some examples which I have encountered:

The wrong way to deal with HAS-A relationships

This topic is described in this wikipedia article. It comes into play when you recognise the situation where a thing HAS-A (is a container for) one or more other things, in which case the aim is to create a compound object which deals with several entities instead of the usual one entity. This can be quite complex as there can be several different types of containment:

Note that in databases the term "relationship" is used only to associate one table with another, not to associate a table with its columns. While a table may be treated as an object/entity its columns may not - they are treated as primitive values and not objects with properties and methods.
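To illustrate this point, here is a minimal sketch (with invented names) in which the table is represented by an object while its columns remain primitive values carried in a single associative array:

<?php
// The table is the object; its columns are NOT objects in their own
// right but primitive values held in a single associative array.
class CustomerTable
{
    var $fieldarray = [];

    function setData(array $rowdata)
    {
        // one array carries the whole row - no getter/setter per column
        $this->fieldarray = $rowdata;
    }

    function getData(): array
    {
        return $this->fieldarray;
    }
}

$customer = new CustomerTable();
$customer->setData(['customer_id' => 123, 'name' => 'Acme', 'credit_limit' => 500.00]);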

Below are some examples which I have encountered: