This is the exact opposite of the KISS principle which dates from the 1960s, and the Do the Simplest Thing That Could Possibly Work principle which dates from 2001 with the creation of the Agile/Extreme Programming movement. It may also be known as KICK (Keep It Complex, Knucklehead).
Its followers believe that if a job really were that simple then anybody could do it, and they would either be on a much lower salary or made redundant altogether. It is therefore in their own interests to create as many obstacles as possible and make the job as hard as possible so as to keep the number of practitioners artificially low and their salaries artificially high. They do this by creating ever more obscure and impractical rules and branding anyone who refuses to follow those rules as a heretic, an outsider, "not one of us", someone who should be ignored or even shunned.
This "let's-make-it-more-complicated" principle is prevalent in the world of Object Oriented Programming which, when it was started, was supposed to deliver on various promises, such as:
OOP is easier to learn for those new to computer programming than previous approaches, and its approach is often simpler to develop and to maintain, lending itself to more direct analysis, coding, and understanding of complex situations and procedures than other programming methods.
As a long-time practitioner of several of those previous paradigms - I programmed for 16+ years in COBOL and 10+ years in UNIFACE - I therefore expect, nay demand, that it live up to that promise. I only write enterprise applications which do nothing but put data into and get data out of a database (usually relational) using electronic forms, and having written thousands of user transactions in dozens of different applications I have gradually refined my development process so that I can now create new transactions with basic functionality very quickly indeed. What used to take weeks to achieve in COBOL I reduced to days in UNIFACE and then minutes in PHP, but for the OO purists this is still not good enough. A common argument I hear is this:
If you have one class per database table you are relegating each class to being no more than a simple transport mechanism for moving data between the database and the user interface. It is supposed to be more complicated than that.
Why on earth should it be more complicated than that? What you have described is the basic pattern for every user transaction in every database application that has ever been written. Data moves between the User Interface (UI) and the database by passing through the business/domain layer where the business rules are processed. This is achieved with a mixture of boilerplate code which provides the transport mechanism and custom code which provides the business rules. All I have done is build on that pattern by placing the sharable boilerplate code in an abstract table class which is then inherited by every concrete table class. This has then allowed me to employ the Template Method Pattern so that all the non-standard customisable code can be placed in the relevant "hook" methods in each table's subclass. After building a user transaction it can be run immediately to access the database, after which the developer can add business rules by modifying the relevant subclass.
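The Template Method arrangement described above can be sketched as follows. This is a simplified Python illustration, not the actual PHP framework; the class and method names (AbstractTable, insertRecord, _cm_validateInsert) are invented for the sketch:

```python
# A minimal sketch of the Template Method pattern described above.
# All names here are hypothetical illustrations, not a real API.

class AbstractTable:
    """Boilerplate shared by every database table class."""

    def insertRecord(self, fieldarray):
        # Template method: a fixed sequence of steps for an INSERT.
        fieldarray = self._cm_validateInsert(fieldarray)   # hook
        self._dml_insert(fieldarray)                       # boilerplate
        return fieldarray

    def _cm_validateInsert(self, fieldarray):
        # "Hook" method: does nothing by default, but may be
        # overridden in a concrete subclass to add business rules.
        return fieldarray

    def _dml_insert(self, fieldarray):
        # Stand-in for the real code that builds and runs the SQL.
        print(f"INSERT INTO {self.table_name}: {fieldarray}")


class Customer(AbstractTable):
    """Concrete table class: inherits all the boilerplate."""
    table_name = "customer"

    def _cm_validateInsert(self, fieldarray):
        # A custom business rule plugged into the predefined hook.
        if not fieldarray.get("email"):
            raise ValueError("a customer must have an email address")
        return fieldarray


Customer().insertRecord({"name": "Acme Ltd", "email": "info@acme.example"})
```

The point of the sketch is that the subclass contains nothing but its table name and its business rules; the transport mechanism is inherited, so a new table class works immediately and is customised afterwards.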
Some developers still employ a technique which involves starting with the business rules and then plugging in the boilerplate code. My technique is the reverse - the framework provides the boilerplate code in an abstract table class after which the developer plugs in the business rules in the relevant "hook" methods within each concrete table class. Additional boilerplate code for each task (user transaction, or use case) is provided by the framework in the form of reusable page controllers.
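The idea of a reusable page controller can be sketched in the same simplified style. Again this is an illustrative Python sketch with invented names, assuming only what the text describes - that one controller services any table class:

```python
# A sketch of a reusable page controller: the same controller
# services any table class, so the per-task boilerplate is written
# once in the framework rather than once per user transaction.
# All names here are hypothetical illustrations.

class AbstractTable:
    def getData(self, where):
        # Boilerplate stand-in for "SELECT ... FROM table WHERE ..."
        return [{"table": self.table_name, "where": where}]

class Customer(AbstractTable):
    table_name = "customer"

class Product(AbstractTable):
    table_name = "product"

def list_controller(table, where=""):
    """Reusable 'LIST' controller: works with any table class."""
    rows = table.getData(where)
    # In a real application the rows would be handed to a view
    # component to build the HTML output.
    return rows

# The same controller is reused for every table:
print(list_controller(Customer()))
print(list_controller(Product(), "price > 10"))
```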
I have been building database applications for several decades in several different languages, and in that time I have built thousands of programs. Every one of these, regardless of which business domain it is in, follows the same pattern in that it performs one or more CRUD operations on one or more database tables aided by a screen (which nowadays is HTML) on the client device. This part of the program's functionality, the moving of data between the client device and the database, is so similar that it can be provided using boilerplate code which can, in turn, be provided by the framework. Every complicated program starts off as a simple program which is then expanded by adding business rules which cannot be covered by the framework. The standard code is provided by a series of Template Methods which are defined within an abstract table class. This then allows any business rules to be included in any table subclass simply by adding the necessary code into any of the predefined hook methods. The standard, basic functionality is provided by the framework while the complicated business rules are added by the programmer.
The trouble with a lot of people nowadays is that they confuse "simple" with "not clever". Albert Einstein, a true genius of the 20th century, had this to say on the topic:
Everything should be made as simple as possible, but not simpler.
Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius - and a lot of courage - to move in the opposite direction.
Abelson and Sussman, in their book Structure and Interpretation of Computer Programs which was first published in 1985, wrote this:
Programs must be written for people to read, and only incidentally for machines to execute.
Martin Fowler, the author of Patterns of Enterprise Application Architecture (PoEAA) wrote:
Any fool can write code that a computer can understand. Good programmers write code that humans can understand.
Another person, whose name escapes me at the moment, wrote:
Any idiot can write code that only a genius can understand. A true genius can write code that any idiot can understand.
The mark of genius is to achieve complex things in a simple manner, not to achieve simple things in a complex manner.
Being a good programmer is not just a simple matter of following the rules or ideas laid down by other programmers in the mistaken belief that it will automatically make you as good as them; you have to be smart enough to know when a particular rule or idea is relevant to your current circumstances, and how to employ it to improve your code. Just because the GoF book contains 23 design patterns does not mean that it would be a good idea to put all 23 patterns in a single program. This is what Erich Gamma, one of its authors, had to say in How to Use Design Patterns when he heard of such programmers:
Trying to use all the patterns is a bad thing, because you will end up with synthetic designs - speculative designs that have flexibility that no one needs. These days software is too complex. We can't afford to speculate what else it should do. We need to really focus on what it needs.
This sentiment was echoed in the blog post When are design patterns the problem instead of the solution? in which T. E. D. wrote:
My problem with patterns is that there seems to be a central lie at the core of the concept: The idea that if you can somehow categorize the code experts write, then anyone can write expert code by just recognizing and mechanically applying the categories. That sounds great to managers, as expert software designers are relatively rare.
The problem is that it isn't true. You can't write expert-quality code with only "design patterns" any more than you can design your own professional fashion designer-quality clothing using only sewing patterns.
The world is full of far too many dogmatic programmers who believe that it is only by following every rule to the letter that acceptable software can be produced. By acceptable I mean acceptable to them. At the other end of the spectrum is the pragmatic programmer whose sole aim is to produce cost-effective software which is acceptable to the customer who will be paying the bill. This type of programmer will use his skill and experience to achieve his goals, and is wise enough to know when, where and how to apply a rule and when to ignore an inappropriate rule completely.
A problem I have with this "to the letter" approach is that virtually all of the principles of OOP are so badly defined that they are open to large amounts of interpretation, and therefore misinterpretation. It is therefore impossible to follow every interpretation of every rule, so no matter what you do there will always be someone somewhere saying "you are wrong!"
Copying what the experts do without understanding exactly what it is they are doing, and why, can lead to a condition known as Cargo Cult Programming or Cargo Cult Software Engineering. Just because you use the same procedures, processes and design patterns that the experts use does not guarantee that your results will be just as good as theirs. If you follow the same set of practices as those around you it means that you are jumping on the bandwagon or joining a fashionable cult. It does not automatically mean that you will produce perfect code, it just means that you will be echoing the mistakes of others without realising that they are mistakes. In this context the word "mistake" is not meant to mean code that does not work at all, but code that is more complicated than it need be, which then makes it more difficult to read, to understand and therefore to maintain.
My software works, therefore it cannot be wrong. I can create new components at a faster rate and with more features than you can, therefore my methodology is better than yours. If I can do this by ignoring your over-complicated rules, then perhaps you should consider the fact that it is either those silly rules, or your silly interpretation of those silly rules, or even your silly implementation of your silly interpretation of those silly rules, which is wrong.