Tony Marston's Blog About software development, PHP and OOP

Evolution of the RADICORE framework

Posted on 1st June 2022 by Tony Marston

Amended on 1st November 2022

Introduction
Starting with COBOL
Switching to UNIFACE
Switching to PHP
Design Decisions which I'm glad I made
Practices which I do not follow
How using OOP increased my productivity
From personal project to open source
Building a customisable ERP package
Levels of customisation
Maintaining the unmaintainable
Summary
References
Amendment History
Comments

Introduction

I did not pull the design of my RADICORE framework out of thin air when I started programming in PHP; it was just another iteration of something which I had first designed and developed in COBOL in the 1980s and then redeveloped in UNIFACE in the 1990s. I switched to PHP in 2002 when I realised that the future lay in web applications and that UNIFACE was not man enough for the job. PHP was the first language I used which had Object-Oriented capabilities, but despite the lack of formal training in the "rules" of OOP I managed to teach myself enough to create a framework which increased my levels of productivity to such an extent that I judged my efforts to be a great success. In the following sections I trace my path from being a junior programmer to the author of a framework that has been used to develop a web-based ERP application that is now used by multi-national corporations on several continents.


Starting with COBOL

When I joined my first development team as a junior COBOL programmer we did not use any framework or code libraries, so every program was written completely from scratch. As I wrote more and more programs I noticed that a great deal of code was being duplicated. The only way I found to deal with this when writing a new program was to copy the source code of an existing program which was similar, then change all those parts which were different. It was not until I became a senior programmer in a software house that I had the opportunity to start putting this duplicated code into a central library so that I could define it just once and then call it as many times as I liked. Once I had started using this library it had a snowball effect in that I found more and more pieces of code which I could convert into a library subroutine. This is now documented in Library of Standard Utilities. I also took advantage of an addition to the language by writing a Library of Standard COBOL Macros which allowed a single line of code to be expanded into multiple lines during the compilation process. Later on my personal programming standards were adopted as the company's formal COBOL Programming Standards.

By using standard code from a central library it made each programmer more productive as they had less code to write, and it eliminated the possibility of making some common mistakes. One of the common types of mistake that was eliminated was the failure to keep the definition of certain data buffers, such as those for formsfiles and database tables, in line with their physical counterparts. This was taken care of with the COPYGEN utility which took the external definitions and generated text files which could then be added to a copy library so that the buffer definitions could be included into the program at compile time. Incorporating changes into the software therefore became much easier - change the formsfile or database, run the COPYGEN utility, rebuild the copy library from the generated text files, then recompile all programs to include the latest copy library entries.

One of the first changes I made to what my predecessors had called "best practice" was to change the way in which program errors were reported to make the process "even better". Some junior programmers were too lazy to do anything after an error was detected, so they just executed a STOP RUN or EXIT PROGRAM statement. The problem with this was that it gave absolutely no indication of what the problem was or where it had occurred. The next step was to display an error number before aborting, but this required access to the source code to find out where that error number was coded. The problem with both of these methods was that any open files - and this included the database, the formsfile and any KSAM files - which were not explicitly closed in the code would remain open. This posed a problem if a program failed during a database update which included a database lock as the database remained both open AND locked. This required a database administrator to logon and reset the database. The way that one of my predecessors solved this problem was to insist that whenever an error was detected in a subprogram, instead of aborting right then and there, it cascaded back up the stack to return control to the starting program (where the files were initially opened) so that they could be properly closed. This error procedure was also supposed to include some diagnostic information to make the debugging process easier, but it had one serious flaw. While the MAIN program could open the database before calling any subprograms, each subprogram had the data buffers for each table that it accessed defined within its own WORKING-STORAGE section, but when that subprogram performed an exit its WORKING-STORAGE area was lost.
This was a problem because if an error occurred in a subprogram while accessing a database table then the database system inserted some diagnostic information into that buffer, but when the subprogram returned control to the place from which it had been called then this diagnostic information was lost, thus making the diagnostic information incomplete and virtually useless. This to me was unsatisfactory, so I came up with a better solution which involved the following steps:

This error report showed what had gone wrong and where it had gone wrong using all the information that was available in the communication areas. As it had access to the details for all open files it could close them before terminating. The database communication area included any current lock descriptors, so any locks could be released before the database was closed. Because of the extra details now included in all error reports this single utility helped reduce the time needed to identify and fix bugs.

Up until a particular project in 1985 it was common practice to develop each new application from scratch. This involved creating a single program which had numerous subprograms to deal with each user transaction (use case or unit of work). This then required a hierarchy of menu screens which listed the options which were available in the application and allowed the user to choose one. As the screen size was fixed the number of options in each page was limited. An option could either be a user transaction or another sub-menu. This required that each menu page be hard-coded, which meant that all the menu pages had to be defined and compiled up front, and any changes to these menus required changes to some code which in turn required that the changed code be recompiled and then re-linked into a new version of the program file. Although the application had a logon screen which only authorised users could pass through, every user always saw every option that existed on a menu screen which meant that they could select it. A simple Access Control List (ACL) identified those options which a particular user was allowed to access, but this was only checked after that option was activated. This led to the annoying situation where a user could see an option, but he was only told after he selected it that the option was disallowed.

This all changed in 1986 when a new client insisted on a system of dynamic menus where the menu screens could be changed on-the-fly and where the user could only see those options which he was allowed to access. This required a completely new design, so I spent a few hours on the following Sunday in designing a database structure which could support all these requirements. I began coding it on the Monday, and by Friday it was complete. The main points of this design were:

The client was satisfied that this design met all his requirements, but over time the following enhancements were made:

After that particular client project had ended, my manager, who was just as impressed with my efforts as the client, decided to make this new piece of software the company standard for all future projects as it instantly increased everyone's productivity by removing the need to write a significant amount of code from scratch. This piece of software is documented in the following:


Switching to UNIFACE

In the 1990s my employer switched to UNIFACE, so I rebuilt this framework in that new language. I first rebuilt the MENU database, then rebuilt the components which maintained its tables. After this I made adjustments and additions to incorporate the new features that the language offered. UNIFACE uses a proprietary Integrated Development Environment (IDE) which had a database Repository which consisted of an Application Model from which you could build Form Components for each use case. Inside the Application Model you defined entities (tables), fields (columns), keys (indexes) and relationships. You then ran a process which exported an entity's details from the Application Model to generate the CREATE TABLE script using those details. Using the built-in Graphical Form Painter (GFP) you built a form by starting with a blank page on which you drew a rectangular frame which you then related to an entity in the Application Model. Within this entity frame you painted fields which belonged to that entity. After compiling the form you could run it using the standard function keys to read, write, update and delete occurrences (rows) in that database table. You never had to write any SQL queries as they were generated automatically by the built-in database driver, with a separate driver for each supported DBMS.

UNIFACE was the first language I used which accessed a relational database using a query language called SQL. With COBOL running on Data General mini-computers I had used a hierarchical database called INFOS with a character mode user interface, and with Hewlett-Packard mini-computers I had used a network database called IMAGE (later called TurboIMAGE) and a block mode UI called VIEW (later called VPLUS). Both of these databases used specialist subroutine calls instead of executing a query string, so changing to SQL was quite a leap. While an advantage with UNIFACE was that you did not have to write any SQL queries, the disadvantage was that you could not write any SQL queries. It was not possible to perform any JOIN operations, so instead of a single query such as SELECT ... FROM tableA LEFT JOIN tableB ON (...) you had to define an entity frame for TableB inside the entity frame for TableA in the Graphical Form Painter; at runtime UNIFACE would read a set of occurrences (rows) from TableA, then for each of those rows it would issue a separate read operation for one row from TableB. This is known as the N+1 SELECT Problem and is grossly inefficient, and the only solution was to create a database view for the complex query and define the view in the Application Model. This would then allow UNIFACE to treat the view as if it were an ordinary table, so a single read operation would cause the database to execute a JOIN without UNIFACE being aware of it.
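The cost of the N+1 pattern can be sketched in plain PHP. This is an illustration only, not UNIFACE code: the arrays below are invented stand-ins for real database reads, used purely to count the operations.

```php
<?php
// Hypothetical tables held as arrays so we can count the read operations.
$tableA = [
    ['id' => 1, 'b_id' => 10],
    ['id' => 2, 'b_id' => 20],
    ['id' => 3, 'b_id' => 10],
];
$tableB = [10 => ['name' => 'first'], 20 => ['name' => 'second']];

// The N+1 pattern: one read for the outer set, then one read per outer row.
$reads  = 1;            // the initial read of TableA
$result = [];
foreach ($tableA as $rowA) {
    $reads++;           // a separate read of TableB for EVERY row of TableA
    $result[] = $rowA + ['name' => $tableB[$rowA['b_id']]['name']];
}
echo $reads;            // 4 reads for 3 rows, i.e. N+1

// With a JOIN (or a database view which hides the JOIN) the same result
// set comes back in a single read:
//   SELECT a.id, b.name FROM tableA a LEFT JOIN tableB b ON (b.id = a.b_id)
```

For 3 rows the difference is trivial, but for a list screen reading hundreds of rows it means hundreds of extra round trips to the database.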

I started with UNIFACE Version 5 which supported a 2-Tier Architecture with its form components (which combined both the GUI and the business rules) and its built-in database drivers. UNIFACE Version 7 provided support for the 3-Tier Architecture by moving the business rules into separate components called entity services, which then allowed a single entity service to be shared by multiple GUI components. Each entity service was built around a single entity in the Application Model, which meant that each entity service dealt with a single table in the database. It was possible to have code within an entity service which accessed another database table by communicating with that table's entity service. That new version of UNIFACE also introduced non-modal forms (which cannot be replicated using HTML) and component templates. There is a separate article on component templates which I built into my UNIFACE Framework.

Whilst my early projects with UNIFACE were all client/server, in 1999 I joined a team which was developing a web-based application using recent additions to the language. Unfortunately this was a total disaster as their design was centered around all the latest buzzwords which unfortunately seemed to exclude "efficiency" and "practicality". It was so inefficient that after 6 months of prototyping it took 6 developers a total of 2 weeks to produce the first list screen and a selection screen. Over time they managed to reduce this to 1 developer for 2 weeks, but as I was used to building components in hours instead of weeks I was not impressed. Neither was the client as shortly afterwards the entire project was cancelled as they could see that it would overrun both the budget and the timescales by a HUGE margin. I wrote about this failure in UNIFACE and the N-Tier Architecture. After switching to PHP and building a framework which was designed to be practical instead of buzzword-compliant I reduced the time taken to construct tasks from 2 weeks for 2 tasks to 5 minutes for 6 tasks.

I was very unimpressed with the way that UNIFACE produced web pages as the HTML forms were still compiled and therefore static. When UNIFACE changed from 2-Tier to 3-Tier it used XML forms to transfer data between the Presentation and Business layers, and the more I investigated this new technology the more impressed I became. I even learned about using XSL stylesheets to transform XML documents, but although UNIFACE had the capability of performing XSL transformations it was limited to transforming one XML document into another XML document but with a different format. When I learned that XSL stylesheets could actually be used to transform XML into HTML I did some experiments on my home PC and I became even more impressed. I could not understand why the authors of UNIFACE chose to build web pages using a clunky mechanism when they had access to XML and XSL, which is why I wrote Using XSL and XML to generate dynamic web pages from UNIFACE.


Switching to PHP

I could see that the future lay in web applications, but I could also see that UNIFACE was nowhere near the best language for the job, so I decided to switch to something more effective. I chose PHP as it was designed specifically for building web-based database applications, and I could download and install all the software I needed - PHP, Apache and MySQL - onto my home PC for free. I read the PHP manual, found some online tutorials, proved that it was easy to use and could do what I wanted it to do, then began to rebuild my entire development framework. I had only two objectives to start with:

Before I started work on building my new framework I created a Proof Of Concept in the form of a small Sample Application which had a small database and the scripts to maintain the tables. All the menu buttons were hard coded as all I wanted to do was run a few scripts to maintain the contents of different tables with different relationships, to test the pagination and scrolling mechanism, and to test the mechanism of passing control from one script to another and then back again. This was published in November 2003. After proving that my design was sound I then built the MENU database so that I could store the details of application tasks and users, build menu screens so that users could see what tasks were available and then run them, allocate tasks to roles and then roles to users in order to control which users could access which tasks.

With my previous languages I had to use a special tool to design each page and then compile it before it could be used, which meant that there was no scope for reusability. With web pages each page/document is a text file which is full of HTML tags, and this text file does not need to be compiled before it is sent to the browser, you just build it using strings of plain text and then send it. While it is possible to load a file from disk and send that as a static web page, it is also possible to build it dynamically from scratch at runtime by inserting values into templates. A template could be for the entire page or just parts of a page. When I first started to look at examples of PHP code to see how web pages could be generated I came across two different methods:

While either of these approaches may be suitable for small personal projects with small numbers of simple web pages I knew that they would be too quick and dirty for large enterprise applications with hundreds of pages and therefore unsuitable for my framework. I knew that using some sort of HTML templating system would be more productive than building each page from scratch with PHP, and as I had already proved to myself the effectiveness of XSL transformations I decided not to waste time in looking at other templating systems.

An XSL stylesheet works by transforming XML into HTML where the XSL stylesheet identifies the structure of the HTML output and the XML document provides the data that goes into that structure. The XSL stylesheet is usually static and loaded from disk when needed while the XML document is built in memory from scratch using different rows of data from the database. My chosen method was to let all the database objects finish their processing so that the page controller could then call a separate View object to build the XML document, load the XSL stylesheet from disk, perform the transformation, then send the result to the client's browser. Note that I do not need a separate View object for each Model with any hard-coded table and column names as I use dependency injection to insert an array of objects into the View where it uses a series of polymorphic methods to extract the data that it needs.
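PHP exposes this mechanism directly through its XSL extension. The following is a minimal sketch of the transformation step, assuming ext/xsl is installed; the stylesheet and XML document here are trivial invented stand-ins, not RADICORE's actual files, which are far more elaborate.

```php
<?php
// Build a small XML document in memory, as the View object does with
// real application data.
$xml = new DOMDocument();
$xml->loadXML('<root><person><name>Smith</name></person></root>');

// Load a stylesheet (here from a string for brevity; RADICORE loads
// its stylesheets from disk).
$xsl = new DOMDocument();
$xsl->loadXML('<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <html><body>
      <p><xsl:value-of select="root/person/name"/></p>
    </body></html>
  </xsl:template>
</xsl:stylesheet>');

// Perform the transformation, then the result can be sent to the browser.
$proc = new XSLTProcessor();
$proc->importStylesheet($xsl);
$html = $proc->transformToXml($xml);
echo $html;   // the generated HTML page, containing <p>Smith</p>
```

The stylesheet defines the structure of the output while the XML document supplies the values, so the same stylesheet can render any number of different documents.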

There are two distinct advantages of using XSL transformations to create your HTML pages:

Because I had learned so much from articles and how-tos which other developers had posted on the internet I decided to repay the favour by publishing articles of my own in May 2003 such as:

In August 2003 I published RADICORE - A Development Infrastructure for PHP which documented the entire architecture of my framework, and Transaction Patterns for Web Applications which documented my implementation of a series of patterns which, totally unlike design patterns for which you must build your own implementations, actually provide a series of pre-written and reusable implementations.

In May 2004 I published The Model-View-Controller (MVC) Design Pattern for PHP after a colleague pointed out that my framework contained components which matched the description of the MVC design pattern. I always tell people that this was by accident and not design (pun intended) as I did not read about this pattern and then try to implement it, I simply wrote simple code that worked and then refactored it to make it better. The real clincher was the fact that by developing a single function which converted application data from a PHP array into an XML document, then used an XSL stylesheet to transform it into HTML, I had split my Presentation layer component into separate View and Controller components, with everything in the Business/Domain layer being equivalent to the Model.


Design Decisions which I'm glad I made

When I began to produce my PHP framework I did not base it on any ideas from other people, I simply built upon what I had produced earlier in COBOL and UNIFACE, then modified it according to the additional capabilities offered by the PHP language, namely that of programming with objects. Fortunately for me I did not go on any formal OOP training courses, instead I read the PHP manual and ran through some sample code which I found in various online tutorials and books which I purchased. I learned the mechanics of creating classes and how to share code using inheritance. I struggled initially with the concept of polymorphism as the descriptions were vague and examples were virtually non-existent, but I got there in the end. I say "fortunately" as I later discovered that what was being taught as the "proper" way to implement the principles of OOP was far from being the "best" way. Programming is an art, not a science, so it requires artistic talent and not the slavish following of rules which can be read in books. There is no such thing as "one true design methodology" or "one true programming style", so instead of following the suggestions of others I decided to build my new framework based on my own experience, instincts and intuition. These decisions are identified below:

  1. Using XSL to generate web pages

    The first XSL stylesheet that I created worked specifically for a single web page, but as I built more and more web pages and more and more XSL stylesheets I could see more and more places where I could replace repeating code with a call to a library routine. Fortunately XSL offers the following facilities:

    You can see this in action in The XSL stylesheet for a DETAIL form. You should notice that the table name is hard-coded, as well as the name of every column which is to be displayed on the screen. You may notice in this early example that I am using the ability to add data to the stylesheet by using parameters in the XSL transformation process. I later changed that to place ALL data into the XML file which then made it easier to use my XSL debugger.

    After a period of hard-coding a separate XSL stylesheet for each web page I thought to myself that there must be a better way. I noticed that the only difference between one web page and another was the table name and the list of column names, so I wondered if it could be possible to provide this information inside the XML document where it could then be processed during the XSL transformation. With a bit of experimentation I discovered that it could, so instead of having to define the table and column names inside the XSL stylesheet I now define it outside using the following mechanism:

    This then meant that instead of having to create a separate XSL stylesheet for each web page I could create a small set of reusable XSL stylesheets which provide a common structure with just the differences being described in a screen structure file. I currently have 12 XSL stylesheets which I have used to create over 4,000 web pages in my main ERP application. This means that I do not have any PHP code in my software which spits out any HTML.
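    A screen structure file might look something like the following. Note that this is a hypothetical sketch for illustration only: the array layout and entry names shown here are invented, and RADICORE's real structure files differ in their details.

```php
<?php
// Hypothetical screen structure file: it names the reusable stylesheet,
// the table being displayed, and which columns go where. The generic XSL
// stylesheet supplies the page structure; this file supplies only the
// differences between one page and another.
$structure['xsl_file']       = 'std.detail.xsl';  // which generic stylesheet to use
$structure['tables']['main'] = 'person';          // the table shown in the main zone
$structure['main']['fields'] = [
    ['person_id'  => 'ID'],                       // column name => screen label
    ['first_name' => 'First Name'],
    ['last_name'  => 'Last Name'],
];
```

Because these details travel inside the XML document rather than being hard-coded in the stylesheet, one stylesheet can serve any table.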

    I also decided to load the contents of this file into memory at the start of each script instead of right at the end, thus giving me the opportunity of modifying the structure before it is processed.

    When I eventually added the capability of producing PDF output I found myself adopting the same approach by using a report structure file to identify which bits of application data should go where on the page.

  2. Using the 3-Tier Architecture

    I found that implementing the 3-Tier Architecture using PHP and objects was surprisingly easy as programming with objects is automatically 2-tier to begin with. This is because after creating a class for a business/domain component with properties and methods you must also have a separate component which instantiates that class into an object and then calls whatever methods are required. The business/domain object is what I now refer to as a Model in my infrastructure while the component which instantiates it and calls its methods is what I refer to as a Controller.

    In my prototype implementation I had methods within each table class which accessed the database directly, but when MySQL version 4.1 was released I needed a mechanism to switch between using either the original "mysql_" functions or the improved "mysqli_" functions. All I had to do was to create a separate database class for each different set of functions then modify each table class so that the method which accessed the database then passed control to a separate DBMS object instead. This was easy to do as each table class inherited those methods from an abstract table class which meant that all the changes were confined to that single abstract class. This made it very easy later on to add support for additional DBMS engines, starting with PostgreSQL, then Oracle and later SQL Server.
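    The switching mechanism can be sketched as follows. The class and method names here are invented for illustration, and the DML classes simply return the SQL they would run rather than calling a real client API.

```php
<?php
// Each supported DBMS gets its own class behind a common interface.
interface DML
{
    public function getData(string $tablename, string $where): string;
}

class dml_mysqli implements DML
{
    public function getData(string $tablename, string $where): string
    {
        // A real class would call mysqli functions; the SQL is simplified here.
        return "SELECT * FROM $tablename WHERE $where";
    }
}

// The abstract table class delegates all database access to a DML object,
// so adding support for another engine never touches the table classes.
abstract class Default_Table
{
    protected string $tablename;

    public function getData(string $where): string
    {
        $dbms = $this->getDbmsEngine();
        return $dbms->getData($this->tablename, $where);
    }

    protected function getDbmsEngine(): DML
    {
        return new dml_mysqli();   // in reality chosen from a config setting
    }
}

class Person extends Default_Table
{
    protected string $tablename = 'person';
}

$person = new Person();
echo $person->getData("person_id='S01'");
// SELECT * FROM person WHERE person_id='S01'
```

Adding PostgreSQL, Oracle or SQL Server support then means writing one new DML class per engine, with no change to any table class.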

    With the creation of a separate component which used XML and XSL to create all HTML pages I had effectively split my Presentation layer into 2 separate pieces - a Controller and a View - which you should recognise as being parts of the Model-View-Controller design pattern.

  3. What objects should I encapsulate into classes?

    The starting point of OOP is the creation of classes which act as the containers (or capsules) for an entity's properties (data) and methods (operations). You need to create classes so that you can instantiate them into objects, then you can call an object's methods. This leads to the question "How do I identify something for which I should create a class?" This is supposed to be the result of a process called Abstraction which can result in two types of class:

    By putting common methods in an abstract class which is then inherited by multiple concrete classes you then have access to polymorphism. You can then take advantage of polymorphism by using dependency injection.
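    This combination can be sketched in a few lines. The method and class names below are invented for illustration; the point is only that the receiving function never names a table or column.

```php
<?php
// Every table class inherits the same method signatures from one abstract
// class, so any table object responds to the same calls - polymorphism.
abstract class Default_Table
{
    protected array $fieldarray = [];

    public function getFieldArray(): array   // invented name for illustration
    {
        return $this->fieldarray;
    }
}

class Product extends Default_Table
{
    protected array $fieldarray = [['product_id' => 'P01']];
}

class Customer extends Default_Table
{
    protected array $fieldarray = [['customer_id' => 'C01']];
}

// Dependency injection: the View-like function is handed an array of
// objects and asks each one for its data through the shared interface,
// without knowing or caring which concrete class it is dealing with.
function buildDocument(array $objects): array
{
    $document = [];
    foreach ($objects as $zone => $object) {
        $document[$zone] = $object->getFieldArray();
    }
    return $document;
}

$doc = buildDocument(['main' => new Product(), 'detail' => new Customer()]);
```

Swap in any other subclass of the abstract class and buildDocument() works unchanged, which is why one View can serve every Model.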

    To make the situation even more confusing an experienced developer will tell you that there are basically only two types of object:

    Some languages include a 3rd option known as a VALUE OBJECT, but I ignore them as PHP supports only primitive data types. This seems logical to me as neither SQL nor HTML deal with value objects.

    As far as I am concerned entities belong only in the Business/Domain layer while all the other layers should consist of nothing but services. The components in the RADICORE framework fall into the following categories:

    It should also be noted that:

    Object Oriented Programming requires that you first create classes with methods (operations) and properties (data) so you can instantiate them into objects, after which you can call an object's methods to manipulate its properties. The act of creating classes is known as Encapsulation which can be defined as:

    The act of placing data and the operations that perform on that data in the same class.
    To me this means that ALL the data for an object and ALL the operations that can manipulate that data should be placed in a single class. This means the same class. If the lowest form of object in a database is a table then it makes sense, to me at least, to create a separate class for each table. This is also confirmed by the fact that the standard CRUD operations are performed on individual tables, not on individual columns or collections of tables. A "table" is a collection of "columns" which identify the data that is stored for a particular type of entity, and each row in a table represents a different instance of that entity. If my principle of "one class for each database table" is wrong then what are the alternatives? I can only think of the following:

    It was also obvious to me as an experienced developer, but perhaps not so obvious to a clueless newbie, that "all the operations that perform on that data" meant "all the operations that perform on the raw data" (eg: business rules) and not "operations which transform the raw data into another format". Being already familiar with the 3-Tier Architecture I was aware that the code which deals with moving data in and out of the database belongs in the Data Access layer while the code which deals with moving data to and from the user interface belongs in the Presentation layer. All the code which processes the business rules for each entity belongs in the Business layer and is (or should be) totally unconcerned and unaware of what happens to the data in the other layers. The code which transforms data to and from an SQL query does not belong in the Business layer. The code which transforms data from HTML input or HTML/CSV/PDF/Image output does not belong in the Business layer.

    That is why I started my framework by creating a separate class for each database table, and why I am still doing so 20 years later.

    I have subsequently read several articles by people who seem to think that creating a separate class for each database table is totally wrong as database tables do not represent complete objects in the real world, just parts of objects. They say that the rules of OOP require that you create objects which model the real world, which means creating classes that are responsible for handling as many database tables as it takes to represent each real-world entity. These people have got it backwards. Just because you can write software which models the real world does not mean that you should. When you write software which communicates with objects outside of itself it makes sense, to me at least, to communicate with those objects directly instead of indirectly through an intermediary. When you are writing a database application you are writing software which communicates with objects in a database, not objects in the real world, and these database objects are called tables. You do not manipulate any real-world objects, either directly or indirectly through a database table, you simply manipulate the data that you hold on those objects inside tables in your database. Anyone who cannot grasp this simple concept is making a fundamental mistake, and if the foundation of your software is built on a misunderstanding then it won't be long before the cracks begin to show in your application and the entire edifice starts crumbling in front of your eyes.

  4. How do I use inheritance?

    Inheritance is an OO technique for sharing code between classes. You can define a piece of code once in a superclass and inherit it into as many subclasses as you like. That code then "appears in" or "is made available to" the subclass when it is instantiated into an object just as if it was coded directly into the subclass. Note that it is written once and shared many times, not written many times.

    I did not bother trying to create any superclasses until I found some pieces of code which were duplicated and therefore ripe for being shared. In order to create the family of forms for my first database table I created a table class which supported the basic CRUD operations, then created the page controllers which dealt with each of those tasks (use cases, user transactions or units of work). I then wrote the code until every component in this family did what it was supposed to do.

    The fun started when I created another family of forms for the next database table. I duplicated both the page controllers and the table class, then modified the second set of scripts to change all table references from table#1 to table#2. I then created a superclass to hold the shared set of methods and properties, and began moving what was duplicated from the subclasses to the superclass. When I was finished there was nothing left in the subclasses except for the constructor which looked like the following:

    <?php
    require_once 'std.table.class.inc';
    class #tablename# extends Default_Table
    {
        // ****************************************************************************
        // class constructor
        // ****************************************************************************
        function __construct ()
        {
            $this->dbname    = '#dbname#';
            $this->tablename = '#tablename#';
            $this->fieldspec = array(....);
            
        } // __construct
        
    // ****************************************************************************
    } // end class
    // ****************************************************************************
    ?>
    

    This to me is an example of the process of abstraction which is described in The meaning of "abstraction"?

    The act of performing an abstraction means that you separate the abstract from the concrete, the general from the specific. You need to look for patterns of similar characteristics in different objects. You cannot look at a single object in isolation and perform this process, you must look at groups of objects and identify all those characteristics which they have in common and separate out the differences. Everything which is similar can then be classed as abstract as it is non-specific and can be applied to all those objects, while everything which is different is unique to a particular concrete object. In computer software these similarities can be contained in an abstract class while the differences are limited to a concrete subclass. In my framework the abstract table class contains the common characteristics that can be applied to any table subclass while each concrete table class specifies the unique details for a specific database table. This separation between the general and the specific, the similar and the unique, is implemented using the Template Method Pattern where all invariant methods are defined in the abstract class and all variable "hook" methods are defined in each subclass.
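    The separation just described can be shown in a minimal sketch of the Template Method Pattern. Note that the class and method names below are invented for illustration and are not the framework's actual code:

```php
<?php
// A hypothetical abstract class holding the invariant processing flow.
abstract class AbstractTable
{
    // The template method defines the fixed sequence of steps.
    public function process(array $fieldarray)
    {
        $fieldarray = $this->commonStep($fieldarray);  // invariant, shared by all subclasses
        $fieldarray = $this->hookStep($fieldarray);    // variable, overridable per subclass
        return $fieldarray;
    }

    protected function commonStep(array $fieldarray)
    {
        $fieldarray['processed'] = true;  // stand-in for shared behaviour
        return $fieldarray;
    }

    // Default "hook" implementation does nothing.
    protected function hookStep(array $fieldarray)
    {
        return $fieldarray;
    }
}

// A concrete subclass supplies only what is unique to it.
class ProductTable extends AbstractTable
{
    protected function hookStep(array $fieldarray)
    {
        $fieldarray['source'] = 'product';
        return $fieldarray;
    }
}
```

    Everything common lives once in the abstract class; each concrete class contains only its differences.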

    Some developers seem to think that the creation of a single class is the result of performing an abstraction just because you can have multiple instances of the same blueprint. Identifying an object for which the business needs to have its data stored in the database is not a special process which requires special rules, it is as simple as saying "we need to store data on Products, Customers, Orders, etc", creating a database table for each of those objects, then creating a class for each table. The only tricky part is examining the mass of data which you want to store for each of those objects and applying the rules of Data Normalisation which may require the splitting of that data across several related tables (as shown in Figure 4 for example). Once you have created a table the structure is fixed, but each row (instance or occurrence) on that table will have a unique set of values which adhere to that structure.

    Note that although I sometimes create a subclass of a concrete class this is never to create a class for a different table, it is only to provide a different implementation in some "hook" methods. For example in the DICT subsystem I have the following class files:

  5. How do I use polymorphism?

    My early research into Polymorphism was initially unproductive as I found the descriptions to be less than informative. Here is one such description:

    Polymorphism is the ability to send a message to an object without knowing what its type (class) is.

    This to me is rubbish for two reasons:

    Here is another description which I found to be of no use whatsoever:

    Polymorphism is the ability of a message to be displayed in more than one form.

    WTF!! OOP is NOT messaging software, it is NOT about sending messages, and it is certainly NOT about displaying messages. I have seen many other descriptions, but I find them to be just as confusing and less than informative. The most useful description which I eventually found was this:

    Same interface, different implementation. This means that different classes may contain the same method signature, but the result which is returned by calling that method on a different object will be different as the code behind that method (the implementation) is different in each object.

    That immediately told me that my use of an abstract table class which supported the standard CRUD methods which were then shared by every concrete table class was a shining example of polymorphism as every method in the abstract superclass automatically appears in every concrete subclass. Take the following code which appears in several Page Controllers as an example:

    require "classes/$table_id.class.inc";
    $dbobject = new $table_id;
    $fieldarray = $dbobject->getData($where);
    

    The getData() method will produce an SQL query which defaults to the following:

    SELECT * FROM $this->tablename WHERE $where;
    

    The value inside $this->tablename is set within the constructor of each subclass, so what is returned in $fieldarray will be different for each subclass. This clearly shows that calling the same method on different objects will produce different results. In case you have still not grasped the benefit that this provides, it means that the code which calls the getData() method instantly becomes reusable. I can use it hundreds of times with a different value for $table_id and it will produce a different result each time. Because of this each of my 40 page controllers can be used with any of my 400 Model classes. This provides me with 16,000 (40 x 400) opportunities for polymorphism.
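    The reuse which this enables can be illustrated with a small sketch. The class names and return values below are invented for illustration; the point is that one piece of calling code works with any class that implements the same method:

```php
<?php
// Two stand-in table classes sharing the same method signature.
class Product  { public function getData($where) { return "product rows matching $where"; } }
class Customer { public function getData($where) { return "customer rows matching $where"; } }

// The same calling code works with any class that implements getData().
function fetch($table_id, $where)
{
    $dbobject = new $table_id;           // instantiate whichever class was named
    return $dbobject->getData($where);   // same interface, different implementation
}
```

    Calling fetch('Product', ...) and fetch('Customer', ...) runs identical controller code against different implementations, which is polymorphism in action.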

  6. How do I use Dependency Injection?

    Once you have created a number of objects which share a set of common methods you have enabled polymorphism, but how can you take advantage of what this has to offer? The answer is Dependency Injection. The first question is "What is a dependency?". If ModuleA calls a method on ModuleB then ModuleA requires access to ModuleB in order to complete its processing. In other words ModuleA is dependent on ModuleB. ModuleB is not dependent on ModuleA but it is a dependency of ModuleA.

    As an example suppose we have modules M1, M2, M3 and M4 which all share the methods insertRecord(), getData(), updateRecord() and deleteRecord(). In order to call these methods we could have a separate version of the calling module C for each of the objects M1, M2, M3 and M4, such as:

    module C1:
    require 'classes/m1.class.inc';
    $object = new m1;
    $result = $object->insertRecord($_POST);
    ...
    module C2:
    require 'classes/m2.class.inc';
    $object = new m2;
    $result = $object->insertRecord($_POST);
    ...
    module C3:
    require 'classes/m3.class.inc';
    $object = new m3;
    $result = $object->insertRecord($_POST);
    ...
    module C4:
    require 'classes/m4.class.inc';
    $object = new m4;
    $result = $object->insertRecord($_POST);
    

    This means that for each of the M objects you will need a separate version of the C object. That's a lot of duplication, especially if you have 400 versions of the M object. You can make huge savings by having just one version of the C object as follows:

    require 'classes/$module_id.class.inc';
    $object = new $module_id;
    $result = $object->insertRecord($_POST);
    

    This works by using whatever object identity is contained within the variable $module_id. This can be set using code such as the following:

    $module_id = 'm1';  // or 'm2' or 'm3' or 'm4' or 'm999'
    require 'c.inc';
    

    This instantly makes object C (the Controller) reusable with any version of object M (the Model). In my ERP application I have 400 Models and 40 Controllers, so that means I can use the same Controller 400 times instead of having 400 versions. Does that meet the definition of "reusable code"?

  7. What properties should be put in each class?

    As I was used to passing complete rows of data from one component to another I decided against the idea of defining each database column as a separate property in each class, opting instead for a single property called $fieldarray. I thus avoided the need for a collection of getters and setters for each column. This single property could also contain as many or as few columns as I liked, and as many or as few rows as I liked. When I saw the first example of using getters and setters I thought to myself "What a stupid idea! Why should I waste time in unpacking the $_POST array into its component parts and then insert them one column at a time when I can pass in the entire array in one fell swoop?"

    I did not like the idea of having a separate class property for each table column as I could immediately see the disadvantage of having separate bits of code to deal with each individual column. I could also see that it would restrict each object to holding data for just one row, and I knew enough about databases to realise that a database query can return any number of rows, even no rows at all. My experiments with PHP showed that when the various pieces of data are sent from an HTML form on the client to a PHP script on the server they are presented as elements within the $_POST variable which is an associative array. I also noticed that when reading data from the database it appears as another array (in fact an indexed array of associative arrays) with a separate index number for each row. I then asked myself a simple question: if the Presentation layer deals with multiple rows and columns of data in a single array, and the Data Access layer deals with multiple rows and columns of data in a single array, do I need code in the Business layer to deconstruct and reconstruct this array, or can I remove the need for extra code and access the contents of the array directly? Some people may call me an idiot because of my programming style, but wasting time by writing code that I don't need to write seems more idiotic to me.
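    The single-array approach can be sketched as follows. This is an illustration of the idea, not the framework's actual class; the names are invented:

```php
<?php
// A sketch of holding all data in one array property instead of one
// property (with getter and setter) per column.
class Person
{
    protected $fieldarray = array();

    // The whole $_POST array goes in with one call - no per-column setters,
    // and the array may contain any number of columns (or rows).
    public function setData(array $fieldarray)
    {
        $this->fieldarray = $fieldarray;
        return $this->fieldarray;
    }

    public function getFieldArray()
    {
        return $this->fieldarray;
    }
}
```

    A call such as $person->setData($_POST) then accepts however many columns the HTML form happened to send, with no unpacking and repacking in between.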

  8. What methods should I put in each class?

    After deciding that each database table should have its own class the next step was to decide what methods to put in each class. As the only operations that can be performed on a database table, regardless of what data it contains, are Create, Read, Update and Delete (CRUD) I decided to support these four methods in each table class using a standard set of method names - insertRecord(), getData(), updateRecord() and deleteRecord(). The idea of using unique names within each class, such as createCustomer(), createProduct() and createOrder() never occurred to me. Unlike procedural functions which must have unique names within the entire application, with OOP it is possible for the same method name to be duplicated in any number of different classes. This is why I chose $customer->insertRecord(), $product->insertRecord() and $order->insertRecord().

    I noticed during my initial development that there was a lot of boilerplate code involved in each of these methods, so rather than having to duplicate it in each table class I decided to put both the methods and the boilerplate code in an abstract table class so that it could be inherited and therefore shared by each concrete table class. I also decided to include the $fieldarray variable as an input and output argument on each method.

    Other programmers choose to have separate public methods called load(), validate() and store(). This is not a good idea as it allows for more data to be inserted after the validate() has been performed, which could lead to errors during the store(). In my framework I do not treat these as separate operations as they must always be executed together and in a particular sequence. In other words they form a group operation in which they are separate steps within that operation. If you look at either insertRecord() or updateRecord() the load() is performed by passing all the data in as an input argument while the validate() and store() are performed internally. Note that the store() method is only called if the validate() method does not detect any errors. For fans of design patterns this is an example of the Template Method Pattern where the abstract class contains all the invariant methods and allows variable/customisable methods to be defined within individual subclasses.
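    The grouping just described can be shown in a minimal sketch, where the public method performs the whole load/validate/store sequence internally so that no extra data can be inserted between validation and storage. The class and method names below are illustrative, not the framework's actual code:

```php
<?php
// A sketch of grouping load/validate/store into one public method.
abstract class GenericTable
{
    public $errors = array();

    public function insertRecord(array $fieldarray)
    {
        // the "load" is the input argument itself;
        // validate and store are performed internally
        $fieldarray = $this->validateInsert($fieldarray);
        if (empty($this->errors)) {
            $fieldarray = $this->store($fieldarray);  // only if validation passed
        }
        return $fieldarray;
    }

    protected function validateInsert(array $fieldarray)
    {
        if (empty($fieldarray['name'])) {
            $this->errors['name'] = 'Name is required';
        }
        return $fieldarray;
    }

    protected function store(array $fieldarray)
    {
        $fieldarray['stored'] = true;  // stand-in for the actual INSERT query
        return $fieldarray;
    }
}

class Widget extends GenericTable {}
```

    The caller cannot reach store() directly, so the sequence of steps is always executed together and in the correct order.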

    This technique of using common method names in different objects is described in Robert C. Martin's article The Dependency Inversion Principle where his "Copy" program uses multiple device objects each of which supports the same read() and write() methods, but with different implementations for each device object. This is also an example of polymorphism in action. Without polymorphism you cannot have dependency injection.

  9. How do I validate data before it gets written to the database?

    One thing I learned early on in my programming days was to never trust data provided by the user as it could be full of errors, by which I mean values that could not be inserted into the database because they did not match the column's data type. It is better to check each value in the software before it is sent to the database so that you can inform the user when it is wrong and give him the opportunity to correct it instead of having the entire program come to a halt because of a failure with the query.

    I already knew that within the database schema each table's structure contained a list of field names and their data types, so what I needed was a method of validating each field's value against its data type. I was already passing the data around in a single associative array called $fieldarray which contained an array of field names with their values, so it struck me that it would be useful to have a second array of field names and their data specifications, which would then make it possible to write a standard routine to iterate through these two arrays comparing each field's value with its specifications. I started off by writing this list of field specifications by hand, but this became so boring and repetitious I decided to automate it. Just as I had done in my COBOL days with my COPYGEN utility I wrote a program which read each table's structure from the database schema and produced a file of field specifications which could be read into the table's object at runtime. Instead of creating this file directly from the database schema I decided to import it into an intermediate database of my own design called a Data Dictionary from which it could be exported to a disk file. I chose to do it this way as I knew that I would want to include additional details in this structure file that are not available in the database schema. I have functions within my Data Dictionary which enable me to add, and therefore extract, as much additional information as I like.

    This method now means that the framework takes care of both creating the table's structure file and validating the user's data against this structure file without the developer having to write a single line of code. This is what I refer to as primary validation. For the uninitiated this is also an example of declarative programming where I am declaring the rules that need to be followed without actually performing them. This is done later in the framework's validation object.
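    The two-array comparison can be sketched as follows. The specification format shown here is invented for illustration; the framework's actual structure file contains far more detail:

```php
<?php
// A sketch of primary validation: iterate through the field specifications
// and compare each field's value with its declared type and size.
function validateFields(array $fieldarray, array $fieldspec)
{
    $errors = array();
    foreach ($fieldspec as $fieldname => $spec) {
        if (!isset($fieldarray[$fieldname])) {
            continue;  // absent fields are not validated here
        }
        $value = $fieldarray[$fieldname];
        if ($spec['type'] === 'integer' && !is_numeric($value)) {
            $errors[$fieldname] = 'Not a valid number';
        }
        if (isset($spec['size']) && strlen((string)$value) > $spec['size']) {
            $errors[$fieldname] = 'Value is too long';
        }
    }
    return $errors;  // an empty array means the data passed
}
```

    Because the loop is driven entirely by the specifications, the same routine validates any table's data without a single line of table-specific code.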

  10. Where do I put the business rules?

    When I became aware of the 3-Tier Architecture and saw it in action with the UNIFACE language I instantly saw the benefits of splitting the logic of an application into several distinct layers:

    Apart from the fact that the description of the 3-Tier Architecture specifically states that all business rules should reside in the Business layer, the fact that all the objects in the other layers are services and not entities should drive you to the obvious conclusion that business rules do not belong in a service object. Services are (at least in the RADICORE framework they are) pre-built and reusable, which means that they do not contain any information regarding application entities, which includes their business rules.

    When a developer comes to build an application using the RADICORE framework he need only concern himself with building Model classes in the Business/Domain layer as all the other components have been pre-built and come supplied in the framework. All business logic, which includes table structures, validation rules, business rules and task-specific behaviour, is confined to the Business layer and should not exist anywhere else. All the other layers are (or should be) comprised of service objects which should only contain the logic which performs that service.

    If it is necessary to add additional processing rules to any table class then this can be done using any of the available "hook" methods. These only exist because I chose to create an abstract class which then enabled me to implement the Template Method Pattern.

    I have subsequently been made aware that other programmers have totally different ideas on what pieces of logic should go where. There still are debates on whether we should have Fat Models and Skinny Controllers, or Fat Controllers and Skinny Models (with the "fat" identifying where the business rules should exist). I have heard it said that data validation should not be performed within an object as it is wrong to insert data into an object that has not been pre-validated. I have heard it said that each business rule should go into its own class. As far as I am concerned all these different theories, regardless of how clever their arguments appear to be, are violating the basic principles of programming in general and OO programming in particular.

    Putting business rules in the Business/Domain layer is also correct according to Martin Fowler who, in his article AnemicDomainModel, says the following:

    It's also worth emphasizing that putting behavior into the domain objects should not contradict the solid approach of using layering to separate domain logic from such things as persistence and presentation responsibilities. The logic that should be in a domain object is domain logic - validations, calculations, business rules - whatever you like to call it.

  11. How do I insert non-standard or custom code?

    While the framework can take care of all standard processing there will always be times when you will want to perform some additional processing or data validation that cannot be performed automatically. The standard processing flow is handled by the methods in the abstract table class, so what is needed is a mechanism where you can say "when you get to this point in the processing flow I want you to execute this code". This is where my use of an abstract table class provided a simple and elegant solution. My experiments with inheritance had already proved to me that when you inherit from one class (the superclass) into another (the subclass) the resulting object will contain the methods from both classes. The method in the superclass will be executed unless you override it in the subclass. This means that at certain points in the processing flow I can call a method which is defined in the superclass but which does nothing, but if I want to I can copy that method into my subclass and insert whatever code is necessary. This then replaces at runtime a method in the superclass which does nothing with a method in the subclass which does something. To make it easy to identify such methods I give them a "_cm_" prefix which stands for "customisable method". Some of them also include "pre_" or "post_" in the prefix to identify that they are executed either before or after the standard method of that name.

    Here is an example of an empty method in the abstract class:

    function _cm_whatever ($fieldarray)
    // perform custom processing at .....
    {
        // customisable code goes here

        return $fieldarray;
    } // _cm_whatever
    

    Here is some sample code which can be inserted into the subclass to compare the value in one field with that in another field:

        if ($fieldarray['start_date'] > $fieldarray['end_date']) {
            // 'Start Date cannot be later than End Date'
            $this->errors['start_date'] = getLanguageText('e0001');
            // 'End Date cannot be earlier than Start Date'
            $this->errors['end_date']   = getLanguageText('e0002');
        } // if
    

    Note here that errors are indicated by inserting an entry into the $this->errors array and NOT by throwing an exception. Data validation errors can be corrected by the user whereas true exceptions indicate a fault in the code which can only be corrected by changing the code. Another reason is that if you throw an exception it can only report a single error whereas an array can contain as many errors as you encounter.

    It was not until several years later that I discovered that what I had done was to provide an implementation of the Template Method Pattern which, according to the Gang of Four, is one of the most important patterns for a framework.

  12. How do I call the numerous tasks (use cases) within the application?

    In my early COBOL days it was common practice to have a separate program which handled all the aspects of a particular area of business. This resulted in a small number of large programs, each of which handled multiple responsibilities. As discussed in my COBOL experience this method began to generate problems, and after some thought I realised that the simplest and best solution would be to change from having a small number of large programs which handled multiple responsibilities to having a large number of small programs which handled a single responsibility each. This solved all the known issues without creating new ones, so it became a philosophy which I carried forward when I switched from COBOL to UNIFACE and then to PHP.

    I have seen comments from other developers who consider my family of forms to be a single use case which therefore should be covered by a single program component. I disagree most strongly. By treating each of those six operating modes (List, Search, Insert, Update, Delete and Enquire) as a separate component (unit of work, use case or task) I end up with the following advantages:

    1. Each component has its own entry in the TASK table in the MENU database.
    2. Each TASK has its own PHP script in the file system which can be activated by its own URL in the browser.
    3. Each task can then be added to the relevant MENU or NAVIGATION-BUTTON table in the MENU database.
    4. Any number of ROLES can be created so that TASKS can be made accessible to those ROLES via the ROLE-TASK table.
    5. USERS of the application can be assigned to any number of ROLES via the USER-ROLE table which then allows a user to access specified tasks.
    6. A USER's access to a task is via either a MENU button or a NAVIGATION button, but when the framework is defining which of those buttons can appear in the current screen it can remove the buttons for those tasks which the user is not allowed to access.

  13. Do I have a separate Controller for each Model?

    A lot of the code samples which I read while experimenting with PHP had a separate Controller for each Model which could only be used with that Model. This was because it had hard-coded references to a particular Model, hard-coded references to properties within that Model (either via getters and setters or argument names on method calls), and hard-coded references to methods which were unique to that Model. This meant having a bespoke Controller for each Model, but I wanted to have something that was more reusable. I had already worked out that in a database application each task, regardless of the data which it manipulates and the complexity of that manipulation, is responsible for performing one or more operations on one or more database tables. In my COBOL days I had already noticed that after writing a program which "did something" with one database table it was often a subsequent requirement to write another program which did exactly the same thing but with a different table. This requirement could only be satisfied by copying the original program then changing all the table references, but this still resulted in a lot of similar code.

    When coding my first PHP Controller it had the name of the class which was instantiated into an object hard-coded within its bowels, so I looked to see if there was a way to remove this hard-coded reference. It only took me five minutes to discover that there was, so instead of having code like this:

    require "classes/product.class.inc";
    $dbobject = new product;
    

    I could replace it with code like this:

    require "classes/$table_id.class.inc";
    $dbobject = new $table_id;
    
    All I then had to do was assign the appropriate value, such as "product" or "person", to the variable $table_id before activating the Controller. This is performed in what I call a component script which looks like the following:

    <?php
    $table_id = "person";                      // identify the Model
    $screen   = 'person.detail.screen.inc';    // identify the View
    require 'std.enquire1.inc';                // activate the Controller
    ?>
    

    You should be able to see at this point the advantages of (a) having a separate class for each database table, (b) using common method names in each table class and (c) passing data in and out in a single array instead of individual properties. For example, I have an ADD1 Controller which adds a single record to a database table, but this Controller contains neither the name of the database table nor the names of any columns on that table. The code which it does contain looks like the following:

    require "classes/$table_id.class.inc";
    $dbobject = new $table_id;
    $fieldarray = $dbobject->insertRecord($_POST);
    if (!empty($dbobject->errors)) {
        $dbobject->rollback();
    } else {
        $dbobject->commit();
    } // if
    

    The insertRecord() method performs several steps in its processing cycle, among which are primary and secondary validation as well as pre-insert and post-insert processing.

    This means that instead of having a separate Controller to handle all the use cases for a particular Model (table class), which would make the Controller unreusable with other Models, I have a separate Controller which handles a single use case for an unspecified Model, which means that each Controller can be used with any Model. It also means that any Model can be used with any Controller, thus making them both reusable.

  14. How many Models can a Controller access?

    It was not long after I started to publish articles on my framework that I was told that my idea of creating Controllers which accessed more than one Model was totally wrong. What that critic failed to understand was that just because he had only seen sample code where a Controller accessed only one Model did not mean that a Controller could only ever access a single Model. Such a restriction has never existed, and those who suggest it can never come up with a valid reason to justify its existence other than "that is the way I was taught". There are numerous situations where what you want to display on the screen comes from more than one database table, such as displaying a sales order where details from the ORDER_HEADER are displayed at the top with rows from the ORDER_LINE table displayed underneath, so treating each of those areas as separate zones which require separate accesses of the database was common practice even as far back as my COBOL days in the 1980s. With UNIFACE it was much the same - you painted an entity frame on the top of the screen to display a single row from the ORDER_HEADER entity below which you created an entity frame for the ORDER_LINE entity which showed multiple rows from the database.

    Because this practice of allowing a screen to be broken down into several zones, each of which dealt with rows from a different database table, had been standard practice for the 20 years before I switched to an OO language, I saw absolutely no reason why I should switch to an alternative practice when no such practice had ever been documented or justified. Just because my critic had not seen it done did not mean that it should not be done.
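    A Controller that deals with two such zones can be sketched as follows. The class names, stub data and function name below are invented for illustration; in a real application each model class would query its own database table:

```php
<?php
// Stub classes standing in for two table classes.
class Order_Header
{
    public function getData($where)
    {
        return array(array('order_id' => $where));  // one header row
    }
}
class Order_Line
{
    public function getData($where)
    {
        return array(array('order_id' => $where, 'line' => 1),
                     array('order_id' => $where, 'line' => 2));  // multiple line rows
    }
}

// One controller, two models: a single row for the top zone of the screen
// and multiple rows for the bottom zone.
function enquire2($outer_id, $inner_id, $where)
{
    $outer = new $outer_id;
    $inner = new $inner_id;
    return array('main'   => $outer->getData($where),
                 'detail' => $inner->getData($where));
}
```

    Because the two model identities are supplied as variables, the same controller can serve any parent/child pairing of tables.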

  15. How many scripts do I have for each task?

    Some of the early code samples which I saw showed a task which performed an insert or update operation using two separate scripts - the first which performed a GET operation to build and display the screen, and a second which was activated by pressing the SUBMIT button which then performed a POST operation to handle both the data validation and the database update. I took an instant dislike to this idea. I much prefer to have all the aspects of each task, both the GET and the POST, handled in a single script.
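    The single-script approach can be sketched as follows. The dispatch is wrapped in a function here purely so the idea can be shown without a web server; in a live script the method would come from $_SERVER['REQUEST_METHOD'] and the data from $_POST:

```php
<?php
// A sketch of handling both phases of a task in one script.
function handleRequest($method, array $post = array())
{
    if ($method === 'POST') {
        // validate the user's input before updating the database
        if (empty($post['name'])) {
            return array('action' => 'redisplay',
                         'errors' => array('name' => 'Name is required'));
        }
        return array('action' => 'update', 'errors' => array());
    }
    // a GET simply builds and displays the screen
    return array('action' => 'display', 'errors' => array());
}
```

    With both phases in one place the display logic and the update logic for a task can never drift out of step with each other.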

    The only exception to this idea is when a field on the screen has to be selected from a list and the contents of this list are either too large or too complex for a dropdown list. In this case I use a separate task, which I first encountered in my UNIFACE days, called a Popup.

  16. How many different documents can a single task produce?

    In my early COBOL days it was common practice to write a single large program which could be switched from one mode to another (from LIST to ADD, for example) which sometimes required a different screen. This practice became obsolete when I decided to switch from a small number of large programs to a large number of small programs each of which handled a single mode with a single screen. Using my PHP framework each task can produce no more than one output document. This document is usually HTML, but could be CSV, PDF or even some other format. In some cases there is nothing output at all as the script is required to do no more than perform some sort of update and then return control to the task from which it was activated. I do not have any tasks where the user can choose what output format he wants while running the task as each task has a fixed format of output. Just as there is one task to output an HTML document in a LIST screen which shows multiple rows going across the page there is another task which outputs a DETAIL screen which shows a single row going down the page. There is also another task which produces CSV output and yet another which produces PDF output.

  17. How do I jump from one task to another?

    In some of the early PHP samples which I examined I saw that the way to jump from something like a LIST form, which showed summary details for multiple database rows which were displayed horizontally across the page, to an ENQUIRE/UPDATE/DELETE form for a selected row which showed the full details was via a hyperlink on each row. I did not like this idea for several reasons:

    To get around these limitations I decided to switch to the POST method which involved the following changes:

    While this code took a bit of time and effort to build I knew that it would be a good investment as it would provide standard functionality that could be used with every new task that I wrote. This has resulted in library functions called scriptNext() and scriptPrevious() which are used extensively in the framework.
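    The framework's actual scriptNext() and scriptPrevious() functions are not reproduced here; the following is only a hypothetical sketch of the underlying idea, a stack of script addresses which records where the user came from so that control can be returned there afterwards:

```php
<?php
// A hypothetical sketch of forward/backward task navigation using a stack.
// In a live application the stack would be held in the session, not passed in.
function scriptNext(array &$stack, $current, $next)
{
    $stack[] = $current;  // remember where we came from
    return $next;         // the script to be activated next
}

function scriptPrevious(array &$stack, $fallback)
{
    if (!empty($stack)) {
        return array_pop($stack);  // return to the previous script
    }
    return $fallback;  // nothing to return to, so fall back (e.g. to the menu)
}
```

    Because every task uses the same pair of functions, "jump forward" and "jump back" behave identically across the whole application.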

  18. What directory structure do I use?

    When I first developed my system of menu and security components I put all the files in a directory called MENU which I placed in the web server's DocumentRoot. Under this I created a series of subdirectories, one for each file type, to contain the various files or scripts required by that system. When I started to write applications which ran under my menu program I decided to put the files into a separate directory so as not to mix them up. I also decided that each application would have its own dedicated database instead of having a single database to contain everything. This now means that I regard the system as a whole as being a collection of interconnected subsystems where each subsystem can share the components of any other subsystem without the need for duplication. The directory structure for every subsystem resembles the following:

    • default
      • classes
      • reports
        • en
        • language2
        • language3
      • screens
        • en
        • language2
        • language3
      • sql
        • logs
        • mssql
        • mysql
        • oracle
        • postgresql
        • sqlsrv
      • text
        • en
        • language2
        • language3
      • xsl

    This means that every subsystem has its own database and all its files in its own subdirectory so that different sets of developers can work on different subsystems at the same time without getting in each other's way. It also makes it very easy to copy an entire subsystem into a single zip file so that you can install that subsystem onto another server. I have seen other frameworks which do not understand the concept of subsystems which means that all the various scripts are intermingled and jumbled up together. I'm glad I do not have to work with such primitive frameworks.

  19. Do I produce UML diagrams for each task?

    When I first encountered a team of developers who insisted on drawing UML diagrams for each and every use case I became increasingly exasperated, as it took longer for them to draw the diagrams than it took me to write the code which implemented them. These diagrams became more complicated than they needed to be and contained a lot of duplication, so as an avid follower of the KISS and DRY principles I wanted something simpler and better. I know that some developers struggle with words alone and sometimes need a pretty picture to clear the fog from their minds, so I looked for the simplest diagram possible which covered as many use cases as possible. I had already determined that every use case, regardless of its complexity, can be boiled down to performing one or more operations on one or more database tables, and that the only operations which can be performed on a database table are Create, Read, Update and Delete (CRUD), so it seemed obvious to me that all I needed to do was create a single set of UML diagrams which covered these four operations. These diagrams can be found in UML diagrams for the Radicore Development Infrastructure. Note that these diagrams clearly show the following:

  20. Building a Data Dictionary

    This again was a huge investment in time and effort which few other developers would make, but I wanted to automate the process by which I extracted each table's structure information out of the database schema and made it available to the PHP code. Doing it manually was tedious, boring and prone to errors, and as I knew that I would be constantly adding new tables to my application database I knew that in the long run the investment would pay dividends. What I basically did was to take an Extract-Load process and extend it into an Extract-Transform-Load process by creating a simple database called a Data Dictionary which sits between the two ends. I then used the RADICORE framework to build the maintenance screens.

    The advantage of using an intermediate data store instead of just copying the contents of the database's INFORMATION_SCHEMA verbatim is that I can add whatever extra information I like to provide additional functionality. At first it was simple things like identifying which HTML control should be used for each column, and for controls like dropdown lists and radio groups which require a list of options the name of the variable which contains those options.
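    The transform step can be sketched like this. The input columns follow the INFORMATION_SCHEMA standard, but the output keys ('type', 'required', 'control' and so on) are illustrative assumptions rather than RADICORE's actual specification.

```php
<?php
// Hypothetical sketch of the "Transform" step: rows read from
// INFORMATION_SCHEMA.COLUMNS are turned into a field specification,
// with room for extra details (such as which HTML control to use)
// which the schema alone cannot provide.

function transformColumns(array $schemaRows, array $controls = []): array
{
    $fieldspec = [];
    foreach ($schemaRows as $row) {
        $spec = ['type' => $row['data_type']];
        if ($row['is_nullable'] === 'NO') {
            $spec['required'] = true;              // NOT NULL becomes "required"
        }
        if (!empty($row['character_maximum_length'])) {
            $spec['size'] = (int) $row['character_maximum_length'];
        }
        // extra information added by hand in the dictionary, not the schema
        if (isset($controls[$row['column_name']])) {
            $spec['control'] = $controls[$row['column_name']];
        }
        $fieldspec[$row['column_name']] = $spec;
    }
    return $fieldspec;
}
```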

    The framework gives the application developer the ability, via the _cm_pre_getData() method, of extending the SQL query beyond the simple SELECT * FROM $this->tablename WHERE .... When dealing with a table which was the child in a parent-child relationship I found myself many times in having to write extra code to include one or more fields in the SELECT list, as in the following example:

    SELECT $this->tablename.*, parent.column1, parent.column2
    FROM $this->tablename
    LEFT JOIN parent ON (parent.primary_key=$this->tablename.foreign_key)
    WHERE ...
    

    The Data Dictionary already contained the basic information regarding each relationship, so I added the parent_field and calc_field columns which then enabled the framework, when constructing the SQL query, to automatically add the parent column(s) to the SELECT list and insert a LEFT JOIN. Yet another example of a little bit of investment in time and effort up front which pays for itself in the long term.
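    As a rough sketch of the idea, assuming a relationship record with keys such as parent_table, parent_field, parent_key and foreign_key (the names are illustrative, not RADICORE's actual dictionary columns):

```php
<?php
// Hypothetical sketch: extend SELECT * with parent columns and a
// LEFT JOIN derived from relationship details held in the Data
// Dictionary. The array keys are assumptions for illustration only.

function addParentJoin(string $tablename, array $relationship): string
{
    $parent = $relationship['parent_table'];

    // add each named parent column to the SELECT list
    $select = "$tablename.*";
    foreach ($relationship['parent_field'] as $field) {
        $select .= ", $parent.$field";
    }

    return "SELECT $select"
         . " FROM $tablename"
         . " LEFT JOIN $parent"
         . " ON ($parent.{$relationship['parent_key']}"
         . " = $tablename.{$relationship['foreign_key']})";
}
```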

    When I built the function which extracted a table's data from the Data Dictionary and made it available to the PHP script I deliberately chose NOT to write it directly to the $fieldspec variable inside the class file. Instead I wrote it to a separate structure file which is loaded into the object using the standard loadFieldSpec() method within the class constructor. This is because the class file may have been amended to include code inside any of the "hook" methods, and I don't want to lose any of those amendments. This also means that at any time after creating the class file for a table I can change the structure of that table and make those changes available to the PHP code with nothing more than two button clicks:

    In this way I can also keep my software structure synchronised with the database structure which in turn means that I do not have to waste any time with that abomination called an Object-Relational Mapper (ORM).
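    The separation between the class file and the structure file can be sketched as follows. The file naming convention and method bodies here are assumptions based on the description above, not RADICORE's actual code.

```php
<?php
// Hypothetical sketch: the table structure lives in a separate file
// which can be regenerated at any time, so hand-written code in the
// class file (such as the "hook" methods) is never overwritten.

class Default_Table
{
    protected $tablename;
    protected $fieldspec = [];

    public function __construct(string $tablename)
    {
        $this->tablename = $tablename;
        $this->loadFieldSpec();   // pull in the generated structure file
    }

    protected function loadFieldSpec(): void
    {
        // the structure file (e.g. person.dict.inc) defines $fieldspec
        // and is regenerated from the Data Dictionary on demand
        $file = $this->tablename . '.dict.inc';
        if (file_exists($file)) {
            include $file;
            $this->fieldspec = $fieldspec;
        }
    }

    public function getFieldSpec(): array
    {
        return $this->fieldspec;
    }
}
```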

  21. Creating a library of Transaction Patterns

    Once you start down the road of building software to automate manual tasks you sometimes find that with all the steps that you have already taken you can take just one more step and add yet another level of automation. And so it was with the RADICORE framework. I had already automated the construction of my Model components via my Data Dictionary, I had already built a reusable set of View, Controller and Data Access Objects, but there were still some manual steps which had become boring and tedious:

    Because I consider a task (user transaction or use case) to be nothing more than performing a set of operations on a database table, where each table is defined within the Data Dictionary and each set of operations is defined within a pre-written and reusable controller script, it turned out to be an easy procedure to automate. I added a new task to my Data Dictionary so that you can select a table, select a pattern, fill in a few fields, then press a button which will generate the scripts and update the MENU database all in one go.

    A full description of all these patterns is provided in Transaction Patterns for Web Applications.

Because the RADICORE framework contains so many pre-written and reusable components, and because I have automated as many of the tedious and boring manual procedures as I can, I have produced a framework which really does provide Rapid Application Development (RAD). If you don't believe me then consider the following - after building a brand new table in my database I can create a standard family of forms to maintain and view the contents of that table in just 5 minutes without having to write a line of code - no PHP, no HTML, no SQL.


Practices which I do not follow

It was not until several years after I had got my framework up and running that some so-called "experts" in the field of OOP informed me that everything I was doing was wrong, and because of that my work was totally useless. When they said "wrong" what they actually meant was "different from what they had been taught" which is not the same thing. The fact that they were taught one way to do things does not mean that it was the ONLY way, the one TRUE way, and that any other way is automatically wrong. When I examined some of these principles and practices more closely I discovered that a large number of them were based on experiences with the Smalltalk language which was built for educational use by a bunch of academics in the 1970s. I have never seen any evidence that this language has ever been used to build database applications for the enterprise, so a lot of the code samples and programming techniques which I have seen are totally irrelevant. When you also consider the large number of different OO languages which have been created in the last 45+ years, each of which was created by people who had a different interpretation of how code should be written, you should realise that not all of these principles are relevant or even practical in all of these languages.

I chose PHP as my new development language as it was designed specifically for building applications with HTML forms at the front-end and a relational database at the back-end. I liked the simple, easy to learn syntax, and my experiments proved that it could do all that I needed it to do. The decisions that I made when constructing my framework were based on my decades of prior experience, mixed with common sense and intuition, which made me follow practices which had proved to be sound.

Below is a list of "best practices" which I refuse to follow simply because, in my humble opinion, they are not actually "best" at all.

  1. I don't model the real world.

    I do nothing but write database applications for businesses, which are also known as enterprise applications, and this type of software does not interact with objects in the real world, it interacts with objects in a database. The sole purpose of these applications is to put data into and get data out of a database, which is why they were originally called Data Processing Systems. It does not matter that each object in the real world has a totally unique set of properties and operations, when data about those objects is stored in a database it is reduced to a set of tables and columns upon which the only operations that can be performed are Create, Read, Update and Delete (CRUD).

  2. I don't use a separate methodology to design my software.

    I know from years of experience that the most important part of a database application is the database design, after which you can then structure your software around that design. Get the database structure right first, then write the software to follow that structure. If your database design is wrong then it will make it more difficult to write the software, or, as Eric S. Raymond put it in his book "The Cathedral and the Bazaar":

    Smart data structures and dumb code works a lot better than the other way around.

    The idea of using different and incompatible design methodologies for the database and the software strikes me as being questionable. The idea of deliberately creating two parts of the application which are incompatible, then getting round this problem by introducing another piece of software known as an Object-Relational Mapper (ORM), strikes me as being incomprehensible. As a devout follower of the KISS Principle I would never dream of doing it that way, not in a million years.

    My framework is built around a combination of the 3-Tier Architecture and the Model-View-Controller (MVC) Design Pattern which means that all application code, all business rules, are confined to the Business/Domain layer, or the Model in MVC. The components in the remaining Presentation and Data Access layers are completely application-agnostic in that they do not contain any business rules or any other knowledge of the application, which means that I have been able to implement them as pre-built and reusable services which need no further design. Every object in the Business/Domain layer is responsible for one object in the database, so because each object "IS-A" database table I created an abstract superclass to hold common behaviour and characteristics from which I can create many concrete subclasses which only need hold the behaviour and characteristics which are specific to one table. Among this information is the table's structure which is extracted directly from the database schema, which means that I can keep my software structure completely synchronised with my database structure.

  3. I don't create deep class hierarchies.

    In OO theory class hierarchies are the result of identifying "IS-A" relationships between different objects, such as "a CAR is-a VEHICLE", "a BEAGLE is-a DOG" and "a CUSTOMER is-a PERSON". This causes some developers to create separate classes for each of those types where the type to the left of "is-a" inherits from the type on the right. This is not how such relationships are expressed in a database, so it is not how I deal with it in my software. Each of these relationships has to be analysed more closely to identify the exact details. Please refer to Using "IS-A" to identify class hierarchies for more details on this topic.

  4. I don't design classes to deal with associations.

    Objects in the real world, as well as in a database, may either be stand-alone or may have associations with other objects, which then form part of larger compound/composite objects. In OO theory this is known as a "HAS A" relationship where you identify that the compound object contains (or is comprised of) a number of associated objects. There are several flavours of association:

    Please refer to Using "HAS-A" to identify composite objects for more details.

  5. I don't use object composition

    Shortly after I released my framework as open source I received a complaint from someone asking "Why are you using inheritance instead of object composition?" My first reaction was "What is object composition and why is it better than inheritance?" Eventually I found an article on the Composite Reuse Principle (CRP) but it did not explain the problem with inheritance, nor did it explain why composition was better. Those two facts alone made me conclude that the whole idea was not worth the toilet paper on which it was printed, so I ignored it. Please refer to Use inheritance instead of object composition for more details on this topic.

  6. I don't need to design any Model classes.

    Each table in the database has its own Model class in the Business/Domain layer, and I don't need to spend time working out what properties and methods should go in each class as every one follows exactly the same pattern:

    I quickly realised when coding the class for my second database table that it had much in common with the code I had written for the first, and I immediately recognised that duplicating the same code in every other table class would be undesirable as it violates the DRY principle. Question: How do you solve this problem of code duplication in OOP? Answer: Inheritance. I built an abstract class which could then be inherited by every table class, and moved as much code as I could from each table class to the abstract class. At the end of this exercise I had removed every method out of each table class until there was nothing left but the constructor. This meant that the abstract class had code which dealt with an unspecified table with an unspecified structure while it was the table class which identified a specific database table and its structure, thus turning the abstract into the concrete.

    When it came to inserting custom code within each table class I followed the examples I had encountered in UNIFACE and in a brief exploration of Visual Basic. In both of these languages you could insert into your object a function with a particular name, and the contents of that function would automatically be executed at a certain point in the processing cycle. This told me that the runtimes for both those languages had code which looked for functions with those names and either executed them or did nothing. How do you duplicate this functionality using OOP? Execute special methods which are defined in the abstract class but devoid of any code, then allow the developer to override each of those methods in the subclass. Easy Peasy Lemon Squeezy. It wasn't until several years later that I discovered I had actually implemented the Template Method Pattern.
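    The arrangement just described can be sketched as follows; the hook names follow the _cm_* convention used elsewhere in this article, but the method bodies are a minimal illustration rather than the framework's actual code.

```php
<?php
// A minimal sketch of the Template Method Pattern: the abstract class
// controls the processing cycle and calls "hook" methods which are
// defined but empty; a concrete table class overrides only the hooks
// it needs.

abstract class AbstractTable
{
    public function insertRecord(array $fieldarray): array
    {
        $fieldarray = $this->_cm_pre_insertRecord($fieldarray);  // hook
        // ... validation and the actual INSERT would happen here ...
        $fieldarray = $this->_cm_post_insertRecord($fieldarray); // hook
        return $fieldarray;
    }

    // hooks: defined in the abstract class but devoid of any code
    protected function _cm_pre_insertRecord(array $fieldarray): array
    {
        return $fieldarray;
    }

    protected function _cm_post_insertRecord(array $fieldarray): array
    {
        return $fieldarray;
    }
}

class Customer extends AbstractTable
{
    // a business rule inserted by overriding a hook
    protected function _cm_pre_insertRecord(array $fieldarray): array
    {
        $fieldarray['status'] = 'NEW';
        return $fieldarray;
    }
}
```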

  7. I don't create a separate method for each use case.

    I was never trained to use Domain Driven Design (DDD) to design the objects in my Business/Domain layer which is precisely why I do not repeat the mistakes that it advocates. I started to read about it to find out if I was missing something important, but I got as far as the statement "create a separate method for each use case" when the alarm bells started ringing in my ears and a huge red flag started waving in front of my eyes. If I were to do such a foolish thing I would be closing the door to one of the most useful parts of OOP, that of polymorphism. As an example let's assume that I have objects called PRODUCT, CUSTOMER and ORDER and I want to create a new record for each of them. Under the rules of DDD I would have to do the following:

    require 'classes/customer.class.inc';
    $dbobject = new customer;
    $dbobject->insertCustomer(...);
    
    require 'classes/product.class.inc';
    $dbobject = new product;
    $dbobject->insertProduct(...);
    
    require 'classes/order.class.inc';
    $dbobject = new order;
    $dbobject->insertOrder(...);
    

    You should notice that both the class name and the method name are hard-coded, which means that each of those 3 blocks of code would have to be in a separate controller. Instead I do the following:

    $table_id = 'customer';
    require "classes/$table_id.class.inc";
    $dbobject = new $table_id;
    $dbobject->insertRecord($_POST);
    
    $table_id = 'product';
    require "classes/$table_id.class.inc";
    $dbobject = new $table_id;
    $dbobject->insertRecord($_POST);
    
    $table_id = 'order';
    require "classes/$table_id.class.inc";
    $dbobject = new $table_id;
    $dbobject->insertRecord($_POST);
    

    In this arrangement it is only the first of the 4 lines in each of these blocks that has to be hard-coded. In my framework this is done in a separate component script. This script will then activate the same controller script which calls the insertRecord() method on whatever object it is given. If you look you should see that the last 3 lines of code in each of those blocks are identical, which means that you can define them in a single object which you can reuse as many times as you like.

    If you are familiar with the MVC design pattern you should know that the purpose of the Controller can be described as follows:

    A controller is the means by which the user interacts with the application. A controller accepts input from the user and instructs the model and view to perform actions based on that input. In effect, the controller is responsible for mapping end-user action to application response.

    As a simple example a user may request a task which implements the use case to "create a customer" while the controller translates this into "call the insertRecord() method on the customer object". By changing the hard-coded name of the object to a variable which is injected at runtime I now have a controller which can call the insertRecord() method on any object in my application.

    If instead of using shared method names I used unique names I would be removing any opportunities for polymorphism, which would mean no dependency injection, which would in turn mean less opportunity for having reusable objects like my controller scripts. OOP is supposed to increase reusability, so using a method which decreases reusability seems like anti-OOP to me.
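    The point can be demonstrated with a small sketch. RADICORE itself gets the shared method signatures from its abstract table class rather than an interface; the interface below exists purely to keep the sketch self-contained, and the class bodies are stand-ins for real database work.

```php
<?php
// Sketch of a reusable controller: because every Model shares the same
// insertRecord() signature, one controller function works with them all.

interface TableInterface
{
    public function insertRecord(array $fieldarray): array;
}

class CustomerModel implements TableInterface
{
    public function insertRecord(array $fieldarray): array
    {
        $fieldarray['table'] = 'customer';   // stand-in for a real INSERT
        return $fieldarray;
    }
}

class ProductModel implements TableInterface
{
    public function insertRecord(array $fieldarray): array
    {
        $fieldarray['table'] = 'product';
        return $fieldarray;
    }
}

// written once, reused with every Model in the application
function addController(TableInterface $dbobject, array $post): array
{
    return $dbobject->insertRecord($post);
}
```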

    My approach is the result of my having built hundreds of user transactions in dozens of different applications in several different languages and spotting one common factor - regardless of the overall effect of a user transaction it is always based on the same foundation - it performs one or more of the CRUD operations on one or more database tables and only incidentally executes specific business rules. Instead of having a separate method for each use case (aka unit of work, user transaction or task) I do the following:

  8. I don't create a separate class property for each column.

    While learning PHP I discovered the $_GET and $_POST variables which made data sent from the client's browser available to the PHP script on the server. I also discovered that when reading data from the database the result was delivered as an indexed array of associative arrays. I was quite impressed with PHP arrays as they are far more flexible and powerful than what was available in any of my previous languages, so imagine my surprise when all the sample code which I saw had a separate class property for each column, then a separate getter and setter for each of those columns. I asked myself a simple question:

    If the data coming into an object from the Presentation layer is given as an array, and the data coming in from the Data Access layer is given as an array, is there a good reason to split the array into its component parts for its passage through the Business layer?

    With a little bit of experimentation I discovered that it was very easy within a class to deal with all that column data in an array, so I saw absolutely no advantage in having a separate property for each column. There is no effective difference between the following lines of code:

    $this->column_name
    $fieldarray['column_name']
    

    Not only would there be no advantage, I quickly identified a series of disadvantages which would make the writing of all that extra code a complete waste of time:

    I do not need to provide answers to these questions as my practice of using a single $fieldarray property to hold all data for that table does not cause any problems. Not only that, it also provides for loose coupling which is one of the characteristics of good software design. The concept of Coupling describes how modules interact with one another. Tight coupling is considered to be bad as it forces a ripple effect where changes in one module cause corresponding changes in other modules. As an example, take the following ways in which data can be inserted into and extracted from an object:

    1. As separate arguments on a method call, as in:
      $result = $object->method($column1, $column2, $column3, ...);
      
    2. As separate properties within the class, each with its own setter and getter, as in:
      class foobar {
          var $column1;
          var $column2;
          var $column3;
          function setColumn1 ($column1) {
              $this->column1 = $column1;
          }
          function getColumn1 () {
              return $this->column1;
          }
          function setColumn2 ($column2) {...}
          function getColumn2 () {...}
          function setColumn3 ($column3) {...}
          function getColumn3 () {...}
      }
      $object = new foobar;
      $object->setColumn1($_POST['column1']);
      $column1 = $object->getColumn1();
      
    3. As a single array, as in:
      $object = new $table_id;
      $fieldarray = $object->insertRecord($_POST);
      $fieldarray = $object->getData($where);
      $fieldarray = $object->getFieldArray();
      

    Now ask yourself this question: If I were to add or remove a column from a database table, how much effort would be required to make the software deal with that change? If you look at option 1 above you will see that I would have to change the method signature, which would also require changing every place where that method is called. Option 2 above would require even more work as each column has its own pair of getters and setters. Option 3 requires no work at all as any changes to the contents of the array do not require any changes to the method signature. Options 1 and 2 have a ripple effect while option 3 does not.

    What happens if the array contains invalid data? That is automatically taken care of by the framework when it calls the validation object. If I ever change the structure of a table all I have to do is reimport the revised structure into my Data Dictionary then run the export process to recreate the table structure file.

    By using an array I can also tell the difference between a column being present with a NULL value and a column not being present at all.
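    A short illustration of that point: PHP's isset() treats a NULL value as if the key were missing, but array_key_exists() does not, so the two cases can be told apart.

```php
<?php
// With an array it is possible to distinguish a column which is
// present with a NULL value from a column which is absent altogether.
$fieldarray = ['name' => 'Smith', 'end_date' => null];

var_dump(array_key_exists('end_date', $fieldarray));   // true:  present, NULL
var_dump(isset($fieldarray['end_date']));              // false: isset() hides NULL
var_dump(array_key_exists('start_date', $fieldarray)); // false: absent entirely
```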

    I can also deal with any number of columns which are returned from a SELECT query, even columns from other tables, as I can change the contents of the array at will without affecting any method signatures or class properties. There is no ripple effect.

    It should also be noted that all the methods in the abstract class, both variant and invariant, pass the $fieldarray variable around as both an input and an output argument. In this way each method knows precisely what data it has to work with, and each key in the array does not need to be defined as a separate class property.

  9. I do not create separate Controllers for each Model.

    Some junior developers are taught that the six components in my family of forms constitute a single use case. That is what I was taught in my COBOL days. However, as I worked on more and more applications where the use cases got bigger, more complex and more numerous, I realised that the task of writing and maintaining the code was becoming more and more difficult. In order to make the programs simpler I had to make them smaller, and in order to do this I came to the conclusion that each member in that forms family should be treated as a separate use case in its own right and not part of a bigger use case. I knew that it would result in a larger number of programs, but I considered that it would be worth it in the long run - and so it came to pass. Some of my colleagues said that it would result in the same code being duplicated in many programs, but they obviously did not know how to create reusable modules.

    Having a separate module as a controller for each of those use cases was indeed a step in the right direction. Not only do I have a separate Controller for each member of that forms family, each of those Controllers can be used with any Model in the application. I do not have to have a separate version of a Controller for each Model as the Controllers have been specifically built to operate on any Model in the entire application.

    Splitting a compound use case into individual tasks also made it much easier to implement Role Based Access Control as all the logic for checking a user's access to a task was moved out of the task itself and into the framework. As a task could only be activated by pressing its button, either on the menu bar or the navigation bar, it became easy to hide the buttons to those tasks to which the user did not have permission to access.
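    The effect can be sketched as follows; the data structures are assumptions for illustration, not RADICORE's actual MENU database tables.

```php
<?php
// Hypothetical sketch: because each task is a separate component with
// its own identity, access control reduces to filtering the list of
// tasks before their buttons are drawn on the menu or navigation bar.

function filterTasksByRole(array $tasks, array $permittedTaskIds): array
{
    return array_values(array_filter(
        $tasks,
        function ($task) use ($permittedTaskIds) {
            return in_array($task['task_id'], $permittedTaskIds, true);
        }
    ));
}
```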


How using OOP increased my productivity

Productivity is defined as:

a ratio between the output volume and the volume of inputs. In other words, it measures how efficiently production inputs, such as labour and capital, are being used in an economy to produce a given level of output.

In the world of software development the usual measurements are time and money, i.e. how long will it take to complete and how much will it cost? After having worked for several decades in software houses where we competed for development contracts against rival companies, I knew that the client would always look more favourably on the one which came up with the cheapest or quickest solution. As the biggest factor in software development is the cost of all those programmers, it is essential to get those programmers producing effective software in the shortest possible time and therefore at the lowest cost. The way to cut down on developer time is to reuse as much code as possible so that there is less code to write and less code to test. I became quite proficient at creating libraries of reusable software, and when I upgraded this to build a fully-fledged framework on one particular project my boss was so impressed that he made it the company standard on all future projects. When the company switched languages from COBOL to UNIFACE I redeveloped that framework to take advantage of the new features offered by that language and reduced development times even more. When I decided to make the switch to the development of web applications using PHP I was convinced that I could reduce my development times even further. Although this was my first incursion into the world of OOP it seemed to be the right decision as it promised so much:

The power of object-oriented systems lies in their promise of code reuse which will increase productivity, reduce costs and improve software quality.
...
OOP is easier to learn for those new to computer programming than previous approaches, and its approach is often simpler to develop and to maintain, lending itself to more direct analysis, coding, and understanding of complex situations and procedures than other programming methods.

As far as I am concerned any use of an OO language that cannot be shown to provide these benefits is a failure. Having been designing and building database applications for 40 years using a variety of different programming languages I feel well qualified to judge whether one language/paradigm is better than another. By "better" I mean the ability to produce cost-effective software with more features, shorter development times and lower costs. Having built hundreds of components in each language I could easily determine the average development times:

How did I achieve this significant improvement in productivity? Fortunately I did not go on any formal training courses, so I was not taught a collection of phoney best practices. Instead I used my previous experience, intuition, common sense and my ability to read the PHP manual to work out for myself how to write the code to get the job done, then move as much code as possible into reusable modules. I already knew from previous experience that developing database applications involved two basic types of code:

This leads to two methods of developing your application:

The RADICORE framework makes use of the 2nd method. Of the four classes of object that together form a task (use case, user transaction or unit of work) all the Controllers, Views and Data Access Objects are pre-built and come supplied with the framework. This just leaves the Model components which exist in the Business/Domain layer. These can be generated for you from within the Data Dictionary after importing table details directly from the database schema. Using the same Data Dictionary you can then build basic tasks based on any of the Transaction Patterns. These tasks will have all you need to insert, read, update and delete records in the database table which then leaves the developer with only one task - insert the business rules into the relevant "hook" methods which have been built into the abstract table class and which can be overridden in every concrete table class. In this way the application developer need spend minimum time dealing with the low-value background code and maximum time on the high-value business rules.

I have been criticised by many developers for not following their ideas on what constitutes "best practices", but I consider their rules to be anything but the best, so I ignore them. I am a pragmatist, not a dogmatist, which means that I judge whether my methods are successful or not based on the results which they achieve. A dogmatist, on the other hand, will insist on blindly following a set of rules, or a particular interpretation of those rules, and automatically assume that their results will be acceptable. This to me is a false assumption and leads to the creation of nothing more than hordes of Cargo Cult Programmers. The aim of the game is not writing code which is acceptable to other programmers, it is writing code which is acceptable to the paying customer. If I can achieve significantly higher levels of productivity by breaking someone's precious rules then how can they possibly claim that their rules are better than mine? Any methodology which fulfills the promises made for OOP can be regarded as excellent while everything else can be regarded as excrement, poop, faeces, dung or crap. When implemented properly OOP is supposed to increase code reuse and decrease code maintenance, but I have yet to see any implementation which produces anywhere near the same levels of reusability as the RADICORE framework. If their results are inferior to mine, by what measurement can they claim that their methods are superior to mine?

If you think that my claims of increased productivity are false and that you can do better with your framework and your methodologies then I suggest you prove it by taking this challenge. If you cannot achieve in 5 minutes what I can, then you need to go back to the drawing board and re-evaluate your entire methodology.


From personal project to open source

Also in May 2004 I published A Role-Based Access Control (RBAC) system for PHP which described the access control mechanism which I had built into my framework. This provoked a response in 2005 when I received a query from the owner of Agreeable Notion who was interested in the functionality which I had described. He had built a website for a client which included a number of administrative screens which were for use only by members of staff, but he had not included a mechanism whereby access to tasks could be limited in any way. He had also looked at my Sample Application and was suitably impressed. Rather than trying to duplicate my ideas he asked if he could use my software as a starting point, which is why in January 2006 I released my framework as open source under the brand name of RADICORE.

Unfortunately he spent so much time in asking me questions on how he could get the framework to do what he wanted that he decided in the end to employ me as a subcontractor to write his software for him. He would build the front-end website while I would build the back-end administrative application. I started by writing a bespoke application for a distillery company which I delivered quite quickly, which impressed both himself and the client. Afterwards we had a discussion in which he said that he could see the possibility of more of his clients wanting such administrative software, but instead of developing a separate bespoke application for each, which would be both time consuming and costly, he wondered if I could design a general-purpose package which would be flexible enough so that it could be used by many organisations without requiring a massive amount of customisations. Thus was born the idea behind TRANSIX, which was a collaboration between my company RADICORE and Agreeable Notion.

I knew from past experience that the foundation of any good database application is the database itself, and that you must start with a properly normalised database and then build your software around this structure. This knowledge came courtesy of a course in Jackson Structured Programming which I took in 1980. I had recently read a copy of Len Silverston's Data Model Resource Book, and I could instantly see the power and flexibility of his designs, so I decided to incorporate them into the TRANSIX application. I started by building the databases for the Party, Product, Order, Inventory, Shipment and Invoice subsystems, then built the software to maintain those databases. The framework allowed me to quickly develop the basic functionality of moving data between the user interface and the database so that I could spend more time writing the complex business rules and less time on the standard boilerplate code. I started building this application in 2007, and the first prototype was ready in just 6 man-months. If you do the maths you will see that this meant that I took an average of only one month each to develop those subsystems. It took a further 6 months to integrate this into a working website for an online jewellery company as I had to migrate all the existing data from its original database into the new database, then rewrite the code in the front-end website to access the new database instead of the old one. This went live in May 2008.

As well as developing application subsystems with the framework I also added several subsystems which became part of the framework. These were:


Building a customisable ERP package

While the RADICORE framework is open source and can be downloaded and used by anyone, the TRANSIX application which I developed was always proprietary and designed as a software package for which users could only purchase licences. Anyone who has ever developed a software package will tell you that although it can be designed to provide standard functionality that should be common to many organisations, there will always be those organisations who have non-standard requirements that can only be satisfied with custom code. What I did not want to do was insert any of this custom code into the same place as the core package code, so I designed a mechanism whereby any custom code could be kept in a separate custom-processing directory which is further subdivided into a separate directory for each project code. Each customer has his own project code so that his customisations can be kept separate from anyone else's customisations as well as being kept separate from the core package code. Because the abstract table class, which is inherited by every concrete table class, has an instance of the Template Method Pattern for every method called by a Controller on a Model, it was easy to insert some code in front of every call to a variant method to ask the question "Does this project have any custom code for this method?" and if the answer is "yes" then it will call that custom variant method instead of the standard variant method. In the case of screen structure files or report structure files each standard file in the standard directory can be replaced with an alternative version in a custom processing directory.
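The dispatch logic described above - "does this project have a custom version of this variant method?" - can be sketched like this. Again this is an illustrative Python sketch of the idea, not the framework's actual PHP implementation, and the naming convention (suffixing the method name with the project code) is an assumption made for the example:

```python
class AbstractTable:
    """Sketch of custom-variant dispatch in the abstract table class."""

    project_code = None  # set per customer, e.g. "ACME" (illustrative)

    def call_variant(self, method, *args):
        # Ask: does this project have custom code for this method?
        if self.project_code:
            custom = f"{method}_{self.project_code}"
            if hasattr(self, custom):
                # Yes - call the custom variant instead of the standard one.
                return getattr(self, custom)(*args)
        # No - fall back to the standard variant.
        return getattr(self, method)(*args)

    def pre_insert(self, fieldarray):
        return fieldarray  # standard (empty) variant


class Order(AbstractTable):
    """Concrete table class for a customer with project code ACME."""

    project_code = "ACME"

    def pre_insert_ACME(self, fieldarray):
        # A customer-specific rule kept apart from the core package code.
        fieldarray["discount"] = 0.05
        return fieldarray
```

The benefit is that the core package code is never edited: a customer's customisations live in their own override methods (and, in the real framework, in their own project directory), so upgrading the core package cannot clobber them.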

My collaboration with Agreeable Notion and the TRANSIX application ceased in 2014 as they could not find enough clients. Their business model involved finding someone who wanted a new front-end eCommerce site and offering TRANSIX as the supporting back-end application. At about that time I had begun a conversation with a director of Geoprise Technologies, a USA-based software company with offices in the Far East. They had already used my open source framework to build one of their own applications, and when I mentioned that I had already built an entire ERP application called TRANSIX they expressed an interest as they operated in the same business area. One of their directors flew into London so that I could give him a demonstration of what I had produced, and he was impressed enough to suggest that we form a partnership so that his company could sell the application using the brand name GM-X. This was quickly agreed, and in a short space of time we had our first client, a large aerospace company.

Since that time I have made quite a few improvements to the framework as well as adding new subsystems to the ERP application. This is now a multi-module application where each client only needs to purchase a licence for those modules which they actually want to use. As it is a web application which runs on a web server, which could either be local or in the cloud, there is no fee per user, just a single fee per server regardless of the number of users. This multi-module application now consists of the following modules/subsystems:

This ERP package also has the following features as standard which are vital to software which is used by multi-national corporations:

Levels of customisation

Anybody who has ever built a software application as a package, which is akin to "off the shelf" rather than "bespoke", does so in the hope that they can sell copies of that package to multiple customers at a lower price than a single bespoke customer would pay, yet still make a profit at the end of the day. When customers are looking for a software application they would rather pay a lower price for a package than an enormous price for a bespoke solution. While a software package is designed to follow common practices which should be familiar to most organisations, there will always be those potential customers who have their own way of doing things and discover that the package is not quite a 100% fit, in which case there are two choices - either the organisation changes its practices to fit the package, or the package is customised to fit the organisation. If customisations are required then how easily can they be developed and at what cost? Fortunately the RADICORE framework has been built in such a way that customisations to the GM-X package can be implemented relatively quickly and cheaply. This has been achieved in the following ways:

Because RADICORE was designed and developed to be a Rapid Application Development framework (hence the RAD in RADICORE) it means that adding new subsystems into the standard package follows exactly the same procedure as adding a bespoke subsystem to deal with a client's non-standard requirements:


Maintaining the unmaintainable

I have been told by my critics that because I am not following their ideas on what constitutes "best practices" my work must surely be bad, and if it's bad then it must surely be unmaintainable. As usual their theories fall short when it comes to practice. As well as being the author of the framework I am also the author of the ERP application that was built using this framework, and sometimes a new requirement comes along which would best be served by enhancing the framework instead of adding to the application code. Among the changes I have made to the framework are:

Another recent change was to aid the customisation abilities of the GM-X package in the form of User Defined Fields (UDF). For some time it was felt that some customers might want to record more pieces of data than were allowed on the core tables, so over a period of several years I added additional tables called XXX_EXTRA_NAMES and XXX_EXTRA_VALUES (where 'XXX' identifies the original core table). Each of these has its own set of maintenance tasks.

While this arrangement worked it did mean that the extra values were displayed on a separate screen and not with the standard values. It was also not possible to perform a search using any of these extra values. One of my business partners, who uses this software himself, said that it would be nice if the extra values could be automatically mixed in with the standard values so that the user did not have to keep jumping to and from other screens. After he promised to buy me a beer I decided to look into the possibility and work my magic. After 2 weeks this is what I achieved:
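The XXX_EXTRA_NAMES/XXX_EXTRA_VALUES pair is a form of the entity-attribute-value pattern, and the "mixing in" described above amounts to merging the extra name/value pairs into the core row before it reaches the screen. Here is a minimal Python sketch of that merge; the exact column names (`name_id`, `value`) are assumptions for the example, not the package's actual schema:

```python
def merge_extra_values(core_row, extra_names, extra_values):
    """Return the core row plus any user-defined fields recorded for it.

    core_row     - dict of standard columns from the XXX core table
    extra_names  - dict mapping name_id -> field name (XXX_EXTRA_NAMES)
    extra_values - list of value rows for this record (XXX_EXTRA_VALUES)
    """
    merged = dict(core_row)
    for row in extra_values:
        field = extra_names.get(row["name_id"])
        # Only merge values whose name is defined, and never let a
        # user-defined field mask a standard column of the same name.
        if field and field not in merged:
            merged[field] = row["value"]
    return merged
```

Once the merge happens inside the framework rather than the application, every screen (and every search) sees the extra values as if they were ordinary columns, which is what makes the single-screen display possible without touching the application table classes.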

By adding all this functionality into the framework it means that if at any time in the future I add a pair of XXX_EXTRA_NAMES and XXX_EXTRA_VALUES tables to any of the core tables in the application then their contents will automatically be handled by the framework without any additional coding in any of those application table classes.


Summary

Different developers have different ideas on the true meaning of Object Oriented Programming, but the only description which I use is as follows:

Object Oriented Programming is programming which is oriented around objects, thus taking advantage of Encapsulation, Inheritance and Polymorphism to increase code reuse and decrease code maintenance.

The design decisions which I made while building my framework, though described as heretical by my critics, have enabled me to be significantly more productive than I was with any of my previous languages.

  1. I implemented Encapsulation by having a separate class for each table in the database.
  2. I implemented Inheritance by creating an abstract table class to hold all the properties and methods which can be shared by any concrete table class.
  3. I implemented Polymorphism by having each concrete table class share the same set of method signatures to support the standard CRUD functions which are the only operations which can be performed on a database table regardless of what data it holds.
  4. I achieved high cohesion by basing my entire framework on the 3-Tier Architecture, which incidentally implements the Single Responsibility Principle (SRP).
  5. I achieved loose coupling by having application data passed around in a single $fieldarray property instead of a separate property for each column.
  6. By using an abstract class I could implement the Template Method Pattern, which is a powerful design pattern for any framework.
  7. By enabling polymorphism I could use dependency injection to inject Model names into my Controllers, thus making it possible to use any Model with any Controller.
  8. By using XSL stylesheets to create all HTML output I was able to build a single View object to extract the data from any Model(s), convert it to XML then transform that XML into HTML.
  9. By splitting my Presentation layer into two separate components, the Controller and the View, I found myself implementing the Model-View-Controller (MVC) design pattern.
  10. I was later able to refactor my XSL stylesheets so that instead of a separate one for each web page I now have just 12 reusable XSL stylesheets from which I can produce thousands of different pages.
  11. I was able to make each table class aware of the structure of its associated table by building a table structure file that could be loaded into every table class file.
  12. By having the table's structure known to its class file, and by passing all application data around in a single $fieldarray variable, I was able to build into the framework a standard validation object which automatically checks all user input for errors which would cause the SQL INSERT or UPDATE query to fail.
  13. I then built a Data Dictionary so that I could generate the table class file and table structure file by pressing a button instead of doing it manually.
  14. By having sharable Controllers and a sharable View object which uses sharable XSL stylesheets I was able to build a library of Transaction Patterns.
  15. I could then modify my Data Dictionary to generate the component scripts and screen structure scripts by pressing a button instead of doing it manually.

As you can see I did not build all this sophistication into the framework in one go; I started small and simple, and each decision that I made opened the door to more opportunities. OOP is supposed to provide more reusability and less maintenance, and my humble efforts, which have not been corrupted by the teachings of so-called OO "experts", have produced the following set of reusable components which are instantly available to any application which I care to build:

This set of reusable components has been used to create a large ERP application which contains over 400 database tables and 4,000 tasks. That is a huge amount of reuse from such a relatively small number of components.

Here endeth the lesson. Don't applaud, just throw money.


References

The following articles express my heretical views on the topic of OOP:

The following articles describe aspects of my framework:


Amendment History

01 Nov 2022 Added Design Decisions which I'm glad I made
Added Practices which I do not follow
Added From personal project to open source
Added Building a customisable ERP package
Added Maintaining the unmaintainable

Comments
