
The Road to Rapid Application Development (RAD)

Posted on 21st December 2024 by Tony Marston
Introduction
How rapid is RADICORE?
How to achieve rapid development
How to recognise reusable code
How much reusable code does RADICORE provide?
How was this reusable code created?
Dealing with the first database table
Refactoring after dealing with the second database table
Enhancing the framework
Adding new subsystems to the framework
Summary
Decisions which I made
Savings which I made
Practices which I ignored
Comments

Introduction

Recently my RADICORE software received an award for the Best Open Source Rapid Application Development Toolkit 2024 from the Innovation in Business Developer Awards. In this article I shall explain the design decisions that I took in order to create the software that was deemed worthy of this award.

Note that this is the third iteration of a framework which I first developed in COBOL in the 1980s, in UNIFACE in the 1990s, and lastly in PHP in the 2000s. In all cases I was the sole designer, and apart from a few utilities in the COBOL version I was also the sole developer.

RADICORE is not a general-purpose framework. For starters it cannot be installed using Composer, the dependency manager for PHP, simply because that is for installing third-party libraries into your application, and RADICORE is not a library it is an application in its own right as discussed in What is a Framework? I develop nothing but database applications for businesses, commonly known as enterprise applications, which are characterised by having electronic forms at the front end, a database (usually relational) at the back end, and software in the middle to handle the business rules.

How rapid is RADICORE?

Every enterprise application consists of a database containing a number of tables plus a collection of user transactions (use cases, tasks or units of work) to maintain and view the contents of those tables. The first step in the development of an application is to design and build the database, then to build the application components using the following steps:

  1. Import each database table into the framework's Data Dictionary.
  2. Export the table to produce a table class file and a table structure file.
  3. Generate the user transactions for that table by linking it with one of the pre-written Transaction Patterns, which creates the necessary scripts and menu entries.

Note that this process does not require the writing of any code - no PHP, no HTML and no SQL - just the pressing of buttons, and should take no longer than FIVE MINUTES. Primary Validation will be carried out by a standard framework component while Secondary Validation for custom business rules can be added later using any of the available "hook" methods.

When my contemporaries question my claims I point them to my Tutorials page and my Videos page. I then challenge them to beat those figures using a framework of their choice, but so far no-one has even made the attempt.

How to achieve rapid development

Every software application requires code. The volume of code is determined by three factors:

  1. The number of components
  2. The complexity of each component
  3. The availability of reusable code

It is the availability of reusable code which is the critical factor: the more code you can reuse the less you have to write to get the job done, and the less code you have to write the quicker and more productive you will be. Productivity is the key, but far too many of today's programmers fail to recognise the practices which produce the best results. Instead they fill their applications with all the "right" design patterns and "best practices" in the vain hope that by copying the code produced by so-called "experts" they will automatically produce expert-level code. They do not understand that these patterns and practices should only be implemented when appropriate, so they end up with code which violates both the DRY and YAGNI principles, thus becoming less efficient and less productive.

I had an advantage over my contemporaries when I began using PHP in 2002 in that I had been designing and building database applications for the previous 20 years. I had used two different languages - COBOL and UNIFACE - and three different database types - hierarchical, network and relational. This meant that I knew how databases worked, and I knew how to write code to access them, so all I had to do was write the necessary code in a different language. This also meant moving away from pre-designed and pre-compiled forms to HTML documents, but that was as easy as falling off a log.

How to recognise reusable code

It is rarely possible to start by building reusable modules. You first write code that works, then afterwards you look for repeating patterns which can be turned into reusable modules. Looking for patterns means looking for abstractions and involves a process which I learned later is called programming-by-difference. This means looking for similarities and differences in the code and putting the similarities into reusable modules while keeping the differences in unique modules. This may involve the creation of abstract classes which are inherited multiple times, or objects/functions which are called multiple times.

If, like me, you constantly work in a single domain (such as enterprise applications) and produce multiple applications within that domain it is possible to create an abstract design which provides components which can be reused by any application within that domain. According to Designing Reusable Classes this abstract design is called a framework.

Not all categories of code can be reused, but what are these categories? Here is an extract from a post made some years ago in a Sitepoint newsgroup by a person called Selkirk:

Circa 1996, I was asked to analyze the development processes of two different development teams.

Team A's project had a half a million lines of code, 500 tables, and over a dozen programmers. Team B's project was roughly 1/6 the size.

Over the course of several months, management noticed that team A was roughly twice as productive as team B. One would think that the smaller team would be more productive.

I spent several months analyzing the code from both projects, working on both projects and interviewing programmers. Finally I did an exercise which led to an epiphany. I counted each line of code in both applications and assigned them to one of half a dozen categories: business logic, glue code, user interface code, database code, etc.

Of all these categories, only the business logic code had any real value to the company. It turned out that Team A was spending more time writing the code that added value, while Team B was spending more time gluing things together.

Team A had a set of libraries which was suited to the task which they were performing. Team B had a set of much more powerful and much more general purpose libraries.

So, Team A was more productive because the vocabulary that their tools provided spoke "Their problem domain," while team B was always translating. In addition, team A had several patterns and conventions for doing common tasks, while Team B left things up to the individual programmers, so there was much more variation. (especially because their powerful library had so many different ways to do everything.)

Here you should see the following important points:

  1. To management, programmer productivity was more important than anything else, and that means the time spent on the business logic.
  2. Programmer productivity is aided by having a toolkit/framework which concentrates on "their problem domain". The toolkit should deal with as much of the non-business logic as possible.

If you are building a web-based enterprise application, one that requires large numbers of database tables and large numbers of user transactions to maintain their contents, then having a toolkit which provides a mechanism for building those transactions from a set of pre-defined patterns would be a good idea. If your application requires Role-Based Access Control (RBAC), and perhaps Audit Logging and Workflows, plus the ability for Rapid Application Development (RAD), then you would struggle to find something better than RADICORE.

How much reusable code does RADICORE provide?

Every user transaction follows the same pattern by having an HTML form at the front end, an SQL database at the back end, and software in the middle to deal with the business rules as well as the movement of data between the two ends. In my experience the most efficient way to deal with this scenario is to utilise the 3-Tier Architecture. Using this architecture with PHP was aided by the fact that with OOP the code is automatically 2-tier by default: when you create a class file with methods you must also have a separate script which instantiates that class into an object so that it can call those methods. The class file then exists in the Business layer (the Model in MVC) and the calling script exists in the Presentation layer (the Controller in MVC), as shown in the sketch below.
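
Here is a minimal sketch of that 2-tier split; the class name, method name and hard-coded data are invented for the example and this is not the actual RADICORE code.

<?php
// Business layer (the Model in MVC): a class file containing the methods.
// In the full framework the actual SQL is passed down to a separate Data
// Access Object, which provides the third tier.
class product
{
    public $errors = array();

    public function getData($where)
    {
        // a real table class would build and execute a SELECT statement here
        return array(array('product_id' => 'ABC123', 'product_name' => 'Widget'));
    }
}

// Presentation layer (the Controller in MVC): a separate script which
// instantiates the class into an object so that it can call its methods.
$object = new product;
$rows   = $object->getData("product_id = 'ABC123'");
?>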

As I had already built hundreds of user transactions in my previous languages I had become aware of patterns of behaviour and structure in the screen layouts, so I had begun to use templates in my code. I had already experimented with XML and XSL to build HTML documents, so I decided to use both of these technologies in my PHP framework. I achieved this by building a single component which could create an HTML document for any user transaction within the application. This meant that I had separate components to deal with receiving an HTTP request and sending out an HTTP response, which meant that I had accidentally created an implementation of the Model-View-Controller design pattern. This combined architecture is shown in Figure 1 below:

Figure 1 - MVC and 3 Tier Architecture combined

[Figure: a diagram showing the Model, View, Controller and Data Access Object components mapped onto the Presentation, Business and Data Access layers]

Note that each of the above boxes is a hyperlink which will take you to a detailed description of that component.

Each of the above components is supplied as follows:

  1. The Model (one class per database table) is generated by the framework from the contents of the Data Dictionary.
  2. The View is a single pre-written framework object which produces all HTML output using a small set of reusable XSL stylesheets.
  3. The Controller is provided by one of the pre-written Transaction Patterns supplied with the framework.
  4. The Data Access Object is a pre-written class, one for each supported DBMS.

As you should be able to see from the above the RADICORE framework allows you to create basic but working transactions in a matter of minutes without having to write any code whatsoever - no PHP, no HTML, no SQL. Custom business logic can be added in later. This means that the developer can spend maximum amounts of time on the important business rules and minimum time on the unimportant (to management) "other code". This equates to an extremely high level of productivity.
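
As an illustration of how a View like this turns the data extracted from the Model into a finished web page, here is a minimal sketch using PHP's standard DOM and XSL extensions (it requires the php-xsl extension to be enabled; the XML content and stylesheet name are invented for the example, and this is not the actual RADICORE View component).

<?php
// build an XML document containing the data extracted from the Model object(s)
$xml = new DOMDocument;
$xml->loadXML('<root><person><name>Fred</name><email>fred@example.com</email></person></root>');

// load the XSL stylesheet which describes how that data should be laid out
$xsl = new DOMDocument;
$xsl->load('std.detail.xsl');   // an illustrative stylesheet name

// transform the XML into the finished HTML document and send it to the client
$proc = new XSLTProcessor;
$proc->importStylesheet($xsl);
echo $proc->transformToXML($xml);
?>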

How was this reusable code created?

It is important to note here that when I began programming with PHP I also had to learn how to utilise its OO capabilities to maximum effect. I first found a definition of OOP which went as follows:

Object Oriented Programming is programming which is oriented around objects, thus taking advantage of Encapsulation, Inheritance and Polymorphism to increase code reuse and decrease code maintenance.

My understanding was that object oriented programming was exactly the same as procedural programming except for the addition of encapsulation, inheritance and polymorphism. They are both designed around the idea of writing imperative statements which are executed in a linear fashion. The commands are the same, it is only the way they are packaged which is different. While both allow the developer to write modular instead of monolithic programs, OOP provides the opportunity to write better modules. In his paper Encapsulation as a First Principle of Object-Oriented Design (PDF) the author Scott L. Bain wrote the following:

OO is rooted in those best-practice principles that arose from the wise dons of procedural programming. The three pillars of "good code", namely strong cohesion, loose coupling and the elimination of redundancies, were not discovered by the inventors of OO, but were rather inherited by them (no pun intended).

My first port of call was to read the PHP manual, which explained how to create classes, thus taking care of Encapsulation. I read about using the "extends" keyword, thus taking care of Inheritance. I didn't find any useful examples of Polymorphism, so I decided to work on that later. I also found some online resources and purchased a few books. At no point was I made aware of these so-called "best practices", so I did my usual thing of working out which practices worked best for me. I played around with the code trying to find the approach which produced a combination of (a) the best result, (b) the least amount of code, and (c) the most reusability. My previous experience with database applications made me aware of the following points:

  1. Every database table, regardless of what data it holds, is subject to the same four CRUD operations.
  2. The structure of every table - its column names, types and sizes - can be obtained from the database itself via its INFORMATION_SCHEMA.
  3. User input can therefore be validated automatically by comparing it with those column specifications instead of writing custom code for each field.

In a large ERP application, such as the GM-X Application Suite, which is comprised of a number of subsystems, each subsystem has a unique set of attributes:

  1. its own database, with its own set of tables, columns and relationships;
  2. its own set of business rules.

Despite the fact that these two areas are completely different for each subsystem, they each have their own patterns and so can be handled using standard reusable code provided by the framework: the table structures are handled by the common methods and properties inherited from the abstract table class, while the business rules are handled by the customisable "hook" methods.

It was this collection of facts which influenced my development methodology. The code which I created is described in A Sample PHP Application where it can also be downloaded.

Dealing with the first database table

These are the steps I went through to create the code to deal with my first database table.

  1. After creating the table in my sample database I created a class file with methods to handle each of the CRUD operations. This became the Model in the MVC pattern.
  2. Instead of using separate properties for each table column I decided to use the $_POST array as both the input and output arguments on each method call. This meant that I did not have to include code to deal with each column separately such as getters and setters. This array was also used for all data retrieved from the database.
  3. I created a series of scripts to instantiate this class into an object so that it could then call methods on that object. I created a separate script for each member of the Forms Family shown in Figure 2 below:

    Figure 2 - A typical Family of Forms

    [Figure: a diagram showing a typical family of six forms - a parent LIST form with buttons leading to its child forms]

    This became the Controller in the MVC pattern. Note that each of these scripts performs the same set of operations but on a different database table.

  4. In all the sample code that I saw, the way to navigate from a parent LIST form to a related child form was via hyperlinks on each row. I did not like this approach, so I decided on a mechanism which used a single row of buttons in a navigation bar, an area which can contain any number of buttons. This required one or more rows to be selected using a checkbox at the start of each row. The button then posts back to the current script, which sets up the details in the $_SESSION array before activating the child form. The child form shows one record at a time, but if multiple rows were selected in the parent form it shows a scrolling area (similar to the pagination area) which allows the user to scroll back and forth between the selected rows.
  5. I created an object to produce the HTML output for any screen by extracting the data from all objects used in the Controller, transferring it to an XML file, then loading an XSL stylesheet in order to transform it into HTML. This became the View in the MVC pattern.
  6. I created separate methods in the table class to handle all the SQL operations for the MySQL database. These were given a "_dml_" prefix to identify them as dealing with the Data Manipulation Language (DML).
  7. While testing this set of forms I realised that I had to validate the contents of the $_POST array before passing it to the database in order to avoid a fatal error. I started by coding into each class the list of column names which belonged in that table, but soon afterwards I found it necessary to add a set of field specifications for each of those columns so that I could identify the type and size of each column using information which I could obtain from the database's INFORMATION_SCHEMA. This then enabled me to write a standard routine to validate user input by comparing this array of field specifications with the array of field values in the $_POST array (see the sketch after this list). A single standard routine seemed a much better idea than inserting custom code to validate each field manually.
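
As an illustration of point 7, here is a minimal sketch of such a standard validation routine; the function name, the shape of the $fieldspec array and the error messages are invented for the example and are not the actual RADICORE code.

<?php
// compare the user's input with the column specifications for the table;
// in the framework the specifications are derived from the INFORMATION_SCHEMA
function validateInput(array $fieldarray, array $fieldspec)
{
    $errors = array();
    foreach ($fieldspec as $fieldname => $spec) {
        $value = isset($fieldarray[$fieldname]) ? $fieldarray[$fieldname] : null;
        if (!empty($spec['required']) && ($value === null || $value === '')) {
            $errors[$fieldname] = 'This field is required';
            continue;
        }
        if (isset($spec['size']) && strlen((string)$value) > $spec['size']) {
            $errors[$fieldname] = 'Value is too long';
            continue;
        }
        if (isset($spec['type']) && $spec['type'] == 'numeric' && $value !== null && !is_numeric($value)) {
            $errors[$fieldname] = 'Value must be numeric';
        }
    }
    return $errors;   // an empty array means that the input is valid
}

// usage: validate the whole $_POST array in a single call
$fieldspec = array(
    'product_name' => array('type' => 'string',  'size' => 40, 'required' => true),
    'unit_price'   => array('type' => 'numeric', 'size' => 12),
);
$errors = validateInput($_POST, $fieldspec);
?>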

Refactoring after dealing with the second database table

I then created a second database table, copied all the scripts for the first table, and amended all the references to point to the second table. As you can imagine this resulted in a great deal of duplicated code, so I had quite a bit of refactoring to do. I started by examining each pair of scripts which contained duplicated code, looking for ways to create code which could be reused.

  1. Starting with the table classes I noticed that the only differences were the table names and the list of column names. Although my knowledge of OOP was limited, I knew enough to avoid this duplication by using that mechanism known as inheritance, but instead of making the stupid mistake of having the second concrete class inherit from the first concrete class I instinctively did the right thing by moving all the similar code to an abstract class and making both concrete classes inherit from this abstract class. Note that at that time PHP 4 did not support the "abstract" keyword, so I called it a "generic" class instead. I created this without a class constructor so that each concrete class could define its table name and list of field specifications in its own constructor (see the sketch after this list).

    Note that the abstract class contains a collection of common table methods as well as a collection of common table properties.

  2. When looking at the two Controllers I noticed that the only difference was the identity of the class which was instantiated into an object. The original code looked like this:
    <?php
    require 'classes/foobar.class.inc';        // load the Model's class file (hard-coded)
    $object = new foobar;                      // instantiate the Model
    $fieldarray = $object->insertRecord($_POST);
    if (empty($object->errors)) {
      $result = $object->commit();
    } else {
      $result = $object->rollback();
    } // if
    ?>
    

    Note that this is an example of tight coupling as this particular Controller can only be used with a particular Model. There is no reusability.

    After a bit of experimenting I discovered that I could replace the hard-coded class name with the contents of a variable, which required another script to load the value into that variable as shown below:

    -- a COMPONENT script
    <?php
    $table_id = "foobar";                      // identify the Model
    $screen   = 'foobar.detail.screen.inc';    // identify the View (a file identifying the XSL stylesheet)
    require 'std.add1.inc';                    // activate the Controller
    ?>
    -- a CONTROLLER script (std.add1.inc)
    <?php
    require "classes/$table_id.class.inc";
    $object = new $table_id;
    $fieldarray = $object->insertRecord($_POST);
    if (empty($object->errors)) {
      $result = $object->commit();
    } else {
      $result = $object->rollback();
    } // if
    ?>
    

    Note that this is an example of loose coupling as this particular Controller can be used with any available Model. There is maximum reusability.

    This meant that, by taking advantage of the polymorphism which I had created with my abstract class, I could create a separate version of the component script for each user transaction, but share the same controller script. Without realising it I had actually implemented a new variation of dependency injection.

  3. As I added more tables I realised that in some cases I needed to add some extra code to a concrete class to perform some additional non-standard processing. I needed a way to insert this code into the concrete class but have it called by code in the abstract class. Then I remembered the functions called "triggers" from my work with UNIFACE and "events" from a brief foray into Visual Basic. These "triggers" and "events" had special names which identified when they were called in the processing cycle, but they did nothing unless you added a function with that name to your code. Even though I had only just started with OOP I knew straight away that I could duplicate this behaviour by creating and calling a method in the abstract class which did nothing, which then allowed me to provide an implementation of that method in a concrete class and thus override the empty method in the abstract class. Without realising it I had created an implementation of the Template Method Pattern, with my customisable methods playing the part of "hook" methods (see the sketch after this list).
  4. Adding new tables which had relationships then required me to expand my list of Controllers beyond the six shown in Figure 2 above. As well as the LIST1 pattern which dealt with a single table I created a LIST2 pattern which dealt with two tables in a parent-child relationship. As time went by I encountered scenarios which required their own specialised patterns, which means that my library currently contains 45 different patterns. This has been enough to service the requirements of my ERP application which has 20+ subsystems, 450+ database tables, 1,200+ relationships and 4,000+ user transactions.
  5. After creating more and more tables and their associated class files I began to grow tired of manually transferring the details of the table's structure from the INFORMATION_SCHEMA to the array of field specifications in the class file, so I decided to automate this process. I did this by creating a new subsystem called a Data Dictionary with functions to read the database schema and then make this data available to the class file. I knew straight away that writing directly into the class file would be problematic, so I decided to write this data into a table structure file in the file system, then update the class constructor to call a function to load the contents of this disk file into the object's properties. This also meant that I could update the table's structure at any time and have this metadata incorporated into the object without having to amend the class file.

    My Data Dictionary was similar to the Application Model which was built into the UNIFACE IDE, but worked in reverse. With UNIFACE you described your table structure within the Application Model, then exported it to your DBMS by generating CREATE TABLE scripts. With my Data Dictionary you create (or amend) your table structure within the DBMS, then import it into the Data Dictionary before exporting it to PHP by creating a table class file and a table structure file. Note that as well as the list of field specifications the table structure file also identifies the primary key, any candidate (unique) keys, and any relationships with other tables either as a parent or a child.

  6. After a while I began to notice that the procedure for creating a new database table, the associated scripts and database records was becoming a bit tedious, and as it always followed the same pattern I decided to update my Data Dictionary to automate this process. As I already had a library of Transaction Patterns it was fairly simple to create a function to link a database table with a Transaction Pattern and then create all the necessary scripts in the file system and updates in the MENU database.
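
As an illustration of points 1 and 3 above, here is a minimal sketch of an abstract table class with an empty "hook" method which a concrete class can override. The class and method names are illustrative rather than the actual RADICORE code, and it uses the modern "abstract" keyword which was not available in PHP 4.

<?php
// the abstract ("generic") class holds the common properties and methods
abstract class Generic_Table
{
    protected $tablename;
    protected $fieldspec = array();
    public    $errors    = array();

    public function insertRecord(array $fieldarray)
    {
        // call the hook method, which does nothing unless it has been overridden
        $fieldarray = $this->_cm_pre_insertRecord($fieldarray);

        // ... validate $fieldarray against $this->fieldspec, then build
        // and execute the INSERT statement via the Data Access Object ...

        return $fieldarray;
    }

    protected function _cm_pre_insertRecord(array $fieldarray)
    {
        return $fieldarray;   // the default implementation does nothing
    }
}

// a concrete table class supplies its own table details in its constructor
// and overrides a hook method only when it has some custom processing
class product extends Generic_Table
{
    public function __construct()
    {
        $this->tablename = 'product';
        $this->fieldspec = array('product_id' => array('type' => 'string', 'size' => 8));
    }

    protected function _cm_pre_insertRecord(array $fieldarray)
    {
        $fieldarray['date_added'] = date('Y-m-d');   // custom processing for this table
        return $fieldarray;
    }
}
?>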

Enhancing the framework

While testing the code which I had written, my mind switched from developer mode to user mode, and either I or my business partner began to notice areas where the usability could be improved. This is now known as the User Experience (UX). These changes were made to components within the framework and not to any application classes, so they were then globally available to every transaction in every application.

  1. List screens display multiple rows from the database, and I started off by having this fixed at 10 rows per page. It was suggested that some users might like to see more than 10 rows at a time, so I added controls in the navigation bar to change the page size to either 10, 25, 50 or 100 rows per page.
  2. Sometimes users want to select all the rows on the current screen before navigating into a child form, but it was tedious to manually click on all those checkboxes, so I added controls to select_all and unselect_all to the navigation bar.
  3. Sometimes after activating one child form using a particular selection users would want to navigate to a different child form with the same selection, so I added a control to lock the selection to the navigation bar.
  4. Sometimes a user would want to create a new record based on the contents of an existing record, so I added a Copy button to the action bar of the screens which show an existing record, and a matching Paste button to the screens which add new records.
  5. Originally I only catered for the MySQL database as none of the other vendors offered an express edition as a free download. I started with v3, which used the "mysql_" extension, but this was changed to "mysqli_" (the "improved" extension) when v4.1 was released. This meant that I had to cater for both extensions, so I had to find a mechanism to switch easily from one to the other. I did this by moving the contents of the "_dml_" methods in the abstract class into a separate dml.mysql.class.inc file, then changing the "_dml_" methods in the abstract class to call the relevant methods in this new class. I then copied this new class to create a new version called dml.mysqli.class.inc, then added code at the start of each of the "_dml_" methods to find out which database extension should be loaded by using an entry in the configuration file (config.inc), as shown in the sketch after this list. This completed my implementation of the Data Access layer in the 3-Tier Architecture. When other database vendors released their free versions, first PostgreSQL, then Oracle, followed by SQL Server, I was able to create class files for them as well.
  6. Not too long ago my business partner suggested that some users might like to access the application on tablets or mobile phones instead of a full-screen PC, which meant moving to a responsive web design. This would mean changing the way that 4,000 screens were constructed, but this required making changes to only a small number of framework components instead of 4,000 screen definitions. This was due to the fact that there is only one View object in the framework which creates HTML files, and that uses a small collection of just twelve reusable XSL stylesheets. I made a small tweak to the View object, made a new set of XSL stylesheets which I placed in a separate directory, and updated a few screens in the framework which would turn this feature from OFF to ON. Due to the small amount of work required, a product of the framework's design and implementation, this was accomplished in just one man-month. When this feature was added to this ERP application it became The World's First Mobile-First ERP system.
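
As an illustration of point 5 above, here is a minimal sketch of how the relevant Data Access Object can be chosen at runtime from a single configuration setting; the variable, file and class names are invented for the example and are not the actual RADICORE code.

<?php
// config.inc would contain a single setting such as:  $dbms = 'mysqli';
require 'config.inc';

// load the class file for the chosen DBMS, e.g. dml.mysqli.class.inc or dml.pgsql.class.inc
require "dml.$dbms.class.inc";

// each of those files defines a class with exactly the same set of methods,
// so the "_dml_" methods in the abstract table class can call them without
// knowing which DBMS is actually being used
$classname = 'dml_' . $dbms;
$dbobject  = new $classname;
$rows      = $dbobject->getData("SELECT * FROM product");
?>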

Adding new subsystems to the framework

Sometimes an enhancement to the framework requires an additional set of database tables, so this is implemented by creating a new subsystem with its own set of user transactions. Unlike an application subsystem this may also require a few changes to some framework components in order to complete the integration. Some examples are shown below:

  1. The Audit Logging system. This has its own set of screens to view the contents of the audit database, but it required changes to some framework components in order to detect database changes and write them to this database. I updated the contents of the insertRecord(), updateRecord() and deleteRecord() methods in each of the dml.???.class.inc files in order to establish which columns had been changed so that those changes could be written to the AUDIT database (see the sketch after this list). I also added an $audit_logging property to the abstract table class so that this audit logging could be turned ON or OFF for individual tables.
  2. While working on my TRANSIX application my business partner suggested that I look into adding a Workflow subsystem. While researching this topic I came across an article on Petri Nets which identified the following objects: places, arcs, transitions and tokens. I was immediately struck by the similarity between its "transitions" and my "transactions" as I could see a one-to-one relationship between the two. This led me to design and build my own Activity Based Workflow system which is configured after the application has been built. It does not need any special coding within any application component as during the construction of a workflow a "transition" is linked to a "user transaction" (or task) in the MENU database. At runtime the creation of a new workflow case or the updating of a current workflow case is handled by the workflow engine which is triggered by code which has been added to the abstract class, so it is completely transparent to the application.
  3. Part of my current ERP application is concerned with Supply Chain Management, where an organisation communicates with its suppliers by sending purchase orders and receiving shipments. In the past this was done by exchanging paper documents, but with the advent of the internet more advanced options have become available for exchanging these documents electronically.

    Such options are vulnerable to attack by determined hackers, but a more secure option is now available - transferring the data over a private blockchain. Similar to my Workflow subsystem this involved the creation of a new database and associated maintenance tasks, with the necessary processing built into the framework, not any application component. Just like the addition of the responsive web option this was completed in just one man-month.
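
As an illustration of how the Audit Logging system (point 1 above) can detect which columns have changed before writing them to the AUDIT database, here is a minimal sketch; the function name and data are invented for the example and this is not the actual RADICORE code.

<?php
// compare the current contents of the row with the new values supplied by the
// user and return only those columns whose values have actually changed
function getChangedColumns(array $originaldata, array $fieldarray)
{
    $changes = array();
    foreach ($fieldarray as $fieldname => $newvalue) {
        $oldvalue = isset($originaldata[$fieldname]) ? $originaldata[$fieldname] : null;
        if ((string)$oldvalue !== (string)$newvalue) {
            $changes[$fieldname] = array('old' => $oldvalue, 'new' => $newvalue);
        }
    }
    return $changes;   // an empty array means there is nothing to log
}

// inside updateRecord(): read the existing row, compare it with the new values,
// and write any differences to the AUDIT database if logging is turned on
$audit_logging = true;   // corresponds to the $audit_logging property on the table class
$original = array('product_name' => 'Widget', 'unit_price' => '1.99');
$changes  = getChangedColumns($original, $_POST);
if (!empty($changes) && $audit_logging) {
    // ... INSERT the contents of $changes into the AUDIT database ...
}
?>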

Summary

The success of my toolkit was down to the decisions which I made based on my own experience and judgement. When compared with the practices followed by my contemporaries I can achieve more with less effort. When I was later informed that my work must surely be rubbish because I wasn't following an "approved" set of best practices, I looked at these practices and quickly concluded that they were not fit for purpose. These are summarised below.

Decisions which I made

Here is a summary of the design decisions I made which helped put the "rapid" into my rapid application development framework:

  1. I structured my code around the database design, not the other way around.
  2. I created a separate concrete class for each database table.
  3. I added methods to this class to handle the Create, Read, Update and Delete (CRUD) operations which are common to every database table.
  4. I moved these methods into an abstract table class which could then be inherited by every concrete table class.
  5. I included a list of fields and their specifications in each concrete class in order to perform data validation.
  6. I decided not to use separate class properties for each table column but to leave all application data in the $_POST array. This allowed me to write a standard routine to perform the data validation automatically instead of manually.
  7. When I needed to insert custom validation into individual concrete classes I found that my use of an abstract class enabled me to implement the Template Method Pattern with empty "hook" methods which could be overridden in any concrete classes.
  8. Because the construction of every table class file followed a standard pattern I could write code to automate this process.
  9. Because every concrete table (Model) class implemented the same set of CRUD methods, and each Controller called different combinations of these methods, this provided a huge amount of polymorphism which I could utilise using dependency injection. This allowed me to reuse any Controller with any Model, thus avoiding the need to create a separate Controller for each Model.
  10. My use of XML and XSL to create all HTML screens allowed me to go a step further by creating a small set of reusable XSL stylesheets instead of a custom stylesheet for each screen.
  11. I was then able to link a particular reusable XSL stylesheet with a particular reusable Controller into a library of Transaction Patterns so that I could link a pattern with a database table to create a basic but working user transaction. This process was originally completely manual, but with a bit of time and effort I was able to automate it.
  12. My use of so many pre-written and reusable components allowed me to add enhancements into the framework which instantly became available to user applications without the need for any developer to change any application components.

Savings which I made

Because I have higher volumes of reusable software than my contemporaries there are a lot of areas where I can achieve results with less effort simply because of the amount of code which I *DON'T* have to write. I reduced and simplified the amount of code I had to write by ignoring certain "best practices" and adopting a set of home-grown "better practices".

Further details on this topic can be found in Bad practices that I avoid.

Practices which I ignored

I am a pragmatist, not a dogmatist like most of my critics. This means that I am results-oriented and not rules-oriented. I decide for myself which is the most cost-effective way of achieving the desired result instead of following the "advice" given by others like a robot. I can think for myself. I don't let others do my thinking for me. I developed my framework in PHP4 and found that its support for Encapsulation, Inheritance and Polymorphism was more than adequate. Although many new features have been added to PHP since then I do not use them, simply because I cannot find a use for them. Either they do something which I don't need, or I have already satisfied that need with a less complicated solution.

Further details on the features which I don't use can be found in PHP features which I avoid.
