PHP overhead
Whether the logical decomposition of your application into your original source hierarchy will hurt runtime depends on how you host your solution.
- If you use a dedicated host or virtual machine, then you will probably run mod_php plus XCache or an equivalent opcode cache, and the answer is: no, this will not hit runtime much, since everything ends up cached in memory at the compiled-PHP-code level.
- If you use a shared hosting service, then it will hurt performance, since PHP scripts are typically run through php-cgi (often via suPHP), and the entire included source hierarchy must be read and compiled on every request. Worse, if the request is the first in, say, a minute, the server's file cache will have been flushed in the meantime, and rereading that source tree can involve enough I/O latency to add seconds.
I manage several phpBB forums and found that, by consolidating the common include hierarchies for shared-hosting deployment, I could halve the user response time. There are a few articles that describe this in more detail (Terry Allison [phpBB]). To quote one of them:
Let me quantify my views with some figures. I must emphasize that the numbers below are indicative; I have included the tests as attachments to this article, in case you want to run them on your own service.
- 20-40: the number of files per second that you can open and read if the file-system cache is not primed.
- 1,500-2,500: the number of files per second that you can open and read if the file-system cache is primed with their contents.
- 300,000-400,000: the number of lines of PHP source per second that the interpreter can compile.
- 20,000,000: the number of PHP instructions per second that the interpreter can execute.
- 500-1,000: the number of MySQL statements per second that PHP can issue, if the database cache is primed with the contents of your tables.
For more information, see More about optimizing PHP applications in a Webfusion shared service, from which you can copy the benchmarks to run yourself; a minimal sketch of one such test follows.
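To make the first two figures concrete, here is a minimal sketch of the kind of file-read micro-benchmark involved. This is my illustration, not the attached test script, and the include/ directory is an assumed location:

    <?php
    // Minimal sketch of a file-read micro-benchmark (illustrative only;
    // the include/ path is an assumption, not from the quoted article).
    $files = glob('include/*.php');
    $start = microtime(true);
    foreach ($files as $f) {
        file_get_contents($f);              // open + read each file
    }
    $elapsed = microtime(true) - $start;
    printf("%d files read in %.4fs (%.0f files/s)\n",
        count($files), $elapsed, count($files) / max($elapsed, 1e-9));

Run it twice in a row: the first pass approximates the cold-cache figure, the second the primed-cache figure, since the first run leaves the files in the file-system cache.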
MySQL connection
The easiest win here is to share a single connection. I use my own extension of the mysqli class, which follows the standard one-object-per-class (singleton) pattern. Any module in my application can call:
$db = AppDB::get();
to obtain that object. This is cheap, because the internal call costs only half a dozen PHP opcodes.
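For illustration, here is a minimal sketch of what such a singleton extension might look like. Only the AppDB::get() call above comes from my code; the constructor arguments are placeholders:

    <?php
    // Hypothetical sketch of a singleton mysqli subclass; the connection
    // credentials are placeholders, not real configuration.
    class AppDB extends mysqli
    {
        private static $instance = null;

        public static function get()
        {
            // Open the shared connection on first use, then hand back
            // the same object on every subsequent call.
            if (self::$instance === null) {
                self::$instance = new AppDB('localhost', 'user', 'password', 'appdb');
            }
            return self::$instance;
        }
    }

Because get() is just a static test plus a return after the first call, invoking it from every module costs almost nothing.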
An alternative, more traditional method is to hold the object in a global variable and simply declare
global $db;
in any function that needs to use it.
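A sketch of that traditional pattern, again with placeholder credentials and an illustrative query:

    <?php
    // Hypothetical sketch of the global-connection alternative; the
    // credentials and the users table are illustrative assumptions.
    $db = new mysqli('localhost', 'user', 'password', 'appdb');

    function countUsers()
    {
        global $db;                          // pick up the shared connection
        $result = $db->query('SELECT COUNT(*) FROM users');
        $row    = $result->fetch_row();
        return (int) $row[0];
    }

Either way, every module reuses one connection instead of paying the cost of opening a new MySQL connection each time.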
Footnote for small applications
You suggested merging everything into a single include file. This is fine for a stable production system, but a pain during testing. Can I suggest a simple compromise? Keep the modules separate for testing, but allow a single composite to be loaded instead. You do this in two parts: (i) assuming each module contains one function or class, guard each load with the standard pattern, for example:
if( !function_exists( 'fred' ) ) { require "include/module1.php"; }
(ii) Before any of these module loads, have the master script simply do:
@include "include/_all_modules.php";
This way, when testing, you delete _all_modules.php and the script falls back to loading the individual modules. When you are happy, you recreate _all_modules.php. On the server this can be done by a simple "release" script that executes:
system( 'cat include/[a-z]*.php > include/_all_modules.php' );

(This assumes each module file ends with a closing ?> tag, so that the concatenated file still parses.)
This way you get the best of both worlds.