Are nested PHP includes CPU / memory intensive?

I write the site in PHP and use "pretty URLs" (which also hide my directory structure) by directing all requests to a single index.php file via .htaccess. The index file then parses the URI and includes the requested files. Those files contain more than a couple of includes of their own, and each of them can open a connection to MySQL. The files they include can in turn open SQL connections. It goes down 3-4 levels.

Is this process CPU-intensive, both on the PHP side and in opening (and closing) MySQL connections in each included file?

Also, do pretty URLs implemented purely in .htaccess use fewer resources?

+4
3 answers

PHP overhead

Whether the logical decomposition of your application into a source hierarchy costs anything depends on how you host your solution.

  • If you use a dedicated host / virtual machine, you will probably be running mod_php plus Xcache or an equivalent opcode cache, and the answer is: no, this will not add much to the runtime, since everything ends up cached in memory at the PHP opcode level.
  • If you use a shared hosting service, it will hurt performance, since PHP scripts are loaded via PHP-cgi, probably through suPHP, and the entire include hierarchy has to be read and compiled on every request. Worse still, on a shared service, if the request is the first in, say, a minute, the server's file cache will have been flushed, and pulling that source back off disk incurs a lot of I/O latency, adding up to seconds.

I run several phpBB forums and found that by consolidating the common include hierarchies for shared-hosting deployment, I could halve the user response time. Here are a few articles that describe this in more detail (Terry Ellison [phpBB]). To quote one of them:

Let me quantify my views with some numbers. I need to emphasize that the figures below are indicative. I have included the benchmarks as attachments to the article, in case you want to run them on your own service.

  • 20-40. The number of files that you can open and read per second if the file-system cache is not primed.
  • 1,500-2,500. The number of files that you can open and read per second if the file-system cache is primed with their contents.
  • 300,000-400,000. The number of lines of source per second that the PHP interpreter can compile.
  • 20,000,000. The number of PHP instructions per second that the PHP interpreter can execute.
  • 500-1,000. The number of MySQL statements per second that PHP can issue if the database cache is primed with your table contents.

For more information, see More on optimising PHP applications in a Webfusion shared service, from which you can copy the benchmarks to run yourself.
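If you just want a feel for numbers like these on your own box, a trivial timing loop is enough. The sketch below is my illustration, not one of the article's attached tests; the include/ layout is an assumption:

 <?php
 // Illustrative micro-benchmark: time how long it takes to read, compile
 // and run a set of module files. Repeat with a cold and a warm
 // file-system cache to see the difference quoted above.
 $files = glob('include/[a-z]*.php');   // assumed module layout

 $t0 = microtime(true);
 foreach ($files as $file) {
     require $file;                     // open + read + compile + execute
 }
 $elapsed = microtime(true) - $t0;

 printf("Included %d files in %.1f ms\n", count($files), $elapsed * 1000);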

MySQL connection

The easiest thing to do here is to share the connection. I use my own extension of the mysqli class, which uses a standard one-object-per-class (singleton) template. In my case, any module can issue:

$db = AppDB::get(); 

to obtain that object. This is cheap, since the internal call amounts to half a dozen PHP opcodes.
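The class itself isn't shown in the answer, so the following is only a minimal sketch of what such a wrapper might look like; the name AppDB matches the call above, but the body and the dummy credentials are my assumptions:

 <?php
 // Minimal one-object-per-class (singleton) wrapper around mysqli.
 // Illustrative sketch only; connection details are dummies.
 class AppDB extends mysqli
 {
     private static $instance = null;

     public static function get()
     {
         // First call opens the connection; later calls return the same object.
         if (self::$instance === null) {
             self::$instance = new AppDB('localhost', 'user', 'pass', 'appdb');
         }
         return self::$instance;
     }
 }

Every included file, at any nesting depth, can then call AppDB::get() without opening a further connection.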

An alternative, more traditional method is to store the object in a global and simply declare

 global $db; 

in any function that needs to use it.
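For example (my illustration; the table and function are made up):

 <?php
 // $db is created once at the top of the request, e.g. in index.php:
 // $db = new mysqli('localhost', 'user', 'pass', 'appdb');
 function load_user($id)
 {
     global $db;                       // pick up the shared connection
     $stmt = $db->prepare('SELECT name FROM users WHERE id = ?');
     $stmt->bind_param('i', $id);
     $stmt->execute();
     return $stmt->get_result()->fetch_assoc();
 }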

Footnote for small applications

You suggested merging all the includes into a single include file. This is fine for a stable production release, but a pain during testing. May I suggest a simple compromise? Keep them separate for testing, but allow a single composite to be loaded. You do this in two parts: (i) assuming each include contains a function or class, wrap each load in a standard guard, e.g.

 if( !function_exists( 'fred' ) ) { require "include/module1.php"; } 

(ii) in the master script, before any of the individual loads, simply do:

 @include "include/_all_modules.php"; 

This way, during testing you delete _all_modules.php and the script falls back to loading the individual modules. When you are happy with a release, you recreate _all_modules.php. You can do this server-side with a simple "release" script that concatenates the modules (the leading underscore keeps _all_modules.php itself out of the [a-z] glob):

 system( 'cat include/[a-z]*.php > include/_all_modules.php' ); 

So you get the best of both worlds.

+6

It depends on the MySQL client code. I know that when you open a MySQL connection with the same parameters as an existing one, the connection is often reused.

Personally, I would initialize the database connection only in the front controller (your index.php file), since that code has to be there anyway.
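A minimal sketch of that approach (the file layout and credentials are my assumptions; the optional "p:" prefix asks mysqli for a persistent connection, which is reused when the parameters match):

 <?php
 // index.php, the front controller: open the connection once per request.
 // 'p:' makes mysqli reuse a persistent connection with matching parameters.
 $db = new mysqli('p:localhost', 'user', 'pass', 'appdb');

 // Route the pretty URL to the requested page; included files can use $db.
 $page = basename(parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH));
 require __DIR__ . '/pages/' . ($page !== '' ? $page : 'home') . '.php';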

+5

You can use the include_once() or require_once() statements to ensure that PHP parses each file only once, saving processing time. This is especially useful if you suspect your code might try to include the same file more than once per script run.

http://php.net/manual/en/function.include-once.php
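For instance (an illustrative sketch; include/db.php is a made-up helper):

 <?php
 // Both of these lines refer to the same helper file:
 require_once __DIR__ . '/include/db.php';   // parsed and executed
 require_once __DIR__ . '/include/db.php';   // skipped: already loaded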

I would suggest that using .htaccess to parse URLs will always use more resources than any other method, simply because those rewrite rules are evaluated on every single request the server handles.
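For reference, the usual front-controller rewrite looks something like this (standard mod_rewrite directives; the exact rule set is my illustration, not the asker's actual file):

 RewriteEngine On
 # Serve files and directories that really exist as-is...
 RewriteCond %{REQUEST_FILENAME} !-f
 RewriteCond %{REQUEST_FILENAME} !-d
 # ...and send everything else to the front controller.
 RewriteRule ^ index.php [QSA,L]

With AllowOverride enabled, Apache re-reads and evaluates .htaccess on every request, which is the overhead this answer points at; moving the same rules into the vhost configuration avoids that per-request cost.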

+1
