This question may seem strange, but every time I have done a PHP project in the past, I have run into the same bad experience:
Scripts stop working after 10 seconds, which leads to serious database inconsistency. A bad example of a deletion loop: a user deletes a photo album. The album row is removed from the database, then halfway through deleting the photos the script is killed right where it is, and 10,000 orphaned photos are left behind.
It is not transaction-safe. I have never found a way to do this kind of work reliably: if the script is killed, it is killed, right in the middle of the loop. This never happens with Java on Tomcat; a Java process just keeps running, however long it takes.
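For the inconsistency problem specifically, wrapping the whole deletion in a database transaction would at least keep the data consistent even if the script dies. A minimal sketch, assuming a PDO connection to an ACID-capable database (e.g. MySQL with InnoDB) and hypothetical `albums`/`photos` tables:

```php
<?php
// Sketch only: table and column names are assumptions, not from the original post.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$albumId = 42; // hypothetical album to delete

$pdo->beginTransaction();
try {
    // Delete the photos and the album inside one transaction.
    $pdo->prepare('DELETE FROM photos WHERE album_id = ?')->execute([$albumId]);
    $pdo->prepare('DELETE FROM albums WHERE id = ?')->execute([$albumId]);
    $pdo->commit();
} catch (Exception $e) {
    $pdo->rollBack();
    throw $e;
}
```

If the script is killed anywhere before `commit()`, the database rolls the whole transaction back, so no orphaned photos remain. This does not make the script finish, but it removes the inconsistency.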
Many newsletter scripts try to work around this by splitting the job into many batches: send 100 mails, then reload the page (oh man, really silly), do the next batch, and so on. More often than not something hangs, or one batch takes longer than 10 seconds, and your mailing is broken.
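The page-reload batching is usually unnecessary: long-running jobs like newsletters can be run from the command line (via cron), where the PHP CLI SAPI hard-codes `max_execution_time` to 0, so no web-server time limit applies. A minimal sketch, where `$recipients` and `sendMail()` are hypothetical placeholders:

```php
#!/usr/bin/env php
<?php
// newsletter.php — run from cron, not through the web server.
// The CLI SAPI disables the execution time limit by default.
set_time_limit(0); // redundant on CLI, harmless elsewhere

// $recipients and sendMail() are placeholders for your own data and mailer.
foreach ($recipients as $address) {
    sendMail($address);
}
```

One loop, no reloads, and the script runs until it is done.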
But then I hear that very large projects use PHP, for example studiVZ (a German Facebook clone, in fact the largest German site). So there is a tiny glimmer of hope that this bad behavior comes only from unprofessional hosting companies that kill PHP scripts because their servers are overloaded. What is the truth here? Can PHP be configured so that scripts are never killed just because they take a little longer?
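For reference, the behavior described above is governed by PHP's own execution-time settings, which on a server you control (as opposed to shared hosting) can be raised or disabled:

```php
<?php
// In php.ini (default is 30 seconds; shared hosts often lower it):
//   max_execution_time = 30   ; 0 = no limit
//
// Or at runtime, per script:
set_time_limit(0);       // disable the execution time limit
ignore_user_abort(true); // keep running even if the client disconnects
```

Note that a shared host may still kill long-running processes at the process level, outside PHP's control, so these settings only help where the hosting company allows them.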