One question about NDBCLUSTER.
I inherited a website solution based on NDBCLUSTER 5.1 (LAMP platform).
Unfortunately, the person who developed the original solution did not understand that this storage engine has strong limitations. For one, the maximum number of columns a table can have is 128. The former programmer designed tables with 369 columns per row, one for each day of the year plus some key columns (he originally worked with the MyISAM engine). That will have to be reorganized anyway, I know.
In addition, the engine needs a lot of tuning: the maximum number of attributes (MaxNoOfAttributes, 1000 by default, which is far too small) and many other parameters whose misinterpretation or underestimation can lead to serious problems once you are already working with your database and are forced to change something.
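For reference, this kind of tuning goes in the cluster's config.ini on the management node. The parameter names below are real NDB options, but the values are only illustrative assumptions, not recommendations:

```ini
[ndbd default]
NoOfReplicas=2            # copies of each table fragment across data nodes
DataMemory=4G             # in-memory data storage per data node
IndexMemory=512M          # hash-index storage per data node
MaxNoOfAttributes=10000   # cluster-wide attribute limit (default is only 1000)
MaxNoOfTables=4096        # default is also low and easy to hit
```

Changing several of these requires a rolling restart of the data nodes, which is exactly the kind of operation that is painful once the database is in production.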
There is also the fact that disk storage for NDBCLUSTER tables stays inactive unless it is configured precisely: even when it is specified in the CREATE TABLE statement, the engine seems to prefer keeping data in memory, which explains the speed, but it can be painful if the data node holding your table crashes suddenly (as happened during testing). We lost all table data on all nodes, and the table was corrupted, after only 1000 records.
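For what it's worth, disk-based NDB tables only take effect if a logfile group and a tablespace are created first; a STORAGE DISK clause on its own is silently insufficient, and indexed columns are kept in memory regardless. A minimal sketch (the object names, file names, and sizes are made up for illustration):

```sql
-- An undo logfile group and a tablespace must exist
-- before any disk-data table can be created
CREATE LOGFILE GROUP lg1
  ADD UNDOFILE 'undo1.log'
  INITIAL_SIZE 128M
  ENGINE NDBCLUSTER;

CREATE TABLESPACE ts1
  ADD DATAFILE 'data1.dat'
  USE LOGFILE GROUP lg1
  INITIAL_SIZE 512M
  ENGINE NDBCLUSTER;

-- Non-indexed columns of this table are stored on disk;
-- indexed columns (here, id) still live in DataMemory
CREATE TABLE daily_values (
  id INT NOT NULL PRIMARY KEY,
  payload VARCHAR(255)
)
TABLESPACE ts1 STORAGE DISK
ENGINE NDBCLUSTER;
```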
We were on a server with 8 GB of RAM, and the table had only 27 columns.
Note that no ndb_mgm shutdown operation was performed that could have compromised the table data. The node simply went down and stopped completely. Our provider could not explain why.
So the question is: would you recommend NDBCLUSTER as a stable solution for a large-scale web service database?
We are talking about a database that should contain several million records, thousands of tables and thousands of directories.
If not, which database would you recommend as best suited to building a national-scale web service?
Thanks in advance.