Use a server-based SQL database if you are mainly using this as a web service; use SQLite if you need it to work offline.
SQLite is usually much faster, since most (or all) of the data and indexes end up cached in memory. In my experience so far, if the data is split across several tables, or even several SQLite database files, it stays much more efficient than a server database even for millions of records (I have gone up to about 100 million), since there is no network round trip to compensate for. However, this only holds when the records are partitioned into separate tables and each query targets a specific table (rather than querying across all of them).
One example is the item database for a simple game. That may not sound like much, but a UID was issued even for item variants, so the generator quickly produced over a million sets of "stats" with variations. The key point, though, is that the records were split into a separate table for every 1000 record sets (since we mostly look records up by UID). Although we never properly benchmarked the splitting on its own, our queries ended up roughly 10 times faster than against the server database (mainly because of the network latency).
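As a rough illustration of that kind of UID-based table sharding, here is a minimal sketch in Python with the standard sqlite3 module. The file name, table names, bucket size, and columns are all made up for the example; the point is just that each lookup only ever touches the one small table its UID maps to.

```python
import sqlite3

BUCKET_SIZE = 1000  # records per shard table, as in the example above

def shard_table(uid):
    # Route a UID to its shard table; one table per 1000 UIDs.
    return f"items_{uid // BUCKET_SIZE}"

conn = sqlite3.connect("items.db")

def insert_item(uid, stats):
    table = shard_table(uid)
    # Create the shard lazily the first time a UID maps to it.
    conn.execute(
        f"CREATE TABLE IF NOT EXISTS {table} (uid INTEGER PRIMARY KEY, stats TEXT)"
    )
    conn.execute(f"INSERT OR REPLACE INTO {table} (uid, stats) VALUES (?, ?)", (uid, stats))

def get_item(uid):
    # The query is confined to the single shard table the UID lives in.
    cur = conn.execute(f"SELECT stats FROM {shard_table(uid)} WHERE uid = ?", (uid,))
    return cur.fetchone()

insert_item(123456, '{"atk": 12, "def": 3}')
conn.commit()
print(get_item(123456))
```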
Funnily enough, we eventually cut the database back down to a few thousand records, with [prefix]/[suffix] parameters defining the variants (like Diablo, except the modifiers were hidden from the player), which turned out to be much faster in the end.
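A minimal sketch of that prefix/suffix idea: store only base items plus modifier tables and compose the final stats on the fly, instead of materialising every variant as its own record. The item names and stat fields below are purely illustrative.

```python
BASE_ITEMS = {
    "short_sword": {"atk": 5},
    "buckler": {"def": 3},
}
PREFIXES = {
    "sharp": {"atk": 2},
    "sturdy": {"def": 1},
}
SUFFIXES = {
    "of_the_bear": {"hp": 10},
    "of_haste": {"speed": 1},
}

def compose(base, prefix=None, suffix=None):
    # Merge the base stats with the optional prefix/suffix modifiers.
    stats = dict(BASE_ITEMS[base])
    for mods in (PREFIXES.get(prefix, {}), SUFFIXES.get(suffix, {})):
        for key, delta in mods.items():
            stats[key] = stats.get(key, 0) + delta
    return stats

print(compose("short_sword", prefix="sharp", suffix="of_the_bear"))
# {'atk': 7, 'hp': 10}
```

A few thousand base records plus two small modifier tables cover the same space as the million generated variants, and the composition is a cheap in-memory merge.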
As a side note, my case was largely shaped by the fact that queries were issued one at a time, each waiting for the one ahead of it. If you can instead open multiple connections and send requests to the server in parallel, the performance penalty of a server database is more than compensated for, provided the requests do not branch on or depend on one another (e.g. "if this request returns X, then issue that other request").
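Here is a minimal sketch of what issuing independent queries in parallel looks like, assuming one connection per worker. The database file, table, and column names are made up, and sqlite3 is used only so the snippet runs stand-alone; in practice this pattern pays off with a server database driver (psycopg2, MySQL connector, etc.), where each round trip carries network latency.

```python
import sqlite3
from concurrent.futures import ThreadPoolExecutor

DB = "game.db"

# Set up a tiny table so the snippet runs stand-alone.
with sqlite3.connect(DB) as conn:
    conn.execute("CREATE TABLE IF NOT EXISTS items (uid INTEGER PRIMARY KEY, stats TEXT)")
    conn.executemany("INSERT OR REPLACE INTO items VALUES (?, ?)",
                     [(i, f"stats-{i}") for i in range(100, 105)])

def fetch_item(uid):
    # Each worker opens its own connection, so requests do not queue
    # behind one another the way a single shared connection would.
    conn = sqlite3.connect(DB)
    try:
        cur = conn.execute("SELECT stats FROM items WHERE uid = ?", (uid,))
        return cur.fetchone()
    finally:
        conn.close()

uids = [101, 102, 103, 104]
with ThreadPoolExecutor(max_workers=4) as pool:
    # Only valid when the queries are independent; if one query's input
    # depends on another's result, they must run sequentially anyway.
    results = list(pool.map(fetch_item, uids))

print(results)
```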