Before you mark this question as a duplicate, PLEASE HEAR ME OUT! I have already read the questions asked here about how to improve SQLite performance, to name a few: "Improve SQLite performance?" and "What are the performance characteristics of sqlite with very large database files?"
I'm struggling to get SQLite to work with a database file that is 5 gigabytes in size. Meanwhile, there are people who claim that SQLite works “perfectly” for them even when the database grows as large as 160 GB. I have not tried that myself, but from the questions asked, I suspect that all this benchmarking was perhaps done with only a single table in the database.
I am using a database with:

- 20 or so tables
- half of the tables contain more than 15 columns
- each of these 15-plus-column tables has 6 or 7 foreign-key columns
- several of these tables have already grown to 27 million records within a month
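To make that concrete, here is a trimmed-down, hypothetical version of what one of the big tables looks like (all table and column names below are invented for illustration):

```sql
-- Hypothetical stand-in for one of the big tables:
-- 15+ columns, 6 foreign-key columns, ~27 million rows after a month.
CREATE TABLE readings (
    id          INTEGER PRIMARY KEY,
    device_id   INTEGER NOT NULL REFERENCES devices(id),
    site_id     INTEGER NOT NULL REFERENCES sites(id),
    user_id     INTEGER NOT NULL REFERENCES users(id),
    status_id   INTEGER NOT NULL REFERENCES statuses(id),
    type_id     INTEGER NOT NULL REFERENCES types(id),
    batch_id    INTEGER NOT NULL REFERENCES batches(id),
    recorded_at TEXT    NOT NULL,
    value1      REAL,
    value2      REAL,
    notes       TEXT
    -- ...plus a handful more payload columns
);
```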
The development machine I use is a 3 GHz quad-core machine with 4 gigabytes of RAM, and yet it takes more than 3 minutes just to query the row count of these big tables.
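The query in question is nothing fancier than a bare count (using the hypothetical table name from above):

```sql
-- This is the query that takes 3+ minutes on the 27-million-row tables.
SELECT COUNT(*) FROM readings;
```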
I could not find a way to partition the data horizontally. The best shot I have is to split the data across several database files, one for each table. But in that case, as far as I know, foreign-key constraints cannot span database files, so I would have to make each table self-sufficient (without any foreign keys).
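To illustrate the one-file-per-table idea (file and table names again hypothetical): attached databases can still be joined, but as far as I know a FOREIGN KEY clause cannot reference a table in another attached file, which is why the constraints would be lost:

```sql
-- Cross-file queries still work through ATTACH...
ATTACH DATABASE 'devices.sqlite' AS dev;

SELECT d.name, COUNT(*)
FROM readings r
JOIN dev.devices d ON d.id = r.device_id
GROUP BY d.name;

-- ...but readings.device_id could no longer carry a
-- REFERENCES constraint pointing at dev.devices(id).
```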
So my questions are:

a) Am I using the wrong database for the job?
b) Where do you think I'm going wrong?
c) I haven't added indexes on the foreign keys yet, but if a bare row-count query already takes four minutes, how are foreign-key indexes going to help me?
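For reference, what I have in mind is simply one index per foreign-key column, along these lines (names hypothetical as above):

```sql
CREATE INDEX idx_readings_device_id ON readings(device_id);
CREATE INDEX idx_readings_site_id   ON readings(site_id);
-- ...and so on for the remaining foreign-key columns.
```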
EDIT: To provide more information, even though nobody has asked for it :) I am using SQLite version 3.7.9 with system.data.sqlite.dll version 1.0.77.0.
EDIT2: I THINK where I differ from the 160-gigabyte folks is that they may select an individual record or a small range of records, whereas I have to load all 27 million rows in my table, join them against other tables, group the records as requested by the user, and return the results. Any input on the best way to optimize the database for such queries would be welcome.
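The typical shape of these queries is roughly the following (again with the invented names from above): a full scan of the 27-million-row table, joined against the lookup tables, grouped by whatever the user picked:

```sql
SELECT d.name, s.name, COUNT(*) AS cnt, AVG(r.value1) AS avg_value
FROM readings r
JOIN devices d ON d.id = r.device_id
JOIN sites   s ON s.id = r.site_id
GROUP BY d.name, s.name;
```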
I cannot cache the results of a previous query, because that makes no sense in my case; the chances of a cache hit would be fairly low.