Huge MySQL Database - Dos and Don'ts?

I am interested in creating a huge database (100+ million records) in MySQL to store stock data at 1-minute intervals. The database will contain data for 5,000 shares traded over 10 years.

Two problems:

(1) I used to have a “slow insertion” problem: at first the insertion speed was good, but once the table had been populated with millions of records, inserts became slow (too slow!). I was using Windows at the time, and now I use Linux - if that matters.

(2) I know that indexes help make queries (data retrieval) faster. The question is: is there a way to speed up inserts? I know that you can turn off indexing while inserting, but then rebuilding the indexes afterwards (for 10 million entries!) also takes a lot of time. Any advice on this?
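
By "turn off indexing" I mean something like the following - a rough sketch only, with a hypothetical MyISAM table called quotes and a made-up CSV file:

    -- Defer maintenance of non-unique indexes (MyISAM; InnoDB ignores this)
    ALTER TABLE quotes DISABLE KEYS;

    -- Bulk-load from a file instead of issuing row-by-row INSERTs
    LOAD DATA INFILE '/tmp/quotes_2010.csv'
    INTO TABLE quotes
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
    (symbol_id, ts, open_price, high_price, low_price, close_price, volume);

    -- Rebuild the deferred indexes in one pass at the end
    ALTER TABLE quotes ENABLE KEYS;

As far as I understand, DISABLE KEYS only defers non-unique index maintenance on MyISAM; for InnoDB the usual advice is instead to load rows in primary-key order and commit in large batches.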

Any other dos / don'ts? Thanks in advance for any help.

+3
3 answers

[The body of this answer was lost in translation; only fragments survive. From what remains, it mentions using an auto-increment key, the InnoDB storage engine, and running on Linux.]
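
For illustration, a minimal sketch of what an InnoDB minute-bar table with an auto-increment key could look like (all table and column names here are assumptions, not taken from the answer):

    CREATE TABLE quotes (
        id          BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
        symbol_id   SMALLINT UNSIGNED NOT NULL,  -- references a small symbols lookup table
        ts          DATETIME NOT NULL,           -- start of the one-minute bar
        open_price  DECIMAL(10,4) NOT NULL,
        high_price  DECIMAL(10,4) NOT NULL,
        low_price   DECIMAL(10,4) NOT NULL,
        close_price DECIMAL(10,4) NOT NULL,
        volume      INT UNSIGNED NOT NULL,
        PRIMARY KEY (id),
        KEY idx_symbol_ts (symbol_id, ts)        -- supports per-symbol time-range queries
    ) ENGINE=InnoDB;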

+2

[The body of this answer was also lost in translation; it recommends taking a look at Lucene.]

0

Consider using an SSD (or an array of them) to store your data, especially if you cannot afford to build a box with many gigabytes of memory. Everything should be faster.

0

Source: https://habr.com/ru/post/1791564/

