30 million rows in MySQL

Evening

I am partway through a long process of importing data from a broken, 15-year-old, read-only data format into MySQL, and creating several smaller statistics tables from it.

The largest table I have built so far has (I think) 32 million rows. I did not expect it to get so big, and it is really straining MySQL.

The table will look like this:

surname | name  | year | rel  | bco  | bplace | rco  | rplace
--------|-------|------|------|------|--------|------|----------
Jones   | David | 1812 | head | Lond | Soho   | Shop | Shewsbury

So, small ints and varchars.
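
In DDL terms, something like the sketch below. The table name people is just a placeholder, the column widths are guesses, and my readings of the abbreviations are only in the comments:

    CREATE TABLE people (
        surname VARCHAR(40),
        name    VARCHAR(40),
        year    SMALLINT,       -- e.g. 1812
        rel     VARCHAR(10),    -- relationship to head, e.g. 'head'
        bco     VARCHAR(10),    -- birth county? e.g. 'Lond'
        bplace  VARCHAR(40),    -- birth place, e.g. 'Soho'
        rco     VARCHAR(10),    -- e.g. 'Shop'
        rplace  VARCHAR(40)     -- e.g. 'Shewsbury'
    ) ENGINE=InnoDB;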

Can anyone offer tips on how to do this quickly? Will indexes on any of the columns help, or will they just slow down queries?

Most of the data in each column will be duplicated many times. Some fields have no more than 100 different possible values.

The main columns I will use to query the table are: surname, name, rco, rplace.
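
For example, a typical lookup would be something along these lines (again using the placeholder table name people):

    SELECT surname, name, year, rco, rplace
    FROM people
    WHERE surname = 'Jones' AND name = 'David';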

+6
1 answer

An INDEX on a column speeds up searches against that column, so index the columns that will appear most often in your queries. As you mentioned, you will be querying by surname, name, rco and rplace, so I suggest indexing those.

Since the table contains 32 million records, indexing will take some time, but it's worth the wait.
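
A sketch of what that could look like, reusing the placeholder table name people from the question (the index names are arbitrary):

    ALTER TABLE people
        ADD INDEX idx_surname (surname),
        ADD INDEX idx_name    (name),
        ADD INDEX idx_rco     (rco),
        ADD INDEX idx_rplace  (rplace);

Combining the additions into a single ALTER TABLE should also let MySQL build all four indexes in one pass over the table instead of four separate passes.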

+5

Source: https://habr.com/ru/post/917064/
