Is it better to run 100 SQL queries with a WHERE clause with one condition or one query with a WHERE clause that has 100 conditions?

For example, if I wanted to check whether 100 user names already exist in a table, would it be better to run 100 separate queries or one more complex query that checks all 100 user names at once?

+4
9 answers

I have not done any tests, but I would prefer the one big query. With 100 queries you need to connect to the database, send the query string, and process the response/results 100 times. With a single query you send one (larger) request and get one response back. I don't know the exact cost of 100 round trips, but it is probably not insignificant. Either way, the database will have to do roughly the same amount of work for the one large query. And if all you are checking is whether 100 usernames exist, it is not a very complicated query; it's more like

select * from users where username in ('value1',...,'value100'); 
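The same pattern can be sketched with a parameterized query; this is a minimal sketch using Python's built-in sqlite3 with an in-memory database, where the `users` table and `username` column are assumptions for illustration:

```python
import sqlite3

# Assumed schema for illustration: a users table with a username column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT PRIMARY KEY)")
conn.executemany("INSERT INTO users VALUES (?)",
                 [(f"user{i}",) for i in range(200)])

# Names to check; only some of them exist in the table.
candidates = [f"user{i}" for i in range(150, 250)]

# One query with a parameterized IN list instead of 100 round trips.
placeholders = ",".join("?" * len(candidates))
rows = conn.execute(
    f"SELECT username FROM users WHERE username IN ({placeholders})",
    candidates,
).fetchall()
existing = {r[0] for r in rows}
print(len(existing))  # 50 of the 100 candidates exist
```

Using placeholders rather than string concatenation keeps the query safe from SQL injection while still sending only one statement.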
+11

The fewer the queries, the better.

+4

Less is more. Make the fewest queries needed to achieve your result in this case.

+3

SQL is set based. It is better to run one query with a WHERE clause that has 100 conditions. Remember that each query is a separate implicit transaction, so running a hundred simple queries carries extra overhead (in terms of processing and bandwidth) compared to one complex query. That said, from an implementation point of view, running 100 queries can be simpler, performance aside.

+3

IMHO this is a matter of personal preference, provided you are not doing something huge and crazy that spans multiple tables and a huge amount of data; the speed gain will not be large either way.

If you do decide to do it all in one query, make sure your keys and indexes are set up correctly; otherwise, for each query the database must scan the entire table to find what it needs.

You will get some HUGE speed gains if you tune your indexes well.
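The effect of an index on repeated lookups can be demonstrated with a small sketch; this one uses Python's sqlite3 with an assumed table layout, timing 100 lookups before and after creating an index on the filtered column:

```python
import sqlite3
import time

# Assumed schema for illustration: id/username, deliberately without an
# index on username at first, so each lookup is a full table scan.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, username TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(i, f"user{i}") for i in range(100_000)])

def lookup_all(names):
    t0 = time.perf_counter()
    for n in names:
        conn.execute("SELECT id FROM users WHERE username = ?",
                     (n,)).fetchone()
    return time.perf_counter() - t0

names = [f"user{i}" for i in range(0, 100_000, 1000)]  # 100 lookups

unindexed = lookup_all(names)  # full table scan for each query
conn.execute("CREATE INDEX idx_username ON users (username)")
indexed = lookup_all(names)    # B-tree lookup for each query

print(indexed < unindexed)  # the indexed run should be much faster
```

The exact numbers depend on the machine, but the gap between scanning 100,000 rows per lookup and walking a B-tree is typically orders of magnitude.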

+3

It might be easier to create a table variable @table, add the 100 names to @table, and then filter the query with an IN clause against it.
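That temp-table idea can be sketched as follows; this uses a SQLite TEMP table in place of a SQL Server table variable, and the `users`/`candidates` names are assumptions for illustration:

```python
import sqlite3

# Assumed schema: a users table with 100 existing names.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT PRIMARY KEY)")
conn.executemany("INSERT INTO users VALUES (?)",
                 [(f"user{i}",) for i in range(100)])

# Load the candidate names into a temporary table...
conn.execute("CREATE TEMP TABLE candidates (username TEXT)")
conn.executemany("INSERT INTO candidates VALUES (?)",
                 [(f"user{i}",) for i in range(50, 150)])

# ...then filter with a single IN (subquery) instead of 100 queries.
rows = conn.execute("""
    SELECT u.username
    FROM users u
    WHERE u.username IN (SELECT username FROM candidates)
""").fetchall()
print(len(rows))  # 50 of the candidates exist in users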

+2

I bet that one big query is faster...

...but instead of trusting my gut, there is an easy way to prove it: build a simple performance prototype and see.
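A minimal performance prototype along those lines might look like this; it uses Python's sqlite3 with an assumed schema, timing 100 single-row queries against one IN query over the same names:

```python
import sqlite3
import time

# Assumed schema for the prototype: usernames with a primary-key index.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT PRIMARY KEY)")
conn.executemany("INSERT INTO users VALUES (?)",
                 [(f"user{i}",) for i in range(10_000)])
names = [f"user{i}" for i in range(100)]

# Variant 1: 100 separate queries, one name each.
t0 = time.perf_counter()
for n in names:
    conn.execute("SELECT 1 FROM users WHERE username = ?", (n,)).fetchone()
many = time.perf_counter() - t0

# Variant 2: one query with a 100-item IN list.
t0 = time.perf_counter()
placeholders = ",".join("?" * len(names))
one_rows = conn.execute(
    f"SELECT username FROM users WHERE username IN ({placeholders})",
    names,
).fetchall()
one = time.perf_counter() - t0

print(f"100 queries: {many:.6f}s, one query: {one:.6f}s")
```

Note that in-process SQLite has no network round trips, so a client/server database would typically show an even larger gap in favor of the single query.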

+2

The short answer is that it depends. It depends on the data in the tables, the kinds of lookups you are doing (primary key or several keys, etc.), the size of the result sets, and so on. In some cases one query will be much faster; in others, many queries will be.

In general, the fewer queries performed, the better. That said, 100 highly efficient queries (for example, primary-key lookups) are much better than one very inefficient query.

As for your specific problem: if the 100 queries are identical, combine them into one. If they are related (for example, fetching the authors of 100 posts), combine them into one. If they have nothing in common with each other, don't bother combining them (as that would hurt readability and maintainability).
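The "related queries" case can be sketched with a single join over a batch of ids; this is an illustrative sketch using sqlite3, where the `posts`/`authors` tables and their columns are assumptions:

```python
import sqlite3

# Assumed schema: posts reference authors via author_id.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER);
""")
conn.executemany("INSERT INTO authors VALUES (?, ?)",
                 [(i, f"author{i}") for i in range(10)])
conn.executemany("INSERT INTO posts VALUES (?, ?)",
                 [(i, i % 10) for i in range(100)])

# Instead of one author lookup per post, fetch all authors for the
# batch of 100 posts with a single join.
post_ids = list(range(100))
placeholders = ",".join("?" * len(post_ids))
rows = conn.execute(f"""
    SELECT p.id, a.name
    FROM posts p
    JOIN authors a ON a.id = p.author_id
    WHERE p.id IN ({placeholders})
""", post_ids).fetchall()
print(len(rows))  # 100 post/author pairs from one query
```

This is the same idea behind avoiding the classic "N+1 queries" pattern in application code.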

+2

It's pretty fast, but not very pretty :P

 drop table if exists users;

 create table users
 (
   user_id int unsigned not null auto_increment primary key,
   username varbinary(32) unique not null
 )
 engine=innodb;

 drop procedure if exists username_check;

 delimiter #

 create procedure username_check
 (
   in p_username_csv varchar(65535)
 )
 proc_main:begin

   declare v_token varchar(255);
   declare v_done tinyint unsigned default 0;
   declare v_idx int unsigned default 1;

   if p_username_csv is null or length(p_username_csv) <= 0 then
     leave proc_main;
   end if;

   -- split the string into tokens and put into an in-memory table...

   create temporary table tmp(
     username varbinary(32),
     username_exists tinyint unsigned default 0,
     key (username)
   ) engine = memory;

   while not v_done do
     set v_token = trim(substring(p_username_csv, v_idx,
       if(locate(',', p_username_csv, v_idx) > 0,
          locate(',', p_username_csv, v_idx) - v_idx,
          length(p_username_csv))));

     if length(v_token) > 0 then
       set v_idx = v_idx + length(v_token) + 1;
       insert into tmp (username) values(v_token);
     else
       set v_done = 1;
     end if;
   end while;

   -- flag any that exist

   update tmp t
   inner join users u on t.username = u.username
   set t.username_exists = 1;

   select * from tmp order by username;

   drop temporary table if exists tmp;

 end proc_main #

 delimiter ;

 select count(*) from users;
 +----------+
 | count(*) |
 +----------+
 |  1000000 |
 +----------+
 1 row in set (0.22 sec)

 call username_check('user1,user2,user3... user250');
 250 rows in set (0.01 sec)
+2

Source: https://habr.com/ru/post/1335501/
