Need help tuning an SQL query

I need help improving this SQL statement. Its runtime is about 125 ms.
While my program runs, this SQL (more precisely: identically structured SQL statements against different tables)
is executed 300,000 times.

The tables average about 10,000,000 rows, and new rows (updates / inserts) are added with a timestamp every day. The data that interests this particular export program is from the last 1-3 days; perhaps that is useful for creating an index. I need the currently valid row for the given id, plus its predecessor row, so I can pick up updates (if they exist).

We use an Oracle 11g database and the .NET Framework 3.5.

The SQL statement to tune:

 select ID_SOMETHING,    -- Number(12)
        ID_CONTRIBUTOR,  -- Char(4 Byte)
        DATE_VALID_FROM, -- DATE
        DATE_VALID_TO    -- DATE
   from TBL_SOMETHING XID
  where ID_SOMETHING = :ID_SOMETHING
    and ID_CONTRIBUTOR = :ID_CONTRIBUTOR
    and DATE_VALID_FROM <= :EXPORT_DATE
    and DATE_VALID_TO >= :EXPORT_DATE
  order by DATE_VALID_FROM asc;

I have uploaded the current explain plan for this query.

I am not a database expert, so I don't know which index type is best suited to this requirement. I have seen that there are many different types of indexes that could be applied. Perhaps Oracle optimizer hints would also help.

Does anyone have a good idea for tuning this SQL, or can anyone point me in the right direction?

+4
3 answers

The explain plan looks as good as it can get, but that alone does not necessarily mean much. The index Quassnoi proposes is exactly what I would suggest, too.

In any case, with 300,000 similar queries in your program, I have to ask: is that necessary? Perhaps you can achieve the same goal with fewer queries, each of which does a little more work.
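One set-oriented variant, as a sketch only: assuming the key pairs for an export run could be staged in a (hypothetical) table TMP_EXPORT_KEYS, all matching rows could be fetched in a single round trip instead of 300,000:

 select ID_SOMETHING, ID_CONTRIBUTOR, DATE_VALID_FROM, DATE_VALID_TO
   from TBL_SOMETHING
  where (ID_SOMETHING, ID_CONTRIBUTOR) in
        (select ID_SOMETHING, ID_CONTRIBUTOR from TMP_EXPORT_KEYS)
    and DATE_VALID_FROM <= :EXPORT_DATE
    and DATE_VALID_TO >= :EXPORT_DATE
  order by ID_SOMETHING, ID_CONTRIBUTOR, DATE_VALID_FROM;

In Oracle, TMP_EXPORT_KEYS could be a global temporary table filled once per export run; whether this pays off depends on how the program consumes the results.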

If you cannot avoid the many queries, you should at least use prepared statements. If you use LINQ, the statements are compiled for you. That way you avoid the parsing overhead, which probably makes up a significant portion of the total cost, especially for queries as simple as this one.

+5

Create a composite index:

 CREATE INDEX ix_something_s_c_d ON tbl_something (id_something, id_contributor, date_valid_from) 

Unfortunately, you are searching for a constant between two columns rather than for a column between two constants, so the last index field is not very selective. It may still help with the ordering, though.

+4

You say:

The data that is interesting for this particular export program is in the last 1-3 days.

Does this mean that you are interested in the rows where DATE_VALID_FROM falls in the last three days? If so, you can get more mileage out of an index that looks like this:

 create index something_idx
     on tbl_something (date_valid_from, id_something, id_contributor, date_valid_to)
 /

Including date_valid_to means that reading the index can satisfy the query without touching the table at all. Leading with date_valid_from puts all the rows that may interest you in the same part of the index space.
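The poster mentioned optimizer hints; if the optimizer does not pick the covering index on its own, an INDEX hint can force it. This is just a sketch, assuming the index above was created under the name something_idx and using the table alias XID from the original statement:

 select /*+ INDEX(XID something_idx) */
        ID_SOMETHING, ID_CONTRIBUTOR, DATE_VALID_FROM, DATE_VALID_TO
   from TBL_SOMETHING XID
  where ID_SOMETHING = :ID_SOMETHING
    and ID_CONTRIBUTOR = :ID_CONTRIBUTOR
    and DATE_VALID_FROM <= :EXPORT_DATE
    and DATE_VALID_TO >= :EXPORT_DATE
  order by DATE_VALID_FROM asc;

Normally fresh statistics and a well-chosen index make the hint unnecessary, so treat it as a diagnostic tool rather than a fix.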

The above assumes your 300,000 calls are for different id_something and id_contributor values. If that assumption is wrong (say they are all for the same id_contributor , or you make 50,000 calls in a row for the same id_contributor ), then it would make more sense to lead with (id_contributor, date_valid_from, ...) . As is usually the case with query tuning, the specifics of the business logic are critical to a happy result. Oh, and benchmarking the different ideas is important.
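For illustration, that alternative column order (the index name is made up) would look like:

 create index something_contrib_idx
     on tbl_something (id_contributor, date_valid_from, id_something, date_valid_to)
 /

The leading id_contributor then clusters each contributor's recent rows together, which helps exactly when many consecutive calls share the same contributor.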

I agree with AmmoQ that executing the same statement 300,000 times in a single process sounds like an RBAR (row-by-agonizing-row) implementation that might be better served by a set-oriented approach.

+1

Source: https://habr.com/ru/post/1305082/

