In my experience, the real problem is usually not the performance of aggregate queries themselves, which I have found to be good in all the major databases I have tried, but the way the queries are written.
I have lost count of the number of times I have seen huge reporting queries with an enormous number of joins and nested subquery aggregates all over the place.
Off the top of my head, the typical steps to make these queries faster are:
Use window functions where available and applicable (i.e. the OVER () clause). There is absolutely no point in reading and aggregating the same data several times.
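A minimal sketch, assuming a hypothetical orders(id, customer_id, amount) table (the table and column names are illustrative only):

    -- Instead of joining back to a GROUP BY subquery to attach the
    -- per-customer total to each row:
    --   SELECT o.id, o.amount, t.customer_total
    --   FROM   orders o
    --   JOIN  (SELECT customer_id, SUM(amount) AS customer_total
    --          FROM orders GROUP BY customer_id) t
    --          ON t.customer_id = o.customer_id;
    -- compute the total in a single pass with a window function:
    SELECT o.id,
           o.customer_id,
           o.amount,
           SUM(o.amount) OVER (PARTITION BY o.customer_id) AS customer_total
    FROM   orders o;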
Use common table expressions (WITH queries) where they are available and applicable (i.e. for sets that you know will be small enough).
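A sketch of the same idea against the hypothetical orders table: the intermediate set is named once and reused instead of repeating the subquery in several places.

    -- Keep the small intermediate set in one named place:
    WITH recent_customers AS (
        SELECT customer_id
        FROM   orders
        WHERE  order_date >= DATE '2023-01-01'
        GROUP BY customer_id
    )
    SELECT c.customer_id,
           SUM(o.amount) AS total_amount
    FROM   recent_customers c
    JOIN   orders o ON o.customer_id = c.customer_id
    GROUP BY c.customer_id;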
Use temporary tables for large subtotals, create indexes on them, and analyze them.
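A sketch in PostgreSQL syntax (other databases have their own equivalents for temporary tables and statistics collection); customers and customer_totals are illustrative names:

    -- Materialize a large subtotal once, index it, and refresh its statistics:
    CREATE TEMPORARY TABLE customer_totals AS
    SELECT customer_id, SUM(amount) AS total_amount
    FROM   orders
    GROUP BY customer_id;

    CREATE INDEX customer_totals_idx ON customer_totals (customer_id);
    ANALYZE customer_totals;

    -- Later report queries can now join against the subtotal cheaply:
    SELECT c.name, t.total_amount
    FROM   customers c
    JOIN   customer_totals t ON t.customer_id = c.customer_id;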
Work on small result sets by filtering rows as early as possible: a query of the form select id, aggregate from (aggregate on id) where id in (?) group by id can usually be made much faster by pushing the filter into the inner query, i.e. select id, aggregate from (aggregate on id where id in (?)) group by id.
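A concrete version of that rewrite against the hypothetical orders table (many optimizers can push such a filter down on their own, but writing it in the inner query makes the intent explicit):

    -- Slower: aggregate every customer, then throw most of the result away.
    SELECT customer_id, SUM(amount) AS total_amount
    FROM   orders
    GROUP BY customer_id
    HAVING customer_id IN (1, 2, 3);

    -- Faster: filter first, then aggregate only the rows you actually need.
    SELECT customer_id, SUM(amount) AS total_amount
    FROM   orders
    WHERE  customer_id IN (1, 2, 3)
    GROUP BY customer_id;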
Use UNION ALL / EXCEPT ALL / INTERSECT ALL rather than plain UNION / EXCEPT / INTERSECT where applicable. This avoids a pointless duplicate-elimination pass (usually a sort) over the result sets.
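For example, assuming hypothetical orders_2022 and orders_2023 tables whose rows cannot overlap (or where duplicates do not matter for the report):

    -- UNION ALL simply concatenates the two branches and skips the
    -- duplicate-elimination step that plain UNION would perform:
    SELECT customer_id, amount FROM orders_2022
    UNION ALL
    SELECT customer_id, amount FROM orders_2023;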
As a bonus, the first three steps usually make reporting queries more readable and therefore easier to maintain.