Why does the following join significantly increase the query time?

I have a star schema here. I am querying the fact table and would like to join one very small dimension table. I cannot explain the following:

EXPLAIN ANALYZE SELECT 
  COUNT(impression_id), imp.os_id 
  FROM bi.impressions imp 
  GROUP BY imp.os_id;

                                                                  QUERY PLAN
    --------------------------------------------------------------------------------------------------------------------------------------
     HashAggregate  (cost=868719.08..868719.24 rows=16 width=10) (actual time=12559.462..12559.466 rows=26 loops=1)
       ->  Seq Scan on impressions imp  (cost=0.00..690306.72 rows=35682472 width=10) (actual time=0.009..3030.093 rows=35682474 loops=1)
     Total runtime: 12559.523 ms
    (3 rows)

This takes ~12,600 ms, but of course the dimension data is missing, so I cannot resolve imp.os_id to anything meaningful. So I add the join:

EXPLAIN ANALYZE SELECT 
  COUNT(impression_id), imp.os_id, os.os_desc 
  FROM  bi.impressions imp, bi.os_desc os 
  WHERE imp.os_id=os.os_id 
  GROUP BY imp.os_id, os.os_desc;
                                                                     QUERY PLAN
    --------------------------------------------------------------------------------------------------------------------------------------------
     HashAggregate  (cost=1448560.83..1448564.99 rows=416 width=22) (actual time=25565.124..25565.127 rows=26 loops=1)
       ->  Hash Join  (cost=1.58..1180942.29 rows=35682472 width=22) (actual time=0.046..15157.684 rows=35682474 loops=1)
             Hash Cond: (imp.os_id = os.os_id)
             ->  Seq Scan on impressions imp  (cost=0.00..690306.72 rows=35682472 width=10) (actual time=0.007..3705.647 rows=35682474 loops=1)
             ->  Hash  (cost=1.26..1.26 rows=26 width=14) (actual time=0.028..0.028 rows=26 loops=1)
                   Buckets: 1024  Batches: 1  Memory Usage: 2kB
                   ->  Seq Scan on os_desc os  (cost=0.00..1.26 rows=26 width=14) (actual time=0.003..0.010 rows=26 loops=1)
     Total runtime: 25565.199 ms
    (8 rows)

This effectively doubles the execution time of my query. My question is: what am I missing here? I would not expect a lookup into such a small table to cause a huge difference in query execution time.

+2
3 answers

First, rewrite the query with explicit ANSI JOIN syntax:

SELECT COUNT(impression_id), imp.os_id, os.os_desc 
FROM   bi.impressions imp
JOIN   bi.os_desc os ON os.os_id = imp.os_id
GROUP  BY imp.os_id, os.os_desc;

This is only a cosmetic change here, but it is easier to read; os_desc still has to appear in the GROUP BY.
Next, since you join on os_id, rows with a NULL os_id are excluded anyway, so it helps if bi.impressions.os_id is defined NOT NULL. Then try:

SELECT COUNT(*) AS ct, imp.os_id, os.os_desc 
FROM   bi.impressions imp
JOIN   bi.os_desc     os USING (os_id)
GROUP  BY imp.os_id, os.os_desc;

count(*) is slightly faster than count(column), since it does not have to check each value for NULL, and it is equivalent here provided the column is defined NOT NULL. A minor point, though.
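The NULL-handling difference between count(*) and count(column) can be seen with a tiny self-contained example. SQLite is used here only as a convenient stand-in; the counting semantics are the same in Postgres, and the table is a made-up toy version of bi.impressions:

```python
import sqlite3

# In-memory toy table with one nullable column, to compare count(*) vs count(col).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE impressions (impression_id INTEGER)")
con.executemany("INSERT INTO impressions VALUES (?)", [(1,), (2,), (None,)])

# count(*) counts all rows; count(impression_id) skips rows where it IS NULL.
total, non_null = con.execute(
    "SELECT COUNT(*), COUNT(impression_id) FROM impressions"
).fetchone()
print(total, non_null)  # 3 2
```

So the two are interchangeable only when the counted column cannot be NULL.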

More importantly, aggregate first and join later:

SELECT os_id, os.os_desc, sub.ct
FROM  (
   SELECT os_id, COUNT(*) AS ct
   FROM   bi.impressions
   GROUP  BY 1
   ) sub
JOIN   bi.os_desc os USING (os_id)

This way the join only has to deal with the 26 aggregated rows instead of 35 million, which should be considerably faster.
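The "aggregate first, join later" rewrite returns the same result as join-then-aggregate whenever every os_id in the fact table has a match in the dimension table. A minimal sketch checking the equivalence, again using SQLite with an invented miniature version of the two tables:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE impressions (impression_id INTEGER, os_id INTEGER);
CREATE TABLE os_desc (os_id INTEGER PRIMARY KEY, os_desc TEXT);
INSERT INTO os_desc VALUES (1, 'Linux'), (2, 'Windows');
INSERT INTO impressions VALUES (10, 1), (11, 1), (12, 2);
""")

# Join first, then aggregate (the plan from the question).
join_first = con.execute("""
    SELECT imp.os_id, os.os_desc, COUNT(*) AS ct
    FROM   impressions imp
    JOIN   os_desc os ON os.os_id = imp.os_id
    GROUP  BY imp.os_id, os.os_desc
    ORDER  BY imp.os_id
""").fetchall()

# Aggregate first, join later: the join only sees one row per os_id.
agg_first = con.execute("""
    SELECT sub.os_id, os.os_desc, sub.ct
    FROM  (SELECT os_id, COUNT(*) AS ct FROM impressions GROUP BY os_id) sub
    JOIN   os_desc os ON os.os_id = sub.os_id
    ORDER  BY sub.os_id
""").fetchall()

print(join_first == agg_first)  # True
```

On the real tables, the inner join would then process 26 pre-aggregated rows instead of 35 million raw ones.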

+4
HashAggregate  (cost=868719.08..868719.24 rows=16 width=10)
HashAggregate  (cost=1448560.83..1448564.99 rows=416 width=22)

The width of the hashed rows grows from 10 to 22 bytes between the two plans. Could that be the reason for the difference?

+1

Try a CTE. Sometimes, for reasons that are hard to pin down, this makes Postgres choose a different plan.

WITH oses AS (SELECT os_id, os_desc FROM bi.os_desc)
SELECT COUNT(impression_id) AS imp_count,
       os_desc
FROM   bi.impressions imp, oses os
WHERE  os.os_id = imp.os_id
GROUP  BY os_desc
ORDER  BY imp_count;
+1
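For what it's worth, the CTE form is logically the same query as the plain join. A quick check on a toy schema (SQLite stand-in, invented sample data; note that before PostgreSQL 12 a CTE is materialized as an optimization fence, which is what can change the plan):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE impressions (impression_id INTEGER, os_id INTEGER);
CREATE TABLE os_desc (os_id INTEGER PRIMARY KEY, os_desc TEXT);
INSERT INTO os_desc VALUES (1, 'Linux'), (2, 'Windows');
INSERT INTO impressions VALUES (10, 1), (11, 1), (12, 2);
""")

# The CTE form of the query from this answer, run on the toy tables above.
rows = con.execute("""
    WITH oses AS (SELECT os_id, os_desc FROM os_desc)
    SELECT COUNT(impression_id) AS imp_count, os_desc
    FROM   impressions imp, oses os
    WHERE  os.os_id = imp.os_id
    GROUP  BY os_desc
    ORDER  BY imp_count
""").fetchall()

print(rows)  # [(1, 'Windows'), (2, 'Linux')]
```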

Source: https://habr.com/ru/post/1608948/

