How to get the exact size of a database in PostgreSQL?

We are running PostgreSQL 9.1. One of our tables previously held more than 1 billion rows, which have since been deleted. However, the \l+ command still seems to report the database size inaccurately: it shows 568 GB, but the actual size is much smaller.

The proof that 568 GB is wrong is that the sizes of the individual tables do not add up to that number: as you can see below, the top 20 relations total 4292 MB, and each of the remaining 985 relations is well under 10 MB. Altogether they amount to no more than about 6 GB.

Any idea why PostgreSQL is bloated this much? If that is indeed the problem, how can I fix it? I am not very familiar with VACUUM; is that what I need to run? If so, how?

I really appreciate it.

    pmlex=# \l+
                                                               List of databases
          Name       |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   |  Size   | Tablespace |                Description
    -----------------+----------+----------+-------------+-------------+-----------------------+---------+------------+--------------------------------------------
     pmlex           | pmlex    | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                       | 568 GB  | pg_default |
     pmlex_analytics | pmlex    | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                       | 433 MB  | pg_default |
     postgres        | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                       | 5945 kB | pg_default | default administrative connection database
     template0       | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +| 5841 kB | pg_default | unmodifiable empty database
                     |          |          |             |             | postgres=CTc/postgres |         |            |
     template1       | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +| 5841 kB | pg_default | default template for new databases
                     |          |          |             |             | postgres=CTc/postgres |         |            |
    (5 rows)

    pmlex=# SELECT nspname || '.' || relname AS "relation",
    pmlex-#     pg_size_pretty(pg_relation_size(C.oid)) AS "size"
    pmlex-#   FROM pg_class C
    pmlex-#   LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
    pmlex-#   WHERE nspname NOT IN ('pg_catalog', 'information_schema')
    pmlex-#   ORDER BY pg_relation_size(C.oid) DESC;
                  relation               |  size
    -------------------------------------+---------
     public.page_page                    | 1289 MB
     public.page_pageimagehistory        | 570 MB
     pg_toast.pg_toast_158103            | 273 MB
     public.celery_taskmeta_task_id_key  | 233 MB
     public.page_page_unique_hash_uniq   | 140 MB
     public.page_page_ad_text_id         | 136 MB
     public.page_page_kn_result_id       | 125 MB
     public.page_page_seo_term_id        | 124 MB
     public.page_page_kn_search_id       | 124 MB
     public.page_page_direct_network_tag | 124 MB
     public.page_page_traffic_source_id  | 123 MB
     public.page_page_active             | 123 MB
     public.page_page_is_referrer        | 123 MB
     public.page_page_category_id        | 123 MB
     public.page_page_host_id            | 123 MB
     public.page_page_serp_id            | 121 MB
     public.page_page_domain_id          | 120 MB
     public.celery_taskmeta_pkey         | 106 MB
     public.page_pagerenderhistory       | 102 MB
     public.page_page_campaign_id        | 89 MB
     ...
     ...
     ...
     pg_toast.pg_toast_4354379           | 0 bytes
    (1005 rows)
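To cross-check the \l+ figure, I can also sum the per-relation sizes directly. A rough sketch: pg_database_size() is the value \l+ reports, and restricting the sum to relkind = 'r' avoids counting indexes and TOAST data twice, since pg_total_relation_size() already includes them.

    -- What \l+ reports for the database:
    SELECT pg_size_pretty(pg_database_size('pmlex'));

    -- Sum of per-table sizes, each including the table's indexes and TOAST data:
    SELECT pg_size_pretty(SUM(pg_total_relation_size(C.oid))) AS total
      FROM pg_class C
      LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE nspname NOT IN ('pg_catalog', 'information_schema')
       AND C.relkind = 'r';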
2 answers

Your options include the following (a sketch of the relevant commands follows the list):

1). Making sure autovacuum is enabled and able to keep up with the table.

2). Re-creating the table, as I mentioned in the previous comment (create-table-as-select + truncate + reload source table).

3). Running CLUSTER on the table, if you can afford to be locked out of it while it runs (it takes an exclusive lock).

4). VACUUM FULL, although CLUSTER is more efficient and recommended.

5). Running a plain VACUUM ANALYZE a few times and leaving the table as it is, so that the freed space is eventually reused as new data comes in.

6). Dumping and reloading the table via pg_dump.

7). pg_repack (although I have not used it in production).
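A minimal sketch of the commands behind options 1, 3, 4 and 5, assuming the bloated table is public.page_page from the question; the index name page_page_pkey is a placeholder, and both CLUSTER and VACUUM FULL hold an ACCESS EXCLUSIVE lock for their full duration:

    -- Option 1: check that autovacuum is turned on.
    SHOW autovacuum;

    -- Option 3: rewrite the table in index order, reclaiming dead space.
    -- "page_page_pkey" is a placeholder index name.
    CLUSTER page_page USING page_page_pkey;

    -- Option 4: rewrite the table without reordering it.
    VACUUM FULL page_page;

    -- Option 5: plain vacuum; marks dead rows reusable but does not shrink the file.
    VACUUM ANALYZE page_page;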


It will look different if you use pg_total_relation_size instead of pg_relation_size.

pg_relation_size does not report the total size of the table; see:

https://www.postgresql.org/docs/9.5/static/functions-admin.html#FUNCTIONS-ADMIN-DBSIZE
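For example, the query from the question with only the size function swapped (a sketch; each row then includes the table's indexes and TOAST data as well):

    SELECT nspname || '.' || relname AS "relation",
           pg_size_pretty(pg_total_relation_size(C.oid)) AS "total_size"
      FROM pg_class C
      LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE nspname NOT IN ('pg_catalog', 'information_schema')
     ORDER BY pg_total_relation_size(C.oid) DESC;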


Source: https://habr.com/ru/post/1495645/

