You need to look into the details of how PostgreSQL stores data physically, specifically the Page Layout.
As you know, the default PostgreSQL block size is 8 kB (8192 bytes). You should also know that a row in PostgreSQL cannot span a page boundary. That already gives you a size limit of 8192 bytes. But...
Looking at the page layout above, there is also overhead for the page header (PageHeaderData), which is 24 bytes in current versions of PostgreSQL. So we have 8168 bytes left. But...
There is also ItemIdData, an array of item pointers. Suppose there is only one record on this page; its single pointer takes 4 bytes. So there are 8164 bytes left. But...
Each record also has a record header (HeapTupleHeaderData), which takes 23 bytes. So we have 8141 bytes left. But...
There is also a NULL bitmap immediately after the record header, but suppose we defined all of our columns as NOT NULL, so no bitmap is needed. Still 8141 bytes. But...
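For the curious, if nullable columns were involved, the bitmap would cost one bit per column, rounded up to whole bytes. A quick sketch of that arithmetic (N is just an illustrative column count, not taken from the answer):

```shell
# The NULL bitmap holds one bit per attribute, rounded up to whole bytes.
N=4069                        # illustrative column count
BITMAP=$(( (N + 7) / 8 ))     # ceiling division: bits -> bytes
echo "NULL bitmap for $N columns: $BITMAP bytes"
```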
There is also MAXALIGN to consider. Take a look at this wonderful answer by Erwin. We are talking about an offset of 24 + 4 + 23 = 51 bytes here. Now everything depends on the value of this parameter on your system.
If it is a 32-bit system (MAXALIGN = 4), the offset is aligned up to 52, which means we lose another byte.
If it is a 64-bit system (MAXALIGN = 8), the offset is aligned up to 56, which means we lose another 5 bytes. My system is 64-bit, so I assume we have 8136 bytes left.
So this is the space we have to work with. Now everything depends on the column types chosen and on how they sit next to each other (remember MAXALIGN). Take int2 for all columns: simple arithmetic (8136 / 2) shows that we can fit 4068 columns of this type, all NOT NULL and all the same type (the script below tries 4069; that off-by-one will not matter in a moment).
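The whole chain of subtractions above can be sketched as shell arithmetic (a simplified single-tuple model, assuming the 64-bit case):

```shell
# Byte budget for a single tuple on one 8 kB page (the simplified model
# walked through above), assuming a 64-bit build where MAXALIGN is 8.
BLOCK=8192     # default page size
PAGE_HDR=24    # page header
ITEM_PTR=4     # one item pointer in ItemIdData
TUP_HDR=23     # heap tuple header
MAXALIGN=8     # 64-bit build

OFFSET=$((PAGE_HDR + ITEM_PTR + TUP_HDR))                     # 51
ALIGNED=$(( (OFFSET + MAXALIGN - 1) / MAXALIGN * MAXALIGN ))  # rounded up to 56
LEFT=$((BLOCK - ALIGNED))
echo "bytes left: $LEFT, int2 columns that fit: $((LEFT / 2))"
```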
Simple script:
echo "CREATE TABLE tab4069 (" > tab4069.sql
for num in $(seq -f "%04g" 1 4069); do
  echo "  col$num int2 not null," >> tab4069.sql
done
echo "  PRIMARY KEY (col0001) );" >> tab4069.sql
However, if you try to create this table, you will get an error:
ERROR: tables can have no more than 1600 columns
A quick search turns up a similar question, and a look at the PostgreSQL sources gives us the answer (lines 23 to 47):
#define MaxTupleAttributeNumber 1664
#define MaxHeapAttributeNumber  1600
There are also many variable-length types, and they carry a fixed overhead of 1 or 4 bytes plus however many bytes the actual value needs. This means you can never know in advance how much space a row will take until you have the actual values. Such values can, of course, be stored out of line via TOAST, but usually only when they are large (roughly 2 kB total length).
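As a rough sketch of that variable-length overhead: varlena values use a 1-byte header when the payload fits in 126 bytes and a 4-byte header otherwise (this ignores compression and out-of-line TOAST storage, and the payload length here is just an example):

```shell
# Approximate on-disk size of a variable-length value: payload + varlena header.
payload=11                           # e.g. the 11-byte string 'hello world'
if [ "$payload" -le 126 ]; then
  hdr=1                              # short varlena header
else
  hdr=4                              # full 4-byte varlena header
fi
echo "approx stored size: $((payload + hdr)) bytes"
```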
Consult the official documentation on data types for the space used by fixed-length types. You can also check the output of the pg_column_size() function for a value of any type, which is especially useful for complex ones like arrays, hstore, or jsonb.
You will need to dig into the documentation in more detail if you want a more complete picture of this topic.