If you need to allow NULL values, use a UNIQUE constraint instead of a PRIMARY KEY (and add a surrogate PK column; I suggest serial). This allows the columns to be NULL:
CREATE TABLE distributor (
  distributor_id serial PRIMARY KEY
, m_id integer
, x_id integer
, UNIQUE (m_id, x_id)
);
Note, however (per the documentation):
For a unique constraint, null values are not considered equal.
In your case, you can enter (1, NULL) for (m_id, x_id) any number of times without violating the constraint. Postgres never considers two NULL values equal, as defined in the SQL standard.
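A quick way to see this behavior without a Postgres server: the sketch below uses Python's built-in sqlite3 module, since SQLite follows the same SQL-standard rule that two NULLs are never equal under a UNIQUE constraint (the table mirrors the one above, with INTEGER PRIMARY KEY standing in for serial).

```python
import sqlite3

# SQLite, like Postgres, treats NULLs as distinct in UNIQUE constraints.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE distributor (
        distributor_id INTEGER PRIMARY KEY,  -- stand-in for serial
        m_id INTEGER,
        x_id INTEGER,
        UNIQUE (m_id, x_id)
    )
""")

# Both inserts succeed: (1, NULL) never collides with another (1, NULL).
con.execute("INSERT INTO distributor (m_id, x_id) VALUES (1, NULL)")
con.execute("INSERT INTO distributor (m_id, x_id) VALUES (1, NULL)")

rows = con.execute("SELECT m_id, x_id FROM distributor").fetchall()
print(rows)  # [(1, None), (1, None)]
```

Both rows land in the table even though they look like duplicates.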
If you need to treat NULL values as equal to disallow such "duplicates", I see two options:
1. Two partial indexes
In addition to the UNIQUE constraint above:
CREATE UNIQUE INDEX dist_m_uni_idx ON distributor (m_id) WHERE x_id IS NULL;
CREATE UNIQUE INDEX dist_x_uni_idx ON distributor (x_id) WHERE m_id IS NULL;
But this quickly gets out of hand with more than two nullable columns.
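The partial-index approach can be sketched the same way; SQLite (3.8.0+) also supports partial indexes, and the two CREATE INDEX statements are the very ones shown above for Postgres.

```python
import sqlite3

# Demonstrating option 1: UNIQUE constraint plus two partial indexes.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE distributor (
        distributor_id INTEGER PRIMARY KEY,
        m_id INTEGER,
        x_id INTEGER,
        UNIQUE (m_id, x_id)
    );
    CREATE UNIQUE INDEX dist_m_uni_idx ON distributor (m_id) WHERE x_id IS NULL;
    CREATE UNIQUE INDEX dist_x_uni_idx ON distributor (x_id) WHERE m_id IS NULL;
""")

con.execute("INSERT INTO distributor (m_id, x_id) VALUES (1, NULL)")
rejected = False
try:
    # A second (1, NULL) now violates dist_m_uni_idx.
    con.execute("INSERT INTO distributor (m_id, x_id) VALUES (1, NULL)")
except sqlite3.IntegrityError:
    rejected = True
print("duplicate rejected:", rejected)  # duplicate rejected: True
```

The second (1, NULL) now raises a uniqueness violation instead of being accepted.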
2. Multicolumn UNIQUE index on expressions
Instead of the UNIQUE constraint. We need a "free" default value that never occurs in the involved columns, for example -1. Add CHECK constraints to forbid it:
CREATE TABLE distributor (
  distributor_id serial PRIMARY KEY
, m_id integer
, x_id integer
, CHECK (m_id <> -1)
, CHECK (x_id <> -1)
);
CREATE UNIQUE INDEX distributor_uni_idx
ON distributor (COALESCE(m_id, -1), COALESCE(x_id, -1));
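This option can also be sketched in self-contained SQLite (expression indexes need SQLite 3.9.0+); apart from the serial column, the DDL is the same as the Postgres version above.

```python
import sqlite3

# Demonstrating option 2: COALESCE folds NULL to the dummy value -1
# inside the unique index, so NULLs behave as if they were equal.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE distributor (
        distributor_id INTEGER PRIMARY KEY,
        m_id INTEGER CHECK (m_id <> -1),
        x_id INTEGER CHECK (x_id <> -1)
    );
    CREATE UNIQUE INDEX distributor_uni_idx
        ON distributor (COALESCE(m_id, -1), COALESCE(x_id, -1));
""")

con.execute("INSERT INTO distributor (m_id, x_id) VALUES (1, NULL)")

dup_rejected = False
try:
    # NULL folds to -1 in the index, so this row is now a duplicate.
    con.execute("INSERT INTO distributor (m_id, x_id) VALUES (1, NULL)")
except sqlite3.IntegrityError:
    dup_rejected = True

check_enforced = False
try:
    # The dummy value -1 itself is barred by the CHECK constraint.
    con.execute("INSERT INTO distributor (m_id, x_id) VALUES (-1, 2)")
except sqlite3.IntegrityError:
    check_enforced = True

print(dup_rejected, check_enforced)  # True True
```

The CHECK constraints matter: without them a real -1 in a column would silently collide with a NULL in the index.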
How a particular RDBMS handles things is not always a useful indicator of proper behavior. The Postgres manual tells you this:
That means even in the presence of a unique constraint, it is possible to store duplicate rows that contain a null value in at least one of the constrained columns. This behavior conforms to the SQL standard, but we have heard that other SQL databases might not follow this rule. So be careful when developing applications that are intended to be portable.
Bold emphasis mine.