I would recommend digging through the multi-tenant design discussions on the PostgreSQL mailing lists. There has been a lot of discussion there, and the answer comes down to "it depends": there are trade-offs between guaranteed isolation, performance, and maintainability.
A common approach is to use a single database, but one schema (namespace) per client with the same table structure in each schema, plus a common or shared schema for data that is the same for all of them. A PostgreSQL schema is a bit like a MySQL database, in that you can query across different schemas, but they are isolated by default. With each client's data in its own schema, you can use the search_path parameter, usually via ALTER USER customername SET search_path = 'customerschema, sharedschema', so that each client sees their own data and only their own data.
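A minimal sketch of that layout, assuming placeholder names (customer1, customer1schema, sharedschema, and a single orders table) that you would replace with your own naming scheme:

```sql
-- One schema per client, each with the same table structure:
CREATE SCHEMA customer1schema;
CREATE TABLE customer1schema.orders (
    order_id serial PRIMARY KEY,
    details  text
);

-- One shared schema for data that is the same for every client:
CREATE SCHEMA sharedschema;
CREATE TABLE sharedschema.countries (
    country_code char(2) PRIMARY KEY,
    country_name text NOT NULL
);

-- The client role sees its own schema first, then the shared one:
CREATE ROLE customer1 LOGIN PASSWORD 'secret';
ALTER USER customer1 SET search_path = customer1schema, sharedschema;
```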
For added protection you can REVOKE ALL ON SCHEMA customerschema FROM public, then GRANT ALL ON SCHEMA customerschema TO thecustomer, so they are the only ones with access to it, doing the same for each of their tables. Your connection pool can then log in with a fixed user account that does not have GRANTed access to any client schema, but does have the right to SET ROLE to become any client. (Do this by granting it membership in each client role, with the NOINHERIT attribute set so that rights must be explicitly claimed via SET ROLE.) The connection should immediately SET ROLE to the client it is currently working for. This lets you avoid the overhead of creating new connections for each client while maintaining strong protection against a programmer error that leads to access to the wrong client's data. So long as the pool does a DISCARD ALL and/or RESET ROLE before handing a connection to the next client, this gives you very strong isolation without the hassle of maintaining separate connections for each client.
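Roughly like this, continuing with the placeholder names from the sketch above and assuming the pool logs in as a single fixed role (here called pooluser):

```sql
-- Lock the schema down so only the owning client role can use it:
REVOKE ALL ON SCHEMA customer1schema FROM public;
GRANT  ALL ON SCHEMA customer1schema TO customer1;
GRANT  ALL ON ALL TABLES IN SCHEMA customer1schema TO customer1;

-- Fixed account the connection pool logs in as. NOINHERIT means it does
-- not automatically get the rights of roles it is a member of; it has to
-- SET ROLE explicitly to use them:
CREATE ROLE pooluser LOGIN PASSWORD 'secret' NOINHERIT;
GRANT customer1 TO pooluser;

-- At the start of each client's work on a pooled connection:
SET ROLE customer1;
-- ... run that client's queries ...

-- Before the pool hands the connection to the next client:
DISCARD ALL;   -- resets the role too; RESET ROLE alone also works
```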
If your web application doesn't have a connection pool built in (say, you are using PHP with persistent connections), you really need to put a proper connection pool between Pg and the web server anyway, since too many backend connections will hurt your performance. PgBouncer and PgPool-II are the best options, and they can conveniently take care of doing the DISCARD ALL and RESET ROLE for you during connection hand-off.
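For PgBouncer that comes down to a few settings. A rough sketch, with a placeholder database name, paths and ports:

```ini
; Minimal PgBouncer sketch; "appdb" and the paths are placeholders.
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type   = md5
auth_file   = /etc/pgbouncer/userlist.txt
pool_mode   = session
; run when a server connection is released back to the pool;
; DISCARD ALL also resets the role
server_reset_query = DISCARD ALL
```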
The main downside of this approach is the overhead of maintaining many tables, since your base set of non-shared tables is cloned for every client. That overhead grows with the number of clients, to the point where the sheer number of tables autovacuum has to examine starts to become expensive and where any operation that scales with the total number of tables in the database slows down. This is more of an issue if you expect many thousands or tens of thousands of clients in the same database, but I strongly recommend doing some scaling tests of this design with dummy data before committing to it.
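One quick way to run such a test is to script the creation of a few thousand dummy client schemas and then watch how autovacuum, backups and catalog queries behave. A rough sketch, where the customerN naming, the single orders table and the count of 5000 are all arbitrary placeholders:

```sql
DO $$
DECLARE
    i int;
BEGIN
    FOR i IN 1..5000 LOOP
        -- One schema per dummy client, with the same table structure:
        EXECUTE format('CREATE SCHEMA customer%s', i);
        EXECUTE format('CREATE TABLE customer%s.orders (
                            order_id serial PRIMARY KEY,
                            details  text
                        )', i);
    END LOOP;
END;
$$;
```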
The ideal approach would probably be single tables with automatic row-level security controlling tuple visibility, but unfortunately PostgreSQL does not have that yet. It looks like it is on the way thanks to the SEPostgreSQL work adding the necessary infrastructure and APIs, but it is not in 9.1.
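For reference only, and not something you can do on 9.1: the row-level security that eventually landed in PostgreSQL 9.5 (CREATE POLICY) lets you express that single-table approach roughly like this; the table, column and policy names here are placeholders:

```sql
CREATE TABLE orders (
    order_id serial PRIMARY KEY,
    tenant   name NOT NULL DEFAULT current_user,
    details  text
);

ALTER TABLE orders ENABLE ROW LEVEL SECURITY;

-- Each (non-owner) client role can only see and write its own rows:
CREATE POLICY tenant_isolation ON orders
    USING (tenant = current_user)
    WITH CHECK (tenant = current_user);
```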