If the data is read far more often than it is written, and the results must always be up to date, this is an ideal job for a summary table maintained by a trigger (much like a materialized view).
Put a trigger on the team table, declared AFTER INSERT OR UPDATE OR DELETE ... FOR EACH ROW, that runs a trigger function updating the team_summary row for that team with the new score.
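A minimal sketch of such a trigger, assuming team rows carry a score column and team_summary has matching team_id and score columns (all of these names are assumptions based on the discussion, not a tested schema):

```sql
CREATE OR REPLACE FUNCTION team_summary_trg() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE team_summary SET score = score + NEW.score
        WHERE team_id = NEW.team_id;
    ELSIF TG_OP = 'UPDATE' THEN
        UPDATE team_summary SET score = score + NEW.score - OLD.score
        WHERE team_id = NEW.team_id;
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE team_summary SET score = score - OLD.score
        WHERE team_id = OLD.team_id;
    END IF;
    RETURN NULL;  -- the return value of an AFTER ... FOR EACH ROW trigger is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER team_summary_maintain
AFTER INSERT OR UPDATE OR DELETE ON team
FOR EACH ROW EXECUTE PROCEDURE team_summary_trg();
```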
The team_summary table can be queried with a simple equality lookup on an index, so reads will be extremely fast. Since PostgreSQL supports concurrent readers and writers, team_summary will remain responsive even when it is updated very frequently. The only tuning needed for best results is to set FILLFACTOR to something like 50 on team_summary so that HOT updates can work well, and to make sure autovacuum is configured to run quite often so the vacuum I/O load is spread out over time.
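The storage settings described above can be applied directly to the table; the fillfactor of 50 follows the suggestion in the text, while the autovacuum threshold shown is just one illustrative choice:

```sql
-- Leave ~50% free space on each page so HOT updates can stay on the same page.
ALTER TABLE team_summary SET (fillfactor = 50);

-- Make autovacuum visit this hot table more aggressively than the default,
-- spreading the vacuum I/O load out over time.
ALTER TABLE team_summary SET (autovacuum_vacuum_scale_factor = 0.05);
```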
Writing the trigger should be fairly trivial. You just need to take care to write it in a concurrency-safe way, so it does not break when the same team is updated by several connections in parallel. Something like:
UPDATE team_summary SET score = score + 1 WHERE team_id = NEW.team_id;
is safe under both SERIALIZABLE and READ COMMITTED isolation. See the concurrency control chapter of the documentation. The only hard part is that you must always insert a row into team_summary before inserting the first row for a new team into team, so that the trigger never has to handle the surprisingly tricky case where the team_summary row does not exist yet. Getting upsert/merge right for that case is quite hard.
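One way to guarantee the summary row always exists is a BEFORE INSERT trigger on team; on PostgreSQL 9.5 and later, INSERT ... ON CONFLICT DO NOTHING makes this race-free even under concurrent inserts (this sketch assumes team_summary has a unique constraint on team_id):

```sql
CREATE OR REPLACE FUNCTION team_summary_ensure() RETURNS trigger AS $$
BEGIN
    -- Create the summary row if it is missing; do nothing if another
    -- connection already inserted it.
    INSERT INTO team_summary (team_id, score)
    VALUES (NEW.team_id, 0)
    ON CONFLICT (team_id) DO NOTHING;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER team_summary_init
BEFORE INSERT ON team
FOR EACH ROW EXECUTE PROCEDURE team_summary_ensure();
```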
If the write rate is also very high and you can get away with results that are only refreshed every few seconds or minutes, use Clodoaldo's approach instead.