import random

class MasterSlaveRouter(object):
    """A router that sets up a simple master/slave configuration"""

    def db_for_read(self, model, **hints):
        "Point all read operations to a random slave"
        return random.choice(['slave1', 'slave2'])

    def db_for_write(self, model, **hints):
        "Point all write operations to the master"
        return 'master'

    def allow_relation(self, obj1, obj2, **hints):
        "Allow any relation between two objects in the db pool"
        db_list = ('master', 'slave1', 'slave2')
        if obj1._state.db in db_list and obj2._state.db in db_list:
            return True
        return None

    def allow_syncdb(self, db, model):
        "Explicitly put all models on all databases."
        return True
(From the Django docs.)
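For context, a router like this only takes effect once it is registered in settings.py. A minimal sketch, assuming the three database aliases the router refers to; the engine, names, and hosts below are placeholders, and the dotted path in DATABASE_ROUTERS is hypothetical and should point at wherever MasterSlaveRouter actually lives:

```python
# settings.py (sketch; engines, hosts, and the router path are placeholders)
DATABASES = {
    'default': {},  # no queries should land here; the router picks an alias
    'master': {'ENGINE': 'django.db.backends.mysql', 'NAME': 'app', 'HOST': 'db-master'},
    'slave1': {'ENGINE': 'django.db.backends.mysql', 'NAME': 'app', 'HOST': 'db-slave1'},
    'slave2': {'ENGINE': 'django.db.backends.mysql', 'NAME': 'app', 'HOST': 'db-slave2'},
}

# Hypothetical module path for illustration.
DATABASE_ROUTERS = ['myapp.routers.MasterSlaveRouter']
```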
This may be worth doing as an exercise, but I would not recommend it in practice.
You are already asking your database to replicate for you, which is the right way to do this.
Then, in your application, you are essentially jumping into the middle of that replication: writing to the master and reading from the slaves, in other words trying to use replication as if it were a cluster. That can only lead to bad things down the road; concurrency is one problem, connection management another, and data integrity a third.
Another approach to reducing response time: if the user has an account, push their information into a fast cache such as Redis or CouchDB, depending on whether you prefer key/value or document-based storage.
Since users will not write nearly as much as they read, this takes load off your database, not to mention the performance improvement for registered users.
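The cache-aside pattern suggested above can be sketched as follows. This is a generic sketch, not tied to Redis specifically: `load_profile_from_db` is a hypothetical loader standing in for a real query, and the cache here is a plain dict used for illustration (a real deployment would call `get`/`set` on a Redis client instead):

```python
import json

def get_user_profile(cache, user_id, load_profile_from_db):
    """Cache-aside read: try the cache first, fall back to the database.

    `cache` is dict-like for this sketch; `load_profile_from_db` is a
    hypothetical loader standing in for the real database query.
    """
    key = 'user:%s' % user_id
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)      # cache hit: no database round trip
    profile = load_profile_from_db(user_id)
    cache[key] = json.dumps(profile)   # populate the cache for next time
    return profile

# Usage: the second call is served from the cache, so the loader runs once.
db_calls = []
def load_profile_from_db(uid):
    db_calls.append(uid)
    return {'id': uid, 'name': 'guest'}

cache = {}
get_user_profile(cache, 42, load_profile_from_db)  # miss: hits the db
get_user_profile(cache, 42, load_profile_from_db)  # hit: served from cache
```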