What would be a good solution for storing object change history?

I need to track changes made to objects in the database.

The trivial implementation would be a mirror table that receives records via triggers, either inside the database or inside the application. But this hurts performance, and over time the mirror table grows huge and essentially doubles the service time of every write, since each change to the source table must also be reflected in the mirror.

Since my biggest requirement is minimal impact on database and application performance, my current preference is to ship changes to syslog-ng over UDP and store them in text files.
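As a minimal sketch of that idea (the listener address, record fields, and `send_change` helper are all hypothetical, not part of any existing tool), a fire-and-forget UDP sender in Python could look like this:

```python
import json
import socket

def send_change(sock, addr, table, pk, field, old, new):
    """Serialize one change record as JSON and send it over UDP.
    UDP is fire-and-forget: no ack, no blocking, so the write path
    of the application is barely affected (records can be lost)."""
    record = {"table": table, "pk": pk, "field": field, "old": old, "new": new}
    sock.sendto(json.dumps(record).encode("utf-8"), addr)

# Hypothetical syslog-ng UDP listener; adjust host/port to your setup.
SYSLOG_ADDR = ("127.0.0.1", 5140)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_change(sock, SYSLOG_ADDR, "cars", 42, "color", "red", "blue")
```

On the syslog-ng side each datagram lands as one line in a text file, which archives and compresses well, at the cost of awkward ad-hoc querying.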

After all, the change log is not something that is accessed often, so it can be aggressively archived over time. But obviously, with this setup, actually accessing the data is rather complicated.

So, I think, my question is: is there an existing system that at least partially satisfies my needs? An ideal fit would accept writes over UDP, be schema-less, archive data automatically (or at least need minimal configuration for that), or at the very least degrade insert performance only slightly. MongoDB? CouchDB? Yourdb?

+4
source share
6 answers

Well, there are many ways to approach this. I am most familiar with MongoDB, so I lean in that direction. Overall, I think it will satisfy your performance needs, and a replica set with reads going to the secondaries is likely the right approach. However, versioning is not built in. You can see one approach to versioning with Mongoid::Versioning:

Mongoid::Versioning - how to check previous versions?

The other solutions you mentioned may have better support in your language of choice, but I can't speak to that. Hopefully this at least gives you some guidance on the MongoDB side of things.

+2
source

Have a look at Mongoid History.

It tracks the change history (what changed, when, and by whom) along with a version number. It also has configuration options.

+1
source

RavenDB has this feature built in (though it may be too young a NoSQL database for production needs; judge for yourself):

http://ravendb.net/docs/server/bundles/versioning

http://www.slideshare.net/jwoglamott/battle-of-nosql-stars-amazons-sdb-vs-mongodb-vs-couchdb-vs-ravendb

If you want to go with MongoDB, two strategies are discussed in this thread:

Strategy 1: embedding the history inside the document will not hurt your write performance, nor your reads if your code avoids returning the history when it is not needed; however, a single document is capped at 16 MB (which may or may not be a blocker for you). Strategy 2: writing history to a separate collection requires two (explicit) write operations. I agree that these (or a combination) are the strategies available in MongoDB.
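To make strategy 1 concrete, here is a minimal sketch of the embedded-history document shape in plain Python (field names like `history`, `by`, `at` are my own invention; with pymongo this would translate into a single update using `$set` plus `$push`):

```python
def apply_change(doc, field, new_value, who, when):
    """Embed-history strategy: before overwriting a field, append the
    old value to a 'history' array inside the same document.
    Reads stay cheap if queries project 'history' out when unneeded,
    but the document grows toward MongoDB's 16 MB per-document cap."""
    doc.setdefault("history", []).append({
        "field": field,
        "old": doc.get(field),
        "new": new_value,
        "by": who,
        "at": when,
    })
    doc[field] = new_value
    return doc

car = {"_id": 1, "color": "red"}
apply_change(car, "color", "blue", "alice", "2012-01-01")
```

Strategy 2 would instead insert each of those history entries (plus the document's `_id`) into a separate collection, trading a second write per change for unbounded history size.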

CouchDB uses MVCC internally (and you could piggyback on it for versioning, as suggested here), but the drawbacks of that approach have been discussed on SO. There is a question on this very subject, and the proposed solution is similar to the embedding strategy described above for MongoDB (so pick whichever you prefer).

+1
source

For simple purposes (MySQL!), just create an AFTER UPDATE trigger on the tables you want to track.

For example, for a table cars with the fields

carId (primary key), color, manufacturer, model

create a table cars_history (or a similar name) with the fields: carId, field, old_value, new_value

and add an AFTER UPDATE trigger as follows:

 delimiter //
 CREATE TRIGGER trigger_changes
 AFTER UPDATE ON cars
 FOR EACH ROW
 BEGIN
   IF OLD.manufacturer <> NEW.manufacturer THEN
     INSERT INTO cars_history (carId, field, old_value, new_value)
     VALUES (OLD.carId, 'manufacturer', OLD.manufacturer, NEW.manufacturer);
   ELSEIF OLD.color <> NEW.color THEN
     ...
   END IF;
 END;//
 delimiter ;

Untested, so it may contain syntax errors :) I hope this helps!

+1
source

What about SQLite? Each database is a self-contained file that can easily be renamed and moved aside for archiving. If the file is renamed or moved away, a new one is created automatically on the next insert.

The only problem with SQLite is concurrent writing, which requires locking the file. It can perform about 60 transactions per second, but you can put thousands of inserts into a single transaction (see the docs).
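A short sketch of that batching trick with Python's built-in sqlite3 module (the table name and columns are illustrative; an in-memory database stands in for the log file you would rename away during archiving):

```python
import sqlite3

# ":memory:" for illustration; in practice this would be the log file
# you rename or move aside when archiving.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE changes (objId INTEGER, field TEXT, old TEXT, new TEXT)"
)

rows = [(i, "color", "red", "blue") for i in range(1000)]

# One transaction for the whole batch: SQLite pays the durability cost
# once per commit, so batching turns ~60 tx/s into thousands of rows/s.
with conn:
    conn.executemany("INSERT INTO changes VALUES (?, ?, ?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM changes").fetchone()[0]
```

The `with conn:` block commits all 1000 rows in a single transaction (and rolls them back together on error), which is the key to acceptable insert throughput here.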

0
source

Incidentally, this looks like exactly the type of solution you are looking for: http://www.tonymarston.net/php-mysql/auditlog.html

It is a very simple, elegant solution with a small data footprint, and I would expect it to have minimal impact on insert time.

0
source

Source: https://habr.com/ru/post/1395065/
