Impact of Materialized View Logs on Transactional Performance

I am investigating the use of materialized views for data aggregation and reporting at a company whose workload is primarily transactional (running Oracle Database). The current reporting system depends on a number of views that hide a lot of complex application logic, and these views place a heavy load on the system whenever they are queried.

We would like to use fast refresh to incrementally maintain materialized views that pre-compute some of this complex query logic before it is used in reporting. However, the organization is concerned that the materialized view logs (which fast refresh requires) will hurt our current transactional performance. This workload is critical to the business, so there is considerable fear of any change.

Here is an example of the type of materialized view log that we need to implement:

CREATE MATERIALIZED VIEW LOG ON transaction
  WITH ROWID, SEQUENCE (transaction_id, account_id, order_id, currency_id,
                        price, transaction_date, payment_processor_id)
  INCLUDING NEW VALUES;

When creating the materialized view itself we will not use the ON COMMIT clause but rather ON DEMAND, since we understand that refreshing on commit would affect transaction performance.
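For context, here is a minimal sketch of the kind of fast-refresh, on-demand materialized view we have in mind. The view name and the aggregation are illustrative only, not our real definitions (fast-refreshable aggregate views also have extra requirements, such as COUNT(*) and a COUNT for each SUM column in the select list):

CREATE MATERIALIZED VIEW mv_daily_revenue
  REFRESH FAST ON DEMAND
AS
SELECT account_id,
       TRUNC(transaction_date) AS txn_day,
       COUNT(*)                AS txn_count,
       COUNT(price)            AS price_count,  -- required for fast refresh of SUM
       SUM(price)              AS revenue
FROM   transaction
GROUP  BY account_id, TRUNC(transaction_date);

-- Refreshed from a scheduled job, outside the OLTP transaction path:
BEGIN
  DBMS_MVIEW.REFRESH(list => 'MV_DAILY_REVENUE', method => 'F');  -- 'F' = fast refresh
END;
/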

Will implementing this type of log affect transaction performance on the base table? I assume there will be some impact, since each transaction performs an additional write (to the log), but I cannot find this stated anywhere in the Oracle documentation. Any references or advice would be appreciated.

Thank you for your help!

1 answer

Yes, there will be a hit. The materialized view log must be maintained synchronously, so each transaction will have to insert a row into the log for every row it changes in the base table. How large the impact is depends heavily on the system. If your system is I/O-bound and you have tuned it so that physically writing changes to the base table accounts for a significant fraction of your latency, the impact will be much greater than if your system is CPU-bound and most of your time is spent reading data or performing computations.
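One way to gauge that extra write volume on your own system: Oracle stores the log rows in an ordinary table named MLOG$_&lt;base_table&gt;, so you can watch it grow between refreshes. A sketch (the MLOG$_TRANSACTION name is assumed from the base table name; check DBA_MVIEW_LOGS for the actual log table on your system):

-- Rows queued since the last fast refresh:
SELECT COUNT(*) FROM mlog$_transaction;

-- Space consumed by the log segment:
SELECT segment_name, bytes
FROM   user_segments
WHERE  segment_name = 'MLOG$_TRANSACTION';

If the row count and segment size track your DML volume closely, that is the synchronous overhead each transaction is paying.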

If you are really concerned about OLTP performance, it would be worth offloading reporting to a separate database on a separate server. You can replicate the data to the reporting server using Streams (or GoldenGate, if you can afford the extra licensing), which has less impact on the source than materialized views because the redo information can be read asynchronously (and can even be read on the reporting server rather than putting that workload on production). You could then define the materialized views on the reporting server, where they would have no effect on the OLTP server at all. Alternatively, you could build a logical standby database to serve as the reporting server and create the materialized views there. Either way, moving the reporting workload off the production server and mining the redo data asynchronously will protect production performance.


Source: https://habr.com/ru/post/1494335/
