Optimizations for a write-heavy Oracle application?

What are some useful Oracle optimizations for an application that mostly writes (inserts and updates) to an Oracle database?

The usage pattern here is not a web service or logging, as is usually the case, but preserving the complex state of a system: the data only needs to be read once when the system starts, after which records are continuously created and updated. Right now the write-to-read ratio is more than 9 to 1. Given that, what database tuning can improve performance?

+4
4 answers

Monitoring system health using Statspack (9i) or AWR (10g+) would be the best way to identify bottlenecks.
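As a rough sketch (assuming 10g+ and a Diagnostics Pack licence), you can bracket a representative write workload with AWR snapshots and then generate the report from SQL*Plus:

    -- Take a snapshot before and after a representative write workload
    EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();
    -- ... run the workload ...
    EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();

    -- Generate the AWR report (prompts for the snapshot range)
    @?/rdbms/admin/awrrpt.sql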

In particular:

  • Look for redo log waits. Redo log throughput is critical to sustaining a high write rate.
  • Use bind variables.
  • Use bulk operations where possible (see the sketch after this list).
  • Watch out for index contention when multiple processes insert records into the same table with an index on a sequence-generated column.
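
A minimal PL/SQL sketch of the bind-variable and bulk-operation points, using a hypothetical STATE_EVENTS table and sequence (adjust names to your schema):

    DECLARE
      TYPE t_payload IS TABLE OF state_events.payload%TYPE;
      l_payloads t_payload := t_payload('state A', 'state B', 'state C');
    BEGIN
      -- FORALL sends the whole array in one round trip; the values are
      -- bound, so the statement is parsed once and reused.
      FORALL i IN 1 .. l_payloads.COUNT
        INSERT INTO state_events (id, payload)
        VALUES (state_events_seq.NEXTVAL, l_payloads(i));
      COMMIT;
    END;
    /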
+5

In addition to David's answer:

  • If necessary, track row migration and row chaining, and change the table storage parameters accordingly.
  • Check the file system holding your redo logs: disable FS caching (i.e. use direct I/O), disable last-access-time updates, and change the block size to 512 B. Or better yet, move to ASM.
  • Read about index-organized tables and see if you can apply them anywhere.
  • Verify that you are using asynchronous I/O.
  • For large SGA sizes, enable large pages and LOCK_SGA (platform specific).
  • Experiment with the various DBWR-related settings (e.g. FAST_START_MTTR_TARGET, DB_WRITER_PROCESSES); see the sketch after this list.
  • At the hardware level, make sure you have a decent RAID-10 controller with write caching enabled! Get lots of 15,000 RPM hard drives.
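
A hedged sketch of a few of these checks and settings; the parameter names are real Oracle parameters, but the values shown are only examples, not recommendations, and the table is hypothetical:

    -- Inspect the current I/O and writer-related parameters (SQL*Plus)
    SHOW PARAMETER filesystemio_options
    SHOW PARAMETER fast_start_mttr_target
    SHOW PARAMETER db_writer_processes
    SHOW PARAMETER lock_sga

    -- Enable direct + asynchronous I/O (takes effect after a restart)
    ALTER SYSTEM SET filesystemio_options = SETALL SCOPE = SPFILE;

    -- Example MTTR target in seconds; tune against your own redo/DBWR stats
    ALTER SYSTEM SET fast_start_mttr_target = 300 SCOPE = BOTH;

    -- Hypothetical index-organized table: rows are stored in the primary
    -- key index itself, removing the separate heap segment.
    CREATE TABLE system_state (
      state_key  VARCHAR2(64) PRIMARY KEY,
      state_val  VARCHAR2(4000),
      updated_at TIMESTAMP
    ) ORGANIZATION INDEX;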

Last but not least, set up repeatable and realistic benchmarks before making any changes. There is a lot of trial and error in this kind of tuning, so make only one change per test run.

+2

I cannot recommend the Oracle Enterprise Manager console (built into Oracle) enough. It will tell you exactly what you are doing wrong and how to fix it!

You may also want to get rid of any extra indexes you have. That adds a small overhead to the reads at startup, but every additional index slows down inserts and updates on the table significantly. A sketch of finding unused indexes follows below.
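
A hedged sketch of one way to find candidate indexes to drop, using Oracle's index usage monitoring (MY_IDX is a hypothetical index name; V$OBJECT_USAGE only shows indexes in your own schema):

    -- Start recording whether the optimizer ever uses this index
    ALTER INDEX my_idx MONITORING USAGE;

    -- ... run a representative workload, then check ...
    SELECT index_name, monitoring, used
      FROM v$object_usage
     WHERE index_name = 'MY_IDX';

    -- If it is never used, dropping it removes its write overhead
    DROP INDEX my_idx;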

+1

Depending on the characteristics of your application and your data, consider bulk loading the data through an Oracle external table. Have the application write the data to a text file, then INSERT INTO your target table with a SELECT from the external table; this is very fast.
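
A minimal sketch of this approach; the directory path, file, column and table names are all hypothetical:

    -- One-time setup: a directory object pointing at where the app writes files
    CREATE DIRECTORY data_dir AS '/u01/app/feed';

    CREATE TABLE state_feed_ext (
      id      NUMBER,
      payload VARCHAR2(4000)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY data_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
      )
      LOCATION ('state_feed.txt')
    );

    -- Direct-path insert from the flat file into the real table
    INSERT /*+ APPEND */ INTO state_events (id, payload)
      SELECT id, payload FROM state_feed_ext;
    COMMIT;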

There are some limitations, and this may not suit your circumstances, but it gives great performance when you can use it.

I used this to load near-real-time text data files, around 40,000 files per day at up to about 2 MB per file, into an Oracle 10g database instance (yes, terabytes).

0

Source: https://habr.com/ru/post/1277596/
