16 January 2015

Basic Strategies for Database Performance Tuning

Tuning, especially database performance tuning, is more art than science, and it can take years before one can justly claim mastery. The general aim of this post is therefore to provide basic strategies for addressing database performance issues: a checklist, as it were.

The data model

If you are using a home-grown system or application, make sure the data model is produced by a good designer. A badly designed data model will wreak havoc on operations as the database grows. Apply normalization judiciously: it reduces redundancy and saves space, but a heavily normalized schema forces more joins, which can make quick retrieval of data much harder.
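
As a small illustration of that trade-off (all table and column names here are hypothetical), a read-heavy schema might deliberately duplicate one column to keep a hot query join-free:

```sql
-- Fully normalized: every report must join orders back to customers.
SELECT o.order_id, c.customer_name
FROM   orders o
JOIN   customers c ON c.customer_id = o.customer_id;

-- Selectively denormalized: customer_name is copied onto orders,
-- trading some storage and update cost for a join-free read path.
ALTER TABLE orders ADD (customer_name VARCHAR2(100));

SELECT order_id, customer_name
FROM   orders;
```

The cost is that every customer rename now has to touch two tables, which is why this should be the exception, not the rule.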

Tuning the SQL

Poorly written SQL code can bring operations to a halt. By some estimates, it accounts for as much as 75% of database performance problems. Many tools can help, but here are a few tips:

  • Use indexes
  • Try to avoid table scans, particularly full scans of large tables
  • Wherever possible, steer clear of complex joins and costly set operations such as UNION
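
The first two tips often come down to the same fix. A sketch, assuming a hypothetical employees table with no index on the filtered column:

```sql
-- Without an index on last_name, this predicate forces a full table scan.
-- With the index in place, the optimizer can use an index range scan instead.
CREATE INDEX emp_last_name_idx ON employees (last_name);

SELECT employee_id, last_name
FROM   employees
WHERE  last_name = 'Smith';
```

Checking the execution plan before and after (for example with EXPLAIN PLAN) confirms whether the index is actually being used.
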
Tuning the memory

Most relational database management systems (RDBMSs) operate far more efficiently from memory than from disk IO. The reason is simple: reading and writing memory is always faster than reading and writing disk.
  • Always ensure that the shared pool hit ratios (library cache and dictionary cache) and the buffer cache hit ratio stay at or above the recommended levels.
  • Ensure that every other memory area is sized properly – the PGA, redo log buffer, Java pool and others.
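
On Oracle, one common way to approximate the buffer cache hit ratio is from V$SYSSTAT (this requires access to the V$ views, and the formula is a rough indicator, not a verdict):

```sql
-- Approximate buffer cache hit ratio since instance startup:
-- 1 - (physical reads / (db block gets + consistent gets))
SELECT ROUND((1 - phy.value / (cur.value + con.value)) * 100, 2)
         AS buffer_hit_ratio_pct
FROM   v$sysstat phy, v$sysstat cur, v$sysstat con
WHERE  phy.name = 'physical reads'
AND    cur.name = 'db block gets'
AND    con.name = 'consistent gets';
```

A low ratio on a workload that should be cache-friendly is a hint to look at buffer cache sizing, but always interpret it alongside actual wait events.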

Tuning the disk IO

IO contention on the physical disks must be avoided. IO should therefore be spread across several devices; factor this in when choosing the disk RAID strategy and when creating the database. At least monthly, confirm that the IO is balanced across devices and identify any ‘hot spots’.
  • Confirm that indexes have been tuned properly
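
To spot hot spots on Oracle, the per-datafile IO counters in V$FILESTAT can be ranked (again assuming access to the V$ views):

```sql
-- Physical reads and writes per datafile since instance startup.
-- A file that dominates this list is a candidate 'hot spot'.
SELECT d.name, f.phyrds, f.phywrts
FROM   v$filestat f
JOIN   v$datafile d ON d.file# = f.file#
ORDER  BY f.phyrds + f.phywrts DESC;
```

If one device carries a disproportionate share, consider moving heavily used datafiles, indexes or redo logs onto separate spindles or a different RAID group.
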
Tuning the sorts

The key is to ensure that the sort area is big enough to allow sorting in memory rather than on disk.
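
On Oracle you can check how often sorts spill to disk via V$SYSSTAT:

```sql
-- Memory vs. disk sorts since instance startup. A steadily rising
-- 'sorts (disk)' count suggests the sort area is undersized.
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('sorts (memory)', 'sorts (disk)');
```

If disk sorts keep climbing, increasing PGA_AGGREGATE_TARGET (or SORT_AREA_SIZE on older releases using manual PGA management) is the usual remedy.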

Gathering statistics

Stale statistics yield undesirable and unpredictable results, so be sure to regularly refresh the statistics used by the cost-based optimizer. The cost-based optimizer uses this information to choose the best access path for each SQL statement. This, of course, does not apply if you are using the rule-based optimizer.
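
On Oracle, statistics are refreshed with the DBMS_STATS package; a minimal sketch for one schema (the schema name here is hypothetical):

```sql
-- Gather table and, via cascade, index statistics for the SALES schema.
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SALES', cascade => TRUE);
END;
/
```

In practice this is usually put on a scheduled job during a quiet window rather than run by hand.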

Watching the OS

Watch the operating system to make sure there is no memory swapping or excessive paging.
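
On Oracle 10g and later, the database itself exposes a view of OS paging activity through V$OSSTAT (stat names vary by platform and release, so treat this as a sketch):

```sql
-- Paging activity as reported by the OS to the database instance.
-- Steadily growing values indicate the host is paging or swapping.
SELECT stat_name, value
FROM   v$osstat
WHERE  stat_name IN ('VM_IN_BYTES', 'VM_OUT_BYTES');
```

OS-level tools such as vmstat or sar give the same picture from outside the database and are worth cross-checking.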


Reorganizing tables and indexes

Schedule regular reorganization sessions, especially for highly volatile tables, and do not overlook the indexes. For performance tuning, indexes are probably the most important structures in the database. Over time, the indexes on heavily updated tables can become fragmented, so it is good practice to rebuild them regularly.
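
A rebuild on Oracle is a one-liner (the index name here is hypothetical):

```sql
-- Rebuild a heavily updated index. ONLINE keeps the index available
-- for DML during the rebuild (an Enterprise Edition feature).
ALTER INDEX orders_status_idx REBUILD ONLINE;
```

Without the ONLINE option the rebuild locks out DML on the table, so schedule plain rebuilds for maintenance windows.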

Identifying lock contention

Lock contention occurs when one user must wait for another to release a resource before their request can be processed. It goes without saying that lock contention is the enemy of high database performance. If you are unable to resolve such occurrences, you should contact Remotedba.com experts for further assistance.
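
On Oracle 10g and later, blocked sessions can be found directly in V$SESSION:

```sql
-- Sessions currently blocked, with the session that is blocking them.
SELECT sid, serial#, blocking_session, event, seconds_in_wait
FROM   v$session
WHERE  blocking_session IS NOT NULL;
```

From there you can identify what the blocking session is doing and, if necessary, have it commit, roll back, or be killed by the DBA.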

Identifying wait events

Wait events are particularly useful for pinpointing where the database is spending its time, and therefore where the issues lie.
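
On Oracle, an instance-wide summary is available in V$SYSTEM_EVENT; a simple starting point (filtering out idle network waits) is:

```sql
-- Top wait events since instance startup, by total time waited.
SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  event NOT LIKE 'SQL*Net%'
ORDER  BY time_waited DESC;
```

The events at the top of this list tell you which of the areas above – IO, memory, locks, and so on – deserve your attention first.
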
Finally, keep in mind that performance is relative: what is acceptable to you may not be acceptable to someone else. It is therefore important to consider the needs of the end user when setting benchmark levels for performance. To know more about DBA services click here.