Large Blocks and Oracle Instance Caching
When an SQL query retrieves a result set from an Oracle table,
it typically gathers the table rows through an index. Many
Oracle tuning experts have recommended that databases created
prior to Oracle10g be redefined with large block sizes. The
performance gains realized from switching a 2K block size
database to an 8K block size have perplexed many DBAs.
Resistance to increasing the block size was typically expressed
as “Why will moving to a large block size improve a database
that only randomly fetches small rows?” The answer to this
question is not so simple, but it involves indexes.
Many DBAs fail to consider the structure of index trees and the
sequential nature of the index range scan when choosing a block
size. Nested loop joins usually indicate an index range scan,
and the vast majority of rows are accessed through indexes.
Locating indexes in larger size blocks reduces I/O and further
improves throughput for the entire database because index range
scans gather index nodes sequentially. If this is the case, why
not just create the database with large block sizes and forget
about multiple block sizes?
The answer to this question is also complex. RAM buffer memory
cannot be utilized with maximum efficiency unless the tables are
segregated according to the distribution of related data between
them. In allocating block sizes, the same general rules apply,
with some refinement.
Small block size
Tables containing small rows that are accessed randomly should
be placed into tablespaces with smaller block sizes. This way,
more of the buffer RAM remains available to store rows from
other tables that are referenced frequently.
Larger block size
Larger block sizes are suitable for indexes, row ordered tables,
single-table clusters, and tables with frequent full-table
scans. In this way, a single I/O will retrieve many related
rows, and future requests for related rows will already be
available in the data buffer.
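The guidelines above can be sketched in Oracle DDL. This is an illustrative example only: the tablespace names, datafile paths, and sizes are hypothetical, and the instance must have a buffer cache configured for each nonstandard block size before such a tablespace can be created.

```sql
-- Assumption: the database default block size is 8K.
-- A cache must exist for each nonstandard block size first.
ALTER SYSTEM SET db_2k_cache_size  = 64M;
ALTER SYSTEM SET db_16k_cache_size = 128M;

-- Small blocks for small rows that are accessed randomly
CREATE TABLESPACE ts_2k
  DATAFILE '/u01/oradata/mydb/ts_2k_01.dbf' SIZE 200M
  BLOCKSIZE 2K;

-- Large blocks for indexes and sequentially scanned tables
CREATE TABLESPACE ts_16k
  DATAFILE '/u01/oradata/mydb/ts_16k_01.dbf' SIZE 500M
  BLOCKSIZE 16K;
```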
Some objects that may benefit from a larger block size, such as
16K or 32K, include:

§  Some indexes, such as those that experience index range scans

§  Large tables that are the target of full-table scans

§  Tables with large object data, such as BLOB and CLOB columns

§  Tables with large row sizes that might blossom into chained or
migrated rows

§  Temporary tablespaces used for sorting
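Existing objects of these kinds can be relocated into a large-block tablespace with standard Oracle DDL. The object and tablespace names here are hypothetical:

```sql
-- Rebuild an index into a 16K-block tablespace
ALTER INDEX cust_name_idx REBUILD TABLESPACE ts_16k;

-- Move a full-table-scan target into the same tablespace
-- (indexes on the moved table must then be rebuilt)
ALTER TABLE sales_history MOVE TABLESPACE ts_16k;
```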
The simple goal is to maximize the amount of RAM available to
the data buffers by setting the block size according to the
amount of I/O the table or index sees. Smaller block sizes are
appropriate for randomly accessed small rows, while larger
blocks are more suitable for rows that are accessed sequentially.
To illustrate, suppose a query retrieves 100 random 80-byte rows
from Oracle. Since the rows are randomly accessed, it is safe
to assume that no two rows exist on the same block, implying
that it is necessary to read 100 blocks to fulfill the task.
If the blocks are sized at 16K, the
db_16k_cache_size buffer will need about 1.6
MB (16K * 100) of RAM. If the blocks are sized at 2K instead,
only about 200 KB of RAM is needed in the buffer for the 100 I/Os.
Using the smaller block size would save roughly 1.4 MB of RAM for
this query alone. This is RAM that will be available elsewhere to
hold other data.
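The arithmetic can be checked with a trivial query (the figures assume the 100-block read count derived above):

```sql
-- Buffer RAM needed to cache 100 random single-row block reads
SELECT 100 * 16 AS kb_with_16k_blocks,  -- 1600 KB (~1.6 MB)
       100 * 2  AS kb_with_2k_blocks    -- 200 KB
FROM   dual;
```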