Oracle Multiple Blocksizes
Databases with multiple blocksizes have been around for more
than 20 years; the technique was first introduced in the 1980s as a
way to segregate and partition data buffers, and non-relational
databases such as the CA IDMS/R network database have used multiple
blocksizes for nearly two decades. By the time Oracle adopted
multiple blocksizes in Oracle9i in 2001, the foundation for using
them was already a well tested and proven approach.
Although Oracle originally implemented multiple blocksizes to
support transportable tablespaces, DBAs quickly realized their huge
benefit for improving the utilization and performance of Oracle
systems. These benefits fall into several general areas, which are
detailed in the following sections.
Reducing data buffer waste
By performing block reads of an appropriate size, the DBA can
significantly increase the efficiency of the data buffers. For
example, consider an OLTP database that randomly reads 80-byte
customer rows. With a 16k db_block_size, Oracle must read the entire
16k block into the data buffer just to retrieve those 80 bytes,
which wastes data buffer resources. If the customer table is
migrated into a 2k blocksize, only 2k needs to be read to get the
row data. This results in eight times more available space for
random block fetches, as shown in Figure 13.4.

Figure 13.4: Improvements in data buffer utilization
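A minimal sketch of such a migration is shown below. The cache size, tablespace name, datafile path, and index name are illustrative assumptions rather than values from the text, and the sizes would be chosen to fit the target system.

-- Allocate a small 2k buffer cache (64m is an arbitrary illustration)
ALTER SYSTEM SET db_2k_cache_size = 64m SCOPE=BOTH;

-- Create a 2k-blocksize tablespace for the small-row customer table
-- (tablespace name and datafile path are hypothetical)
CREATE TABLESPACE cust_2k
  DATAFILE '/u01/oradata/mydb/cust_2k_01.dbf' SIZE 500m
  BLOCKSIZE 2k;

-- Move the customer table so random 80-byte row fetches pull
-- 2k blocks into the buffer instead of 16k blocks
ALTER TABLE customer MOVE TABLESPACE cust_2k;

-- The move marks the table's indexes unusable, so rebuild them
ALTER INDEX customer_pk REBUILD;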
Reducing logical I/O
As more and more Oracle databases become CPU-bound as a result
of solid-state disks and 64-bit systems with large data buffer
caches, minimizing logical I/O (consistent gets from the data
buffer) has become an important way to reduce CPU consumption.
Indexes illustrate this well. Oracle performs index range scans
during many types of operations, such as nested loop joins and
enforcing row order for result sets with an ORDER BY clause. In
these cases, moving Oracle indexes into large blocksizes can reduce
both the physical I/O (disk reads) and the logical I/O (buffer
gets).
Robin Schumacher has shown in his book Oracle Performance
Troubleshooting (2003, Rampant TechPress) that Oracle b-tree indexes
are built in flatter structures in 32k blocksizes. There is also
evidence that bitmap indexes perform faster in a 32k blocksize, and
logical I/O drops sharply during index range scans and sorting
within the TEMP tablespace because adjacent rows are located inside
the same data block, as shown in Figure 13.5.

Figure 13.5: Improvements in logical I/O
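The sketch below shows one way indexes might be moved into a 32k blocksize. The cache size, tablespace name, datafile path, and index names are assumptions for illustration only, and 32k blocks are not supported on every platform.

-- Allocate a 32k buffer cache; with SCOPE=SPFILE the new cache
-- takes effect after the instance is restarted
ALTER SYSTEM SET db_32k_cache_size = 512m SCOPE=SPFILE;

-- Create a 32k-blocksize tablespace for indexes
-- (name and datafile path are hypothetical)
CREATE TABLESPACE idx_32k
  DATAFILE '/u01/oradata/mydb/idx_32k_01.dbf' SIZE 2g
  BLOCKSIZE 32k;

-- Rebuild the range-scanned indexes into the 32k tablespace
ALTER INDEX cust_name_idx REBUILD TABLESPACE idx_32k;
ALTER INDEX order_date_idx REBUILD TABLESPACE idx_32k;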
Improving data buffer efficiency
One of the greatest problems with very large data buffers is the
overhead Oracle incurs in cleaning out the dirty blocks that result
from truncate operations and high-activity DML. This overhead can
drive up CPU consumption in databases that have large data buffers,
as shown in Figure 13.6.

Figure 13.6: Dirty block cleanup in a large vs. small data buffer
By segregating high-activity tables into a separate, smaller
data buffer, Oracle has far fewer RAM frames to scan for dirty
blocks, improving throughput and also reducing CPU consumption.
This is especially important for high-update tables with more than
100 row changes per second.
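Once high-activity objects have been segregated into their own smaller cache, for example by moving them into a dedicated small-blocksize tablespace as in the earlier sketch, the activity of each per-blocksize pool can be compared. The query below is a simple sketch against the standard v$buffer_pool_statistics view; the columns chosen are an illustrative subset.

-- Compare workload and dirty-block activity across buffer pools
SELECT name,
       block_size,
       db_block_gets,
       consistent_gets,
       physical_reads,
       dirty_buffers_inspected
FROM   v$buffer_pool_statistics
ORDER  BY block_size;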
Multiple blocksizes
The use of multiple blocksizes is most important for very large
databases with thousands of updates per second and thousands of
concurrent users accessing terabytes of data. In these very large
databases, multiple blocksizes have proven to make a huge difference
in response time.
The largest benefit of multiple blocksizes can be seen in the
following types of databases:
Databases with a large amount of index access (first_rows
optimizer_mode) and databases with random fetches of small rows are
ideal for buffer pool segregation.
Oracle databases with 64-bit software can support very large data
buffer caches and these are ideal for caching frequently-referenced
tables and indexes.
In databases where a small subset of the data receives heavy
update activity (for example, a single partition within a table),
there will be a large reduction in CPU consumption when the
high-update objects are moved into a smaller buffer cache.
On the other hand, there are specific types of databases that may
not benefit from the use of multiple blocksizes:
Since each data blade in an Oracle10g grid node has only two to four
gigabytes of RAM, data blade grid applications do not show a
noticeable benefit from multiple blocksizes.
Oracle databases using solid-state disks (RAM-SAN) perform fastest
with very small data buffers that are just large enough to hold the
Oracle serialization locks and latches.
Large Oracle data warehouses with parallel large-table full-table
scans do not benefit from multiple blocksizes. Parallel full-table
scans bypass the data buffers and store the intermediate row sets
in the PGA region. As a general rule, databases running with the
all_rows optimizer_mode may not benefit from multiple blocksizes.
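For a database that does fit the profile above, the instance-level configuration might look like the sketch below. The parameter values are arbitrary illustrations to be sized against the real workload, and a separate db_nk_cache_size can only be created for blocksizes other than the instance's standard db_block_size.

-- Default buffer cache for the standard blocksize (assumed 8k here)
ALTER SYSTEM SET db_cache_size = 4g SCOPE=SPFILE;

-- Small cache for high-update, small-row OLTP objects
ALTER SYSTEM SET db_2k_cache_size = 128m SCOPE=SPFILE;

-- Large-block cache for indexes and range-scanned data
ALTER SYSTEM SET db_32k_cache_size = 1g SCOPE=SPFILE;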
Even though Oracle introduced multiple blocksizes for an innocuous
reason, supporting transportable tablespaces, their power has become
obvious in very large database systems. The same divide-and-conquer
approach that Oracle has used to support very large databases can
also be used to divide and conquer the Oracle data buffers.
SEE CODE DEPOT FOR FULL SCRIPTS