Important changes in 11g data buffer management


When the Oracle database was first created, Oracle knew that the database needed a place to buffer frequently referenced data blocks in faster RAM storage. The data buffer region was originally defined by the single parameter db_cache_size, but Oracle now allows you to customize your I/O, using multiple block sizes, each with its own buffer pool, for different data workload characteristics.
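As a minimal sketch (the pool sizes, tablespace name, and datafile path here are hypothetical), a separate pool is sized for a non-default block size and a tablespace is created to use it:

alter system set db_cache_size     = 4g scope=both;
alter system set db_16k_cache_size = 1g scope=both;   -- separate pool for 16K blocks

create tablespace ts_16k
   datafile '/u01/oradata/orcl/ts_16k_01.dbf' size 10g
   blocksize 16k;

Blocks belonging to objects in ts_16k are then buffered in the 16K pool rather than in the default db_cache_size pool.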

However, these multiple data buffers do not address the impending explosion of solid-state disk technology: high-speed RAM disks poised to replace the ancient "spinning rust" platter technology, now half a century old.

In addition to being up to 600 times faster than platters, SSD flash disks are falling in price to under $1,000 per gigabyte, making them an obvious replacement for the elderly spinning platters. We also see the Exadata database machine, a million-dollar monster with terabytes of SSD storage.

Today, when Oracle reads a data block from disk, the block is inserted at the buffer midpoint and is "pinged" toward the most-recently-used end of the data buffer each time a SQL query touches it. Oracle also uses the RAM data buffer to make read-consistent clones of data blocks, and it uses buffer space for some internal space management operations.
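One rough way to watch this behavior, assuming SYS access, is to sample the touch counts of cached blocks; note that x$bh is an undocumented fixed table and its columns (tch for the touch count, obj for the data object id) can change between releases:

select o.object_name,
       count(*)   as cached_blocks,
       max(b.tch) as max_touch_count
from   x$bh b,
       dba_objects o
where  b.obj = o.data_object_id
group  by o.object_name
order  by max_touch_count desc;

Blocks that SQL queries touch repeatedly accumulate high touch counts, which is what keeps them near the most-recently-used end of the buffer.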

But how are things different when our storage is on the same super-fast RAM as the data buffers? Do we even need a data buffer?
The answer is yes, but the primary goal is no longer keeping popular data blocks on faster storage. Oracle still needs RAM to clone data blocks for multi-version read consistency (the CR copies).
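To see those clones, assuming access to v$bh and dba_objects, the buffer copies can be counted by status, where 'xcur' marks the current image of a block and 'cr' marks its read-consistent clones:

select o.object_name,
       b.status,
       count(*) as buffers
from   v$bh b,
       dba_objects o
where  b.objd = o.data_object_id
and    b.status in ('xcur', 'cr')
group  by o.object_name, b.status
order  by o.object_name, b.status;

A hot block under concurrent update and query activity can show several 'cr' buffers alongside its single 'xcur' buffer, and those clones must live in RAM no matter how fast the underlying storage is.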

Let's take a closer look at how the flash_cache data buffer differs from a regular buffer and learn more about these important new changes to data buffer management for solid-state SSD flash storage:
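As a preview, here is a minimal sketch of the 11g Release 2 smart flash cache setup (the device path, size, and table name are hypothetical, and the feature is supported only on Solaris and Oracle Linux): two parameters declare the flash file and its size, and the flash_cache storage clause controls which segments are eligible to be kept there:

alter system set db_flash_cache_file = '/dev/flash_disk1' scope=spfile;
alter system set db_flash_cache_size = 64g scope=spfile;

-- After a restart, clean blocks aged out of the db_cache_size buffer
-- can be retained on the flash device instead of being discarded.
alter table sales storage (flash_cache keep);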

..more..