Hypercharge SQL performance with function-based indexing

I am now one year into writing my next book "Oracle SQL Tuning: The Definitive Reference", and I have had an epiphany about SQL tuning that all Oracle professionals should know about.

The vast majority of SQL tuning can be avoided by following a few simple rules:

- Carefully formulate the SQL syntax to conform to standard practices
- Always use complete optimizer statistics (using dbms_stats)
- Make sure that all indexes exist, especially function-based indexes

In my experience, the single most common mistake I have observed is SQL that performs poorly because the DBA or developer did not know how to create a function-based index. A function-based index (FBI) is a godsend because it allows virtually any WHERE clause predicate to be matched with a corresponding index.
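
As a minimal sketch (the emp table and index name are hypothetical), an FBI on upper(ename) lets a case-insensitive search use an index rather than a full-table scan:

    -- Without a matching index, this predicate forces a full-table scan:
    --   select * from emp where upper(ename) = 'SMITH';

    -- Create a function-based index that matches the predicate exactly:
    create index emp_upper_ename_idx on emp (upper(ename));

    -- Refresh statistics so the optimizer knows about the new index:
    exec dbms_stats.gather_table_stats(user, 'EMP', cascade => true);

    -- The same query can now drive through the index:
    select * from emp where upper(ename) = 'SMITH';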

With an FBI in place, Oracle goes directly to the rows needed, minimizing database I/O and improving overall performance. Better yet, adding missing function-based indexes serves as a silver bullet (an act that changes the entire workload landscape). When an FBI is added, Oracle instantly recognizes the new index and re-parses all SQL that might benefit from it, so a single FBI can improve the performance of thousands of SQL statements.

Follow the link below to see why function-based indexes are so critical to SQL performance:

..more..

Understanding Oracle knowledge engineering

Since the earliest days of business computing, the idea has been to identify well-structured activities and automate them.

The first business processes to be automated were well-structured, repetitive tasks such as payroll processing.

As the decades passed, information systems became more sophisticated at capturing and deploying human intelligence within computer systems, and two types of systems emerged:

Expert Systems - These online systems capture a well-structured task and mimic human processing. An example is Mycin, a system that applies physician expertise to the analysis of blood infections. An expert system makes the decision without the aid of any human intuition.

Decision Support Systems - A DSS is a computerized system that recognizes that human intuition is difficult to quantify and automate. In a DSS, the human makes the decision, guided by software that automates the well-structured aspects of the problem domain.

The line between an expert system and a decision support system blurs in cases where what is thought to be an intuitive process is actually a well-structured problem with extremely complex decision rules.

To see examples of knowledge engineering and how it can be used for data cleansing, follow the link below:

..more..

Rewriting complex SQL for high performance


I am now about one year into writing my next book "Oracle SQL Tuning: The Definitive Reference," and I've discovered some fascinating new principles of SQL tuning, counterintuitive approaches that can make your SQL run faster than ever before.

Many experts correctly state that a problem should be solved in SQL alone, resorting to PL/SQL only when absolutely necessary, but there is a limit to that approach. Once a SQL query has subqueries nested within subqueries, subqueries in the SELECT clause (scalar subqueries), and subqueries in the FROM clause (in-line views), the SQL becomes difficult to read, hard to maintain, and challenging to optimize. In these cases, we can use some powerful Oracle tools to divide and conquer complex SQL statements.
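
One such tool is the WITH clause (subquery factoring). As a minimal sketch with hypothetical emp and dept tables, a messy in-line view is factored into a named, reusable piece:

    with dept_totals as (
       -- The complex piece gets a name and can be tested in isolation:
       select deptno, sum(sal) as total_sal
       from   emp
       group  by deptno
    )
    select d.dname, t.total_sal
    from   dept d,
           dept_totals t
    where  d.deptno = t.deptno
    and    t.total_sal > 100000;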

Because SQL is a declarative language, we can formulate equivalent queries using numerous techniques. SQL is a "state space" query language: you simply request the desired rows, and there are many ways to write the same query.

In the past, programming competitions were not concerned with who could solve a problem the fastest, but with who could write the solution that ran most efficiently. Today this is no longer the case: SQL developers are charged with getting the right rows back as quickly as they can write the query, with little regard for the internal execution plan or the time required to execute it!

Follow the link below to take a closer look at several powerful SQL re-writing techniques:

..more..

To get my SQL tuning book for 30% off, follow the link below and pre-order it directly from the publisher:

..more..

A guide to Oracle virtualization


To our friends in the Burleson Oracle community:

Before you start planning an Oracle virtualization strategy, be sure to familiarize yourself with the basics, gain an understanding of Oracle’s licensing and support policies, and learn the pros and cons of Oracle VM vs. VMware.

For an introduction to Oracle virtualization and to discover how Oracle VM fits into the virtualization market, check out this exclusive guide:

..more..

Also, don’t forget to check out additional sections in this Oracle virtualization guide, including an overview of:

* Oracle's recent acquisition of Virtual Iron
* Oracle VM Template Builder
* Oracle VM vs. VMware

Become a member of SearchOracle.com today to access this resource for Oracle professionals seeking to create a uniquely tailored virtualization strategy:

..more..

Secret Oracle utilities


Oracle has "secret" undocumented utilities so powerful that the company is reluctant to release the details to the general public. Oracle deliberately glosses over these utilities, which can be extremely dangerous in the hands of Oracle professionals without the required work experience. Oracle must write its documentation with the lowest common denominator in mind, and the complexity of these utilities can far exceed the skill and understanding of that lowest-common-denominator user.

These utilities are often left tucked inside an Oracle distribution for use exclusively by Oracle technical support, but expert Oracle DBAs also find them invaluable for advanced database operations. These undocumented utilities are described in the new book "Advanced Oracle Utilities: The Definitive Reference".

A hidden Oracle utility is not a utility that is physically hidden in the Oracle software. Rather, a hidden utility is an executable or PL/SQL package that is either undocumented or whose documentation is difficult to find. For example, some PL/SQL packages are never loaded by the Oracle installer, yet their definitions remain in the operating system files.
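
As a minimal sketch of how you might begin hunting for them, compare the packages actually installed in the data dictionary against the package-creation scripts shipped in the software distribution:

    -- List the SYS-owned packages that are installed:
    select object_name
    from   dba_objects
    where  owner = 'SYS'
    and    object_type = 'PACKAGE'
    order  by object_name;

    -- Package-creation scripts that the installer never ran remain
    -- as .sql files in the distribution, e.g. under:
    --   $ORACLE_HOME/rdbms/admin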

Follow the link below to discover how to identify and use these secret Oracle Utilities:

..more..

See below to get the book on advanced Oracle utilities for 30% off by buying it directly from the publisher:

..more..

How to read the steps in a SQL execution plan listing


When Codd and Date created the relational data model, the execution plan was an afterthought, largely because the SQL optimizer was supposed to always generate the best execution plan; therefore, there was no real need to understand the internal machinations of Oracle execution plans.

In the real world, all SQL tuning experts must be proficient in reading Oracle execution plans and understanding the steps within an explain plan. To successfully understand an explain plan, you must know the order in which the plan steps are executed.

Reading an explain plan is important for many reasons, and Oracle SQL tuning experts examine explain plans to check many things:

* Ensure that the tables will be joined in the optimal order.
* Determine the most restrictive indexes to fetch the rows.
* Determine the best internal join method to use (e.g. nested loops, hash join).
* Verify that the SQL is executing the steps in the optimal order.
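
As a quick sketch of the mechanics (emp and dept are hypothetical tables), the dbms_xplan package displays the plan with each step indented beneath its parent:

    -- Generate and display the execution plan:
    explain plan for
    select e.ename, d.dname
    from   emp e, dept d
    where  e.deptno = d.deptno;

    select * from table(dbms_xplan.display);

    -- Rule of thumb when reading the output: the most deeply
    -- indented step executes first, and siblings at the same
    -- indentation level execute top-down.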

Reading SQL execution plans has always been difficult, but there are some tricks to help determine the correct order in which the explain plan steps are executed. Follow the link below for some insight into this challenging undertaking:

..more..

Oracle 11gR2 new parallel query management enhancements


The database industry is clearly in the midst of massive server consolidation, an economic imperative whereby the old-fashioned one-database/one-server approach of the client-server days has been rendered obsolete. Today, single servers with 32 and 64 CPUs and hundreds of gigabytes of RAM can host dozens of large Oracle databases.

While this second age of mainframe computing came about to facilitate easier DBA management, there remain the impressive benefits of having a bank of dozens of CPUs to perform full scans very quickly.

When invoking Oracle parallel query, there are many perils and pitfalls:

* Setting parallelism at the table or system level influences the optimizer, sometimes making full-scan operations appear cheaper than they really are.

* Determining the optimal degree of parallelism is tricky. The truly optimal degree depends on the physical placement of the data blocks on disk as well as the number of processors on the server (cpu_count).

To relieve these issues with parallel query, Oracle 11g Release 2 introduces the following new parallel query parameters (a usage sketch follows the list):

* The parallel_degree_policy parameter
* The parallel_min_time_threshold parameter
* The parallel_degree_limit parameter
* The parallel_force_local parameter
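
As an illustrative sketch (the values are arbitrary, not recommendations):

    -- Let Oracle decide when to parallelize and at what degree:
    alter system set parallel_degree_policy = 'AUTO';

    -- Only consider parallelism for statements whose estimated serial
    -- run time exceeds 10 seconds:
    alter system set parallel_min_time_threshold = '10';

    -- Cap the degree of parallelism chosen automatically:
    alter system set parallel_degree_limit = '8';

    -- In RAC, confine parallel slaves to the node where the query started:
    alter system set parallel_force_local = true;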

Follow the link below to take a closer look at these important enhancements to Oracle parallel query in 11g Release 2.

..more..

Oracle flash_cache tips


Oracle 11g Release 2 includes a feature called flash_cache, a table and index argument that resembles the assignment of high-use objects to the KEEP pool.

However, unlike the KEEP pool, which uses volatile RAM, the flash_cache is used for tertiary storage on solid-state disk (SSD). On Oracle, SSD can be up to 600 times faster than platter disks, and at $2,000 per gigabyte, SSD technology is going to replace platter disks for most systems by 2015. In fact, the 2009 Sun servers have on-board SSD flash memory.

The Oracle documentation suggests enabling the flash_cache when the data buffer advisor indicates that Oracle wants more RAM (which is very common), when the system is disk I/O bound, and when the system has spare CPU cycles. CPU matters because moving to solid-state flash storage removes the disk I/O bottleneck but, in many cases, shifts the workload bottleneck to CPU.

In other words, flash_cache is for every system that is not already running an SSD back end! It should be considered if:

* The Buffer Pool Advisory section of your Automatic Workload Repository (AWR) report or STATSPACK report indicates that doubling the size of the buffer cache would be beneficial.

* db file sequential read is a top wait event.

* You have spare CPU.

Of course, many Oracle shops have been running solid-state disks for years now, so it is not really a new technology. A traditional SSD looks just like a platter disk to Oracle, except that the I/O can be hundreds of times faster!
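
As a minimal configuration sketch (the file path, sizes, and table name are hypothetical; note that the Database Smart Flash Cache is supported only on certain platforms, such as Oracle Linux and Solaris), the flash file is defined with two initialization parameters, and hot objects are then assigned to it:

    -- Point Oracle at the solid-state device and size the cache
    -- (both changes require an instance restart):
    alter system set db_flash_cache_file = '/flash/oracle_flash.dat' scope = spfile;
    alter system set db_flash_cache_size = 100G scope = spfile;

    -- Nominate a hot table's blocks for retention in the flash cache:
    alter table customer storage (flash_cache keep);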

See below for expert Oracle flash_cache tips:

..more..

Review: Oracle's 11g R2 database has some good and bad


To our friends in the Burleson Oracle community:

Although Oracle made no major architectural enhancements in this version of 11g, Release 2 serves as a mix of patches and fixes to existing 11g features. And there are some interesting new features, alongside a couple of clunkers, that you should know about.

Take a closer look at these new features in 11g R2, and see why they are important to an Oracle professional:

..more..

In this review, Don Burleson delves into the more interesting features, as well as the clunkers, including:

* Column level data storage for Exadata
* Oracle flash_cache
* ASM Cluster Filesystem (ACFS)
* And more!

Get the scoop on Oracle 11g R2’s useful changes and learn how to work around the flops. Simply become a SearchOracle.com member today to access this review:

..more..

RAC One Node tips


Traditionally, Oracle RAC is used in a multi-node architecture, with many separate RAC instances residing on separate servers. This protects against unplanned server outages because transparent application failover will redirect sessions to another server. It also aids the RAC goal of scalability when a new server instance is genned into the RAC cluster.

Now in Oracle 11g R2, there is a new feature dubbed RAC One Node. RAC One Node is a single instance of RAC running on one node of a cluster, with a fast instance relocation feature for cases of catastrophic server failure.

This instance relocation uses a new feature called Oracle Omotion.

Unlike regular RAC, which runs instances on many servers, RAC One Node will not protect you in the case of a server failure. It does, however, offer instance duplication within a single-server environment.

Oracle Corporation is now embracing the IT industry’s concept of instance consolidation, a movement away from the bad old days of client-server computing, where there existed a one-instance/one-server architecture. This is a “back to the future” approach, a movement back to the monolithic server environments of the 1980s, with all of their benefits:

* Centralized patching and software maintenance.
* Redundant instances mean zero outages during patches and upgrades.
* On demand RAM and CPU sharing within the single large server.
* Fewer Oracle DBA resources required to manage dozens of instances.

Follow the link below for expert RAC One Node tips:

..more..

Oracle column-level compression


In traditional relational theory, the internal representation of the data on the physical blocks is not supposed to matter, but in the real world the placement of data on blocks is critical. Oracle provides tools like sorted hash clusters to group related rows together, and row sequencing can dramatically improve the performance of SQL queries by placing related information on a single data block.

Related tables can be grouped together on the same data block using Oracle cluster tables. For example, this can be done if you have a busy OLTP database where millions of people query customer and related order rows all day long.

If you use Oracle cluster tables to put the customer and order rows together on a single data block, you greatly reduce the I/O required to fetch the desired result set.
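
As a hypothetical sketch (the cluster and table names are invented for illustration), a cluster keyed on customer ID co-locates each customer's row with its order rows:

    -- Create the cluster and its required cluster index:
    create cluster cust_orders_cluster (cust_id number) size 8192;
    create index cust_orders_idx on cluster cust_orders_cluster;

    -- Both tables share the cluster key, so related rows co-locate:
    create table customer (
       cust_id   number primary key,
       cust_name varchar2(100)
    ) cluster cust_orders_cluster (cust_id);

    create table orders (
       order_id  number primary key,
       cust_id   number,
       order_dt  date
    ) cluster cust_orders_cluster (cust_id);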

There is a tradeoff in using Oracle 11g table compression between the run-time performance of the SQL and the processing overhead of compressing and decompressing the rows. This type of compression requires overhead, and the less volatile the table, the better the overall performance.
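
As an illustrative sketch (table names hypothetical): basic compression suits low-volatility tables that are bulk loaded, while COMPRESS FOR OLTP (part of the separately licensed Advanced Compression option in 11g) tolerates conventional inserts and updates at extra CPU cost:

    -- Basic compression: best for rarely updated, direct-path loaded data:
    create table sales_history (
       sale_id  number,
       sale_dt  date,
       amount   number
    ) compress;

    -- OLTP compression: compresses data written by ordinary DML
    -- (the change applies to blocks written after the alter):
    alter table orders compress for oltp;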

Follow the link below for the advantages and obstacles in using Oracle 11g table compression:

..more..

Important tips for tuning SQL joins


Most Oracle professionals are aware of the logical ways to join tables, such as equi-joins, outer joins and semi-joins; however, Oracle also has internal ways of joining tables, including physical join methods that are critical to understanding SQL tuning.

Built into Oracle is a vast array of internal algorithms, including several types of nested loops and merge joins as well as six types of hash joins.
There are also many data access methods, such as a variety of internal sorts and table access methods.

When we examine an execution plan for a SQL statement, we may see the physical join implementations with names like nested loops, sort merge and hash join.

The propensity of the SQL optimizer to invoke a hash join is heavily controlled by the settings for the hash_area_size and pga_aggregate_target parameters. The larger the value of hash_area_size, the more hash joins the optimizer will invoke. In some releases of Oracle, hash_area_size defaults to double the value of the sort_area_size parameter, but this is highly dependent upon parameter settings and the Oracle release level.

When tuning SQL, we must always remember that the optimizer can fail to choose the best table join method. This is especially true when a hash join is wrongly chosen over nested loops, frequently because of sub-optimal schema statistics, especially missing column histograms. These can lead the optimizer to make an incorrect guess about the cardinality of a result set and wrongly invoke a join that requires a full-table scan rather than choosing nested loops.
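
To force nested loops, a common trick is the use_nl hint, often paired with ordered. A minimal sketch with hypothetical emp and dept tables:

    -- Join in FROM-clause order (emp first) and probe dept
    -- with nested loops instead of a hash join:
    select /*+ ordered use_nl(d) */
           e.ename, d.dname
    from   emp e, dept d
    where  e.deptno = d.deptno
    and    e.sal > 50000;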

See below for expert SQL tuning tricks for forcing nested loops joins over hash joins:

..more..

Choose one: High throughput or fast response time


Many people assume that the defaults for the Oracle parameters are just right for their systems, not realizing the incompatible goals of optimizing for throughput vs. optimizing for fast response time.

The default in Oracle 11g for the optimizer_mode is all_rows, an optimizer mode whose goal is maximizing total throughput by choosing SQL access plans that minimize server resources. While this is a worthy goal, optimizing for maximum system throughput is not the same as optimizing for the fastest response time, a goal set with optimizer_mode=first_rows.

- Response time – This is a “selfish” goal: minimizing response time for individual queries, regardless of the expense to other tasks on the system. Fast response time is often associated with index access.

- Throughput – This is a holistic, system-wide optimizer goal concerned with optimizing your entire workload to maximize total throughput. High throughput is often associated with parallelized large-table full-table scans.

To get the fastest response time we often need to use the first_rows optimizer_mode, a setting that directs the SQL optimizer to choose plans that start returning rows back to the calling application as quickly as possible. Optimizing for the fastest response time often involves using indexes, an access technique that returns rows quickly but incurs more I/O operations.
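
As a quick sketch, the optimizer goal can be flipped at the session level, or overridden for a single query with a hint (the orders table is hypothetical):

    -- Optimize for the fastest delivery of the first rows (OLTP-style):
    alter session set optimizer_mode = first_rows_10;

    -- Optimize for total throughput (batch and warehouse style):
    alter session set optimizer_mode = all_rows;

    -- Override the goal for one query only:
    select /*+ first_rows(10) */ * from orders where order_status = 'OPEN';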

The optimizer_mode is a “silver bullet” parameter, a case where a single change will have a profound impact on the entire database.

Follow the link to read more about optimizing for fast response time vs. optimizing for high system throughput:

..more..

Important changes in 11g data buffer management


When the Oracle database was first created, Oracle knew that the database needed a place to buffer frequently referenced data blocks in faster RAM storage. The data buffer region was originally defined by the single parameter db_cache_size, but Oracle now allows you to customize your I/O by data characteristics, using multiple block sizes for different data workload characteristics.
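
As an illustrative sketch (sizes and paths hypothetical), a separate buffer pool is created for a non-default block size, and a tablespace is then built to use it:

    -- Carve out a 32k buffer pool alongside the default db_cache_size pool
    -- (assumes the SGA has room for the additional cache):
    alter system set db_32k_cache_size = 512M;

    -- A tablespace whose blocks are cached in that pool:
    create tablespace ts_32k
       datafile '/u01/oradata/ts_32k_01.dbf' size 10G
       blocksize 32K;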

However, these multiple data buffers do not address the impending explosion of solid-state disk technology, high-speed RAM disks that are poised to replace the ancient "spinning rust" platter technology that is half a century old.

In addition to SSD being up to 600 times faster than platters, prices are falling to under $1,000 per gigabyte, making SSD flash disks an obvious replacement for the elderly spinning platters. We also see the Exadata database machine, a million-dollar monster with terabytes of SSD storage.

Today, when Oracle reads a data block in from disk, the block is inserted at the buffer midpoint and "pinged" to the most-recently-used end of the data buffer each time it is required by a SQL query. Oracle also uses the RAM data buffer to make read-consistent clones of data blocks and uses the buffer space for some internal space management operations.

But how are things different when our storage is on the same super-fast RAM as the data buffers? Do we even need a data buffer?
The answer is yes, but the primary goal is no longer keeping popular data blocks on faster storage. Oracle still needs RAM to clone data blocks for multi-version read consistency (CR copies).

Let's take a closer look at how the flash_cache data buffer differs from a regular buffer, and learn more about these important new changes to data buffer management for solid-state SSD flash storage:

..more..