Hypercharge SQL performance with function-based indexing

I am now one year into writing my next book "Oracle SQL Tuning: The Definitive Reference", and I have had an epiphany about SQL tuning that all Oracle professionals should know.

The vast majority of SQL tuning can be avoided by following a few simple rules:

- Carefully formulate the SQL syntax to conform to standard practices
- Always use complete optimizer statistics (using dbms_stats)
- Make sure that all indexes exist, especially function-based indexes

In my experience, the single most common mistake that I have observed is SQL that performs poorly because the DBA or developer did not know how to create a function-based index. A function-based index (FBI) is a Godsend because it allows the matching of virtually any WHERE clause predicate with a corresponding index.

With an FBI in place, Oracle goes directly to the rows needed, minimizing database I/O and improving overall performance. Better yet, adding missing function-based indexes serves as a silver bullet (an act that changes the entire workload landscape). When an FBI is added, Oracle instantly recognizes the new index and re-parses all SQL that might benefit from it, so a single FBI can improve the performance of thousands of SQL statements.
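
As a minimal sketch, suppose a query filters on WHERE upper(last_name) = 'SMITH'. A conventional index on last_name cannot service the function, but an FBI can (the customer table and column here are illustrative, not from any particular system):

create index cust_upper_name_idx
on customer (upper(last_name));

-- gather fresh statistics so the optimizer will consider the new index
exec dbms_stats.gather_table_stats(user, 'CUSTOMER', cascade => true);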

Follow the link below to see why function-based indexes are so critical to SQL performance:

..more..

Understanding Oracle knowledge engineering

Since the earliest days of business computing, the idea has been to identify well structured activities and automate them.

The first business processes to be automated were well structured, redundant tasks such as payroll processing, where the repetitive, well structured components of a system could be handed over to the computer.

As the decades passed, Information Systems became more sophisticated at capturing and deploying human intelligence within computer systems, and we see these types of systems:

Expert Systems - These online systems capture a well structured task and mimic human processing. An example would be Mycin, a system that applies physician intelligence to the analysis of blood samples. An expert system makes the decision without the aid of any human intuition.

Decision Support Systems - A DSS is a computerized system that recognizes that human intuition is difficult to quantify and automate. In a DSS the human makes the decision, guided by software that automates the well structured aspects of the problem domain.

The line between an expert system and a decision support system blurs in some cases when what is thought to be an intuitive process is actually a well structured problem with extremely complex decision rules.

To see examples of knowledge engineering and how it can be used for data cleansing follow the link below:

..more..

Rewriting complex SQL for high performance


I am now about one year into writing my next book "Oracle SQL Tuning: The Definitive Reference," and I've discovered some fascinating new principles of SQL tuning, counterintuitive approaches that can make your SQL run faster than ever before.

Many experts correctly state that a problem should always be solved using SQL without resorting to PL/SQL unless absolutely necessary, but there is a limit to that technology. Once a SQL query has subqueries nested within subqueries, subqueries in the SELECT clause (scalar subqueries), or subqueries in the FROM clause (in-line views), the SQL becomes difficult to read, hard to maintain, and challenging to optimize. In these cases, we can use some powerful Oracle tools to divide and conquer complex SQL statements.
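
As one hedged illustration of divide and conquer, the WITH clause (subquery factoring) pulls an in-line view out of the main query body so each piece can be read, and tuned, in isolation. The emp and dept tables here are illustrative:

with dept_totals as (
  select deptno, sum(sal) as total_sal
  from   emp
  group  by deptno
)
select d.dname, t.total_sal
from   dept d,
       dept_totals t
where  d.deptno = t.deptno;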

Because SQL is a declarative language, we can formulate equivalent queries with numerous techniques. SQL is a "state space" query language in which you simply request the desired rows, and there are many ways to write the same query.

In the past, programming competitions were not concerned with who could solve a problem the fastest but with who could write the solution that ran most efficiently. Today this is no longer the case: SQL developers are charged with getting the right rows back as quickly as they can write the query, regardless of the internal execution plan or the time required to execute it!

Follow the link below to take a closer look at several powerful SQL re-writing techniques:

..more..

To get my SQL tuning book for 30% off, follow the link below and pre-order it directly from the publisher:

..more..

A guide to Oracle virtualization


To our friends in the Burleson Oracle community:

Before you start planning an Oracle virtualization strategy, be sure to familiarize yourself with the basics, gain an understanding of Oracle’s licensing and support policies, and learn the pros and cons of Oracle VM vs. VMware.

For an introduction to Oracle virtualization and to discover how Oracle VM fits into the virtualization market, check out this exclusive guide:

..more..

Also, don’t forget to check out additional sections in this Oracle virtualization guide, including an overview of:

* Oracle's recent acquisition of Virtual Iron
* Oracle VM Template Builder
* Oracle VM vs. VMware

Become a member of SearchOracle.com today to access this resource for Oracle professionals seeking to create a uniquely tailored virtualization strategy:

..more..

Secret Oracle Utilities


Oracle has "secret" undocumented utilities that are so powerful that the company is reluctant to release the details to the general public. Oracle deliberately glosses over these powerful utilities, which can be extremely dangerous in the hands of Oracle professionals without the required work experience. Oracle must write their documentation with the lowest common denominator in mind, and the complexity of these utilities is such that it can far exceed the skill and understanding level of that lowest common denominator user.

These utilities are often left tucked inside an Oracle distribution for use exclusively by Oracle technical support, but expert Oracle DBAs also find them invaluable for advanced database operations. These undocumented utilities are described in the new book "Advanced Oracle Utilities: The Definitive Reference".

A hidden Oracle utility is not a utility that is physically hidden in the Oracle software. Rather, a hidden utility is an executable or PL/SQL package that is either undocumented or is included such that the documentation is difficult to find. For example, some PL/SQL packages are never loaded by the Oracle installer, yet their definitions remain in the operating system files.
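
As a hedged sketch, you can check the dictionary to see whether such a package has ever been loaded; dbms_support is a commonly cited example of a support package that ships as a script but is not installed by default (treat the names here as illustrative):

select object_name, object_type, status
from   dba_objects
where  object_name = 'DBMS_SUPPORT';

-- no rows back suggests the defining script (e.g. dbmssupp.sql under
-- $ORACLE_HOME/rdbms/admin) has never been run on this database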

Follow the link below to discover how to identify and use these secret Oracle Utilities:

..more..

See below to get the book on advanced Oracle utilities for 30% off by buying it directly from the publisher:

..more..

How to read the steps in a SQL execution plan listing


When Codd and Date created the relational data model, the execution plan was an afterthought. This was largely due to the fact that the SQL optimizer was always supposed to generate the best execution plan; therefore, there was no real need to understand the internal machinations of Oracle execution plans.

In the real world, all SQL tuning experts must be proficient in reading Oracle execution plans and understanding the steps within an explain plan, including the sequence in which the steps are executed. To successfully interpret an explain plan, you must know the order in which the plan steps are executed.

Reading an explain plan is important for many reasons, and Oracle SQL tuning experts review explain plans to check many things:

* Ensure that the tables will be joined in optimal order.
* Determine the most restrictive indexes to fetch the rows.
* Determine the best internal join method to use (e.g. nested loops, hash join).
* Determine that the SQL is executing the steps in the optimal order.
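
As a minimal sketch of producing a plan to inspect (the emp table is illustrative), the standard explain plan plus dbms_xplan approach looks like this:

explain plan for
select * from emp where deptno = 10;

select * from table(dbms_xplan.display);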

Reading SQL execution plans has always been difficult, but there are some tricks to help determine the correct order that the explain plan steps are executed. Follow the link below for some insight into this challenging undertaking:

..more..

Oracle 11gR2 new parallel query management enhancements


The database industry is clearly in the midst of massive server consolidation, an economic imperative whereby the old-fashioned one database/one server approach of the client-server days has been rendered obsolete. Today, single servers with 32 and 64 CPUs and hundreds of gigabytes of RAM can host dozens of large Oracle databases.

While the 2nd age of mainframe computing came about to facilitate easier DBA management, there remain the impressive benefits of having a bank of dozens of CPUs to perform full scans very quickly.

When invoking Oracle parallel query, there are many perils and pitfalls:

* Setting parallelism on at the table or system level influences the optimizer, and sometimes makes full-scan operations appear cheaper than they really are.

* Determining the optimal degree of parallelism is tricky. The real optimal degree of parallelism depends on the physical placement of the data blocks on disk as well as the number of processors on the server (cpu_count).

To relieve these issues, Oracle 11g Release 2 includes the following new parallel query parameters:

* The parallel_degree_policy parameter
* The parallel_min_time_threshold parameter
* The parallel_degree_limit parameter
* The parallel_force_local parameter
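
As a hedged sketch of what enabling the new automatic degree of parallelism policy might look like, with illustrative values rather than recommendations:

alter system set parallel_degree_policy = auto;

-- only consider parallelism for statements estimated to run over 10 seconds
alter system set parallel_min_time_threshold = 10;

-- cap the degree of parallelism chosen for any one statement
alter system set parallel_degree_limit = 16;

-- confine parallel slaves to the node where the query was issued (RAC)
alter system set parallel_force_local = true;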

Follow the link below to take a closer look at these important enhancements to Oracle parallel query in 11g Release 2.

..more..

Oracle flash_cache tips


Oracle 11g Release 2 includes a feature called flash_cache, a table and index argument that resembles the assignment of high use objects to the KEEP pool.

However, unlike the KEEP pool, which uses volatile RAM, the flash_cache is used for tertiary storage on solid-state disk (SSD). On Oracle, SSD is up to 600 times faster than platter disks, and at only $2,000.00 per gigabyte, SSD technology is going to replace platter disks for most systems by 2015. In fact, the 2009 Sun servers have on-board SSD flash memory.

The Oracle documentation suggests enabling the flash_cache when the data buffer advisor indicates that Oracle wants more RAM (which is very common), when the system is disk I/O bound, and when the system has spare CPU cycles. CPU matters because moving to solid-state flash storage removes disk I/O but, in many cases, shifts the workload bottleneck to CPU.

In other words, flash_cache is for every system that is not already running an SSD back end! It should be considered if:

* The Buffer Pool Advisory section of your Automatic Workload Repository (AWR) report or STATSPACK report indicates that doubling the size of the buffer cache would be beneficial.

* db file sequential read is a top wait event.

* You have spare CPU.

Of course, many Oracle shops have been running solid-state disks for many years now, so it is not really a new technology. In traditional SSD, the SSD looks just like a platter disk, except that the I/O can be hundreds of times faster!
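
A minimal sketch of enabling the feature, assuming an 11gR2 instance with a flash device mounted at the path shown (the file name and sizes are illustrative):

alter system set db_flash_cache_file = '/flash/oracle_flash.dat' scope=spfile;
alter system set db_flash_cache_size = 100g scope=spfile;

-- after a restart, favor a hot table for retention in the flash cache
alter table customer storage (flash_cache keep);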

See below for expert Oracle flash_cache tips:

..more..

Review: Oracle's 11g R2 database has some good and bad


To our friends in the Burleson Oracle community:

Although Oracle made no major architectural enhancements in this version of 11g, Release 2 serves as a mix of patches and fixes to existing 11g features. And there are some interesting new features, alongside a couple of clunkers, that you should know about.

Take a closer look at these new features in 11g R2, and see why they are important to an Oracle professional:

..more..

In this review, Don Burleson delves into the more interesting features, as well as the clunkers, including:

* Column level data storage for Exadata
* Oracle flash_cache
* ASM Cluster Filesystem (ACFS)
* And more!

Get the scoop on Oracle 11g R2’s useful changes and learn how to work around the flops. Simply become a SearchOracle.com member today to access this review:

..more..

RAC One Node tips


Traditionally, Oracle RAC is used in a multi-node architecture, with many separate instances of RAC residing on separate servers. This protects against unplanned server outages because transparent application failover will redirect to another server. It also aids the RAC goal of scalability when a new server instance is genned into the RAC cluster.

Now in Oracle 11g R2, there is a new feature dubbed RAC One Node. RAC One Node claims to be multiple instances of RAC running on a single node in a cluster with a fast instance relocation feature in cases of catastrophic server failure.

This instance relocation uses a new feature called Oracle Omotion.

Unlike regular RAC, which runs nodes on many servers, RAC One Node will not protect you in the case of server failure. It does, however, offer instance duplication within a single server environment.

Oracle Corporation is now embracing the IT industry’s concept of instance consolidation, a movement toward consolidated instances and away from the bad old days of client-server computing, where there existed a one instance/one server architecture. This is a “back to the future” approach, a movement back to the monolithic server environments of the 1980s, with all of their benefits:

* Centralized patching and software maintenance.
* Redundant instances mean zero outages during patches and upgrades.
* On demand RAM and CPU sharing within the single large server.
* Fewer Oracle DBA resources required to manage dozens of instances.

Follow the link below for expert RAC One Node tips:

..more..

Oracle column level compression


In traditional relational theory, the internal representation of the data on the physical blocks is not supposed to matter, but in the real world, the placement of the data on blocks is critical. Oracle provides tools like sorted hash clusters to group related rows together, and row sequencing can dramatically improve the performance of SQL queries by placing all information on a single data block.

Related tables can be grouped together on the same data block using Oracle cluster tables. For example, this can be done if you have a busy OLTP database where millions of people query customer and related order rows all day long.

If you use Oracle cluster tables to put the customer and order rows together on a single data block, it will greatly reduce the number of trips to the database to fetch the desired result set.
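
A minimal sketch of such a cluster, with illustrative names and sizes:

create cluster cust_order_cluster (cust_id number)
size 8192;

create index cust_order_cluster_idx
on cluster cust_order_cluster;

create table customer (
   cust_id    number primary key,
   cust_name  varchar2(100)
)
cluster cust_order_cluster (cust_id);

create table orders (
   order_id   number primary key,
   cust_id    number references customer,
   order_date date
)
cluster cust_order_cluster (cust_id);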

Oracle 11g table compression involves a tradeoff between the run-time performance of the SQL and the processing overhead of compressing and decompressing the rows. Because this compression requires overhead, the less volatile the table, the better the overall performance.

Follow the link below for the advantages and obstacles in using Oracle 11g table compression:

..more..

Important tips for tuning SQL joins


Most Oracle professionals are aware of the logical ways to join tables, such as equi-joins, outer joins and semi-joins; however, Oracle also has internal ways of joining tables, including physical join methods that are critical to understanding SQL tuning.

Built into Oracle is a vast array of internal algorithms, including several types of nested loops and merge joins as well as six types of hash joins. There are also many data access methods, such as a variety of internal sorts and table access methods.

When we examine an execution plan for a SQL statement, we may see the physical join implementations with names like nested loops, sort merge and hash join.

The propensity of the SQL optimizer to invoke a hash join is heavily controlled by the settings for the hash_area_size and pga_aggregate_target Oracle parameters. The larger the value for hash_area_size, the more hash joins the optimizer will invoke. In some releases of Oracle, the hash_area_size defaults to double the value of the sort_area_size parameter, but it is highly dependent upon parameter settings and the Oracle release level.

When tuning SQL, we must always remember that it is possible for the optimizer to fail to choose the best table join method. This is especially true for cases where a hash join is wrongly chosen over nested loops. This is frequently the case when we have sub-optimal schema statistics, especially column histograms, which can lead to cases where the optimizer makes an incorrect guess about the cardinality of a result set and wrongly invokes a join that requires a full-table scan rather than choosing nested loops.
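
As a hedged illustration, a use_nl hint can force nested loops where the optimizer has wrongly chosen a hash join (the tables, aliases and predicate are hypothetical):

select /*+ ordered use_nl(o) */
       c.cust_name,
       o.order_date
from   customer c,
       orders   o
where  c.cust_id = o.cust_id
and    c.cust_region = 'WEST';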

See below for expert SQL tuning tricks for forcing nested loops joins over hash joins:

..more..

Choose one: High throughput or fast response time


Many people assume that the defaults for the Oracle parameters are just right for their systems, not realizing the incompatible goals of optimizing for throughput vs. optimizing for fast response time.

The default in Oracle 11g for the optimizer_mode is all_rows, an optimizer mode that has the goal of maximizing total throughput by choosing SQL access plans that minimize server resources. While this is a worthy goal, optimizing for maximum system throughput is not the same as optimizing for the fastest response time, a goal set with optimizer_mode=first_rows.

- Response time – This is a “selfish” goal: minimizing response time for individual queries, regardless of the expense to other tasks on the system. Fast response time is often associated with index access.

- Throughput – This is a holistic, system-wide optimizer goal that is concerned with optimizing your entire workload to maximize total throughput as a whole. High throughput is often associated with parallelized large-table full-table scans.

To get the fastest response time we often need to use the first_rows optimizer_mode, a setting that directs the SQL optimizer to choose plans that start returning rows back to the calling application as quickly as possible. Optimizing for the fastest response time often involves using indexes, an access technique that returns rows quickly but incurs more I/O operations.
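
A minimal sketch of steering the optimizer toward fast response time, at the session level or per statement (first_rows_10 is one of several first_rows_n settings; the query is illustrative):

alter session set optimizer_mode = first_rows_10;

-- or at the statement level, via a hint
select /*+ first_rows(10) */ *
from   customer
where  cust_region = 'WEST';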

The optimizer_mode is a “silver bullet” parameter, a case where a single change will have a profound impact on the entire database.

Follow the link to read more about optimizing for fast response time vs. optimizing for high system throughput:

..more..

Important changes in 11g data buffer management


When the Oracle database was first created, Oracle knew that the database needed a place to buffer frequently referenced data blocks in faster RAM storage. The data buffer region was originally defined by the single parameter db_cache_size, but Oracle now allows you to customize your I/O by data characteristics, using multiple block sizes for different data workload characteristics.
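
A minimal sketch of this multiple-buffer approach, with illustrative sizes (a 16k cache must exist before any 16k-blocksize tablespace can be used):

alter system set db_cache_size = 2g;        -- cache for the default 8k blocks
alter system set db_16k_cache_size = 512m;  -- separate cache for 16k blocks

create tablespace hist_data
datafile '/u01/oradata/hist01.dbf' size 10g
blocksize 16k;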

However, these multiple data buffers do not address the impending explosion of solid-state disk technology, high-speed RAM disks that are poised to replace the ancient "spinning rust" platters technology that is half a century old.

In addition to being up to 600 times faster than platters, SSD is falling in price to under $1,000 per gigabyte, making SSD flash disks an obvious replacement for the elderly spinning platters. We also see the Exadata database machine, a million dollar monster with terabytes of SSD storage.

Today, when Oracle reads in a data block from disk, it gets inserted at the buffer midpoint, and it is "pinged" to the most-recently-used end of the data buffer each time that the block is required by a SQL query. Oracle also uses the RAM data buffer to make read consistent clones of data blocks and uses the buffer space for some internal space management operations.

But how are things different when our storage is on the same super-fast RAM as the data buffers? Do we even need a data buffer? The answer is yes, although the primary goal is no longer keeping popular data blocks on faster storage: Oracle still needs RAM to clone data blocks for multi-version read consistency (a CR copy).

Let's take a closer look and see how the flash_cache data buffer differs from a regular buffer, and learn more about these important new changes to data buffer management for solid-state SSD flash storage:

..more..

How hardware advances drive Oracle DBMS advances


It is well known that hardware advances always precede advances in software technology, and this is especially true for Oracle. As hardware becomes faster and cheaper, Oracle professionals change the ways in which we process information. Changes in hardware affect the way we design databases, the way we implement databases and, most importantly, the way that we tune databases.

While the costs of a DBA have remained relatively constant over the past 30 years, the costs of hardware have fallen dramatically, leading to a condition where applying hardware resources is often faster and cheaper than applying human resources to fix an acute performance problem. To read more about this, click below:

..click..

This change in the costs of people vs. hardware has led to a technique called "throwing hardware at a problem". Even though hardware may not address the root cause of an acute performance problem, it's often the smart thing to do for these reasons:

* Hardware upgrades can be done fast, in just a few hours. Root cause fixes can take months.

* Hardware upgrades have very little risk. Re-designing an Oracle system can be very risky.

* Hardware upgrades are a cheaper, guaranteed fix. You can cache a 100 gigabyte database on SSD for under $100,000, while a root cause fix may cost millions of dollars.

Read below to get the fascinating story about how hardware advances precede Oracle advances:

..more..

See here how hardware advances will change the Oracle job roles:

..more..

How to avoid an Oracle layoff


In this down economy, companies are forced to make tough decisions. Layoffs have become commonplace, even among Oracle professionals. When a corporation is forced to trim their Oracle professional workforce, they commonly hire outside risk assessment advisors to determine which people are to be laid off.

I have participated in this unsavory process on many occasions, where I am charged with determining which Oracle employees will retain their jobs and which will be terminated. This process generally starts with an unobtrusive analysis of the Oracle professionals and includes the obvious criteria, like measuring the quality and quantity of their work product; however, we also look at work habits, e-mail habits and web surfing histories.

The goal is to identify those professionals who are indispensable to the ongoing success of the database and separate out those who contribute less to the bottom line. This is especially challenging when evaluating Oracle DBAs because one of the hallmarks of a good DBA is a database that rarely experiences an unplanned outage. Hence, the best DBAs are often those who have automated their tasks and use carefully crafted jobs to monitor and tune their workloads.

Read below for some important tips on avoiding layoffs and keeping your Oracle job:

..more..

Are you ready for Business Intelligence?


Changes in hardware technology always precede changes in software technology, and Oracle is no exception. As disk platter devices experience an unprecedented fall in cost, disk has become insanely cheap, with over a terabyte available for under $10,000. As a direct result of this hardware price change, Oracle DBAs are being told to keep larger and larger amounts of historical data and to start providing management with tools to analyze and leverage this critical business information. This process is called Oracle business intelligence.

All Oracle databases collect information about ongoing business processes and it is only a matter of time before management starts to seek specialized information about their data. Eventually, all Oracle DBAs are faced with these requests for analysis of operational data, and these requests form the foundation of Oracle data warehousing and business intelligence.

As disk becomes insanely inexpensive, business managers are now choosing to keep their operational data for years and use this valuable data to help them with basic Oracle business intelligence, the planning and execution of business analysis processes. Let’s take a quick look at the stages of this evolution into Oracle business intelligence:

- Data collection – This phase involves the ETL (Extract, Transform and Load) processes, taking operational data and saving it for analysis. The Oracle Warehouse Builder tool provides a framework for business intelligence, allowing for data extraction and loading of advanced data warehouse applications.

- Summary and aggregation – Managers start the business intelligence process by making requests for summaries and rollups of critical business metrics (see the sketch after this list).

- Predictive analytics – At this stage, business intelligence requests become more sophisticated, and management will ask the DBA to extract historical data and make predictions using regression techniques.

- Hypothesis testing – As the managers begin to see the value of their data they will start requesting advanced business intelligence, full-blown modeling and the capability to perform hypothesis testing.
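
As a hedged sketch of the summary and aggregation stage noted above, a single ROLLUP query can deliver the subtotals and grand total that managers typically request first (the sales table is illustrative):

select region,
       product,
       sum(sale_amount) as total_sales
from   sales
group  by rollup (region, product);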

Sooner or later, all Oracle DBAs will be asked to perform business intelligence tasks and all Oracle professionals should become familiar with the common BI tools and techniques for Oracle business intelligence operations. See the link below for more details:

..more..

Popular RAC Storage Options in 11g


When one of my clients is working on a new RAC configuration, I’m guaranteed to receive tons of questions. One of the most common is: what is the best storage option for RAC?

Despite the plethora of articles and information regarding storage options, most companies end up going with the advice of their storage vendor.

Some of these options include:

* Raw storage, which is often demanded yet rarely used. Popular misconceptions and difficult management make this a fading technology.

* ASM, the new standard for RAC storage. As a one-time skeptic of this technology, I have found myself consistently pleased with it.

* Direct NFS, 11g’s new networked storage product sure to excite users of NAS filers.

* OCFS2, a cluster file system developed for Oracle RAC environments.

To read more on these options, as well as a discussion of udev rules, a device management solution which replaces traditional methods in RHEL5, see here:

..more..

Important ASM changes in 11g Release 2


There are some super-important changes to storage management in Oracle 11g Release 2. Oracle ACE Lutz Hartmann has offered the following items, which delve into these important new enhancements in 11gR2.

ASM Cluster Filesystem (ACFS) - This new feature, also named "Unified Storage Management", allows for storing application file data, the Oracle Cluster Registry (OCR), the Cluster Voting Disk, and the Oracle binaries, all within ASM.

Cluster Voting Disk and Cluster Registry in ASM - This is an important RAC enhancement where it is now possible to create Quorum Disk Groups and Quorum Disk FailGroups which cannot hold any other data than OCR or Voting Disk data.

ASM Dynamic Volume Manager (ADVM) - ADVM is a new device driver which is integrated into the Oracle Kernel and functions as a standard I/O interface for normal file systems to utilize ASM functionalities. This driver communicates with the ASM instance about ASM extent maps, rebalancing operations and I/O failure issues.

ASM FS Snapshots in 11gR2 - It is now possible to create up to 63 image copies of file systems as a point-in-time capture. These snapshots are read only and can be used to view a file system as of the point in time when they were created.

ASM Optimal Disk Placement in 11gR2 - It is now possible to specify on which disk regions the ASM Disks will be placed. This enables us to make use of faster disk zones on the outer regions of disks for better I/O performance.

..more..

The secrets of Oracle scalability


One of the reasons that Oracle became a leader in database technology was their flexible tool set, a group of utilities that allows a database to grow seamlessly from a small departmental system to a giant multinational behemoth. Being the world’s most flexible and robust database, Oracle offers a wide array of tools and techniques for scaling. It is the challenge of the IT manager to apply these tools at the appropriate time to ensure seamless growth while minimizing the investment in hardware resources.

Even though hardware prices fall by an order of magnitude every decade, the investment in computing resources remains a large, critical expense that requires careful management. Hardware depreciates regardless of use, and a “too much too soon” approach can be wasteful. On the other hand, waiting until the system experiences stress-related response time delays is also bad, especially since today’s end-user community has very little tolerance for sluggish response time.

The answer is the “scale up, scale out” approach.

This scalability secret allows Oracle database to grow seamlessly from tiny to behemoth. Read below for the trick:

..more..

Costs of SQL Server vs. Oracle


Microsoft and Oracle are engaged in a price war. Microsoft SQL Server struggles to grow "up" into the midsized database market, while Oracle pushes "down" to serve the market for small departmental database systems.

As a result, we are flooded with marketing from both sides. There is some debate in the market about Oracle being too expensive for small users.

Comparing SQL Server to Oracle is like comparing a moped to a Maserati! Oracle is an order of magnitude more flexible and robust than SQL Server.

When comparing the costs of Oracle vs. SQL Server we must remember that it’s not just about features. SQL Server only runs on Intel, no HPUX, no Solaris, no AIX, nothing "industrial strength", with the exception of the UNISYS ES series monster servers.

Read more about the real costs of SQL Server vs. Oracle:

..more..

How to tune Oracle SQL remotely


If you use a vendor package with Oracle, you may be one of the thousands of shops that struggle with optimizing and tuning vendor systems that you did not write and cannot change. So, how do you tune SQL when you cannot access the SQL source code?

In traditional SQL tuning, changes are made by rewriting the SQL statement into a more efficient form or by changing the execution plan, adding hints to the statement. With SQL that hides inside a compiled program, changing the SQL is impossible, and in other cases the software vendor may explicitly prohibit any changes to the source code. Either way, you must come up with a creative way to tune the SQL without touching the source code.

Oracle provides a “trick” technique for tuning SQL when you cannot “touch” the source code. Cases where you cannot change the SQL source code include dynamically generated SQL, SQL inside 3rd party vendor packages, and compiled programs with embedded SQL.
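
One such trick (among several; SQL profiles via dbms_sqltune are another) is a stored outline, which freezes an execution plan for a statement you cannot edit. A minimal sketch, with a hypothetical vendor query and category name:

create or replace outline vendor_sales_outline
for category vendor_app
on
select * from sales where region = :b1;

-- direct the session to apply outlines from that category
alter session set use_stored_outlines = vendor_app;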

Read more below to see examples of tuning SQL when you cannot touch the source code:

..more..

How to find the fastest execution plans

Learn SQL tuning tricks

While many Oracle professionals learned about the relational model and SQL in college, they make the common mistake of thinking that SQL tuning is governed by well-defined decision rules.

Oracle's SQL optimizer is only one of a host of different SQL optimizer implementations, and, like any other vendor implementation, there are many nuances and tricks to SQL tuning.

Not all Oracle SQL tuning tricks make sense, but nonetheless, there are powerful yet counterintuitive tricks that can make SQL run at optimal performance levels.

Becoming an expert in Oracle SQL tuning is not trivial, and there are no shortcuts. Only by tuning thousands of SQL statements can you fully understand the subtleties and nuances of Oracle SQL and be able to effectively tune large and complex SQL statements.

..more..

Integrating Streams Replication and Standby Databases

The use of Oracle Streams Replication has become more and more common, replacing the more cumbersome Multimaster Advanced Replication. Many shops use Streams Replication to offload reporting and high-impact loads from production databases. Based on my experience, more than half of these shops have the destination database in the same rack as the source database. With both databases located in close proximity, many shops are also implementing physical standbys of one or both of the replicated databases in a remote location. Oracle Streams and Data Guard physical standby databases integrate easily, once you understand how the two technologies fit together.

..more..

Tracking & auditing changes to your init.ora parameters

A very important auditing task for any production database is the ability to track changes to the powerful initialization parameters. Many of these parameters are "silver bullets", single parameters that have a profound impact on system-wide behavior. This is especially true for SQL optimizer changes.

A single change to an optimizer parameter might affect thousands of SQL statements, a disaster in a controlled production environment.

Auditing changes to init.ora parameters (via pfile or spfile) is an important DBA task. Users with “alter system” privileges can sometimes make unauthorized changes to the initialization parameters in the spfile on a production database, so tracking who changed what, and when, is critical.
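
A minimal sketch of one way to track these changes, assuming standard auditing is enabled (audit_trail set to db):

-- capture every ALTER SYSTEM statement issued
audit alter system;

-- review who changed what, and when
select username, timestamp, action_name
from   dba_audit_trail
where  action_name = 'ALTER SYSTEM'
order  by timestamp;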

..more..

SQL tuning with views

Oracle views provide a benefit in these important areas:

·Code reuse: Views ensure that everyone uses the exact same SQL to access their tables.

·Column access security: Using the "grant" security model, views can be used for column-level security, such that some columns in a table may be "hidden" by not specifying them in a view (see the sketch after this list).
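
A minimal sketch of column-level security through a view, using the illustrative emp table (the role name is hypothetical):

-- hide the salary column from general users
create or replace view emp_public as
select empno, ename, job, deptno
from   emp;

grant select on emp_public to hr_readonly_role;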

For all of the benefits that Oracle views provide, they do come at a cost. One downside to using views is that they add complexity to the SQL and make it harder for the optimizer to service a query with the minimum amount of resources. This can pertain to either I/O or CPU resources, depending on the optimizer goal.

While it is clear that views are useful for end-user ad hoc queries and cases where the goal is to simplify the syntax of complex SQL queries, the following serious problems can occur when queries contain views:

·Predicate pushing: The downside to reusable code is that WHERE clause predicates must be "pushed" down the hierarchy of views to the base query. This adds processing overhead to the optimizer and increases the chances of a bad execution plan.

·Non mergeable views: Because a view is an encapsulation of a complex query, it is used as if it were a discrete relational table. Hence, Oracle must pre-materialize a view whenever it is used in a query. This creates a hidden sub-plan that must be exposed for SQL tuning.

·Unnecessary overhead: Views are abused when they introduce unnecessary complexity. For example, there may be a call to a view that is composed of 10 tables where the result set only requires data from two tables.

·Excessive hard parsing: Predicate pushing may result in a hard parse of the underlying SQL that is executed. Hence, it is important to make sure bind variables are used instead of literals in SQL code calling views.

Read on to see important tips and tricks for tuning SQL that contains views.

..more..

Ethics for the Oracle DBA

Do DBAs have a social or ethical responsibility to the organizations they work for?

The role of a database administrator has changed according to the technology of database management systems (DBMSs) as well as the needs of the owners of the databases. For example, although logical and physical database designs are traditionally the duties of a database analyst or database designer, a DBA may be tasked to perform those duties.

Why we need a code of ethics:

System Administrators (SAs) and DBAs generally have high levels of access into computer systems at major corporations, financial institutions, educational facilities and brokerage firms. These IT professionals have access to highly confidential information before it becomes public knowledge: IPOs, stock ratings, debt ratings, and test questions and answers, just to name a few.

..more..

Secrets for managing multiple instances


In the world of the working Oracle DBA, one can be expected to manage dozens of production instances concurrently. Almost all working Oracle experts interact with their databases through OS commands, SQL*Plus scripts and shell scripts. The Oracle Enterprise Manager (OEM) GUI is used primarily by newbies and the more inexperienced DBAs.

OEM makes it easy for a neophyte to “pretend” to be an experienced DBA. Since it takes several years of full-time work to be familiar with the command-line syntax, OEM has been a great favorite among the “poser” crowd.

Anytime I work with a DBA who can only administer a database from OEM, I know that they are likely to belong to one of these groups:

Raw beginners - OEM is a great tool for beginners who have not yet learned that the command line interface is infinitely more flexible than the limited possibilities offered in OEM.

Posers - Fraudulent credentials and fake resumes are all too common in the Oracle industry, and reliance on GUI tools should be a tip-off that you are dealing with someone who has a very limited background in managing large computer systems.

Non-techies - Oracle managers with a non-technical background (no degree in Information Systems or Computer science) love to use OEM because it allows them to do tasks that would normally be far beyond their ability level.

A seasoned DBA knows that a firm knowledge of shell scripting, OS commands and native SQL and DDL is indispensable for managing multiple instances.

Read more here about tips for managing a server with multiple instances without relying on OEM:

more

Advanced SQL tips - reading an execution plan


When Codd and Date created the relational data model, the execution plan was an afterthought, largely because the SQL optimizer was always supposed to generate the best execution plan, and hence, there was no real need to understand the internal machinations of Oracle execution plans.

However, in the real world, all SQL tuning experts must be proficient in reading Oracle execution plans and understand the steps within an explain plan and the sequence in which the steps are executed. To successfully understand an explain plan, you must be able to know the order that the plan steps are executed. Reading an explain plan is important for many reasons, and Oracle SQL tuning experts use explain plans to check many things.

Reading SQL execution plans has always been difficult, but there are some tricks to help determine the correct order that the explain plan steps are executed.

Ordering the sequence of execution plan steps

SQL execution plans are interpreted using a preorder (reverse) traversal algorithm. This is a fancy way of saying:

1 - To read an execution plan, look for the innermost indented statement. That is generally the first statement executed.
2 - In most cases, if there are two statements at the same level, the first statement is executed first.

In other words, execution plans are read inside-out, starting with the most indented operation. There are some general rules for reading an explain plan.
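
As a hedged illustration, consider this hypothetical nested loops plan:

SELECT STATEMENT
  NESTED LOOPS
    TABLE ACCESS BY INDEX ROWID dept
      INDEX UNIQUE SCAN dept_pk        <-- innermost: executed first
    TABLE ACCESS BY INDEX ROWID emp
      INDEX RANGE SCAN emp_dept_idx

Here the dept_pk scan runs first, each resulting dept row drives a range scan of emp_dept_idx, and the nested loops operation assembles the joined result.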

Read on and learn how to read the sequence of steps in any SQL execution plan:

more

The Care and Feeding of a Standby Database


The Oracle Managed Standby Database is one of the most efficient and effective disaster recovery options for Oracle databases. The Managed Standby provides the capability to very quickly recover from the loss of a production database. It also can temporarily take over if the production database requires maintenance. Using an Oracle Managed Standby database can allow the DBA to provide near 24/7 availability of the database, year in and year out.

The standby database can be either local or in a remote location. Many shops have both a local and a remote standby: the local standby supports the management of the primary, and the remote standby supports disaster recovery. One primary database can have up to nine standby databases.

Find the steps critical to maintaining a healthy standby database:

more

Be careful with Oracle silver bullets


Oracle has codified the "silver bullet" approach to Oracle tuning with the Oracle Real Application Testing (RAT) facility. RAT is used along with the SQL Performance Analyzer to provide a complete framework for system-wide testing of Oracle silver bullet changes.

RAT allows the DBA to perform holistic tests when introducing silver bullets, a way to ensure that the global change will have the maximum effect and not clobber the database.

A silver bullet is any single Oracle change that has a profound impact on system-wide performance, both good and bad!

Should you fail to optimize the system for its workload, SQL tuning can be undone with subsequent changes to any of the Silver Bullets, changes that could alter the execution plans of thousands of SQL statements! A silver bullet can be any of these Oracle changes:

- SGA pool sizes (shared_pool_size, db_32k_cache_size, etc.): Pool sizes have a dramatic effect on Oracle performance.

- Materialized Views: MVs can pre-join popular tables and prefetch commonly referenced summary data.

- Indexing: Intelligent indexing ensures SQL query completion with a minimum amount of buffer gets.

- Optimizer knobs (optimizer_mode, Optimizer_index_cost_adj): Changing these global parameters can have a profound impact on Oracle performance, both good and bad.

- I/O subsystem changes (RAID level, multiple blocksizes, SSD): I/O remains the most significant performance bottleneck in most production databases.

- Optimizer statistics: Applying CBO histograms can have a huge benefit to an entire system.

These "broad brush" settings change the way that Oracle processes data in a profound way, and a single change can either make you a hero or get you fired.

Click below to learn more about global tuning with Oracle Silver Bullets:

more

Indexing... the Good, the Bad, and the Ugly


Indexes are the easiest way to improve the performance of long running queries with full table scans. Indexes allow the database to search the smaller indexes as opposed to searching the large table. This can improve not only the SELECT queries, but also UPDATEs and DELETEs.

Finding the most efficient index is difficult. Most indexes will be the default B-tree type, which has been in use for years and has been highly optimized.

Having too many indexes can slow down the performance of UPDATEs and DELETEs. Use too few indexes, and all types of queries may run slower, even UPDATEs and DELETEs. If it were just a tradeoff between the performance of UPDATEs and INSERTs and the performance of SELECTs, optimizing indexes in a database would be fairly easy. However, it is not that simple, and finding the right balance is the challenge.

Learn secrets for finding balance when index planning:

http://oracle-tips.c.topica.com/maam5JAabQj2zbLGJrib/

*****************************************

Can you see your Oracle bottlenecks?

After several years of hard work, Ion for Oracle is finally available for general release. Ion is unlike any other Oracle tool on the market, a tool that encapsulates expert techniques to allow a DBA to quickly visualize important trends and signatures.

Predictive analytics is the key to repairing Oracle problems before they cripple your database, and I created Ion for Oracle to allow any DBA to quickly identify changing workloads and unobtrusive patterns of data access. Follow the link to try Ion for free:

http://oracle-tips.c.topica.com/maam5JAabQj2AbLGJrib/

Finding trends and signatures for proactive tuning


One shortcoming of a reactive tuning approach like Oracle's automatic memory management (AMM) is that it waits until a problem is signaled before taking remedial action. To be truly proactive, we must develop techniques that analyze repeating trends and identify processing signatures.

These trends, in turn, will predict changes in workload characteristics. If you know in advance when a workload is going to change, you can adjust the SGA just in time to accommodate the new workload, fixing the issue before it bothers your end-user community.

This is the idea behind creating a self-tuning Oracle database, a proactive technique that I've been working on for over a decade.

To get started, you must use AWR or STATSPACK queries to identify changing workloads, analyzed in a linear regression, as well as by hour of the day and day of the week. Once you do this, "signatures" will become apparent, and you will see when your workload changes, the first step in proactive self-tuning for an Oracle database.
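
A minimal sketch of an hour-of-the-day signature query against the AWR (this assumes the Diagnostics Pack license required to query the dba_hist views; a production script would also difference each snapshot's cumulative value from the prior snapshot rather than averaging raw values):

select to_char(s.begin_interval_time, 'HH24') as hour_of_day,
       avg(t.value)                           as avg_value
from   dba_hist_snapshot s,
       dba_hist_sysstat  t
where  s.snap_id = t.snap_id
and    t.stat_name = 'physical reads'
group  by to_char(s.begin_interval_time, 'HH24')
order  by 1;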

I am excited to announce that after years of work, I have finally completed a comprehensive tool for identifying hidden trends and signatures, Ion for Oracle. Ion uses applied artificial intelligence to remove the tedium from proactive analysis, leaving you free to do the intuitive work:

http://oracle-tips.c.topica.com/maam4PSabQcc1bLGJrib/

Special Report: The Oracle - Sun deal


Oracle's announcement of its acquisition of Sun Microsystems will provide it with a complete lineup of hardware and software technologies -- an advantage few companies can boast. Stay tuned to SearchOracle.com for all of the latest headlines circulating around the exciting acquisition.

Check out our Special Report, offering a complete round-up of all the latest news stories buzzing around Oracle’s big move:

http://oracle-tips.c.topica.com/maam385abP6qObLGJrib/

Read news articles from across the TechTarget network including:

* Oracle-Sun combo: What does it mean for enterprise Java?
* Oracle-Sun acquisition raises MySQL, hardware concerns
* Analysts see Oracle-Sun deal as storage 'game changer'
* In Oracle-Sun deal, analysts predict identity management fallout
* VARs turn wary eye on Sun-Oracle combo

Access this Special Report today to stay up-to-date on all aspects of the latest Oracle acquisition:

http://oracle-tips.c.topica.com/maam385abP6qObLGJrib/

Expert tricks for Oracle SQL tuning


The Oracle cost-based SQL optimizer (the Cost Based Optimizer - CBO) is one of the most complex software packages ever created. Imagine writing a program that must always generate the "best" execution plan for any SQL statement. Oracle's implementation of SQL optimization is the best in the world, and along with this great power and flexibility comes great complexity.

Oracle SQL tuning is phenomenally complex, but there are some secrets that can help you succeed. Here are a few tricks that they do not teach you in Oracle University:

- Do the system-level tuning first - Setting the optimizer parms properly (especially optimizer_mode, optimizer_index_cost_adj, and optimizer_index_caching) can tune thousands of SQL statements in a single action.

- Try a /*+ RULE */ hint - The first thing I do when tuning a query is to invoke the rule-based optimizer (see the sketch after this list). If the Rule Based Optimizer (RBO) generates a fast plan, then you know that the problem is not missing indexes, and you can focus on sub-optimal schema statistics (especially histograms).

- Choose the optimizer_mode wisely - The default optimizer mode of all_rows is designed to minimize computing resources. If you want to optimize your SQL for fast response time, you need first_rows (or first_rows_10, first_rows_100).

- Avoid hints whenever possible - In Oracle, hints are a last resort, and you can adjust your CBO statistics (with dbms_stats) and use histograms to reproduce the execution plan from a hint. This has the side benefit of tuning other SQL statements as well!

- Use Histograms wisely - Oracle sometimes makes incorrect "guesses" (cardinality estimates), resulting in incorrect index usage and improper table join order. Make sure to deploy histograms to resolve these issues. Here are my notes on tuning with histograms: link
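
As a hedged illustration of the RULE hint test mentioned above (the query is hypothetical; the RBO ignores CBO statistics entirely, so a fast rule-based plan points the finger at the statistics rather than at missing indexes):

select /*+ rule */
       c.cust_name
from   customer c
where  c.cust_region = 'WEST';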

Remember, adjusting initialization parms, adding missing indexes and re-computing schema statistics can impact thousands of SQL statements, reducing the amount of manual SQL tuning. These system-wide approaches are codified in the 11g SQL Performance Analyzer, and many of these "silver bullets" are described in the book "Oracle Silver Bullets"

http://oracle-tips.c.topica.com/maam3F4abP15obLGJrib/

To learn more secrets for SQL tuning, see my book Oracle Tuning: The Definitive Reference:
more

Oracle Troubleshooting tips and secrets


Of all the areas of Oracle, troubleshooting is among the most challenging to teach. Troubleshooting is largely an on-the-job skill, and no amount of book learning can match the skill of an experienced Oracle troubleshooter.

Oracle has become the world's most sophisticated and flexible database, and the sheer complexity of Oracle makes it very difficult to troubleshoot.

So, how can you learn Oracle troubleshooting skills? First, it is important that students have a strong background in the internal machinations of Oracle and understand the underlying mechanisms of the Oracle engine. It is also important to recognize how Oracle alerts the DBA to problems. Finally, it is vital to know how to properly interpret error messages.

When an end user gets hung up with a process, the source of the problem could be system-wide, or it could be unique to their session. It is up to the Oracle DBA to understand the right questions to ask and know the right places to look for further information. The goals of Oracle troubleshooting include:

* Learn a troubleshooting methodology for use in analyzing any Oracle database

* Understand ratio-based and bottleneck troubleshooting analysis

* Learn techniques for monitoring and optimizing memory usage

* Know how to quickly pinpoint and resolve I/O hotspots at the database, storage and object level

* Use scripts to uncover session-related bottlenecks

* Understand techniques for locating and fixing problem SQL

There are also specific tips for Oracle troubleshooting that can guide a neophyte in the right direction:

* Always check the first error message - Oracle will frequently throw multiple errors, but it's always the first error that will lead you to the root cause of the problem.

* Root cause analysis is not always practical

* Know your OFA structure

* Don't be afraid to bounce

Click this link for detailed tips for the neophyte as well as other Oracle troubleshooting tips:

more

Find opportunities for materialized views


Oracle says that a well defined set of materialized views can be made to cover a whole schema, but that is easier said than done. Resourceful DBAs know about the SQL Tuning Advisor, but the wisest are always on the lookout for techniques they can use to find all opportunities for materialized views.

Materialized views are one of the most important SQL tuning tools.

They are a true silver bullet, allowing the DBA to pre-join complex views and pre-compute summaries for super fast response time.
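
A minimal sketch of a summary materialized view enabled for query rewrite (the names are illustrative, and query rewrite additionally requires the appropriate privileges and parameter settings):

create materialized view sales_by_region_mv
build immediate
refresh complete on demand
enable query rewrite
as
select region, sum(sale_amount) as total_sales
from   sales
group  by region;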

Read more about how to use your existing workload to determine the best materialized view for super fast response time:

more

Inside Oracle Fully Automated SQL Tuning


Each new release of Oracle brings new tools and techniques for performance optimization, but it is a formidable challenge to create tools that can identify and correct database bottlenecks for every possible situation. I run a large remote DBA operation, monitoring and tuning mission-critical databases around the globe. With such a large base of experience, I am in a unique position to see hundreds of systems in action and understand the best approaches to achieving optimal performance.

Drawing upon my vast experience base, I will attempt to present the unvarnished truth about which Oracle tuning tools and techniques are the most effective. Oracle Corporation is a cheerleader for their product, saying that all of their features are wonderful; however, they are not always forthcoming about the risks and rewards of using new tools and techniques. For example, back in the last days of Oracle 7, word came forth from Oracle Corporation that the rule-based optimizer (RBO) was being removed from Oracle 8, and all shops must move quickly to adopt the cost-based optimizer (CBO).

The CBO was not quite ready for primetime, much to the consternation of those shops that had attempted to migrate to it back in Oracle 8. Even in 11g, we still see improvements to the CBO, and despite dire warnings that the RBO will disappear, RULE hints appear in hundreds of Oracle’s own internal SQL statements.

Read on for more facts about the evolution of Oracle’s tuning tools and techniques:

More

What's the Value of a Professional DBA?


In these recessionary times, it's tempting to cut corners and replace well-trained (and well compensated) DBAs with cheap foreign pretenders. It's penny-wise and pound-foolish behavior, sure to cause issues in the future. Oracle customers are often not aware of the inherent complexities of Oracle that make it the world's most robust and flexible data platform.

Penny wise and pound foolish:

You get what you pay for. If you want a DBA with an MBA and good technical experience, it can easily cost over $100,000 per year.

In the hands of a competent expert, Oracle gives an unprecedented level of control over the data, and a good DBA can control every aspect of their mission-critical database. In the hands of an inept, reckless beginner, Oracle's powerful features become a weapon of destruction.

Untrained and inexperienced Oracle DBAs are the bane of corporate America. Oracle Corporation white papers suggest that more than 75% of all Oracle database outages are attributable to human error.

Read more on the value of a professional DBA here:

http://oracle-tips.c.topica.com/maamXYHabPbQ3bLGJrib/

Inside Adaptive Cursor Sharing


Oracle has improved cursor_sharing several times over the years. Remember, adaptive cursor sharing is only deployed in rare cases where a skewed column distribution (as noted by the histogram) indicates that a different execution plan would be faster. For example, a query with a popular bind variable value would be best served with a full table scan while an unpopular bind variable value would benefit from an index access plan.

But remember, this is a rare occurrence in many systems.

Based on extensive experience in the industry, it appears that about 80% of shops have uniformly distributed data. Large tables remain large, and the distribution of values within a column remains unchanged.

On the other hand, roughly 20% of databases experience highly volatile data loads, where tables are small on one day and huge the next, or cases where there is a "difference that makes a difference". In these databases, huge changes in the table data (usually associated with high DML) change the distribution of data values, necessitating a re-analysis of column histograms.
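
As a hedged sketch, in 11g you can see whether a cursor has been marked bind-sensitive or bind-aware (the sql_id value is hypothetical):

select sql_id,
       child_number,
       is_bind_sensitive,
       is_bind_aware
from   v$sql
where  sql_id = 'abc123xyz0001';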

Read more about 11g adaptive cursor sharing:

http://oracle-tips.c.topica.com/maamW0iabO3zjbLGJrib/

Inside Oracle Cloud Computing


Cloud computing has been touted as a radical "paradigm shift", where IT power is delivered to an application over the internet as you need it, rather than drawn from a centralized server.

What they don’t want you to know is that cloud computing is a new name for a feature that’s almost half a century old! Back in the olden days of data processing, customers rented all of the resources they needed (software, disk, CPU and RAM), all under the umbrella of an MVS mainframe, and they were billed according to their actual usage.

These cloud computing announcements in the media are complete nonsense, and the idea of "cloud computing" is nothing new. Back in the 1980’s, companies regularly rented usage of mainframe computers to third parties, and the mechanisms for apportioning shared computing resources have been around for almost half a century.

Read more about Oracle's implementation of cloud computing and learn the truth behind the hype:

http://oracle-tips.c.topica.com/maamVKuabOS1TbLGJrib/

Oracle 11g Data Compression Tips for the DBA


One of the exciting new features of Oracle 11g is its inline data compression utility. While it is true that data storage prices have fallen dramatically over the last decade, and continue to fall rapidly, Oracle data compression has far more appealing benefits than simply saving on disk storage cost. Because indexes and the data itself can be highly compressed, information can be fetched off of the disk devices with less physical I/O, which radically improves query performance under certain conditions.

Let's take a closer look at how one would implement Oracle 11g Data Compression in order to achieve the optimal results.

Understanding data compression

Data compression techniques such as Huffman coding have been around since the 1950s, but only now are they being put to use within mainstream information systems. Under this approach, a decompression routine is called as soon as a data block is fetched from disk: the information remains compressed within the data blocks themselves, while the fully uncompressed version of the data resides in the data buffers. This creates an anomaly between the size of the information on the data blocks and the size of the same information within the data buffers.

Upon applying Oracle data compression, you will find that far more rows fit on a data block of a given size, yet from the point of view of the SGA (system global area) nothing changes. Because the decompression routine is called upon block fetch, the contents of the Oracle data buffers remain largely unchanged while the data blocks themselves hold far more data.
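As a quick sketch of the DDL involved (the sales_history table name is hypothetical, and note that the OLTP compression clause changed between releases):

-- basic compression: rows are compressed only during direct-path
-- operations such as CTAS or insert /*+ append */
create table sales_history compress
as select * from sales;

-- 11g OLTP compression also compresses rows written by conventional DML;
-- the clause is "compress for all operations" in 11g R1 and
-- "compress for oltp" in 11g R2
alter table sales_history move compress for all operations;

Bear in mind that OLTP table compression requires the separately licensed Advanced Compression option on top of Enterprise Edition.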


Tests show that 11g compression results in slower transaction throughput but fewer disk writes, thanks to the higher row density on the data blocks. Overall, the benchmark shows I/O writes being reduced while CPU consumption increases, resulting in slower SQL throughput. (The detailed figures below come from a benchmark of Transparent Data Encryption (TDE), which exhibits the same CPU-for-I/O tradeoff:)

* Slower transaction throughput – As we would expect, Oracle transactions run faster without the encryption/decryption processing overhead. The benchmark shows significantly slower throughput when deploying TDE (81 transactions/second with TDE versus 121 transactions/second without).

* Fewer disk writes – Because the data was stored in compressed form, the benchmark run with TDE required fewer disk writes.

* More CPU required – As we would expect, TDE consumes CPU cycles for the encrypt/decrypt operations, and in this benchmark test we see user CPU rise from 46 to 80 when using TDE.

See more details on data compression here:
http://oracle-tips.c.topica.com/maamRxVabOkFObLGJrib/

Global SQL Optimization


True Oracle professionals will routinely undertake a holistic tuning approach prior to delving into the tuning of specific SQL statements. This holistic approach has been the bread and butter of successful DBAs since the earliest days of Oracle.

In Oracle 11g, this holistic approach is codified in the SQL Performance Analyzer (SPA), a new tool used to simplify the setting of optimizer statistics and initialization parameters. In the SPA, the DBA chooses a representative workload and runs it, comparing the overall SQL execution plans with different sets of CBO statistics and the settings for the silver bullet initialization parameters.
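Here is a minimal sketch of the dbms_sqlpa workflow, assuming a SQL tuning set named PEAK_WORKLOAD_STS has already been captured (the tuning set and task names are hypothetical):

declare
   tname varchar2(64);
begin
   -- create an analysis task from the captured workload
   tname := dbms_sqlpa.create_analysis_task(
               sqlset_name => 'PEAK_WORKLOAD_STS',
               task_name   => 'spa_stats_test');

   -- run the workload before the change
   dbms_sqlpa.execute_analysis_task(
      task_name      => 'spa_stats_test',
      execution_type => 'test execute',
      execution_name => 'before_change');

   -- (re-gather CBO statistics or change an initialization parameter here)

   -- run the workload again after the change, then compare
   dbms_sqlpa.execute_analysis_task(
      task_name      => 'spa_stats_test',
      execution_type => 'test execute',
      execution_name => 'after_change');

   dbms_sqlpa.execute_analysis_task(
      task_name      => 'spa_stats_test',
      execution_type => 'compare performance');
end;
/

-- display a summary of the before-and-after comparison
select dbms_sqlpa.report_analysis_task('spa_stats_test', 'text', 'typical', 'summary')
from   dual;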

Click below to read more about Global SQL Optimization:

http://oracle-tips.c.topica.com/maamRxLabOkFabLGJrib/

Would you care to CTAS?


If you don't want to use dbms_redefinition, the Create Table As Select (CTAS) statement is one method for reorganizing an Oracle table or moving the table to another tablespace. Instead of spending hours setting up parameter files and job steps, you can copy and rename the table in three simple SQL statements. The CTAS statement can be used to change the storage parameters for a table (INITIAL, NEXT, FREELISTS) and also to change the physical sequence of the table rows. CTAS has the following syntax:

create table xxx_new
tablespace new_tablespace_name
storage (initial new_initial next new_next freelists new_freelist_number )
as
select * from xxx
order by primary_index_key_values;
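For example, a complete reorganization of a hypothetical customer table, resequencing the rows on cust_id, takes only three statements:

create table customer_new
tablespace ts_reorg
as
select * from customer
order by cust_id;

rename customer to customer_old;
rename customer_new to customer;

Remember that indexes, constraints, and grants on the original table do not follow it through a CTAS copy; they must be re-created on the new table.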

There are several ways to execute CTAS to reorganize a table; many of the options depend upon the version of Oracle and the particular configuration of the database.

Read more about the CTAS here:

http://oracle-tips.c.topica.com/maamRxyabOkDebLGJrib/

The High Cost of Oracle Data Breaches


Maintaining Oracle data security is essential, and all Oracle managers have a fiduciary responsibility to protect confidential information. The Oracle professional must be aware of the financial risk associated with inadvertent disclosure of their Oracle data.

Whether it's an external breach (a hacker attack) or internal data-management malfeasance (e.g., using an overseas remote DBA provider), disclosure of confidential data has expensive consequences, and those costs can be quantified.

I've seen firsthand how choosing a bargain offshore Oracle remote DBA provider has led to widespread data theft, leaving the customer with little or no legal recourse. Many Oracle shops choose a remote DBA provider within the USA because of our strict data protection statutes.

Read more about these risks, Oracle hacker horror stories, and data loss calculators as this article continues:

http://oracle-tips.c.topica.com/maamP6kabN8rNbLGJrib/

Database Management for Business Intelligence


Consumer data has been used since ancient Greek times, when it guided shipments of olive oil and other commodities throughout the ancient Greek world and beyond. While the foundations of data storage have changed dramatically, from Mesopotamian clay tablets to today's database management systems, the goals of business intelligence and data mining remain unchanged. The basic tenet of business intelligence is that one can predict the future by analyzing the past, grouping related consumers together to build highly sophisticated and accurate predictive models. These predictive models can save tens of billions of dollars a year in advertising, while consumers receive targeted marketing that is most appropriate to their needs.

Business intelligence is not limited exclusively to the area of marketing and sales. Today, hospitals group patients together in terms of their age and symptoms, and analyze treatment regimens in order to determine the best course of treatment for specifically identifiable populations.

Though the use of business intelligence no doubt saves lives, it has even wider-ranging social implications. First and foremost is the issue of data privacy. As consumer monitoring becomes ever more ubiquitous, many privacy advocates do not want even the most innocuous behaviors recorded. For example, many supermarkets monitor consumer purchasing habits via a buyers' club card. Fortunately, most consumers don't care whether anyone else knows they prefer peas to string beans.

Through the buyers' club card, consumers passively allow point-of-sale systems to track their purchases and tie individual purchases to background demographic information. When consumers apply for a buyers' club card, they provide basic demographic information, which is in turn analyzed against publicly available records of major life events, such as the purchase of a house, a divorce, the presence or absence of children, and income. The database thus contains detailed information not only about what products are being purchased, but also about the basic demographics of the person purchasing them.


The issue of data storage has always been important to business intelligence because of the dynamics of changing technology. Disk prices fall radically every year: back in the 1980s, 1.2 gigabytes of storage could cost upwards of $200,000, whereas today you can purchase the same amount for less than $100. Given our ability to store large amounts of empirical information cheaply, the goal of the business intelligence manager is to cleanse and manipulate this data in such a way that accurate predictive models can be built.

Follow the link below to take a closer look at the evolution of business intelligence from the perspective of the database manager and explore how the database influences the manipulation of these vast quantities of observable data in the real world.

Read more


Super Fast Database Copying/Cloning


A database cloning procedure is especially useful for the DBA who wants to give developers full-sized TEST and DEV instances by cloning the PROD instance into the development server areas.

An Oracle database cloning procedure can also be used to quickly migrate a system from one UNIX server to another, and it is often the fastest way to copy an existing Oracle database.
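The full article walks through the cloning procedure step by step. As one point of reference (not necessarily the method described in the article), 11g RMAN can clone a running database over the network in a single command; here the dev service name is hypothetical, and the auxiliary instance must already be started in nomount mode:

rman target sys/password@prod auxiliary sys/password@dev

duplicate target database to dev
from active database
nofilenamecheck;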

To find out more on the secrets of super fast database cloning, see the full article here:

Read more

The Top 10 Reasons for Poor Oracle Performance


The Oracle documentation lists reasons for poor performance, but that list is not as comprehensive or useful as our BC top-10 list. Oracle Corporation has never been as good at tuning its own software as third-party experts are, and its list of performance causes omits many important factors.

Some factors, like a misconfigured I/O subsystem, are obvious causes of poor Oracle performance; others may come as a surprise.

Click below to see the BC top-10 list of the most common Oracle performance problems:

Read more

Managing an Aging Oracle Database


The Geriatric Instance – Managing a legacy Oracle database is not an easy task.

As Oracle databases pass the decade mark, they begin to suffer from a host of age-related problems. Just like people, the sheen is gone, parts become unwieldy and creaky, and some areas of the database need ongoing medical attention.

See these important tips and tricks for caring for an elderly legacy Oracle database:

Read more

Oracle hash joins tips


In cases where a very small table is joined to a very large table, an Oracle hash join will often dramatically speed up the query, far outperforming a nested loop join.

However, in a production database with very large tables, it is not always easy to get the database to invoke hash joins without increasing the RAM regions that control them. For large tables, hash joins require lots of RAM.

Oracle places limits on the amount of RAM available for hash joins so that no single session will "hog" all of the RAM.
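As a quick illustration, the use_hash hint invites the optimizer to hash the smaller table into RAM and probe it with rows from the larger one (the sales and state_lookup tables are hypothetical):

select /*+ use_hash(s l) */
       l.state_name,
       sum(s.amount_sold)
from   sales s,
       state_lookup l
where  s.state_cd = l.state_cd
group by l.state_name;

Under automatic PGA memory management, the per-workarea RAM limit is derived from pga_aggregate_target; with manual memory management, hash_area_size sets it directly.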

Read more about Oracle hash joins

Auditing and reporting Oracle user activity


Changes in U.S. federal law have mandated increased auditing of Oracle user activity. HIPAA, the Sarbanes-Oxley Act, and the Gramm-Leach-Bliley Act have all imposed serious constraints on Oracle professionals, who are now required to produce detailed audit information for Oracle system users.

Starting with Oracle8i, Oracle introduced special triggers that are not associated with specific DML events (e.g., INSERT, UPDATE, and DELETE). These system-level triggers included database startup triggers, DDL triggers, and end-user login/logoff triggers.

While Oracle provided the functionality for these new triggers, it was not clear how they could be used to track system-wide usage.
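As a minimal sketch, a system-level logon trigger can record each connection in an audit table (the login_audit table and trigger names are hypothetical, and creating an ON DATABASE trigger requires the ADMINISTER DATABASE TRIGGER privilege):

create table login_audit (
   user_id     varchar2(30),
   session_id  number,
   host        varchar2(64),
   login_time  date
);

create or replace trigger logon_audit_trigger
after logon on database
begin
   -- record who connected, from where, and when
   insert into login_audit
   values (user,
           sys_context('USERENV','SESSIONID'),
           sys_context('USERENV','HOST'),
           sysdate);
end;
/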

Read more about Oracle auditing and reporting here:

http://oracle-tips.c.topica.com/maamFPSabM2uEbLGJrib/

Forecasting Oracle Performance


One of the most effective tools for forecasting Oracle performance is the time-series data in STATSPACK and, in Oracle 10g and beyond, the Automatic Workload Repository (AWR). Using this time-series information, the Oracle DBA can identify hidden performance trends and forecast performance problems before they cripple the database.

George Santayana once said, "Those who cannot remember the past are condemned to repeat it." In order to accurately tune any Oracle database, you need a historical data collection mechanism and the ability to translate that data into reports that forecast recurring performance bottlenecks.

The process of forecasting Oracle performance trends uses the well-known quantitative techniques of predictive modeling and predictive analytics, whereby multivariate correlation techniques are applied to performance data from STATSPACK and the AWR. Once the trends are identified, scheduled jobs can reconfigure Oracle in anticipation of each cyclic performance issue, relieving the bottleneck before it appears.
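As a minimal sketch of the idea, this query averages a statistic by hour of day from the AWR history (the dba_hist views require an Oracle Diagnostics Pack license, and dba_hist_sysstat values are cumulative since instance startup, so a production report would compute the deltas between successive snapshots):

select to_char(sn.begin_interval_time, 'HH24') hour_of_day,
       avg(st.value)                           avg_value
from   dba_hist_sysstat  st,
       dba_hist_snapshot sn
where  st.snap_id         = sn.snap_id
and    st.dbid            = sn.dbid
and    st.instance_number = sn.instance_number
and    st.stat_name       = 'physical reads'
group by to_char(sn.begin_interval_time, 'HH24')
order by 1;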

Read more about forecasting Oracle performance here:

http://oracle-tips.c.topica.com/maamFPyabMZcHbLGJrib/

Cost Control: Inside the Oracle Optimizer


Oracle's cost-based SQL optimizer (CBO) governs the execution of every Oracle query. It has evolved into one of the world's most sophisticated software components, with the challenging job of evaluating any SQL statement and generating the "best" execution plan for it.

Because the CBO determines the execution speed of every Oracle query, the Oracle professional must understand how the CBO is influenced by external issues, internal statistics, and data distribution.
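As a simple illustration of that influence, re-gathering statistics and then inspecting the chosen plan is the everyday workflow (the sales table and the WEST predicate are hypothetical):

-- refresh the statistics that feed the CBO's cost estimates
begin
   dbms_stats.gather_table_stats(
      ownname          => user,
      tabname          => 'SALES',
      estimate_percent => dbms_stats.auto_sample_size,
      cascade          => true);
end;
/

-- then display the plan the CBO now chooses
explain plan for
select * from sales where region = 'WEST';

select * from table(dbms_xplan.display);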

Read more about the Oracle Optimizer here:

http://oracle-tips.c.topica.com/maamADPabMv0pbLGJrib/