PL/SQL features by release


A Mini-History of Oracle and PL/SQL

This chapter answers two questions: where did PL/SQL come from, and why is it the best database development language ever created?

In the late 70s, around the time Ingres was getting started at UC Berkeley, three guys working on a contract for the CIA got together and started a company called Relational Software, Inc.

Their first product was a relational database called Oracle. The founders decided to use the C language for development. This would later become important when they decided to start porting to different platforms.

They also decided to support SQL as the internal data access language. This would also become a very important factor in its success. In 1979, Relational Software was the only company making an SQL-compliant database. If anyone ever asks you who wrote the first SQL database, you now know the answer: Oracle.

To access the database, to write an application for example, you had to use an external language and compiler. In the early days of Oracle that was C, but in time several other languages were added: COBOL, Ada, Fortran, PL/1, and others.

In the early 1980s, the company was renamed Oracle Corporation. That would just be the beginning of Oracle’s desire to rename its products. In my time using the Oracle database, I think every tool I have used has been renamed at least once. In the case of CDE/Developer 2000/Developer Suite, it has been renamed enough to be confusing.

Oracle did not have an embedded language for many years. Coming from a government background, the founders modeled the language they eventually chose for the database on Ada.

I programmed in Ada for a few years in the 1980s while I was working as a consultant for the US Department of Defense. It is a very powerful, but very wordy, object-oriented language. Ada, and by extension PL/SQL, is a descendant of Pascal.

Oracle named this new language PL/SQL: the Procedural Language extension to SQL. I pronounce it "pee ell sequel," but many others pronounce it "pee ell ess que ell." Feel free to pronounce it however you like.

Oracle Monitoring Best Practices


All DBAs are required to set up an Oracle monitoring infrastructure, and the architecture of the monitoring system depends directly on the demands of management. Oracle systems cost the end-user community millions of dollars, and end users often demand service-level agreements (SLAs) that impose strict monitoring duties on the Oracle professional.

- In many shops, the end-user community is demanding Oracle performance statistics at a detailed level.

- A financial services company demands that 95% of its Oracle transactions complete within one wall-clock second. The DBA must develop a monitoring strategy to ensure compliance. (Note: there are special techniques that can be employed to get end-to-end response time within Oracle.)

- A stock brokerage demands that the DBA write a sophisticated real-time proactive monitor. Such a monitor can quickly diagnose and pinpoint Oracle bottlenecks, but getting the data is a problem: it requires real-time access to Oracle's internal control structures, and running the monitor itself imposes a measurable burden on overall system load.

- A manufacturing plant requires measuring response time by hour-of-the-day to ensure fast throughput for all shifts (a sketch of one such hour-by-hour query appears below).

These are just a few examples of the business demands that drive the Oracle professional to create sophisticated monitoring infrastructures. Let's examine some of the most important issues when making these decisions.
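
For the hour-of-the-day requirement above, one approach is to roll up the metrics Oracle already keeps in its workload repository. The query below is a minimal sketch, assuming a 10g-or-later database with the Diagnostics Pack licensed (dba_hist_sysmetric_summary is an AWR view) and assuming the 'Response Time Per Txn' metric name matches your release:

    -- Average response time per transaction (centiseconds), rolled up by hour of day
    SELECT TO_CHAR(begin_time, 'HH24')  AS hour_of_day,
           ROUND(AVG(average), 2)       AS avg_resp_per_txn,
           ROUND(MAX(maxval), 2)        AS worst_interval
    FROM   dba_hist_sysmetric_summary
    WHERE  metric_name = 'Response Time Per Txn'
    GROUP  BY TO_CHAR(begin_time, 'HH24')
    ORDER  BY hour_of_day;

Similar roll-ups against the same view can back an SLA-style report, though true end-to-end response time (as the note above mentions) requires instrumentation beyond the database itself.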

Read the entire article on Oracle monitoring tips here:
http://oracle-tips.c.topica.com/maal51fabI6i4bLGJrib/

Important SSD changes coming to Oracle servers


Last week Sun Microsystems made the stunning announcement that virtually all of its new servers will come with 32 gigabytes of super-fast solid-state disk (SSD). Sun says its internal SSD consumes one-fifth the power of a magnetic-coated spinning platter disk and is a hundred times faster, making it perfect for I/O-intensive systems such as Oracle applications. Sun notes:

"The majority of enterprises building I/O-intensive applications will use some amount of flash within a year, Fowler predicted. Databases like Oracle, MySQL and IBM DB2 are ideal candidates, he says."

Today, 99.99% of all Oracle data is stored on the ancient spinning platter disks that were introduced in the 1970s, but that technology is rapidly being displaced by RAM-based SSD, which is hundreds of times faster than magnetic-coated platter disks.

In the 1980s, platter disks were very expensive, and a 1.2 gigabyte 3380 disk sold for more than $200k. Today, you can buy high-speed SSD for Oracle for under $1k per gigabyte. For terabyte-sized Oracle shops, SSD is being used selectively for redo logs and for tablespaces where I/O is a bottleneck.

Following Sun's lead, several other major hardware providers are adopting SSD, and hundreds of vendors will soon be providing SSD for use with Oracle systems.

Oracle benchmarks have shown that SSD can be up to 600 times faster for some operations, and SSD does not experience platter or controller latency, leading to far faster throughput.
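
Before and after any move to SSD, it is worth measuring how much time the instance actually spends waiting on physical reads. Here is a minimal sketch, assuming a 10g-or-later release where v$system_event exposes time_waited_micro:

    -- Average wait per physical read event since instance startup
    SELECT event,
           total_waits,
           ROUND(time_waited_micro / 1000000)                           AS seconds_waited,
           ROUND(time_waited_micro / NULLIF(total_waits, 0) / 1000, 2)  AS avg_wait_ms
    FROM   v$system_event
    WHERE  event IN ('db file sequential read', 'db file scattered read')
    ORDER  BY time_waited_micro DESC;

Single-block reads averaging several milliseconds point to rotational and controller latency that SSD would largely eliminate.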

As prices continue to fall, SSD will replace platter disks for all mission-critical Oracle databases, and this will lead to some fundamental changes in Oracle administration, especially within the SGA. Because the data files themselves reside in RAM-based SSD, the size of the data buffer cache becomes far less important.

Read more about the huge benefits of Solid State Oracle here:

2008 Survey of rack-mount SSD vendors:
http://oracle-tips.c.topica.com/maalZ3gabItTubLGJrib/

SSD clobbers disk speed for Oracle:
http://oracle-tips.c.topica.com/maalZ3gabItTvbLGJrib/

Using RAC with SSD:
http://oracle-tips.c.topica.com/maalZ3gabItTwbLGJrib/

Oracle tuning guru achieves a 20x performance boost



Steve Karam, the Oracle wunderkind (the world's youngest Oracle ACE and Oracle Certified Master), has published an astute case study describing how he tuned a batch update down from 45 minutes to only 10 seconds!

>> Read "Does Size Matter? by Steve Karam, OCM, Oracle ACE

http://oracle-tips.c.topica.com/maalYRxabInjZbLGJrib/

In order to have a statistically valid test, Karam used a real-world workload with identical server, disk, and instance settings. Karam notes how moving a high-DML table to a smaller blocksize resulted in a 20x throughput improvement:

"By going from a 16k blocksize to a 4k blocksize with all other things being equal, we experienced roughly a twenty times improvement."

Steve is now investigating further areas in the environment that could benefit from block optimization, targeting frequently used and highly concurrent segments as candidates for a smaller blocksize. Let's take a closer look at the use of multiple blocksizes to reduce waste and improve throughput.
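
As a rough illustration of the mechanics involved (a generic sketch, not Karam's actual script; the cache size, tablespace, file, and table names are hypothetical), moving a segment to a smaller blocksize means configuring a buffer cache for that blocksize, creating a tablespace that uses it, and relocating the segment:

    -- Reserve a buffer cache for 4k blocks (size shown is illustrative only)
    ALTER SYSTEM SET db_4k_cache_size = 128M SCOPE=BOTH;

    -- Create a 4k-blocksize tablespace for the hot, high-DML segments
    CREATE TABLESPACE ts_dml_4k
      DATAFILE '/u01/oradata/orcl/ts_dml_4k_01.dbf' SIZE 2G
      BLOCKSIZE 4K;

    -- Relocate the table; MOVE invalidates its indexes, so rebuild them afterward
    ALTER TABLE app.hot_orders MOVE TABLESPACE ts_dml_4k;
    ALTER INDEX app.hot_orders_pk REBUILD TABLESPACE ts_dml_4k;

As with Karam's test, the change should be verified against the same workload before and after the move.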

Read the whole story of tuning Oracle I/O blocksize here:

http://oracle-tips.c.topica.com/maalYRxabInj0bLGJrib/

Invoking hash joins for faster Oracle SQL


Enabling Oracle to perform hash joins

In cases where a very small table is joined to a very large table, the Oracle hash join will often dramatically speed up the query, and in those situations it is usually far faster than a nested loop join.

However, in a production database with very large tables, it is not always easy to get your database to invoke hash joins without increasing the RAM regions that control them. For large tables, hash joins require lots of RAM.
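
A quick way to see whether the optimizer will use a hash join for a given query is to hint it and inspect the plan. Here is a minimal sketch, using the classic EMP/DEPT demo tables as stand-ins for your own large and small tables:

    -- Ask the optimizer to build the hash table from the small DEPT table
    EXPLAIN PLAN FOR
    SELECT /*+ USE_HASH(d) */
           e.empno, e.ename, d.dname
    FROM   emp  e
           JOIN dept d ON d.deptno = e.deptno;

    -- HASH JOIN should appear in the plan if enough work-area RAM is available
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);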

The 200 megabyte limit for hash joins

Oracle places limits on the amount of RAM available for hash joins so that no single session will "hog" all of it. According to this research using Oracle 10g, the hash join work area maxes out at only 200 megabytes (5% of pga_aggregate_target).
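
To see the ceiling your own instance currently applies to a single work area under automatic PGA management, you can check v$pgastat; the statistic names below are from 10g-era releases and may vary slightly by version:

    -- Per-workarea ceiling ('global memory bound') versus the overall PGA target
    SELECT name, value, unit
    FROM   v$pgastat
    WHERE  name IN ('aggregate PGA target parameter',
                    'global memory bound',
                    'total PGA allocated');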

The Oracle DBA controls the optimizer's propensity to invoke hash joins, because the DBA must allocate the RAM resources (via the hash_area_size and pga_aggregate_target parameters) that make a hash join possible. The CBO will only choose a hash join if you have given Oracle enough RAM area in which to perform it.

Sizing your PGA for hash joins

The rules are quite different depending on your release: older releases (and manual workarea management) rely on the hash_area_size parameter, while automatic workarea management relies on pga_aggregate_target.
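
Here is a minimal sketch of the two approaches (the sizes shown are placeholders, not recommendations):

    -- Automatic work area management (9i and later): size the shared PGA target
    ALTER SYSTEM SET workarea_size_policy = auto SCOPE=BOTH;
    ALTER SYSTEM SET pga_aggregate_target = 2G   SCOPE=BOTH;

    -- Manual work area sizing (older releases, or a per-session override)
    ALTER SESSION SET workarea_size_policy = manual;
    ALTER SESSION SET hash_area_size = 104857600;   -- 100 MB, in bytes

Under the automatic policy the *_area_size parameters are ignored for dedicated-server sessions; under the manual policy each session can claim the full hash_area_size, so set it with concurrency in mind.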