Tips for Using Oracle Stored Procedures


Oracle stored procedures and triggers are faster than traditional code, which has led to an increase in their popularity. As application code moves away from external programs and into the database engine, DBAs need to understand the related memory requirements for Oracle stored procedures and know how to manage stored procedures for optimal database performance.

There are many compelling benefits to putting all Oracle SQL inside stored procedures, including:

* Better performance. Oracle stored procedures load once into the shared pool and remain there unless they are aged out. Subsequent executions of the stored procedure are far faster than executions of external code.

* Coupling of data with behavior. DBAs can use naming conventions to couple relational tables with the behaviors associated with a table (using Oracle stored procedures as "methods"). If all behaviors associated with the employee table are prefixed with the table name--employee.hire, employee.give_raise, for example--the data dictionary can be queried to list all behaviors associated with a table (select * from dba_objects where owner = 'EMPLOYEE'), and it's easy to identify and reuse code (see the package sketch after this list).

* Isolation of code. Since all SQL is moved out of the external programs and into the Oracle stored procedures, the application programs become nothing more than calls to stored procedures. As such, it becomes very simple to swap out one database and swap in another one.
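
To illustrate the naming convention, here is a minimal sketch using a hypothetical employee package whose procedures act as the table's "methods", followed by a dictionary query to list them (the package, procedure and parameter names are illustrative assumptions):

create or replace package employee as
   procedure hire       (p_emp_name in varchar2);
   procedure give_raise (p_emp_id in number, p_amount in number);
end employee;
/

-- list all "methods" (packaged procedures) associated with the employee table
select object_name, procedure_name
from   dba_procedures
where  object_name = 'EMPLOYEE';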

Read more about using stored procedures here:

http://oracle-tips.c.topica.com/maamAC1abMvSPbLGJrib/

Professional Dress Code Tips


First impressions count. A professional consultant who doesn’t take the time to maintain a professional appearance presents the image of not being able to perform adequately on the job. This dress code is written down because many professionals have never been taught appropriate professional appearance and demeanor.

Professional dress code standards are alive and well in major financial institutions and executive management. Anyone who aspires to top management knows that personal appearance counts.

If you look and behave like a highly trained and well-groomed professional, you will win the respect and honor of valued clients.

A fresh haircut, spit-shined shoes and a crisp suit go a long way in establishing a professional demeanor.

Read more about professional dress code tips here:

http://oracle-tips.c.topica.com/maamACVabMvSybLGJrib/

Oracle downsizing: How to Avoid Layoffs


The collapse of major financial institutions has rippled through the economy, and managers everywhere are searching for ways to stretch their dollars. In this article, we will explore the ways that Oracle managers can trim their budgets to survive the looming worldwide recession.

The market for Oracle professionals has been a boom-or-bust market. In years where demand exceeded supply (1996-1999), anyone with even the most marginal skills could secure a job in Oracle technology. But bank failures, global insecurity, the credit crunch and declining sales have left corporations facing tough choices. Some must either limp along with fewer resources or face bankruptcy, and managers everywhere are struggling to find cost-cutting opportunities throughout the organization.

This has had a direct and noticeable impact on the Oracle community. Layoffs are rampant, and those Oracle professionals who are fortunate enough to have their jobs are faced with cutting corners. Many perks that were considered essential to an Oracle professional a few years ago are gone.

When faced with the reality of doing layoffs, most managers try to be as fair as possible, preparing an objective comparison of cost vs. performance for each employee. However, you must also consider intangible aspects such as the loss of institutional knowledge, which is especially critical for Oracle DBAs who understand the complex nuances of your systems.

Read more about how Oracle managers handle layoffs in uncertain times here:

http://oracle-tips.c.topica.com/maamypyabMiYxbLGJrib/

Speed Oracle SQL with Temporary Tables


Using temporary tables with Dictionary Views

The prudent use of temporary tables can dramatically improve Oracle SQL performance. To illustrate the concept, consider the following example from the DBA world. In the query that follows, we want to identify all users who exist within Oracle who have not been granted a role. We could formulate the query as an anti-join with a noncorrelated subquery (against a complex view) as shown here:

select
username
from
dba_users
where
username NOT IN
(select grantee from dba_role_privs);
This query runs in 18 seconds. As you may remember from Chapter 12, these anti-joins can often be replaced with an outer join (a sketch of that rewrite appears after this example). However, we have another option using CTAS. Now, we rewrite the same query to use temporary tables, selecting the distinct values from each table.

create table
temp1
as
select
username
from
dba_users;

create table
temp2
as
select distinct
grantee
from
dba_role_privs;

select
username
from
temp1
where
username not in
(select grantee from temp2);
With the addition of temporary tables to hold the intermediate results, this query runs in less than three seconds, a 6× performance increase. Again, it is not easy to quantify the reason for this speed increase, since the DBA views do not map directly to Oracle tables, but it is clear that temporary tables show promise for improving the execution speed of certain types of Oracle SQL queries.
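
As noted above, the anti-join can also be expressed as an outer join; here is a minimal sketch of that rewrite against the same dictionary views (the results match the NOT IN form as long as grantee is never null):

select
   u.username
from
   dba_users u,
   dba_role_privs p
where
   u.username = p.grantee(+)
and
   p.grantee is null;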

Using Temporary Tables
If the amount of data to be processed or utilized from your PL/SQL procedure is too large to fit comfortably in a PL/SQL table, use a GLOBAL TEMPORARY table rather than a normal table. A GLOBAL TEMPORARY table has a persistent definition, but its data is not persistent, and the table generates no redo for its data. For example, if you are processing a large number of rows whose results are not needed once the current session ends, create the table as a temporary table instead:

create global temporary table
results_temp (...)
on commit preserve rows;



The “on commit preserve rows” clause tells the SQL engine that when a transaction is committed the table should not be cleared.

The global temporary table will be created in the user's temporary tablespace when the procedure populates it with data, and the DIRECT_IO_COUNT parameter will be used to govern the I/O throughput (this usually defaults to 64 blocks).


Oracle 11g New Features


At Oracle OpenWorld 2006, Oracle announced some exciting new features of Oracle 11g, promising 482 new Oracle 11g features. For me, the most exciting new features relate to tools that automate the well-structured DBA tasks, freeing up the DBA to pursue more challenging work.

- 11g SQL new features
- 11g language support new features
- 11g PL/SQL New Features
- 11g DBA features
- 11g RAC new features & enhancements
- 11g performance features
- 11g security new features
- 11g Enterprise Manager new features

As of 2007, the commercial database market is very mature and expectations are high. All of the leading databases do a good job in storing and retrieving data, and customers are now demanding self-tuning databases, intelligent engines that detect and correct sub-optimal conditions.

Oracle 11g is the clear leader in this area. Oracle has invested heavily in self-tuning capabilities, including automated storage and memory management and intelligent tuning advisors. Now in 11g, Oracle closes the loop and offers intelligent automation tools to create a self-healing database. The most important 11g new automation features include a SQL tuning advisor that automatically tunes SQL statements.

How to Export Oracle Data


Data export is a common task for an Oracle DBA, but it is not necessarily a straightforward one. DBAs have to know how to handle Oracle export errors, work with multiple tables and rows and export stored procedures and functions. That's why SearchOracle.com created the Oracle data export guide, packed with best practices, step-by-step instructions and example scripts on how to export Oracle data.

Explore this learning guide to get all the data export errors and solutions recommended by our experts:

http://oracle-tips.c.topica.com/maamt05abLTMfbLGJrib/


Turbocharge SQL with Advanced Oracle Indexing


Oracle includes numerous data structures to improve the speed of Oracle SQL queries. Taking advantage of the low cost of disk storage, Oracle includes many new indexing algorithms that dramatically increase the speed with which Oracle queries are serviced. This article explores the internals of Oracle indexing; reviews the standard b-tree index, bitmap indexes, function-based indexes, and index-only tables (IOTs); and demonstrates how these indexes may dramatically increase the speed of Oracle SQL queries.

Oracle uses indexes to avoid the need for large-table, full-table scans and disk sorts, which are required when the SQL optimizer cannot find an efficient way to service the SQL query. I begin our look at Oracle indexing with a review of standard Oracle b-tree index methodologies.

The Oracle b-tree index

The oldest and most popular type of Oracle indexing is a standard b-tree index, which excels at servicing simple queries. The b-tree index was introduced in the earliest releases of Oracle and remains widely used with Oracle.
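
For reference, a standard b-tree index is created with a plain CREATE INDEX statement; the table and column here are hypothetical:

create index cust_last_name_idx
on customer (last_name);

-- the optimizer can now service simple predicates from the index, e.g.:
select * from customer where last_name = 'JONES';
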
Read more on Turbocharging SQL:

http://oracle-tips.c.topica.com/maamrWGabLGVXbLGJrib/

Tuning individual SQL statements


The acronym SQL stands for Structured Query Language. SQL is an industry standard database query language that was adopted in the mid-1980s. It should not be confused with commercial products such as Microsoft SQL Server or open source products such as MySQL, both of which use the acronym as part of the title of their products.

Do this before you start individual SQL statement tuning

This broad-brush approach can save thousands of hours of tedious SQL tuning because you can tune hundreds of queries at once. Remember, you must always start with system-level tuning, because later changes to the optimizer parameters or statistics can un-tune your SQL and undo your carefully tuned execution plans:

* Optimize the server kernel - You must always tune your disk and network I/O subsystem (RAID, DASD bandwidth, network) to optimize the I/O time, network packet size and dispatching frequency.

* Adjust your optimizer statistics - You must always collect and store optimizer statistics to allow the optimizer to learn more about the distribution of your data and build more intelligent execution plans. Also, histograms can hypercharge SQL in cases of determining optimal table join order, and when making access decisions on skewed WHERE clause predicates.

* Adjust optimizer parameters - Key parameters include optimizer_mode, optimizer_index_caching and optimizer_index_cost_adj (see the sketch after this list).

* Optimize your instance - Your choice of db_block_size, db_cache_size, and OS parameters (db_file_multiblock_read_count, cpu_count, &c), can influence SQL performance.

* Tune your SQL Access workload with physical indexes and materialized views - Just as the 10g SQLAccess advisor recommends missing indexes and missing materialized views, you should always optimize your SQL workload with indexes, especially function-based indexes, a Godsend for SQL tuning.
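
As a sketch of the optimizer-parameter step above (the values shown are purely illustrative, not recommendations, and any change should be tested against a representative workload):

alter system set optimizer_mode           = first_rows_10 scope=both;
alter system set optimizer_index_caching  = 50 scope=both;
alter system set optimizer_index_cost_adj = 25 scope=both;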

11g Note: The Oracle 11g SQL Performance Analyzer (SPA) is primarily designed to speed up the holistic SQL tuning process.

Once you create a workload (called a SQL Tuning Set, or STS), Oracle will repeatedly execute the workload, using sophisticated predictive models (using a regression testing approach) to accurately identify the salient changes to SQL execution plans, based on your environmental changes.

Read more on SQL tuning steps:
http://oracle-tips.c.topica.com/maamqe5abLw4obLGJrib/

Inside Oracle Cloud Computing


Cloud computing has been touted as a radical "paradigm shift", where IT power is delivered to an application over the internet as you need it, rather than drawn from a centralized server.

What they don’t want you to know is that cloud computing is a new name for a feature that’s almost half a century old! Back in the olden days of data processing, customers rented all of the resources they needed -- software, disk, CPU and RAM -- all under the umbrella of an MVS mainframe, and they were billed according to their actual usage.

These cloud computing announcements in the media are complete nonsense, and the idea of "cloud computing" is nothing new. Back in the 1980’s, companies regularly rented usage of mainframe computers to third parties, and the mechanisms for apportioning shared computing resources have been around for almost half a century.

Read more about Oracle's implementation of cloud computing and learn the truth behind the hype:

http://oracle-tips.c.topica.com/maamknnabKY7MbLGJrib/

How to become an Oracle DBA


Every year, young computer professionals leave the hallowed halls and ivory towers of college and survey the landscape for computer jobs. They look at the salary surveys and drool at the prospect of earning up to $250k per year as a DBA. Many of them don't know what a DBA does, but they sure like the money.
Since I've been a DBA for 25 years, I'm often asked "How do I become a DBA?", often by neophytes who are enticed by the glamour and power of the DBA role. While being a DBA is exciting and lucrative, it's a career choice that requires years of preparation. The most important thing to remember is that the job of a DBA requires a 24x7 commitment. Being an Oracle DBA can be a very stressful, thankless job, and many DBA jobs require the DBA to be on call on Thanksgiving and Christmas to perform downtime maintenance. Plus, the DBA is expected to constantly keep up with rapidly-changing technology, working nights and weekends on a regular basis.
It's not uncommon for a DBA to earn as much as a mid-level manager, and in larger shops the DBA is a vice president. However, the high pay is a double-edged sword. The DBA must constantly justify their salary, and a good DBA who automates many of their job functions may find themselves looking for a new job.

What classes should I take to prepare for a job as a DBA?

In grad school, all IT and CS students take courses like Operations Research, where they learn to develop complex decision rules and then apply them to real-world datasets. Using Oracle as the back-end storage of data and decision rules is a great way to prepare for real-world applications of expert systems, DSS and AI. Also, advanced statistics courses (multivariate analysis) are a good way to prepare for a career in Oracle data mining and Business Intelligence (BI).
What College Degrees are best for the DBA?

Companies are now requiring a combination of technical and managerial skills, and the best fits are those with an undergraduate degree in Computer Science and an MBA in Information Systems. Employers need a DBA who can understand business systems areas (accounting, finance, marketing), and MBAs are a perfect fit for the DBA job role.

How much can I earn as an Oracle DBA?

Here are my notes on Oracle Salary Compensation, and there is a wide variation according to the responsibilities of the DBA job, the quality of the DBA, the cost of living, and the experience of the DBA. Here is a good Oracle DBA salary survey showing the national average at $65,000 per year.
Read more on Becoming an Oracle DBA:

http://oracle-tips.c.topica.com/maamfocabKwkSbLGJribafpMfn/

What are Oracle worst practices?


There is a great deal of discussion about following Oracle best practices (e.g. using OFA, etc.) but little has been said about Oracle worst practices. Contrary to an Oracle "best practice", an Oracle "worst practice" is an action (or non-action) that causes poor performance and creates excessive management overhead, essentially a non-standard approach to Oracle database management. Let's look at a few of my Oracle worst practices:

Many Oracle shops commission an Oracle Health Check to find these issues, but worst practices are alive-and-well in the Oracle community.

http://www.dba-oracle.com/oracle_health_check.htm

I group Oracle worst practices into several general areas:

- PL/SQL worst practices

- Oracle DBA worst practices

- Oracle design worst practices

- Oracle architecture worst practices

Read my complete treatise on Oracle worst practices here:

http://www.dba-oracle.com/t_worst_practices.htm

For complete details, see my book "Oracle Tuning: The Definitive Reference":

http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm


Tricks in Oracle SQL to compare two tables


Oracle Corporate developer Vadim Tropashko has some interesting notes on tuning Oracle SQL queries that compare the contents of two tables, showing several SQL solutions and their performance within the Oracle cost-based optimizer. Tropashko has a remarkable approach to solving this problem, a worthy read for anyone who writes SQL in Oracle.

Vadim also shows a great example of using the hidden parameter _convert_set_to_join to improve SQL execution speed for queries that find the semantic difference between two tables, and he shows how to compare the rows and columns of two tables with Oracle SQL syntax.
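
The classic SQL idiom for the semantic (symmetric) difference between two tables, the kind of query being benchmarked here, looks like this; the table names are placeholders:

(select * from table_a
 minus
 select * from table_b)
union all
(select * from table_b
 minus
 select * from table_a);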

http://oracle-tips.c.topica.com/maamcioabJ7adbLGJrib/

Vadim Tropashko is the author of the book "SQL Design Patterns: The Expert Guide to SQL Programming".

This is the definitive reference for SQL design patterns, a critical knowledge area for any Oracle developer.

http://oracle-tips.c.topica.com/maamcioabJ7abbLGJrib/

Get that Oracle job you crave!


Tired of searching the Web for relevant career information on how to obtain that Oracle position you've been vying for?
Look no further! Whether you're looking to score your first Oracle job as a DBA or developer or just concerned about the future of your current position -- you're not alone. That's why SearchOracle.com has compiled this fast guide to finding and keeping an Oracle job. You'll get tips and techniques that are sure to help you stand out against the crowd.
Whether you're interested in brushing up on interview questions and techniques, checking out Oracle salaries or simply searching for open positions, this guide has what you're looking for:

http://oracle-tips.c.topica.com/maamfF2abKyppbLGJrib/

Get expert advice, tips, resources and more on everything you need to know about your Oracle dream job including:
* Salary and hiring information.
* Interview and job hunting resources.
* How to make smart Oracle career decisions.
* Job search sites.
* And more!

Become a member of SearchOracle.com to access the entire guide and kick-start your Oracle career today:

http://oracle-tips.c.topica.com/maamfF2abKyppbLGJrib/

ABOUT SEARCHORACLE.COM

On SearchOracle.com, you'll find independent resources that are specifically designed for Oracle professionals, plus
in-depth news, analysis, peer advice and expert tips to help with strategic issues, day-to-day IT management and
Oracle challenges. Membership is free -- sign up today!

Inside Oracle 10g Automatic Memory Management


As we migrate from Oracle9i to Oracle10g, Oracle will automatically enable Automatic Shared Memory Management (ASMM), an automation feature that adjusts the sizes of your default data buffer and shared pool.

The Oracle documentation shows that Oracle uses the memory advisories from Oracle9i and applies heuristics (rules of thumb) to determine the best shift in RAM pool sizes. These heuristics consist of hypothesis testing with "what if" scenarios, choosing the size with the greatest overall marginal benefit.
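
ASMM is driven by the sga_target parameter (and requires statistics_level set to typical or all); a minimal sketch of enabling it and watching the resulting pool sizes, with an example value only:

alter system set sga_target = 1g scope=both;

-- observe the automatically resized pools
select component, current_size
from   v$sga_dynamic_components;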

To learn how ASMM works and how to monitor ASMM, check-out my research:

http://oracle-tips.c.topica.com/maamch5abJ69nbLGJrib/

Oracle Monitoring Best Practices


All DBAs are required to set up an Oracle monitoring infrastructure, and the architecture of the monitoring system is directly dependent on the demands of management. Oracle systems cost the end-user community millions of dollars, and end users often demand service-level agreements (SLAs) that impose strict monitoring tasks on the Oracle professional.

- In many shops, the end-user community is demanding Oracle performance statistics at a detailed level.

- A financial services company demands that 95% of their Oracle transactions complete within one wallclock second. The DBA must develop a monitoring strategy to ensure compliance. (Note: there are special techniques that can be employed to get end-to-end response time within Oracle).

- A stock brokerage demands that the DBA write a sophisticated real-time proactive monitor. This monitor will quickly diagnose and pinpoint Oracle bottlenecks, but getting the data is a problem. It requires real-time access to all internal Oracle control structures. Running the monitor imposes a measurable burden on overall system load.

- A manufacturing plant requires measuring response time by hour-of-the-day to ensure fast throughput for all shifts.

These are just a few examples of the business demands that drive the Oracle professional to create sophisticated monitoring infrastructures. Let's examine some of the most important issues when making these decisions.

Read the entire article on Oracle monitoring tips here:
http://oracle-tips.c.topica.com/maal6WNabJhjibLGJrib/

Sending e-mail from Oracle


Many systems have online e-mail alerts, and Oracle provides the utl_smtp PL/SQL package (first introduced in Oracle8i) to facilitate e-mailing from PL/SQL. The PL/SQL can then be embedded in any application for a slick e-mail alert interface. Here is the PL/SQL code to define the e-mail environment, with references to working PL/SQL Oracle e-mailing scripts by Dr. Tim Hall and Dave Wotton. If you are developing with HTML-DB you can also send Oracle e-mail with the htmldb_mail package.

Also, Stephen Rea suggests his e-mail package, which was derived from his and others' work, and now allows attachments from Oracle LOB objects (CLOB, BLOB), in addition to text and binary file attachments.

Step 1: Run the Jserver code for PL/SQL e-mail

You must install the utl_smtp package and Jserver by running the following DBA scripts as SYSDBA (SYS user):
SQL> @$ORACLE_HOME/javavm/install/initjvm.sql
SQL> @$ORACLE_HOME/rdbms/admin/initplsj.sql



Step 2: Prototype the PL/SQL e-mail stored procedure

The next step is to write a prototype for the PL/SQL procedure that processes the e-mail message. We will call our procedure e_mail_message, and this will be a sample invocation:

e_mail_message
(
from_name => 'oracle' ,
to_name => 'info@remote-dba.net' ,
subject => 'A test',
message => 'A test message'
);



Step 3: Create your PL/SQL e-mail stored procedure

The following is modified from Dr. Hall's script, and another outstanding and well-documented Oracle emailing script published by Dave Wotton. Wotton has graciously published his sophisticated Oracle e-mailing PL/SQL procedure here.

create or replace procedure
e_mail_message
(
from_name varchar2,
to_name varchar2,
subject varchar2,
message varchar2
)
is
l_mailhost VARCHAR2(64) := 'burleson.cc';
l_from VARCHAR2(64) := 'linda@remote-dba.net';
l_to VARCHAR2(64) := 'don@burleson.cc';
l_mail_conn UTL_SMTP.connection;
BEGIN
l_mail_conn := UTL_SMTP.open_connection(l_mailhost, 25);
UTL_SMTP.helo(l_mail_conn, l_mailhost);
UTL_SMTP.mail(l_mail_conn, l_from);
UTL_SMTP.rcpt(l_mail_conn, l_to);

UTL_SMTP.open_data(l_mail_conn);

UTL_SMTP.write_data(l_mail_conn, 'Date: ' || TO_CHAR(SYSDATE, 'DD-MON-YYYY HH24:MI:SS') || Chr(13));
UTL_SMTP.write_data(l_mail_conn, 'From: ' || l_from || Chr(13));
UTL_SMTP.write_data(l_mail_conn, 'Subject: ' || subject || Chr(13));
UTL_SMTP.write_data(l_mail_conn, 'To: ' || l_to || Chr(13));
UTL_SMTP.write_data(l_mail_conn, '' || Chr(13));

FOR i IN 1 .. 10 LOOP
UTL_SMTP.write_data(l_mail_conn, 'This is a test message. Line ' || To_Char(i) || Chr(13));
END LOOP;

UTL_SMTP.close_data(l_mail_conn);

UTL_SMTP.quit(l_mail_conn);
END;
/



Invoking multi-line Oracle e-mail messages

You can invoke multiple line Oracle email messages by using the PL/SQL assignment operator and concatenating the desired message lines together:

mesg := 'This is a text message' || CHR(13)|| CHR(10) ||
'split over two lines' ;


Oracle utl_mail e-mail PL/SQL package

Oracle Database 10g has made it easier than ever before to interface PL/SQL with e-mail.

The new DBMS package is called utl_mail, and it makes it simple to send e-mail from inside PL/SQL. Unlike the cumbersome utl_smtp package, the new utl_mail package does not require any knowledge of the underlying protocols. OTN has a great summary of the features:

"This new package makes it possible for a PL/SQL programmer to send programmatically composed emails. It requires only the normal mental model of a user of a GUI email client rather than an understanding of the underlying protocol (SMTP) features. This distinguishes it from Utl_Smtp which was introduced in Oracle8i Database. Utl_Smtp requires that the programmer understands the details of the SMTP protocol. Utl_Mail is much simpler to use because it supports just a limited, but very common, subset of the functionality that Utl_Smtp provides."

The top-10 reasons for poor Oracle performance


The Oracle Documentation lists reasons for poor performance, but they are not as comprehensive or useful as our BC top-10 list. Oracle Corporation has never been as good at tuning their own software as third-party experts, and their performance cause list is missing many important factors.
Obviously, a mis-configured I/O sub-system is a major cause of poor Oracle performance, but some of the others will surprise you.

Click below to see the BC top-10 list of the most common Oracle performance problems:

http://oracle-tips.c.topica.com/maal8FqabJAW6bLGJrib/

Important 10g upgrade gotchas revealed


Oracle has improved the cost-based Oracle optimizer in 9.0.5 and again in 10g, so one of the most common causes of bad performance right after moving to 10g is related to environmental parameter settings (init.ora parms) and your optimizer statistics. Properly configured, Oracle 10g should always be faster than earlier releases, both for PL/SQL and SQL, so it is likely that any slow performance after an Oracle 10g upgrade is due to initialization parameter settings or incomplete CBO statistics.

Remember, Oracle 10g is the world's most flexible and complex database, and upgrading to Oracle 10g is very tricky. Prior to putting your Oracle 10g upgrade into production, it's a best practice to thoroughly test the migration with a real-world workload. As an alternative, many shops obtain an independent Oracle health check to identify their sub-optimal configuration settings. Experts who frequently perform 10g upgrades know exactly where to look, and they can save you weeks of frustration.

To see the secrets, click below to see the most common causes of poor performance after a 10g upgrade:

http://oracle-tips.c.topica.com/maal8FjabJATTbLGJrib/

Also, see these notes on justifying an Oracle10g migration:

http://oracle-tips.c.topica.com/maal8FjabJATUbLGJrib/

Important research on pre-joining Oracle tables


One serious performance issue with highly-normalized, non-redundant Oracle table designs (e.g. third normal form) is that Oracle experiences a high degree of overhead (especially CPU consumption) when joining dozens of tables together, over-and-over again.

Using materialized views we can pre-join the tables together, resulting in a single, fat, wide and highly-redundant table. This can reduce logical I/O from tens of thousands to a single row fetch, resulting in blisteringly fast response time, but careful attention must be paid to choosing the proper materialized view partition keys and the best refresh interval.

The problem with materialized views for pre-joined tables is keeping them refreshed: because the materialized view is built from many tables, changes to any of the base tables require an update to the materialized view.
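
A minimal sketch of a pre-joined materialized view is shown below; the table, column and view names are hypothetical, and fast refresh on commit would additionally require materialized view logs (and base-table rowids in the select list), so this sketch uses an on-demand refresh:

create materialized view cust_order_mv
   build immediate
   refresh force on demand
   enable query rewrite
as
select
   c.cust_id, c.cust_name,
   o.order_id, o.order_date, o.order_total
from
   customer c,
   orders o
where
   c.cust_id = o.cust_id;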

Read more on this issue here:

http://oracle-tips.c.topica.com/maal8GhabJA8ebLGJrib/

For more details, see my book "Oracle Tuning: The Definitive Reference". It's 30% off, directly from the publisher:

http://oracle-tips.c.topica.com/maal8GhabJA8dbLGJrib/

Oracle LIKE clause searches with text indexes


One serious SQL performance problem occurs when you use the SQL “LIKE clause” operator to find a string within a large Oracle table column (e.g. VARCHAR(2000), CLOB, BLOB):

Select stuff from bigtab where text_column like '%ipod%';
Select stuff from bigtab where full_name like '%JONES';

Because standard Oracle cannot index into a large column, these "like" queries cause full-table scans, and Oracle must examine every row in the table, even when the result set is very small. These unnecessary full-table scans are a problem:

1) Large-table full-table scans increase the load on the disk I/O sub-system

2) Small table full table scans (in the data buffer) cause high consistent gets and drive-up CPU consumption

One obscure trick for indexing queries with a leading wildcard character (like '%SON') is to create a REVERSE index and then programmatically reverse the SQL like clause to read "like 'NOS%'", effectively indexing on the other side of the text. Clumsy, yet effective.
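
For the LIKE '%ipod%' case above, the usual alternative is an Oracle Text CONTEXT index, sketched here against the bigtab example (note that a CONTEXT index must be synchronized after DML, for example with ctx_ddl.sync_index):

create index bigtab_text_idx
on bigtab (text_column)
indextype is ctxsys.context;

select stuff
from   bigtab
where  contains(text_column, 'ipod') > 0;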

Read more about indexing on Oracle text based searches here:

http://oracle-tips.c.topica.com/maalZ3vabItU3bLGJrib/

PL/SQL features by release


A Mini-History of Oracle and PL/SQL

This chapter answers two questions: where did PL/SQL come from and why is it the best database development language ever developed?

In the late 70s, around the time Ingres was getting started at UC Berkeley, three guys working on a contract for the CIA got together and started a company called Relational Software, Inc.

Their first product was a relational database called Oracle. The founders decided to use the C language for development. This would later become important when they decided to start porting to different platforms.

They also decided to support SQL as the internal data access language. This would also become a very important factor to its success. In 1979, Relational Software was the only company making an SQL compliant database. If anyone ever asks you who wrote the first SQL database, you now know the answer: Oracle.

To access the database, to write an application for example, you had to use an external language and compiler. In the early days of Oracle, that was C but, in time, several other languages were added: COBOL, ADA, Fortran, PL/1, and others.

In the early 1980s, the company was renamed Oracle Corporation. That would just be the beginning of Oracle’s desire to rename its products. In my time using the Oracle database, I think every tool I have used has been renamed at least once. In the case of CDE/Developer 2000/Developer Suite, it has been renamed enough to be confusing.

Oracle did not have an embedded language for many years. Having come from a government background, when they chose a language for the database, they modeled it on ADA.

I programmed in ADA for a few years in the 1980s while I was working as a consultant for the US Department of Defense. It is a very powerful, but very wordy, object oriented language. ADA, and by extension PL/SQL, are descendants of Pascal.

Oracle named this new language PL/SQL; the Procedural Language extension to SQL. I pronounce it pee ell sequel but many others pronounce it pee ell ess que ell. Feel free to pronounce it however you like though.

Important SSD changes coming to Oracle servers


Last week Sun Microsystems made the stunning announcement that virtually all of their new servers will come with 32 gigabytes of super-fast solid-state disk (SSD). Sun notes that their internal SSD consumes one-fifth the power and is a hundred times faster than the magnetic-coated spinning platter disk, and that SSD is perfect for I/O-intensive systems like Oracle applications:

"The majority of enterprises building I/O-intensive applications will use some amount of flash within a year, Fowler predicted. Databases like Oracle, MySQL and IBM DB2 are ideal candidates, he says."

Today, 99.99% of all Oracle data is stored on the ancient spinning platter disks that were introduced in the 1970's, but disk technology is rapidly being displaced by RAM-based SSD, which is hundreds of times faster than magnetic-coated spinning platter disks.

In the 1980's, platter disks were very expensive, and a 1.2 gigabyte 3380 disk sold for more than $200k. Today, you can buy high-speed SSD for Oracle for under $1k per gigabyte. For terabyte-sized Oracle shops, SSD is being selectively used for redo and other tablespaces where I/O is a bottleneck.

Following Sun's lead, other major hardware providers are expected to follow suit, and hundreds of hardware vendors will be providing SSD for use by Oracle systems.

Oracle benchmarks have shown that SSD can be up to 600 times faster for some operations, and SSD does not experience platter or controller latency, leading to far faster throughput.

As prices continue to fall, SSD will replace platter disks for all mission-critical Oracle databases, and this will lead to some fundamental changes in Oracle administration, especially within the SGA. Because all data is stored in RAM, the data buffer cache size becomes far less important.

Read more about the huge benefits of Solid State Oracle here:

2008 Survey of rack-mount SSD vendors:
http://oracle-tips.c.topica.com/maalZ3gabItTubLGJrib/

SSD clobbers disk speed for Oracle:
http://oracle-tips.c.topica.com/maalZ3gabItTvbLGJrib/

Using RAC with SSD:
http://oracle-tips.c.topica.com/maalZ3gabItTwbLGJrib/

Oracle tuning Guru achieves a 20x performance boost



Steve Karam, the Oracle wunderkind (the world's youngest Oracle ACE and Oracle Certified Master), has published an astute case study describing how he tuned a batch update down from 45 minutes to only 10 seconds!

>> Read "Does Size Matter? by Steve Karam, OCM, Oracle ACE

http://oracle-tips.c.topica.com/maalYRxabInjZbLGJrib/

In order to have a statistically valid test, Karam used a real-world workload with identical server, disk and instance settings. Karam notes how moving a high-DML table to a smaller blocksize resulted in a 20x throughput improvement:

"By going from a 16k blocksize to a 4k blocksize with all other things being equal, we experienced roughly a twenty times improvement."

Steve is now investigating further areas in the environment that could benefit from block optimization, targeting frequently used and highly concurrent segments as candidates for a smaller blocksize. Let's take a closer look at the use of multiple blocksizes to reduce waste and improve performance throughput.

Read the whole story of tuning Oracle I/O blocksize here:

http://oracle-tips.c.topica.com/maalYRxabInj0bLGJrib/

Invoking hash joins for faster Oracle SQL


Enabling Oracle to perform hash joins

In cases where a very small table is being joined to a large table, the Oracle hash join will often dramatically speed up the query, and in these cases a hash join can be far faster than a nested loop join.

However, in a production database with very large tables, it is not always easy to get your database to invoke hash joins without increasing the RAM regions that control hash joins. For large tables, hash joins require lots of RAM.

The 200 megabyte limit for hash joins

Oracle places limits on the amount of RAM available for hash joins so that no single session will "hog" all of the RAM. According to this research using Oracle 10g, the hash join maximum is limited to only 200 megabytes (5% of pga_aggregate_target).

The Oracle DBA controls the optimizer's propensity to invoke hash joins because the DBA must allocate the RAM resources to Oracle (using the hash_area_size and pga_aggregate_target parameters) for the optimizer to choose a hash join. The CBO will only choose a hash join if you have allocated Oracle enough RAM area in which to perform the hash join.

Sizing your PGA for hash joins

The rules are quite different depending on your release, and you need to focus on either the hash_area_size or the pga_aggregate_target parameter, as sketched below.
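
As an illustrative sketch only (the table names and sizes are placeholders, not recommendations), manual workarea sizing plus a use_hash hint looks like this:

-- older (manual) approach: size the hash area for this session
alter session set workarea_size_policy = manual;
alter session set hash_area_size = 104857600;   -- 100 MB, example only

-- invite the optimizer to hash-join the small table to the large one
select /*+ use_hash(e d) */
   e.ename, d.dname
from
   emp e,
   dept d
where
   e.deptno = d.deptno;

-- with automatic PGA management, increase pga_aggregate_target instead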

Important Changes in 10g Statistics Collection


Each new release of Oracle brings enhancements, and 10g has radically changed some important defaults for Oracle statistics collection (using dbms_stats). The one-size-fits-all approach is a great baseline, but the automatic 10g statistics collection may not be just-right for your database.

Oracle 10g does automatic statistics collection, and your original customized dbms_stats job (with your customized parameters) will be overridden.

You may also see a statistics deficiency (i.e. not enough histograms) causing performance issues. Re-analyze object statistics using dbms_stats and make sure that you collect system statistics.

execute dbms_stats.gather_system_stats('start');
-- wait an hour or so
execute dbms_stats.gather_system_stats('stop');

With Oracle Database 10g, there are some new arguments available for the dbms_stats package subprograms. Those changed 10g dbms_stats parameters are granularity and degree.
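
For example, a schema-level dbms_stats call that exercises the degree and granularity arguments might look like this (the schema name and values are placeholders):

begin
   dbms_stats.gather_schema_stats(
      ownname          => 'SCOTT',
      estimate_percent => dbms_stats.auto_sample_size,
      method_opt       => 'for all columns size auto',
      degree           => 4,
      granularity      => 'ALL',
      cascade          => true);
end;
/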

There are also cases where you need to disable the automatic statistics collection on Oracle10g.
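
In 10g the automatic collection is driven by a Scheduler job, so disabling it (connected as SYS) is a one-liner; re-enable it later with dbms_scheduler.enable:

execute dbms_scheduler.disable('GATHER_STATS_JOB');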

Read more about important changes in 10g statistics collection here:

http://oracle-tips.c.topica.com/maalOG5abHNJ0bLGJrib/

Oracle SQL*Loader


Table of Contents:

The SQL*Loader Environment
A Short SQL*Loader Example
SQL*Loader's Capabilities
Issues when Loading Data
Invoking SQL*Loader
---------------------------------

SQL*Loader is an Oracle-supplied utility that allows you to load data from a flat file into one or more database tables. That's it. That's the sole reason for SQL*Loader's existence.
The basis for almost everything you do with SQL*Loader is a file known as the control file. The SQL*Loader control file is a text file into which you place a description of the data to be loaded. You also use the control file to tell SQL*Loader which database tables and columns should receive the data that you are loading.
Do not confuse SQL*Loader control files with database control files. In a way, it's unfortunate that the same term is used in both cases. Database control files are binary files containing information about the physical structure of your database. They have nothing to do with SQL*Loader. SQL*Loader control files, on the other hand, are text files containing commands that control SQL*Loader's operation.
Once you have a data file to load and a control file describing the data contained in that data file, you are ready to begin the load process. You do this by invoking the SQL*Loader executable and pointing it to the control file that you have written. SQL*Loader reads the control file to get a description of the data to be loaded. Then it reads the input file and loads the input data into the database.
SQL*Loader is a very flexible utility, and this short description doesn't begin to do it justice. The rest of this chapter provides a more detailed description of the SQL*Loader environment and a summary of SQL*Loader's many capabilities.

The SQL*Loader Environment
When we speak of the SQL*Loader environment, we are referring to the database, the SQL*Loader executable, and all the different files that you need to be concerned with when using SQL*Loader.
The functions of the SQL*Loader executable, the database, and the input data file are rather obvious. The SQL*Loader executable does the work of reading the input file and loading the data. The input file contains the data to be loaded, and the database receives the data.
SQL*Loader is capable of loading from multiple files in one session. You'll read more about this in Chapter 2, The Mysterious Control File. When multiple input files are used, SQL*Loader will generate multiple bad files and discard files--one set for each input file.

The SQL*Loader Control File
The SQL*Loader control file is the key to any load process. The control file provides the following information to SQL*Loader:
The name and location of the input data file
The format of the records in the input data file
The name of the table or tables to be loaded
The correspondence between the fields in the input record and the columns in the database tables being loaded
Selection criteria defining which records from the input file contain data to be inserted into the destination database tables.
The names and locations of the bad file and the discard file
Some of the items shown in this list may also be passed to SQL*Loader as command-line parameters. The name and location of the input file, for example, may be passed on the command line instead of in the control file. The same goes for the names and locations of the bad files and the discard files.
It's also possible for the control file to contain the actual data to be loaded. This is sometimes done when small amounts of data need to be distributed to many sites, because it reduces (to just one file) the number of files that need to be passed around. If the data to be loaded is contained in the control file, then there is no need for a separate data file.
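
For reference, a control file with embedded data uses INFILE * together with a BEGINDATA section; here is a minimal sketch with a hypothetical table and columns:

LOAD DATA
INFILE *
APPEND INTO TABLE small_lookup
FIELDS TERMINATED BY "," ENCLOSED BY '"'
(lookup_code, lookup_description)
BEGINDATA
"01","First value"
"02","Second value"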

The Log File
The log file is a record of SQL*Loader's activities during a load session. It contains information such as the following:
The names of the control file, log file, bad file, discard file, and data file
The values of several command-line parameters
A detailed breakdown of the fields and datatypes in the data file that was loaded
Error messages for records that cause errors
Messages indicating when records have been discarded
A summary of the load that includes the number of logical records read from the data file, the number of rows rejected because of errors, the number of rows discarded because of selection criteria, and the elapsed time of the load
Always review the log file after a load to be sure that no errors occurred, or at least that no unexpected errors occurred. This type of information is written to the log file, but is not displayed on the terminal screen.

The Bad File and the Discard File
Whenever you insert data into a database, you run the risk of that insert failing because of some type of error. Integrity constraint violations undoubtedly represent the most common type of error. However, other problems, such as the lack of free space in a tablespace, can also cause insert operations to fail. Whenever SQL*Loader encounters a database error while trying to load a record, it writes that record to a file known as the bad file.
Discard files, on the other hand, are used to hold records that do not meet selection criteria specified in the SQL*Loader control file. By default, SQL*Loader will attempt to load all the records contained in the input file. You have the option, though, in your control file, of specifying selection criteria that a record must meet before it is loaded. Records that do not meet the specified criteria are not loaded, and are instead written to a file known as the discard file.
Discard files are optional. You will only get a discard file if you've specified a discard file name, and if at least one record is actually discarded during the load. Bad files are not optional. The only way to avoid having a bad file generated is to run a load that results in no errors. If even one error occurs, SQL*Loader will create a bad file and write the offending input record (or records) to that file.
The format of your bad files and discard files will exactly match the format of your input files. That's because SQL*Loader writes the exact records that cause errors, or that are discarded, to those files. If you are running a load with multiple input files, you will get a distinct set of bad files and discard files for each input file.
You'll read more about bad files and discard files, and how to use them, in Chapter 7, Validating and Selectively Loading Data.
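
For reference, the selection criteria and the bad and discard file names can also be written directly in the control file; a minimal sketch with hypothetical file names and fields follows:

LOAD DATA
INFILE 'features.dat'
BADFILE 'features.bad'
DISCARDFILE 'features.dsc'
APPEND INTO TABLE feature_stage
WHEN (feature_type = 'falls')
(
   state_abbr   CHAR TERMINATED BY "," ENCLOSED BY '"',
   feature_name CHAR TERMINATED BY "," ENCLOSED BY '"',
   feature_type CHAR TERMINATED BY "," ENCLOSED BY '"'
)

Records that fail the WHEN test are written to the discard file rather than loaded.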

A Short SQL*Loader Example
This section contains a short example showing how SQL*Loader is used. For this example, we'll be loading a file of geographic place names taken from the United States Geological Survey's (USGS) Geographic Name Information System (GNIS).
TIP:
Learn more about GNIS data or download it for yourself by visiting http://mapping.usgs.gov/www/gnis/. The specific data file used for this example is also available from http://www.oreilly.com/catalog/orsqlloader and http://gennick.com/sqlldr.

The Data
The particular file used for this example contains the feature name listing for the State of Michigan. It's a delimited text file containing the official names of the many thousands of lakes, streams, waterfalls, and other geographic features in the state. The following example shows three records from that file. The lines wrap on the printed page in this book, but in the file each name is on its own line:

"MI","Agate Falls","falls","Ontonagon","26","131","462851N","0890527W",
"46.48083","-89.09083","","","","","","","Trout Creek"

"MI","Agate Harbor","bay","Keweenaw","26","083","472815N","0880329W",
"47.47083","-88.05806","","","","","","","Delaware"

"MI","Agate Point","cape","Keweenaw","26","083","472820N","0880241W",
"47.47222","-88.04472","","","","","","","Delaware"

As you can see, the data in the file is comma-delimited, and each field is enclosed within double quotes. Table 1-1 shows the contents and maximum length of each field.

Table 1-1: Fields in the GNIS Feature Names File

Field Number   Maximum Length   Contents

 1              2               Alphanumeric state code
 2             60               Feature name
 3              9               Feature type
 4             35               County name
 5              2               FIPS state code
 6              3               FIPS county code
 7              7               Primary latitude in degrees, minutes, and seconds
 8              8               Primary longitude in degrees, minutes, and seconds
 9              8               Primary latitude in decimal degrees
10              8               Primary longitude in decimal degrees
11              7               Source latitude in degrees, minutes, and seconds
12              8               Source longitude in degrees, minutes, and seconds
13              8               Source latitude in decimal degrees
14              8               Source longitude in decimal degrees
15              5               Elevation (feet above sea level)
16             10               Estimated population
17             30               The name of the USGS 7.5 minute series map on which the feature can be found



We used the following SQL statement to create the table into which all this data will be loaded:

CREATE TABLE gfn_gnis_feature_names (
gfn_state_abbr CHAR(2),
gfn_feature_name VARCHAR2(60),
gfn_feature_type VARCHAR2(9),
gfn_county_name VARCHAR2(35),
gfn_primary_latitude_dms CHAR(7),
gfn_primary_longitude_dms CHAR(8),
gfn_elevation NUMBER(7,2),
gfn_population NUMBER(10),
gfn_cell_name VARCHAR2(30)
) TABLESPACE gnis_data;

As you can see, not all fields in the data file are to be loaded into the table. The source latitude and longitude fields will not be loaded, nor will the decimal versions of the primary latitude and longitude. The FIPS coding will also be omitted.

The Control File
The following control file will be used to load the feature name data for the State of Michigan:

LOAD DATA
APPEND INTO TABLE gfn_gnis_feature_names
(
gfn_state_abbr CHAR TERMINATED BY "," ENCLOSED BY '"',
gfn_feature_name CHAR TERMINATED BY "," ENCLOSED BY '"',
gfn_feature_type CHAR TERMINATED BY "," ENCLOSED BY '"',
gfn_county_name CHAR TERMINATED BY "," ENCLOSED BY '"',
gfn_fips_state_code FILLER INTEGER EXTERNAL
TERMINATED BY "," ENCLOSED BY '"',
gfn_fips_county_code FILLER INTEGER EXTERNAL
TERMINATED BY "," ENCLOSED BY '"',
gfn_primary_latitude_dms CHAR TERMINATED BY "," ENCLOSED BY '"',
gfn_primary_longitude_dms CHAR TERMINATED BY "," ENCLOSED BY '"',
gfn_primary_latitude_dec FILLER DECIMAL EXTERNAL
TERMINATED BY "," ENCLOSED BY '"',
gfn_primary_longitude_dec FILLER DECIMAL EXTERNAL
TERMINATED BY "," ENCLOSED BY '"',
gfn_source_latitude_dms FILLER CHAR
TERMINATED BY "," ENCLOSED BY '"',
gfn_source_longitude_dms FILLER CHAR
TERMINATED BY "," ENCLOSED BY '"',
gfn_source_latitude_dec FILLER DECIMAL EXTERNAL
TERMINATED BY "," ENCLOSED BY '"',
gfn_source_longitude_dec FILLER DECIMAL EXTERNAL
TERMINATED BY "," ENCLOSED BY '"',
gfn_elevation DECIMAL EXTERNAL
TERMINATED BY "," ENCLOSED BY '"',
gfn_population INTEGER EXTERNAL
TERMINATED BY "," ENCLOSED BY '"',
gfn_cell_name CHAR TERMINATED BY "," ENCLOSED BY '"'
)

Some explanations are in order. The LOAD DATA command tells SQL*Loader that you are going to load data from an operating system file into a database table. Everything else that you see in this particular control file represents a clause of the LOAD DATA command.
The destination table is identified by the following INTO TABLE clause:

APPEND INTO TABLE gfn_gnis_feature_names
The APPEND keyword tells SQL*Loader to preserve any preexisting data in the table. Other options allow you to delete preexisting data, or to fail with an error if the table is not empty to begin with.
The field definitions are all contained within parentheses, and are separated from each other by commas. The fields in the data file are delimited by commas, and are also enclosed by double quotes. The following clause is used at the end of each field definition to pass this delimiter and enclosure information to SQL*Loader:

TERMINATED BY "," ENCLOSED BY '"'

The following three datatypes are used in this control file. They have no bearing on, or relationship to, the database datatypes of the columns being loaded. The purpose of the datatypes in the control file is to describe the data being loaded from the input data file:

CHAR
Tells SQL*Loader that a field is a text field.

INTEGER EXTERNAL
Tells SQL*Loader that a field is an integer represented using the text digits "0" through "9".

DECIMAL EXTERNAL
Tells SQL*Loader that a field is a decimal value represented using the text digits "0" through "9" and an optional decimal point (".").
Each field is given a name that is meaningful to SQL*Loader. For the nine fields being loaded into the table, the SQL*Loader name must match the corresponding column name in the table. The keyword FILLER identifies the eight fields that are not being loaded into the database. Their names do not matter, but the same naming convention has been followed as for all the rest of the fields.
TIP: FILLER fields are a new feature in Oracle8i. If you are using a release prior to the Oracle8i release, SQL*Loader will not recognize the FILLER keyword.

The Command Line
The command used to initiate this load needs to invoke SQL*Loader and point it to the control file describing the data. In this case, since the input file name is not provided in the control file, that name needs to be passed in on the command line as well. The following sqlldr command will do the job:
sqlldr gnis/gnis@donna control=gnis log=gnis_michigan data=mi_deci.
There are four parameters for this command:

gnis/gnis@donna
The first parameter consists of a username, password, and net service name. SQL*Loader uses this information to open a connection to the database. The "gnis" user owns the table to be loaded.

control = gnis
The second parameter tells SQL*Loader that the control file name is gnis.ctl. The default control file extension is .ctl, so the parameter needs to specify only the file name in this case.

log = gnis_michigan
The third parameter specifies a log file name of gnis_michigan.log. The default log file extension is .log, so it's not specified explicitly in the parameter setting.

data = mi_deci.
The fourth parameter specifies an input file name of mi_deci. This name ends with an explicit period, because the file name has no extension. Without the period on the end, SQL*Loader would assume the default extension of .dat.

By not including the input file name in the control file, but instead passing it as a command-line parameter, we've made it easy to use the same control file to load feature name data for all 50 states. All we need to do is change the value of the DATA and LOG parameters on the command line. Here's what it looks like to issue this sqlldr command and load the data:

$ sqlldr gnis/gnis@donna control=gnis log=gnis_michigan data=mi_deci.

SQL*Loader: Release 8.1.5.0.0 - Production on Wed Apr 5 13:35:53 2000

(c) Copyright 1999 Oracle Corporation. All rights reserved.

Commit point reached - logical record count 28
Commit point reached - logical record count 56
Commit point reached - logical record count 84
...
Commit point reached - logical record count 32001
Commit point reached - logical record count 32029
Commit point reached - logical record count 32056

Pretty much all you see on the screen when you run SQL*Loader are these "Commit point" messages. If nothing else, they provide some reassurance that the load is progressing, and that your session is not hung. All other information regarding the load is written to the log file.

The Log File
The log file resulting from the load of Michigan's feature name data begins with the SQL*Loader banner. It goes on to list the names of the files involved in the load, and also the values of some important command-line parameters. For example:

SQL*Loader: Release 8.1.5.0.0 - Production on Wed Apr 5 13:35:53 2000

(c) Copyright 1999 Oracle Corporation. All rights reserved.

Control File: gnis.ctl
Data File: mi_deci.
Bad File: mi_deci.bad
Discard File: none specified

(Allow all discards)

Number to load: ALL
Number to skip: 0
Errors allowed: 50
Bind array: 64 rows, maximum of 65536 bytes
Continuation: none specified
Path used: Conventional

You can see that the names of the control file, bad file, and data file are recorded in the log. This information is invaluable if you ever have problems with a load, or if you ever need to backtrack in order to understand what you really did. The log also displays the number of records to be loaded, the number to be skipped, the number of errors to allow before aborting the load, the size of the bind array, and the data path. The data path is an important piece of information. The load in this example is a conventional path load, which means that SQL*Loader loads the data into the database using INSERT statements. There is another type of load called a direct path load, which has the potential for far better performance than a conventional path load. Direct path loads are discussed in Chapter 10, Direct Path Loads.
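For reference, a direct path load is requested simply by adding DIRECT=TRUE to the command line. A direct path version of the Michigan load might look like this:

sqlldr gnis/gnis@donna control=gnis log=gnis_michigan data=mi_deci. direct=true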

The next part of the log file identifies the table being loaded, indicates whether or not preexisting data was preserved, and lists the field definitions from the control file:

Table GFN_GNIS_FEATURE_NAMES, loaded from every logical record.
Insert option in effect for this table: APPEND

Column Name Position Len Term Encl Datatype
------------------------------ ---------- ----- ---- ---- ------------
GFN_STATE_ABBR FIRST * , " CHARACTER
GFN_FEATURE_NAME NEXT * , " CHARACTER
GFN_FEATURE_TYPE NEXT * , " CHARACTER
GFN_COUNTY_NAME NEXT * , " CHARACTER
GFN_FIPS_STATE_CODE NEXT * , " CHARACTER
(FILLER FIELD)
GFN_FIPS_COUNTY_CODE NEXT * , " CHARACTER
(FILLER FIELD)
GFN_PRIMARY_LATITUDE_DMS NEXT * , " CHARACTER
GFN_PRIMARY_LONGITUDE_DMS NEXT * , " CHARACTER
GFN_PRIMARY_LATITUDE_DEC NEXT * , " CHARACTER
(FILLER FIELD)
GFN_PRIMARY_LONGITUDE_DEC NEXT * , " CHARACTER
(FILLER FIELD)
GFN_SOURCE_LATITUDE_DMS NEXT * , " CHARACTER
(FILLER FIELD)
GFN_SOURCE_LONGITUDE_DMS NEXT * , " CHARACTER
(FILLER FIELD)
GFN_SOURCE_LATITUDE_DEC NEXT * , " CHARACTER
(FILLER FIELD)
GFN_SOURCE_LONGITUDE_DEC NEXT * , " CHARACTER
(FILLER FIELD)
GFN_ELEVATION NEXT * , " CHARACTER
GFN_POPULATION NEXT * , " CHARACTER
GFN_CELL_NAME NEXT * , " CHARACTER

The last part of the log file contains summary information about the load. If there were any errors, or any discarded records, you would see messages for those before the summary. The summary tells you how many rows were loaded, how many had errors, how many were discarded, and so forth. It looks like this:

Table GFN_GNIS_FEATURE_NAMES:
32056 Rows successfully loaded.
0 Rows not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.


Space allocated for bind array: 65016 bytes(28 rows)
Space allocated for memory besides bind array: 0 bytes

Total logical records skipped: 0
Total logical records read: 32056
Total logical records rejected: 0
Total logical records discarded: 0

Run began on Wed Apr 05 13:35:53 2000
Run ended on Wed Apr 05 13:36:34 2000

Elapsed time was: 00:00:41.22
CPU time was: 00:00:03.81

You can see from this summary that 32,056 feature names were loaded into the gfn_gnis_feature_names table for the state of Michigan. There were no errors, and no records were discarded. The elapsed time for the load was a bit over 41 seconds.

SQL*Loader's Capabilities
SQL*Loader is very flexible, and the example in the previous section shows only a small amount of what can be done using the utility. Here are the major SQL*Loader capabilities that you should be aware of:

SQL*Loader can read from multiple input files in a single load session.

SQL*Loader can handle files with fixed-length records, variable-length records, and stream-oriented data.

SQL*Loader supports a number of different datatypes, including text, numeric, zoned decimal, packed decimal, and various machine-specific binary types.

Not only can SQL*Loader read from multiple input files, but it can load that data into several different database tables, all in the same load session.

SQL*Loader allows you to use Oracle's built-in SQL functions to manipulate the data being read from the input file.

SQL*Loader includes functionality for dealing with whitespace, delimiters, and null data.

In addition to standard relational tables, SQL*Loader can load data into object tables, varying arrays (VARRAYs), and nested tables.

SQL*Loader can load data into large object (LOB) columns.

SQL*Loader can handle character set translation between the input data file and the database.

The capabilities in this list describe the types of data that SQL*Loader can handle, and what SQL*Loader can do with that data. SQL*Loader also implements some strong, performance-related features. SQL*Loader can do direct path loads, which bypass normal SQL statement processing, and which may yield handsome performance benefits. SQL*Loader can also do parallel loads and even direct-path parallel loads; direct path parallel loads allow you to maximize throughput on multiple CPU systems. You'll read more about these performance-related features in Chapter 9, Transaction Size and Performance Issues, and in Chapter 10.

Issues when Loading Data
There are a number of issues that you need to be concerned about whenever you use SQL*Loader to load data into your database--indeed, you need to be concerned about these whenever you load data, period. First, there's the ever-present possibility that the load will fail in some way before it is complete. If that happens, you'll be left with some data loaded, and some data not loaded, and you'll need a way to back out and try again. Other SQL*Loader issues include transaction size, data validation (including referential integrity), and data transformation. Transaction size is partly a performance issue, but it also has an impact on how much data you need to reload in the event that a load fails. Data validation and referential integrity both relate to the need for clean, reliable data.

Recovery from Failure
There are really only two fundamental ways that you can recover from a failed load. One approach is to delete all the data that was loaded before the failure occurred, and simply start over again. Of course, you need to fix whatever caused the failure to occur before you restart the load. The other approach is to determine how many records were loaded successfully, and to restart the load from that point forward. Regardless of which method you choose, you need to think things through before you start a load.

Deleting data and restarting a load from scratch really doesn't require any special functionality on the part of SQL*Loader. The important thing is that you have a reliable way to identify the data that needs to be deleted. SQL*Loader does, however, provide support for continuing an interrupted load from the point where a failure occurred. Using the SKIP command-line parameter, or the SKIP clause in the control file, you can tell SQL*Loader to skip over records that were already processed in order to have the load pick up from where it left off previously. Chapter 6, Recovering from Failure, describes the process for continuing a load in detail, and some of the issues you'll encounter. It's a chapter worth reading, because there are some caveats and gotchas, and you'll want to learn about those before you have a failure, not afterwards.
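As a hypothetical illustration, if the log file from a failed run of the Michigan load showed that 28,000 logical records had been committed before the failure, the load could be resumed from that point by adding the SKIP parameter to the original command (the skip count shown here is made up for the example):

sqlldr gnis/gnis@donna control=gnis log=gnis_michigan data=mi_deci. skip=28000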

Transaction Size
Transaction size is an issue related somewhat to performance, and somewhat to recovery from failure. In a conventional load, SQL*Loader allows you to specify the number of rows that will be loaded between commits. The number of rows that you specify has a direct impact on the size of the bind array that SQL*Loader uses, and consequently on the amount of memory required for the load. The bind array is an area in memory where SQL*Loader stores data for rows to be inserted into the database. When the bind array fills, SQL*Loader inserts the data into the table being loaded, and then executes a COMMIT.

The larger the transaction size, the more data you'll need to reprocess if you have to restart the load after a failure. However, that's usually not a significant issue unless your bind array size is quite large. Transaction size can also affect performance. Generally, the more data loaded in one chunk the better. So a larger bind array size typically will lead to better performance. However, it will also lead to fewer commits, resulting in the use of more rollback segment space. Chapter 9 describes these issues in detail.
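For example, the following hypothetical variation on the Michigan load asks SQL*Loader to commit after every 1,000 rows, and raises the bind array ceiling so that 1,000 rows will actually fit; the specific values are illustrative only:

sqlldr gnis/gnis@donna control=gnis log=gnis_michigan data=mi_deci. rows=1000 bindsize=3000000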

Data Validation
Data validation is always a concern when loading data. SQL*Loader doesn't provide a lot of support in this area, but there are some features at your disposal that can help you ensure that only good data is loaded into your database.

The one thing that SQL*Loader does do for you is ensure that the data being loaded into a column is valid given the column's datatype. Text data will not be loaded into NUMBER fields, and numbers will not be loaded into DATE fields. This much, at least, you can count on. Records containing data that doesn't convert to the destination datatype are rejected and written to the bad file.

SQL*Loader allows you to selectively load data. Using the WHEN clause in your SQL*Loader control file, you can specify conditions under which a record will be accepted. Records not meeting those conditions are not loaded, and are instead written to the discard file.
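For example, a hypothetical control-file fragment such as the following would load only records whose state abbreviation field contains 'MI' and write every other record to the discard file (the table and field names are illustrative):

LOAD DATA
INFILE 'features.dat'
APPEND
INTO TABLE feature_names
WHEN (state_abbr = 'MI')
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(state_abbr CHAR, feature_name CHAR, county_name CHAR)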

Finally, you can take advantage of the referential integrity features built into your database. SQL*Loader won't be able to load data that violates any type of primary key, unique key, foreign key, or check constraint. Chapter 7, Validating and Selectively Loading Data, discusses using SQL*Loader and Oracle features to ensure that only good data gets loaded.

TIP: You don't always have to rely on SQL*Loader's features for data validation. It's entirely feasible to load data into a staging table, run one or more external programs to weed out any rows that are invalid, and then transfer that data to a production table.
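A minimal sketch of that staging approach, assuming hypothetical table names and a hypothetical validity rule, might run the following statements after the load into the staging table completes:

DELETE FROM feature_names_stage
 WHERE elevation IS NOT NULL AND elevation < 0;  -- weed out rows that fail the rule

INSERT INTO feature_names
SELECT * FROM feature_names_stage;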

Data Transformation
Wouldn't it be great if the data we loaded was always in a convenient format? Unfortunately, it frequently is not. In the real world, you may deal with data from a variety of sources and systems, and the format of that data may not match the format that you are using in your database. Dates, for example, are represented using a wide variety of formats. The date 1/2/2000 means one thing in the United States and quite another in Europe.

For dates and numbers, you can often use Oracle's built-in TO_DATE and TO_NUMBER functions to convert a character-based representation to a value that can be loaded into a database DATE or NUMBER column. In fact, for date fields, you can embed the date format into your control file as part of the field definition.

SQL*Loader allows you access to Oracle's entire library of built-in SQL functions. You aren't limited to just TO_DATE, TO_NUMBER, and TO_CHAR. Not only can you access all the built-in SQL functions, you can also write PL/SQL code to manipulate the data being loaded. This opens up a world of possibilities, which are discussed in Chapter 8, Transforming Data During a Load.
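For example, hypothetical field definitions in a control file might apply a date mask and one of the built-in SQL functions directly (the column names and date format are illustrative):

hire_date    DATE "YYYY-MM-DD",
last_name    CHAR "UPPER(:last_name)"

In the second definition, the quoted expression is applied to the field's bind variable (:last_name) before the value is inserted into the table.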

Invoking SQL*Loader
On Unix systems, the command used to invoke SQL*Loader is sqlldr. On Windows systems running Oracle8i, release 8.1 or higher, the command is also sqlldr. Prior to release 8.1, the SQL*Loader command on Windows systems included the first two digits of the Oracle release number. Thus you had sqlldr80 (Oracle8, release 8.0), sqlldr73 (Oracle7, release 7.3), and so forth.

SQL*Loader can be invoked in one of three ways:

sqlldr

sqlldr keyword=value [keyword=value ...]

sqlldr value [value ...]

Issuing the sqlldr command by itself results in a list of valid command-line parameters being displayed. Command-line parameters are usually keyword/value pairs, and may be any combination of the following:

USERID={username[/password][@net_service_name]|/}
CONTROL=control_file_name
LOG=path_file_name
BAD=path_file_name
DATA=path_file_name
DISCARD=path_file_name
DISCARDMAX=logical_record_count
SKIP=logical_record_count
SKIP_INDEX_MAINTENANCE={TRUE | FALSE}
SKIP_UNUSABLE_INDEXES={TRUE | FALSE}
LOAD=logical_record_count
ERRORS=insert_error_count
ROWS=rows_in_bind_array
BINDSIZE=bytes_in_bind_array
SILENT=[(]keyword[,keyword...][)]
DIRECT={TRUE | FALSE}
PARFILE=path_file_name
PARALLEL={TRUE | FALSE}
READSIZE=bytes_in_read_buffer
FILE=database_datafile_name

Command-line parameters may be passed by position instead of by keyword. The rules for doing this are described at the end of the next section.

Command-Line Parameters
The SQL*Loader parameter descriptions are as follows:

USERID = {username[/password][@net_service_name]|/}
Specifies the username and password to use when connecting to the database. The net_service_name parameter optionally allows you to connect to a remote database. Use a forward-slash character ( / ) to connect to a local database using operating system authentication. On Unix systems, you may want to omit the password and allow SQL*Loader to prompt you for it. If you omit both the username and the password, SQL*Loader will prompt you for both.

TIP: On Unix systems you should generally avoid placing a password on the command line, because that password will be displayed whenever other users issue a command, such as ps -ef, that displays a list of current processes running on the system. Either let SQL*Loader prompt you for your password, or use operating system authentication. (If you don't know what operating system authentication is, ask your DBA.)

CONTROL = control_file_name
Specifies the name, which may include the path, of the control file. The default extension is .ctl.

LOG = path_file_name
Specifies the name of the log file to generate for a load session. You may include a path as well. By default, the log file takes on the name of the control file, but with a .log extension, and is written to the same directory as the control file. If you specify a different name, the default extension is still .log. However, if you use the LOG parameter to specify a name for the log file, it will no longer be written automatically to the directory that contains the control file.

BAD = path_file_name
Specifies the name of the bad file. You may include a path as part of the name. By default, the bad file takes the name of the control file, but with a .bad extension, and is written to the same directory as the control file. If you specify a different name, the default extension is still .bad. However, if you use the BAD parameter to specify a bad file name, the default directory becomes your current working directory. If you are loading data from multiple files, then this bad file name only gets associated with the first file being loaded.

DATA = path_file_name
Specifies the name of the file containing the data to load. You may include a path as part of the name. By default, the name of the control file is used, but with the .dat extension. If you specify a different name, the default extension is still .dat. If you are loading from multiple files, you can only specify the first file name using this parameter. Place the names of the other files in their respective INFILE clauses in the control file.

DISCARD = path_file_name
Specifies the name of the discard file. You may include a path as part of the name. By default, the discard file takes the name of the control file, but it has a .dis extension. If you specify a different name, the default extension is still .dis. If you are loading data from multiple files, then this discard file name only gets associated with the first file being loaded.

DISCARDMAX = logical_record_count
Sets an upper limit on the number of logical records that can be discarded before a load will terminate. The limit is actually one less than the value specified for DISCARDMAX. When the number of discarded records becomes equal to the value specified for DISCARDMAX, the load will terminate. The default is to allow an unlimited number of discards. However, since DISCARDMAX only accepts numeric values, it is not possible to explicitly specify the default behavior.

TIP: There is also an undocumented parameter named DISCARDS that functions the same as DISCARDMAX. The use of DISCARDMAX is preferred, but you may occasionally encounter references to DISCARDS.

SKIP = logical_record_count
Allows you to continue an interrupted load by skipping the specified number of logical records. If you are continuing a multiple table direct path load, you may need to use the CONTINUE_LOAD clause in the control file rather than the SKIP parameter on the command line. CONTINUE_LOAD allows you to specify a different number of rows to skip for each table that you are loading.

SKIP_INDEX_MAINTENANCE = {TRUE | FALSE}
Controls whether or not index maintenance is done for a direct path load. This parameter does not apply to conventional path loads. A value of TRUE causes index maintenance to be skipped. Any index segments (partitions) that should have been updated will be marked as unusable. A value of FALSE causes indexes to be maintained as they normally would be. The default is FALSE.

SKIP_UNUSABLE_INDEXES = {TRUE | FALSE}
Controls the manner in which a load is done when a table being loaded has indexes in an unusable state. A value of TRUE causes SQL*Loader to load data into tables even when those tables have indexes marked as unusable. The indexes will remain unusable at the end of the load. One caveat is that if a UNIQUE index is marked as unusable, the load will not be allowed to proceed.

A value of FALSE causes SQL*Loader not to insert records when those records need to be recorded in an index marked as unusable. For a conventional path load, this means that any records that require an unusable index to be updated will be rejected as errors. For a direct path load, this means that the load will be aborted the first time such a record is encountered. The default is FALSE.

LOAD = logical_record_count
Specifies a limit on the number of logical records to load. The default is to load all records. Since LOAD only accepts numeric values, it is not possible to explicitly specify the default behavior.

ERRORS = insert_error_count
Specifies a limit on the number of errors to tolerate before the load is aborted. The default is to abort a load when the error count exceeds 50. There is no way to allow an unlimited number of errors. The best you can do is to specify a very high number for this parameter.

ROWS = rows_in_bind_array
The precise meaning of this parameter depends on whether you are doing a direct path load or a conventional load. If you are doing a conventional load, then you can use this parameter to control the number of rows in the bind array. This represents the number of rows that SQL*Loader loads with each INSERT statement, and also represents the commit frequency. The default is 64 rows.

If you are doing a direct path load, then ROWS specifies the number of rows to read from the input file before saving the data to the database. SQL*Loader will round up the ROWS value to coincide with an even number of database blocks. A data save in a direct path load is analogous to a commit in a conventional path load. The default, when a direct path load is done, is to do one save at the end of the load.

TIP: The BINDSIZE and ROWS parameters both affect the size of the bind array. Chapter 9 discusses this topic in greater detail.

BINDSIZE = bytes_in_bind_array
Specifies the maximum size, in bytes, of the bind array. This parameter overrides any bind array size computed as a result of using the ROWS parameter. The default bind array size is 65,536 bytes, or 64K.

SILENT = [(]keyword[,keyword...][)]
Allows you to suppress various header and feedback messages that SQL*Loader normally displays during a load session. Table 1-2 describes the effect of each of the keywords.
Table 1-2: Keywords for Use with the SILENT Parameter

ALL
Is the same as specifying all the other keywords.

DISCARDS
Suppresses the message that is normally written to the log file each time a record is discarded.

ERRORS
Suppresses the error messages that are normally written to the log file when a record generates an Oracle error.

FEEDBACK
Suppresses the "commit point reached" messages that are normally displayed each time SQL*Loader executes a commit or a save.

HEADER
Suppresses the messages that SQL*Loader displays on the screen when you first launch the executable. Note, however, that the header messages are always written to the log file.

PARTITIONS
Suppresses the per-partition statistics that are normally written to the log file when doing a direct path load of a partitioned table.

There are two ways you can specify values for the SILENT parameter. If you have only one keyword, you can supply it following the equals sign (=), as follows:
SILENT = ALL

If you have several keywords to use, you can place them in a comma-delimited list. You may optionally place that list inside parentheses. For example:
SILENT = (DISCARDS,ERRORS)


DIRECT = {TRUE | FALSE}
Determines the data path used for the load. A value of FALSE results in a conventional path load. A value of TRUE results in a direct path load. The default is FALSE.

PARFILE = path_file_name
Tells SQL*Loader to read command-line parameter values from a text file. This text file is referred to as a parameter file, and contains keyword/value pairs. Usually, the keyword/value pairs are separated by line breaks. Use of the PARFILE parameter can save a lot of typing if you need to perform the same load several times, because you won't need to retype all the command-line parameters each time. There is no default extension for parameter files.
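For example, the Michigan load described earlier in this chapter could be driven from a hypothetical parameter file named gnis.par containing:

userid=gnis/gnis@donna
control=gnis
log=gnis_michigan
data=mi_deci.

The load would then be started with the much shorter command sqlldr parfile=gnis.par.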

PARALLEL = {TRUE | FALSE}
Indicates whether or not you are doing a direct path parallel load. If you are loading the same object from multiple direct path load sessions, then set this to TRUE. Otherwise, set it to FALSE. The default is FALSE.

READSIZE = bytes_in_read_buffer
Specifies the size of the buffer used by SQL*Loader when reading data from the input file. The default value is 65,536 bytes, or 64K. The values of the READSIZE and BINDSIZE parameters should match. If you supply values for these two parameters that do not match, SQL*Loader will adjust them.

FILE = database_datafile_name
Specifies the database data file from which to allocate extents. Use this parameter when doing parallel loads, to ensure that each load session is using a different disk. If you are not doing a direct path load, this parameter will be ignored.

In addition to being passed by keyword, parameters may also be passed by position. To do this, you simply list the values after the sqlldr command in the correct order. For example, the following two SQL*Loader commands yield identical results:

sqlldr system/manager profile.ctl profile.log
sqlldr userid=system/manager control=profile.ctl log=profile.log

You can even mix the positional and keyword methods of passing command-line parameters. The one rule when doing this is that all positional parameters must come first. Once you start using keywords, you must continue to do so. For example:

sqlldr system/manager control=profile.ctl log=profile.log

When you pass parameters positionally, you must not skip any. Also, be sure to get the order right. You must supply parameter values in the order shown earlier in this section. Given the fact that you typically will use only a few parameters out of the many that are possible, it's usually easier to pass those parameters as keyword/value pairs than it is to pass them positionally. Using keyword/value pairs also makes long SQL*Loader commands somewhat self-documenting. The one exception to this rule is that you might wish to pass the username and password positionally, since they come first, and then pass in the rest of the parameters by name.

Command-Line Syntax Rules
There are several syntax rules to be aware of when writing SQL*Loader commands. These rules fall into the following areas:

Case-sensitivity

Separation of parameters

Special characters in the command line

SQL*Loader itself is not case-sensitive. Keywords on the command line may be in either upper- or lowercase--it doesn't matter. However, some operating systems, notably Unix, are case-sensitive. When running SQL*Loader on a case-sensitive operating system, you do need to be careful of the case used in file names. You also need to pay attention to the case used for the command to invoke SQL*Loader. On Unix and other case-sensitive operating systems, the SQL*Loader executable name is usually lowercase. So on Unix, Linux, and so forth, use sqlldr. Under Windows, and other operating systems where case doesn't matter, you can use SQLLDR or sqlldr as you prefer.
Parameters on the command line may be separated by spaces, by commas, or by both spaces and commas. All three of the following commands, for example, are legitimate:

sqlldr system/manager,control=product.ctl,log=product.log
sqlldr system/manager, control=product.ctl, log=product.log
sqlldr system/manager control=product.ctl log=product.log

Spaces are acceptable as well, on either side of the equals sign (=), in keyword/value pairs.
Special characters are rarely needed on the command line, but when you do use them in an option value, you must enclose that value within quotes. For example, beginning with release 8.1.7, if you connect as the user SYS, you also need to specify "AS SYSDBA" as part of your connect string. Because of the spaces, you'll need to enclose your entire connect string within quotes. For example, for Windows:
sqlldr 'sys/password AS SYSDBA' control=product.ctl (Windows)
And for Unix:
sqlldr \'sys/password AS SYSDBA\' control=product.ctl (Unix)
The backslash characters that you see in the second command are there because some Unix operating systems require that quotes on the command line be escaped. In this example, the backslash was used as the escape character.

Parameter Precedence
The term "command-line" notwithstanding, most SQL*Loader command-line parameters can actually be specified in three different places:

On the command line
In a parameter file, which is then referenced using the PARFILE parameter

In the control file
Parameters on the command line, including those read in from a parameter file, will always override values specified in the control file. In the case of the bad and discard file names, though, the control file syntax allows each distinct input file to have its own bad file and discard file. The command-line syntax does not allow for this, so bad file and discard file names specified on the command line only apply to the first input file. For any other input files, you need to specify these bad and discard file names in the control file or accept the defaults.
The FILE parameter adds a bit of confusion to the rules stated in the previous paragraph. As with the bad file and discard file names, you can have multiple FILE values in the control file. However, when you specify the FILE parameter on the command line, it does override any and all FILE values specified in the control file.
Parameters read from a parameter file as a result of using the PARFILE parameter may override those specified on the command line. Whether or not that happens depends on the position of the PARFILE parameter with respect to the others. SQL*Loader processes parameters from left to right, and the last setting for a given parameter is the one that SQL*Loader uses.
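To illustrate, consider the following hypothetical command, which reuses the gnis.par parameter file sketched earlier:

sqlldr parfile=gnis.par rows=500

Because SQL*Loader works from left to right, any ROWS setting inside gnis.par would be overridden by the rows=500 that follows the PARFILE parameter. Reversing the order of the two parameters would let the value in the parameter file win.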

Calling Oracle Reports from Oracle 10g Forms


Introduction
Oracle Reports can add some great reporting functionality to your 10g Forms applications, making data available in a wide variety of output formats such as XML and PDF. In this document I will explain how to configure Oracle Reports 10g on your PC, how to call Oracle Reports from 10g Forms applications, and what to consider when deploying to the application server. This information should advance your use of Oracle Reports and speed its adoption in your applications.

Configuring the OC4J Runtime Environment for Oracle Reports
Assuming that Oracle Reports 10g is installed on your PC, the following steps will ensure that you are able to successfully use Oracle Reports in your OC4J runtime environment.

Defining the Report Server
The first step is to define a report server. This server will run in your OC4J runtime environment, handling all Oracle Reports execution requests. You will not be able to run reports in a 10g application unless you have defined and started a Report Server. To define a report server, do the following:

1. Start your OC4J runtime environment
2. Open a Command window (Start Menu, Run option, type in CMD)
3. Type this command and press enter: rwserver server=rep_fsprod
a. It is important to name the Reports server rep_fsprod. This is the name of the Report Server on all of the application server machines, and keeping the names synchronized will greatly simplify your deployment.

Congratulations, you have just defined a report server! You should now see a dialog box indicating that the Report Server is initializing.

Configuring the Report Server
The second step is to configure the Report Server created in step 1. Specifically, you need to specify the source directory (or directories) containing your report files as the search path the Report Server uses when individual report requests are made. Here is what you need to do:

1. Open Windows Explorer and navigate to c:\reports\conf
2. Open rep_fsprod.conf using a text editor.
3. Using the Find or Search tool (depending on the editor used), look for sourceDir
a. You do not need to do a case sensitive search.
b. This variable is typically located near the top of the file. In mine, it is the 14th line of the file.
4. Edit the sourceDir variable to include the path of each directory where you wish the Report server to search for reports. Below is an example with two directory paths specified. One is required, but you can have as many as you need.
<property name="sourceDir" value="c:\env\aap\reports;c:\env\eu_asc\reports"/>

5. Save and close the file.

Modifying the Application .env file
The third and final step is to modify each application .env file to include a variable specifying the reports path. Here is what you need to do:

1. Open Windows Explorer and navigate to c:\forms90\server
2. Open one of the application.env files that you have created using a text editor.
3. Add the REPORTS_PATH variable, setting its value to the location of that application's report directory, as shown in the example below:
REPORTS_PATH=c:\env\aap\reports
4. Save and close the file.
5. Repeat for each remaining application.env file.

Calling Reports from 10g Forms
Now that the OC4J environment is configured, we now can focus on programming a solution to call Oracle Reports from Oracle Forms. I'll walk you through the process using a reporting solution that I have used in 5 applications deployed at the Forest Service.
In the applications I have developed, end users typically call reports by selecting a report from a drop list. A report-specific form is then displayed, which allows report parameters to be specified. When the end user has entered the desired parameter values, a button is pressed to initiate the report. The WHEN-BUTTON-PRESSED trigger calls a program unit that gathers the report parameters and builds the string that is eventually executed by the WEB.SHOW_DOCUMENT command, as shown below:


DECLARE
   v_host             varchar2(100);
   v_port             varchar2(10);
   v_parameter_string varchar2(4000);

   v_username         varchar2(50);
   v_password         varchar2(50);
   v_database         varchar2(50);

BEGIN

   -- Get the full domain name of the server from the formsweb.cfg
   tool_env.getvar('SERVER_URL', v_host);

   -- Get the username/password and database from the .env file
   tool_env.getvar('APPLICATION_USERNAME', v_username);
   tool_env.getvar('APPLICATION_PASSWORD', v_password);
   tool_env.getvar('APPLICATION_DATABASE', v_database);

   -- If the host is localhost then we are running on Windows and need to use
   -- the Windows port; otherwise we are on AIX and need to use the AIX port
   IF v_host = 'localhost' THEN
      v_port := '8889';
   ELSE
      v_port := '7778';
   END IF;

   -- Now build the parameter string for running the report
   v_parameter_string := 'server=rep_fsprod';
   v_parameter_string := v_parameter_string||'&report=AAP_TOTAL_CONTRACT_VALUE_RPT.rdf';
   v_parameter_string := v_parameter_string||'&userid='||v_username||'/'||v_password||'@'||v_database;
   v_parameter_string := v_parameter_string||'&execution_mode=batch';
   v_parameter_string := v_parameter_string||'&comm_mode=synchronous';
   v_parameter_string := v_parameter_string||'&destype=cache';
   v_parameter_string := v_parameter_string||'&desformat=pdf';
   v_parameter_string := v_parameter_string||'&p_region='||:rpt_parms.region;
   v_parameter_string := v_parameter_string||'&p_province='||:rpt_parms.province;
   v_parameter_string := v_parameter_string||'&p_forest='||:rpt_parms.forest;
   v_parameter_string := v_parameter_string||'&p_district='||:rpt_parms.district;
   v_parameter_string := v_parameter_string||'&p_fiscal_year='||:rpt_parms.fiscal_year;

   -- Now run the report in a new browser window
   web.show_document('http://'||v_host||':'||v_port||'/reports/rwservlet?'||v_parameter_string, '_blank');

END;

There are a couple of important items to note when reviewing this program unit:

1. Important data elements are not hard-coded. Instead, they are stored in the application.env file and referenced at runtime using tool_env.getvar. The username, password, database, and server are all referenced this way.
2. The program unit supports execution on Windows (my development platform) and AIX (the deployment platform) without requiring any code modifications.
a. By referencing the SERVER_URL environment variable in the application.env file, I am able to specify the server and port dynamically.
3. When building the string of parameters to be passed you need to:
a. Include the report name as a parameter
b. Prepend an '&' to the second parameter and any additional parameters thereafter.
c. Except for the report name, be sure that all additional parameters specified match a parameter name (system or user) in the Report definition.
4. Mandatory parameters for each report include:
a. Server, report, userid, execution_mode, comm_mode, destype and desformat.
i. Report is the only parameter that will change. The others (likely) will remain constant for all reports in your application.

When the WEB.SHOW_DOCUMENT command is issued from the Form, a new browser window is opened and the request is executed by the Report Server.


Application Server Deployment
When you have completed development in OC4J and are ready to deploy to the application server, there is a short list of activities that need to be completed to ensure success.

First, it is important to note that the Oracle reports in your application need to be deployed in the application's /reports subdirectory, such as /fsapps/fsprod/leimars/reports. Be sure that your application.env file is modified accordingly, with the REPORTS_PATH variable specifying the proper location.

The second and most important item to note is that the rep_fsprod.conf file will not be modified to include each application's /reports directory. Instead, the reports in each application's /reports directory are linked to /fsapps/fsprod/fs_reports by an eDBA using a standard utility that has been distributed on each application server. Each time the script is run on a particular /reports directory, symbolic links are created for each report. If you add a new report, you need to have the links created for your application before it will be available.

The third and final thing to note is that report definitions do not need to be compiled when you deploy to the application server. Simply move the .rdf file to your application's /reports directory, have the links recreated, and you are good to go.

Conclusion
You now have the information you need to include Oracle Reports in your 10g Forms applications. It is a fairly painless process to complete if your application calls for this functionality. I hope that you found the contents of this paper useful and wish you good luck.