Important Changes in 10g Statistics Collection


Each new release of Oracle brings enhancements, and 10g has radically changed some important defaults for Oracle statistics collection (using dbms_stats). The one-size-fits-all approach is a useful baseline, but the automatic 10g statistics collection may not be the right fit for your database.

Oracle 10g performs automatic statistics collection, and your original dbms_stats job (with its customized parameters) will be superseded.

You may also see a statistics deficiency (e.g., not enough histograms) causing performance issues. Re-analyze object statistics using dbms_stats and make sure that you collect system statistics.

execute dbms_stats.gather_system_stats('start');
-- wait an hour or so
execute dbms_stats.gather_system_stats('stop');

With Oracle Database 10g, some new arguments are available for the dbms_stats package subprograms; the changed parameters are granularity and degree.
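For example, here is a minimal sketch showing both parameters with their new automatic options (the schema and table names are illustrative):

begin
   dbms_stats.gather_table_stats(
      ownname     => 'SCOTT',                  -- illustrative schema
      tabname     => 'EMP',                    -- illustrative table
      granularity => 'AUTO',                   -- let Oracle choose the partition granularity
      degree      => dbms_stats.auto_degree    -- let Oracle choose the degree of parallelism
   );
end;
/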

There are also cases where you need to disable the automatic statistics collection on Oracle10g.
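One common way to do that is to disable the out-of-the-box GATHER_STATS_JOB scheduler job, as in this sketch:

execute dbms_scheduler.disable('GATHER_STATS_JOB');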

Read more about important changes in 10g statistics collection here:

http://oracle-tips.c.topica.com/maalOG5abHNJ0bLGJrib/

Important SSD changes coming to Oracle servers



Last week Sun Microsystems made the stunning announcement that virtually all of their new servers will come with 32 gigabytes of super-fast solid-state disk (SSD). Sun says that its internal SSD consumes one-fifth the power of a magnetic-coated spinning platter disk and is one hundred times faster, making SSD perfect for I/O-intensive systems such as Oracle applications. Sun notes:

"The majority of enterprises building I/O-intensive applications will use some amount of flash within a year, Fowler predicted. Databases like Oracle, MySQL and IBM DB2 are ideal candidates, he says."

Today, 99.99% of all Oracle data is stored on the ancient spinning platter disks that were introduced in the 1970's, but that technology is rapidly being displaced by RAM-based SSD, which is hundreds of times faster than magnetic-coated spinning platter disk.

In the 1980's, platter disks were very expensive, and a 1.2 gigabyte 3380 disk sold for more than $200k. Today, you can buy high-speed SSD for Oracle for under $1k per gigabyte. For terabyte-sized Oracle shops, SSD is being selectively used for redo logs and for tablespaces where I/O is a bottleneck.

Several other major hardware providers are following Sun's lead, and hundreds of hardware vendors will soon be providing SSD for use by Oracle systems.

Oracle benchmarks have shown that SSD can be up to 600 times faster for some operations, and SSD does not experience platter or controller latency, leading to far faster throughput.

As prices continue to fall, SSD will replace platter disks for all mission-critical Oracle databases, and this will lead to some fundamental changes in Oracle administration, especially within the SGA. Because all data is stored in RAM, the data buffer cache size becomes far less important.

Read more about the huge benefits of Solid State Oracle here:

**************************************************************

2008 Survey of rack-mount SSD vendors:
http://oracle-tips.c.topica.com/maalUXYabH1w9bLGJrib/

SSD clobbers disk speed for Oracle:
http://oracle-tips.c.topica.com/maalUXYabH1xabLGJrib/


Using RAC with SSD:
http://oracle-tips.c.topica.com/maalUXYabH1xbbLGJrib/

Oracle SQL Loader


Table of Contents:

The SQL*Loader Environment
A Short SQL*Loader Example
SQL*Loader's Capabilities
Issues when Loading Data
Invoking SQL*Loader
---------------------------------

SQL*Loader is an Oracle-supplied utility that allows you to load data from a flat file into one or more database tables. That's it. That's the sole reason for SQL*Loader's existence.
The basis for almost everything you do with SQL*Loader is a file known as the control file. The SQL*Loader control file is a text file into which you place a description of the data to be loaded. You also use the control file to tell SQL*Loader which database tables and columns should receive the data that you are loading.
Do not confuse SQL*Loader control files with database control files. In a way, it's unfortunate that the same term is used in both cases. Database control files are binary files containing information about the physical structure of your database. They have nothing to do with SQL*Loader. SQL*Loader control files, on the other hand, are text files containing commands that control SQL*Loader's operation.
Once you have a data file to load and a control file describing the data contained in that data file, you are ready to begin the load process. You do this by invoking the SQL*Loader executable and pointing it to the control file that you have written. SQL*Loader reads the control file to get a description of the data to be loaded. Then it reads the input file and loads the input data into the database.
SQL*Loader is a very flexible utility, and this short description doesn't begin to do it justice. The rest of this chapter provides a more detailed description of the SQL*Loader environment and a summary of SQL*Loader's many capabilities.

The SQL*Loader Environment
When we speak of the SQL*Loader environment, we are referring to the database, the SQL*Loader executable, and all the different files that you need to be concerned with when using SQL*Loader.
The functions of the SQL*Loader executable, the database, and the input data file are rather obvious. The SQL*Loader executable does the work of reading the input file and loading the data. The input file contains the data to be loaded, and the database receives the data.
SQL*Loader is capable of loading from multiple files in one session. You'll read more about this in Chapter 2, The Mysterious Control File. When multiple input files are used, SQL*Loader will generate multiple bad files and discard files--one set for each input file.

The SQL*Loader Control File
The SQL*Loader control file is the key to any load process. The control file provides the following information to SQL*Loader:
The name and location of the input data file
The format of the records in the input data file
The name of the table or tables to be loaded
The correspondence between the fields in the input record and the columns in the database tables being loaded
Selection criteria defining which records from the input file contain data to be inserted into the destination database tables.
The names and locations of the bad file and the discard file
Some of the items shown in this list may also be passed to SQL*Loader as command-line parameters. The name and location of the input file, for example, may be passed on the command line instead of in the control file. The same goes for the names and locations of the bad files and the discard files.
It's also possible for the control file to contain the actual data to be loaded. This is sometimes done when small amounts of data need to be distributed to many sites, because it reduces (to just one file) the number of files that need to be passed around. If the data to be loaded is contained in the control file, then there is no need for a separate data file.
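Here is a minimal sketch of that approach; the table, columns, and data are illustrative and not part of the examples that follow. The INFILE * clause tells SQL*Loader that the data follows the BEGINDATA keyword in the control file itself:

LOAD DATA
INFILE *
APPEND INTO TABLE site_codes
(
   site_id    CHAR TERMINATED BY ",",
   site_name  CHAR TERMINATED BY ","
)
BEGINDATA
101,Main Office
102,Field Station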

The Log File
The log file is a record of SQL*Loader's activities during a load session. It contains information such as the following:
The names of the control file, log file, bad file, discard file, and data file
The values of several command-line parameters
A detailed breakdown of the fields and datatypes in the data file that was loaded
Error messages for records that cause errors
Messages indicating when records have been discarded
A summary of the load that includes the number of logical records read from the data file, the number of rows rejected because of errors, the number of rows discarded because of selection criteria, and the elapsed time of the load
Always review the log file after a load to be sure that no errors occurred, or at least that no unexpected errors occurred. This type of information is written to the log file, but is not displayed on the terminal screen.

The Bad File and the Discard File
Whenever you insert data into a database, you run the risk of that insert failing because of some type of error. Integrity constraint violations undoubtedly represent the most common type of error. However, other problems, such as the lack of free space in a tablespace, can also cause insert operations to fail. Whenever SQL*Loader encounters a database error while trying to load a record, it writes that record to a file known as the bad file.
Discard files, on the other hand, are used to hold records that do not meet selection criteria specified in the SQL*Loader control file. By default, SQL*Loader will attempt to load all the records contained in the input file. You have the option, though, in your control file, of specifying selection criteria that a record must meet before it is loaded. Records that do not meet the specified criteria are not loaded, and are instead written to a file known as the discard file.
Discard files are optional. You will only get a discard file if you've specified a discard file name, and if at least one record is actually discarded during the load. Bad files are not optional. The only way to avoid having a bad file generated is to run a load that results in no errors. If even one error occurs, SQL*Loader will create a bad file and write the offending input record (or records) to that file.
The format of your bad files and discard files will exactly match the format of your input files. That's because SQL*Loader writes the exact records that cause errors, or that are discarded, to those files. If you are running a load with multiple input files, you will get a distinct set of bad files and discard files for each input file.
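As a sketch, both files can be named per input file in the control file itself (the file names here are illustrative):

LOAD DATA
INFILE 'mi_deci.dat'
   BADFILE 'mi_deci.bad'
   DISCARDFILE 'mi_deci.dis'
APPEND INTO TABLE gfn_gnis_feature_names
-- field definitions follow, as shown later in this chapter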
You'll read more about bad files and discard files, and how to use them, in Chapter 7, Validating and Selectively Loading Data.

A Short SQL*Loader Example
This section contains a short example showing how SQL*Loader is used. For this example, we'll be loading a file of geographic place names taken from the United States Geological Survey's (USGS) Geographic Name Information System (GNIS).
TIP:
Learn more about GNIS data or download it for yourself by visiting http://mapping.usgs.gov/www/gnis/. The specific data file used for this example is also available from http://www.oreilly.com/catalog/orsqlloader and http://gennick.com/sqlldr.

The Data
The particular file used for this example contains the feature name listing for the State of Michigan. It's a delimited text file containing the official names of the many thousands of lakes, streams, waterfalls, and other geographic features in the state. The following example shows three records from that file. The lines wrap on the printed page in this book, but in the file each name is on its own line:

"MI","Agate Falls","falls","Ontonagon","26","131","462851N","0890527W",
"46.48083","-89.09083","","","","","","","Trout Creek"

"MI","Agate Harbor","bay","Keweenaw","26","083","472815N","0880329W",
"47.47083","-88.05806","","","","","","","Delaware"

"MI","Agate Point","cape","Keweenaw","26","083","472820N","0880241W",
"47.47222","-88.04472","","","","","","","Delaware"

As you can see, the data in the file is comma-delimited, and each field is enclosed within double quotes. Table 1-1 shows the contents and maximum length of each field.

Table 1-1: Fields in the GNIS Feature Names File

Field Number   Maximum Length   Contents
1              2                Alphanumeric state code
2              60               Feature name
3              9                Feature type
4              35               County name
5              2                FIPS state code
6              3                FIPS county code
7              7                Primary latitude in degrees, minutes, and seconds
8              8                Primary longitude in degrees, minutes, and seconds
9              8                Primary latitude in decimal degrees
10             8                Primary longitude in decimal degrees
11             7                Source latitude in degrees, minutes, and seconds
12             8                Source longitude in degrees, minutes, and seconds
13             8                Source latitude in decimal degrees
14             8                Source longitude in decimal degrees
15             5                Elevation (feet above sea level)
16             10               Estimated population
17             30               The name of the USGS 7.5 minute series map on which the feature can be found



We used the following SQL statement to create the table into which all this data will be loaded:

CREATE TABLE gfn_gnis_feature_names (
gfn_state_abbr CHAR(2),
gfn_feature_name VARCHAR2(60),
gfn_feature_type VARCHAR2(9),
gfn_county_name VARCHAR2(35),
gfn_primary_latitude_dms CHAR(7),
gfn_primary_longitude_dms CHAR(8),
gfn_elevation NUMBER(7,2),
gfn_population NUMBER(10),
gfn_cell_name VARCHAR2(30)
) TABLESPACE gnis_data;

As you can see, not all fields in the data file are to be loaded into the table. The source latitude and longitude fields will not be loaded, nor will the decimal versions of the primary latitude and longitude. The FIPS coding will also be omitted.

The Control File
The following control file will be used to load the feature name data for the State of Michigan:

LOAD DATA
APPEND INTO TABLE gfn_gnis_feature_names
(
gfn_state_abbr CHAR TERMINATED BY "," ENCLOSED BY '"',
gfn_feature_name CHAR TERMINATED BY "," ENCLOSED BY '"',
gfn_feature_type CHAR TERMINATED BY "," ENCLOSED BY '"',
gfn_county_name CHAR TERMINATED BY "," ENCLOSED BY '"',
gfn_fips_state_code FILLER INTEGER EXTERNAL
TERMINATED BY "," ENCLOSED BY '"',
gfn_fips_county_code FILLER INTEGER EXTERNAL
TERMINATED BY "," ENCLOSED BY '"',
gfn_primary_latitude_dms CHAR TERMINATED BY "," ENCLOSED BY '"',
gfn_primary_longitude_dms CHAR TERMINATED BY "," ENCLOSED BY '"',
gfn_primary_latitude_dec FILLER DECIMAL EXTERNAL
TERMINATED BY "," ENCLOSED BY '"',
gfn_primary_longitude_dec FILLER DECIMAL EXTERNAL
TERMINATED BY "," ENCLOSED BY '"',
gfn_source_latitude_dms FILLER CHAR
TERMINATED BY "," ENCLOSED BY '"',
gfn_source_longitude_dms FILLER CHAR
TERMINATED BY "," ENCLOSED BY '"',
gfn_source_latitude_dec FILLER DECIMAL EXTERNAL
TERMINATED BY "," ENCLOSED BY '"',
gfn_source_longitude_dec FILLER DECIMAL EXTERNAL
TERMINATED BY "," ENCLOSED BY '"',
gfn_elevation DECIMAL EXTERNAL
TERMINATED BY "," ENCLOSED BY '"',
gfn_population INTEGER EXTERNAL
TERMINATED BY "," ENCLOSED BY '"',
gfn_cell_name CHAR TERMINATED BY "," ENCLOSED BY '"'
)

Some explanations are in order. The LOAD DATA command tells SQL*Loader that you are going to load data from an operating system file into a database table. Everything else that you see in this particular control file represents a clause of the LOAD DATA command.
The destination table is identified by the following INTO TABLE clause:

APPEND INTO TABLE gfn_gnis_feature_names
The APPEND keyword tells SQL*Loader to preserve any preexisting data in the table. Other options allow you to delete preexisting data, or to fail with an error if the table is not empty to begin with.
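For reference, here is a sketch of the four table-loading options; only the leading keyword changes:

INSERT   INTO TABLE gfn_gnis_feature_names   -- the default; requires the table to be empty
APPEND   INTO TABLE gfn_gnis_feature_names   -- preserve any preexisting rows
REPLACE  INTO TABLE gfn_gnis_feature_names   -- delete preexisting rows before loading
TRUNCATE INTO TABLE gfn_gnis_feature_names   -- truncate preexisting rows before loading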
The field definitions are all contained within parentheses, and are separated from each other by commas. The fields in the data file are delimited by commas, and are also enclosed by double quotes. The following clause is used at the end of each field definition to pass this delimiter and enclosure information to SQL*Loader:

TERMINATED BY "," ENCLOSED BY '"'

The following three datatypes are used in this control file. They have no bearing on, or relationship to, the database datatypes of the columns being loaded. The purpose of the datatypes in the control file is to describe the data being loaded from the input data file:

CHAR
Tells SQL*Loader that a field is a text field.

INTEGER EXTERNAL
Tells SQL*Loader that a field is an integer represented using the text digits "0" through "9".

DECIMAL EXTERNAL
Tells SQL*Loader that a field is a decimal value represented using the text digits "0" through "9" and an optional decimal point (".").
Each field is given a name that is meaningful to SQL*Loader. For the nine fields being loaded into the table, the SQL*Loader name must match the corresponding column name in the table. The keyword FILLER identifies the eight fields that are not being loaded into the database. Their names do not matter, but the same naming convention has been followed as for all the rest of the fields.
TIP: FILLER fields are a new feature in Oracle8i. If you are using a release prior to the Oracle8i release, SQL*Loader will not recognize the FILLER keyword.

The Command Line
The command used to initiate this load needs to invoke SQL*Loader and point it to the control file describing the data. In this case, since the input file name is not provided in the control file, that name needs to be passed in on the command line as well. The following sqlldr command will do the job:
sqlldr gnis/gnis@donna control=gnis log=gnis_michigan data=mi_deci.
There are four parameters for this command:

gnis/gnis@donna
The first parameter consists of a username, password, and net service name. SQL*Loader uses this information to open a connection to the database. The "gnis" user owns the table to be loaded.

control = gnis
The second parameter tells SQL*Loader that the control file name is gnis.ctl. The default control file extension is .ctl, so the parameter needs to specify only the file name in this case.

log = gnis_michigan
The third parameter specifies a log file name of gnis_michigan.log. The default log file extension is .log, so it's not specified explicitly in the parameter setting.

data = mi_deci.
The fourth parameter specifies an input file name of mi_deci. This name ends with an explicit period, because the file name has no extension. Without the period on the end, SQL*Loader would assume the default extension of .dat.

By not including the input file name in the control file, but instead passing it as a command-line parameter, we've made it easy to use the same control file to load feature name data for all 50 states. All we need to do is change the value of the DATA and LOG parameters on the command line. Here's what it looks like to issue this sqlldr command and load the data:

$ sqlldr gnis/gnis@donna control=gnis log=gnis_michigan data=mi_deci.

SQL*Loader: Release 8.1.5.0.0 - Production on Wed Apr 5 13:35:53 2000

(c) Copyright 1999 Oracle Corporation. All rights reserved.

Commit point reached - logical record count 28
Commit point reached - logical record count 56
Commit point reached - logical record count 84
...
Commit point reached - logical record count 32001
Commit point reached - logical record count 32029
Commit point reached - logical record count 32056

Pretty much all you see on the screen when you run SQL*Loader are these "Commit point" messages. If nothing else, they provide some reassurance that the load is progressing, and that your session is not hung. All other information regarding the load is written to the log file.

The Log File
The log file resulting from the load of Michigan's feature name data begins with the SQL*Loader banner. It goes on to list the names of the files involved in the load, and also the values of some important command-line parameters. For example:

SQL*Loader: Release 8.1.5.0.0 - Production on Wed Apr 5 13:35:53 2000

(c) Copyright 1999 Oracle Corporation. All rights reserved.

Control File: gnis.ctl
Data File: mi_deci.
Bad File: mi_deci.bad
Discard File: none specified

(Allow all discards)

Number to load: ALL
Number to skip: 0
Errors allowed: 50
Bind array: 64 rows, maximum of 65536 bytes
Continuation: none specified
Path used: Conventional

You can see that the names of the control file, bad file, and data file are recorded in the log. This information is invaluable if you ever have problems with a load, or if you ever need to backtrack in order to understand what you really did. The log also displays the number of records to be loaded, the number to be skipped, the number of errors to allow before aborting the load, the size of the bind array, and the data path. The data path is an important piece of information. The load in this example is a conventional path load, which means that SQL*Loader loads the data into the database using INSERT statements. There is another type of load called a direct path load, which has the potential for far better performance than a conventional path load. Direct path loads are discussed in Chapter 10, Direct Path Loads.

The next part of the log file identifies the table being loaded, indicates whether or not preexisting data was preserved, and lists the field definitions from the control file:

Table GFN_GNIS_FEATURE_NAMES, loaded from every logical record.
Insert option in effect for this table: APPEND

Column Name Position Len Term Encl Datatype
------------------------------ ---------- ----- ---- ---- ------------
GFN_STATE_ABBR FIRST * , " CHARACTER
GFN_FEATURE_NAME NEXT * , " CHARACTER
GFN_FEATURE_TYPE NEXT * , " CHARACTER
GFN_COUNTY_NAME NEXT * , " CHARACTER
GFN_FIPS_STATE_CODE NEXT * , " CHARACTER
(FILLER FIELD)
GFN_FIPS_COUNTY_CODE NEXT * , " CHARACTER
(FILLER FIELD)
GFN_PRIMARY_LATITUDE_DMS NEXT * , " CHARACTER
GFN_PRIMARY_LONGITUDE_DMS NEXT * , " CHARACTER
GFN_PRIMARY_LATITUDE_DEC NEXT * , " CHARACTER
(FILLER FIELD)
GFN_PRIMARY_LONGITUDE_DEC NEXT * , " CHARACTER
(FILLER FIELD)
GFN_SOURCE_LATITUDE_DMS NEXT * , " CHARACTER
(FILLER FIELD)
GFN_SOURCE_LONGITUDE_DMS NEXT * , " CHARACTER
(FILLER FIELD)
GFN_SOURCE_LATITUDE_DEC NEXT * , " CHARACTER
(FILLER FIELD)
GFN_SOURCE_LONGITUDE_DEC NEXT * , " CHARACTER
(FILLER FIELD)
GFN_ELEVATION NEXT * , " CHARACTER
GFN_POPULATION NEXT * , " CHARACTER
GFN_CELL_NAME NEXT * , " CHARACTER

The last part of the log file contains summary information about the load. If there were any errors, or any discarded records, you would see messages for those before the summary. The summary tells you how many rows were loaded, how many had errors, how many were discarded, and so forth. It looks like this:

Table GFN_GNIS_FEATURE_NAMES:
32056 Rows successfully loaded.
0 Rows not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.


Space allocated for bind array: 65016 bytes(28 rows)
Space allocated for memory besides bind array: 0 bytes

Total logical records skipped: 0
Total logical records read: 32056
Total logical records rejected: 0
Total logical records discarded: 0

Run began on Wed Apr 05 13:35:53 2000
Run ended on Wed Apr 05 13:36:34 2000

Elapsed time was: 00:00:41.22
CPU time was: 00:00:03.81

You can see from this summary that 32,056 feature names were loaded into the gfn_gnis_feature_names table for the state of Michigan. There were no errors, and no records were discarded. The elapsed time for the load was a bit over 41 seconds.

SQL*Loader's Capabilities
SQL*Loader is very flexible, and the example in the previous section shows only a small amount of what can be done using the utility. Here are the major SQL*Loader capabilities that you should be aware of:

SQL*Loader can read from multiple input files in a single load session.

SQL*Loader can handle files with fixed-length records, variable-length records, and stream-oriented data.

SQL*Loader supports a number of different datatypes, including text, numeric, zoned decimal, packed decimal, and various machine-specific binary types.

Not only can SQL*Loader read from multiple input files, but it can load that data into several different database tables, all in the same load session.

SQL*Loader allows you to use Oracle's built-in SQL functions to manipulate the data being read from the input file.

SQL*Loader includes functionality for dealing with whitespace, delimiters, and null data.

In addition to standard relational tables, SQL*Loader can load data into object tables, varying arrays (VARRAYs), and nested tables.

SQL*Loader can load data into large object (LOB) columns.

SQL*Loader can handle character set translation between the input data file and the database.

The capabilities in this list describe the types of data that SQL*Loader can handle, and what SQL*Loader can do with that data. SQL*Loader also implements some strong, performance-related features. SQL*Loader can do direct path loads, which bypass normal SQL statement processing, and which may yield handsome performance benefits. SQL*Loader can also do parallel loads and even direct-path parallel loads; direct path parallel loads allow you to maximize throughput on multiple CPU systems. You'll read more about these performance-related features in Chapter 9, Transaction Size and Performance Issues, and in Chapter 10.
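Both of those features are requested from the command line. Here is a rough sketch reusing the earlier GNIS example (for a parallel direct path load you would run several such sessions, each pointed at its own input file):

sqlldr gnis/gnis@donna control=gnis log=gnis_michigan data=mi_deci. direct=true
sqlldr gnis/gnis@donna control=gnis log=gnis_michigan data=mi_deci. direct=true parallel=true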

Issues when Loading Data
There are a number of issues that you need to be concerned about whenever you use SQL*Loader to load data into your database--indeed, you need to be concerned about these whenever you load data, period. First, there's the ever-present possibility that the load will fail in some way before it is complete. If that happens, you'll be left with some data loaded, and some data not loaded, and you'll need a way to back out and try again. Other SQL*Loader issues include transaction size, data validation (including referential integrity), and data transformation. Transaction size is partly a performance issue, but it also has an impact on how much data you need to reload in the event that a load fails. Data validation and referential integrity both relate to the need for clean, reliable data.

Recovery from Failure
There are really only two fundamental ways that you can recover from a failed load. One approach is to delete all the data that was loaded before the failure occurred, and simply start over again. Of course, you need to fix whatever caused the failure to occur before you restart the load. The other approach is to determine how many records were loaded successfully, and to restart the load from that point forward. Regardless of which method you choose, you need to think things through before you start a load.

Deleting data and restarting a load from scratch really doesn't require any special functionality on the part of SQL*Loader. The important thing is that you have a reliable way to identify the data that needs to be deleted. SQL*Loader does, however, provide support for continuing an interrupted load from the point where a failure occurred. Using the SKIP command-line parameter, or the SKIP clause in the control file, you can tell SQL*Loader to skip over records that were already processed in order to have the load pick up from where it left off previously. Chapter 6, Recovering from Failure, describes the process for continuing a load in detail, and some of the issues you'll encounter. It's a chapter worth reading, because there are some caveats and gotchas, and you'll want to learn about those before you have a failure, not afterwards.
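For example, if a failed load had already committed the first 15,000 logical records (a purely illustrative count), it could be continued like this:

sqlldr gnis/gnis@donna control=gnis log=gnis_michigan data=mi_deci. skip=15000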

Transaction Size
Transaction size is an issue related somewhat to performance, and somewhat to recovery from failure. In a conventional load, SQL*Loader allows you to specify the number of rows that will be loaded between commits. The number of rows that you specify has a direct impact on the size of the bind array that SQL*Loader uses, and consequently on the amount of memory required for the load. The bind array is an area in memory where SQL*Loader stores data for rows to be inserted into the database. When the bind array fills, SQL*Loader inserts the data into the table being loaded, and then executes a COMMIT.

The larger the transaction size, the more data you'll need to reprocess if you have to restart the load after a failure. However, that's usually not a significant issue unless your bind array size is quite large. Transaction size can also affect performance. Generally, the more data loaded in one chunk the better. So a larger bind array size typically will lead to better performance. However, it will also lead to fewer commits, resulting in the use of more rollback segment space. Chapter 9 describes these issues in detail.
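As a sketch, both values are set on the command line; the numbers below are purely illustrative:

sqlldr gnis/gnis@donna control=gnis log=gnis_michigan data=mi_deci. rows=1000 bindsize=1048576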

Data Validation
Data validation is always a concern when loading data. SQL*Loader doesn't provide a lot of support in this area, but there are some features at your disposal that can help you ensure that only good data is loaded into your database.

The one thing that SQL*Loader does do for you is ensure that the data being loaded into a column is valid given the column's datatype. Text data will not be loaded into NUMBER fields, and numbers will not be loaded into DATE fields. This much, at least, you can count on. Records containing data that doesn't convert to the destination datatype are rejected and written to the bad file.

SQL*Loader allows you to selectively load data. Using the WHEN clause in your SQL*Loader control file, you can specify conditions under which a record will be accepted. Records not meeting those conditions are not loaded, and are instead written to the discard file.
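Here is a minimal sketch of such a condition, reusing the GNIS control file from earlier in this chapter (the condition itself is illustrative):

LOAD DATA
APPEND INTO TABLE gfn_gnis_feature_names
WHEN (gfn_feature_type = 'falls')
-- the field definitions follow, exactly as shown earlier in this chapter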

Finally, you can take advantage of the referential integrity features built into your database. SQL*Loader won't be able to load data that violates any type of primary key, unique key, foreign key, or check constraint. Chapter 7, Validating and Selectively Loading Data, discusses using SQL*Loader and Oracle features to ensure that only good data gets loaded.

TIP: You don't always have to rely on SQL*Loader's features for data validation. It's entirely feasible to load data into a staging table, run one or more external programs to weed out any rows that are invalid, and then transfer that data to a production table.

Data Transformation
Wouldn't it be great if the data we loaded was always in a convenient format? Unfortunately, it frequently is not. In the real world, you may deal with data from a variety of sources and systems, and the format of that data may not match the format that you are using in your database. Dates, for example, are represented using a wide variety of formats. The date 1/2/2000 means one thing in the United States and quite another in Europe.

For dates and numbers, you can often use Oracle's built-in TO_DATE and TO_NUMBER functions to convert a character-based representation to a value that can be loaded into a database DATE or NUMBER column. In fact, for date fields, you can embed the date format into your control file as part of the field definition.

SQL*Loader allows you access to Oracle's entire library of built-in SQL functions. You aren't limited to just TO_DATE, TO_NUMBER, and TO_CHAR. Not only can you access all the built-in SQL functions, you can also write PL/SQL code to manipulate the data being loaded. This opens up a world of possibilities, which are discussed in Chapter 8, Transforming Data During a Load.
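As a brief sketch, a SQL expression goes in double quotes after a field's description, and a date mask follows the DATE keyword; the gfn_load_date field below is hypothetical and not part of the GNIS file:

gfn_feature_name CHAR TERMINATED BY "," ENCLOSED BY '"'
                 "UPPER(:gfn_feature_name)",
gfn_load_date    DATE "MM/DD/YYYY" TERMINATED BY ","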

Invoking SQL*Loader
On Unix systems, the command used to invoke SQL*Loader is sqlldr. On Windows systems running Oracle8i, release 8.1 or higher, the command is also sqlldr. Prior to release 8.1, the SQL*Loader command on Windows systems included the first two digits of the Oracle release number. Thus you had sqlldr80 (Oracle8, release 8.0), sqlldr73 (Oracle7, release 7.3), and so forth.

SQL*Loader can be invoked in one of three ways:

sqlldr

sqlldr keyword=value [keyword=value ...]

sqlldr value [value ...]

Issuing the sqlldr command by itself results in a list of valid command-line parameters being displayed. Command-line parameters are usually keyword/value pairs, and may be any combination of the following:

USERID={username[/password][@net_service_name]|/}
CONTROL=control_file_name
LOG=path_file_name
BAD=path_file_name
DATA=path_file_name
DISCARD=path_file_name
DISCARDMAX=logical_record_count
SKIP=logical_record_count
SKIP_INDEX_MAINTENANCE={TRUE | FALSE}
SKIP_UNUSABLE_INDEXES={TRUE | FALSE}
LOAD=logical_record_count
ERRORS=insert_error_count
ROWS=rows_in_bind_array
BINDSIZE=bytes_in_bind_array
SILENT=[(]keyword[,keyword...][)]
DIRECT={TRUE | FALSE}
PARFILE=path_file_name
PARALLEL={TRUE | FALSE}
READSIZE=bytes_in_read_buffer
FILE=database_datafile_name

Command-line parameters may be passed by position instead of by keyword. The rules for doing this are described at the end of the next section.

Command-Line Parameters
The SQL*Loader parameter descriptions are as follows:

USERID = {username[/password] [@net_service_name]|/}
Specifies the username and password to use when connecting to the database. The net_service_name parameter optionally allows you to connect to a remote database. Use a forward-slash character ( / ) to connect to a local database using operating system authentication. On Unix systems, you may want to omit the password and allow SQL*Loader to prompt you for it. If you omit both the username and the password, SQL*Loader will prompt you for both.

TIP: On Unix systems you should generally avoid placing a password on the command line, because that password will be displayed whenever other users issue a command, such as ps -ef, that displays a list of current processes running on the system. Either let SQL*Loader prompt you for your password, or use operating system authentication. (If you don't know what operating system authentication is, ask your DBA.)

CONTROL = control_ file_name
Specifies the name, which may include the path, of the control file. The default extension is .ctl.

LOG = path_ file_name
Specifies the name of the log file to generate for a load session. You may include a path as well. By default, the log file takes on the name of the control file, but with a .log extension, and is written to the same directory as the control file. If you specify a different name, the default extension is still .log. However, if you use the LOG parameter to specify a name for the log file, it will no longer be written automatically to the directory that contains the control file.

BAD = path_ file_name
Specifies the name of the bad file. You may include a path as part of the name. By default, the bad file takes the name of the control file, but with a .bad extension, and is written to the same directory as the control file. If you specify a different name, the default extension is still .bad. However, if you use the BAD parameter to specify a bad file name, the default directory becomes your current working directory. If you are loading data from multiple files, then this bad file name only gets associated with the first file being loaded.

DATA = path_ file_name
Specifies the name of the file containing the data to load. You may include a path as part of the name. By default, the name of the control file is used, but with the .dat extension. If you specify a different name, the default extension is still .dat. If you are loading from multiple files, you can only specify the first file name using this parameter. Place the names of the other files in their respective INFILE clauses in the control file.

DISCARD = path_ file_name
Specifies the name of the discard file. You may include a path as part of the name. By default, the discard file takes the name of the control file, but it has a .dis extension. If you specify a different name, the default extension is still .dis. If you are loading data from multiple files, then this discard file name only gets associated with the first file being loaded.

DISCARDMAX = logical_record_count
Sets an upper limit on the number of logical records that can be discarded before a load will terminate. The limit is actually one less than the value specified for DISCARDMAX. When the number of discarded records becomes equal to the value specified for DISCARDMAX, the load will terminate. The default is to allow an unlimited number of discards. However, since DISCARDMAX only accepts numeric values, it is not possible to explicitly specify the default behavior.

TIP: There is also an undocumented parameter named DISCARDS that functions the same as DISCARDMAX. The use of DISCARDMAX is preferred, but you may occasionally encounter references to DISCARDS.

SKIP = logical_record_count
Allows you to continue an interrupted load by skipping the specified number of logical records. If you are continuing a multiple table direct path load, you may need to use the CONTINUE_LOAD clause in the control file rather than the SKIP parameter on the command line. CONTINUE_LOAD allows you to specify a different number of rows to skip for each table that you are loading.

SKIP_INDEX_MAINTENANCE = {TRUE | FALSE}
Controls whether or not index maintenance is done for a direct path load. This parameter does not apply to conventional path loads. A value of TRUE causes index maintenance to be skipped. Any index segments (partitions) that should have been updated will be marked as unusable. A value of FALSE causes indexes to be maintained as they normally would be. The default is FALSE.

SKIP_UNUSABLE_INDEXES = {TRUE | FALSE}
Controls the manner in which a load is done when a table being loaded has indexes in an unusable state. A value of TRUE causes SQL*Loader to load data into tables even when those tables have indexes marked as unusable. The indexes will remain unusable at the end of the load. One caveat is that if a UNIQUE index is marked as unusable, the load will not be allowed to proceed.

A value of FALSE causes SQL*Loader not to insert records when those records need to be recorded in an index marked as unusable. For a conventional path load, this means that any records that require an unusable index to be updated will be rejected as errors. For a direct path load, this means that the load will be aborted the first time such a record is encountered. The default is FALSE.

LOAD = logical_record_count
Specifies a limit on the number of logical records to load. The default is to load all records. Since LOAD only accepts numeric values, it is not possible to explicitly specify the default behavior.

ERRORS = insert_error_count
Specifies a limit on the number of errors to tolerate before the load is aborted. The default is to abort a load when the error count exceeds 50. There is no way to allow an unlimited number of errors. The best you can do is to specify a very high number for this parameter.

ROWS = rows_in_bind_array
The precise meaning of this parameter depends on whether you are doing a direct path load or a conventional load. If you are doing a conventional load, then you can use this parameter to control the number of rows in the bind array. This represents the number of rows that SQL*Loader loads with each INSERT statement, and also represents the commit frequency. The default is 64 rows.

If you are doing a direct path load, then ROWS specifies the number of rows to read from the input file before saving the data to the database. SQL*Loader will round up the ROWS value to coincide with an even number of database blocks. A data save in a direct path load is analogous to a commit in a conventional path load. The default, when a direct path load is done, is to do one save at the end of the load.

TIP: The BINDSIZE and ROWS parameters both affect the size of the bind array. Chapter 9 discusses this topic in greater detail.

BINDSIZE = bytes_in_bind_array
Specifies the maximum size, in bytes, of the bind array. This parameter overrides any bind array size computed as a result of using the ROWS parameter. The default bind array size is 65,536 bytes, or 64K.

SILENT = [( ]keyword [,keyword... ] [ )]
Allows you to suppress various header and feedback messages that SQL*Loader normally displays during a load session. Table 1-2 describes the effect of each of the keywords.
Table 1-2: Keywords for Use with the SILENT Parameter

Keyword      Effect
ALL          The same as specifying all the other keywords.
DISCARDS     Suppresses the message that is normally written to the log file each time a record is discarded.
ERRORS       Suppresses the error messages that are normally written to the log file when a record generates an Oracle error.
FEEDBACK     Suppresses the "commit point reached" messages that are normally displayed each time SQL*Loader executes a commit or a save.
HEADER       Suppresses the messages that SQL*Loader displays on the screen when you first launch the executable. Note, however, that the header messages are always written to the log file.
PARTITIONS   Suppresses the per-partition statistics that are normally written to the log file when doing a direct path load of a partitioned table.



There are two ways you can specify values for the SILENT parameter. If you have only one keyword, you can supply it following the equals sign (=), as follows:
SILENT = ALL

If you have several keywords to use, you can place them in a comma-delimited list. You may optionally place that list inside parentheses. For example:
SILENT = (DISCARDS,ERRORS)


DIRECT = {TRUE | FALSE}
Determines the data path used for the load. A value of FALSE results in a conventional path load. A value of TRUE results in a direct path load. The default is FALSE.

PARFILE = path_ file_name
Tells SQL*Loader to read command-line parameter values from a text file. This text file is referred to as a parameter file, and contains keyword/value pairs. Usually, the keyword/value pairs are separated by line breaks. Use of the PARFILE parameter can save a lot of typing if you need to perform the same load several times, because you won't need to retype all the command-line parameters each time. There is no default extension for parameter files.
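Here is a sketch using a hypothetical parameter file named gnis.par that simply repeats the earlier example's settings, one keyword/value pair per line:

userid=gnis/gnis@donna
control=gnis
log=gnis_michigan
data=mi_deci.

The load is then invoked with:

sqlldr parfile=gnis.par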

PARALLEL = {TRUE | FALSE}
Indicates whether or not you are doing a direct path parallel load. If you are loading the same object from multiple direct path load sessions, then set this to TRUE. Otherwise, set it to FALSE. The default is FALSE.

READSIZE = bytes_in_read_buffer
Specifies the size of the buffer used by SQL*Loader when reading data from the input file. The default value is 65,536 bytes, or 64K. The values of the READSIZE and BINDSIZE parameters should match. If you supply values for these two parameters that do not match, SQL*Loader will adjust them.

FILE = database_datafile_name
Specifies the database data file from which to allocate extents. Use this parameter when doing parallel loads, to ensure that each load session is using a different disk. If you are not doing a direct path load, this parameter will be ignored.

In addition to being passed by keyword, parameters may also be passed by position. To do this, you simply list the values after the sqlldr command in the correct order. For example, the following two SQL*Loader commands yield identical results:

sqlldr system/manager profile.ctl profile.log
sqlldr userid=system/manager control=profile.ctl log=profile.log

You can even mix the positional and keyword methods of passing command-line parameters. The one rule when doing this is that all positional parameters must come first. Once you start using keywords, you must continue to do so. For example:

sqlldr system/manager control=profile.ctl log=profile.log

When you pass parameters positionally, you must not skip any. Also, be sure to get the order right. You must supply parameter values in the order shown earlier in this section. Given the fact that you typically will use only a few parameters out of the many that are possible, it's usually easier to pass those parameters as keyword/value pairs than it is to pass them positionally. Using keyword/value pairs also makes long SQL*Loader commands somewhat self-documenting. The one exception to this rule is that you might wish to pass the username and password positionally, since they come first, and then pass in the rest of the parameters by name.

Command-Line Syntax Rules
There are several syntax rules to be aware of when writing SQL*Loader commands. These rules fall into the following areas:

Case-sensitivity
Separation of parameters
Special characters in the command line

SQL*Loader itself is not case-sensitive. Keywords on the command line may be in either upper- or lowercase--it doesn't matter. However, some operating systems, notably Unix, are case-sensitive. When running SQL*Loader on a case-sensitive operating system, you do need to be careful of the case used in file names. You also need to pay attention to the case used for the command to invoke SQL*Loader. On Unix and other case-sensitive operating systems, the SQL*Loader executable name is usually lowercase. So on Unix, Linux, and so forth, use sqlldr. Under Windows, and other operating systems where case doesn't matter, you can use SQLLDR or sqlldr as you prefer.
Parameters on the command line may be separated by spaces, by commas, or by both spaces and commas. All three of the following commands, for example, are legitimate:

sqlldr system/manager,control=product.ctl,log=product.log
sqlldr system/manager, control=product.ctl, log=product.log
sqlldr system/manager control=product.ctl log=product.log

Spaces are acceptable as well, on either side of the equals sign (=), in keyword/value pairs.
Special characters are rarely needed on the command line, but when you do use them in an option value, you must enclose that value within quotes. For example, beginning with release 8.1.7, if you connect as the user SYS, you also need to specify "AS SYSDBA" as part of your connect string. Because of the spaces, you'll need to enclose your entire connect string within quotes. For example, for Windows:
sqlldr 'sys/password AS SYSDBA' control=product.ctl (Windows)
And for Unix:
sqlldr \'sys/password AS SYSDBA\' control=product.ctl (Unix)
The backslash characters that you see in the second command are there because some Unix operating systems require that quotes on the command line be escaped. In this example, the backslash was used as the escape character.

Parameter Precedence
The term "command-line" notwithstanding, most SQL*Loader command-line parameters can actually be specified in three different places:

On the command line
In a parameter file, which is then referenced using the PARFILE parameter

In the control file
Parameters on the command line, including those read in from a parameter file, will always override values specified in the control file. In the case of the bad and discard file names, though, the control file syntax allows for each distinct input file to have its own bad file and discard files. The command line syntax does not allow for this, so bad file and discard file names specified on the command line only apply to the first input file. For any other input files, you need to specify these bad and discard file names in the control file or accept the defaults.
The FILE parameter adds a bit of confusion to the rules stated in the previous paragraph. As with the bad file and discard file names, you can have multiple FILE values in the control file. However, when you specify the FILE parameter on the command line, it does override any and all FILE values specified in the control file.
Parameters read from a parameter file as a result of using the PARFILE parameter may override those specified on the command line. Whether or not that happens depends on the position of the PARFILE parameter with respect to the others. SQL*Loader processes parameters from left to right, and the last setting for a given parameter is the one that SQL*Loader uses.

Calling Oracle Reports from Oracle 10g Forms



Introduction
Oracle Reports can add some great reporting functionality to your 10g Forms applications, making data available in a wide variety of output formats such as XML and PDF. In this document I will explain how to configure 10g Oracle Reports on your PC, how to call Oracle Reports from 10g Forms applications, and what to consider when deploying to the application server. This information should advance your usage of Oracle Reports and speed its adoption in your applications.

Configuring the OC4J Runtime Environment for Oracle Reports
Assuming that Oracle Reports 10g is installed on your PC, the following steps will ensure that you are able to successfully use Oracle Reports in your OC4J runtime environment.

Defining the Report Server
The first step is to define a report server. This server will run in your OC4J runtime environment handling all Oracle Report execution requests. You will not be able to run reports in a 10g application unless you have defined and started a Report server. To define a report server do the following:

1. Start your OC4J runtime environment
2. Open a Command window (Start Menu, Run option, type in CMD)
3. Type this command and press enter: rwserver server=rep_fsprod
a. It is important to name the Reports server rep_fsprod. This is the name of the Report server on all of the Application server machines and will greatly simplify your deployment if you keep the names synchronized.

Congratulations, you have just defined a report server! You should now see a dialog box indicating that the Report Server is initializing.

Configuring the Report Server
The second step is to configure the Report Server created in step 1. Specifically, you need to specify the source directory of the report files as the search path the Report Server uses when individual report requests are made. Here is what you need to do:

1. Open Windows Explorer and navigate to c:\\reports\conf
2. Open rep_fsprod.conf using a text editor.
3. Using the Find or Search tool (depending on the editor used), look for sourceDir
a. You do not need to do a case sensitive search.
b. This variable is typically located near the top of the file. In mine, it is the 14th line of the file.
4. Edit the sourceDir variable to include the path of each directory where you wish the Report server to search for reports. Below is an example with two directory paths specified. One is required, but you can have as many as you need.
<property name="sourceDir" value="c:\env\aap\reports;c:\env\eu_asc\reports"/>

5. Save and close the file.

Modifying the Application .env file
The third and final step is to modify each application's .env file to include a variable specifying the reports path. Here is what you need to do:

1. Open Windows Explorer and navigate to c:\\forms90\server
2. Open one of the application.env files that you have created using a text editor.
3. Add the REPORTS_PATH variable, setting its value to the location of that application's report directory, as shown in the example below:
REPORTS_PATH=c:\env\aap\reports
4. Save and close the file.
5. Repeat for each remaining application.env file.

Calling Reports from 10g Forms
Now that the OC4J environment is configured, we can focus on programming a solution to call Oracle Reports from Oracle Forms. I'll walk you through the process using a reporting solution that I have used in 5 applications deployed at the Forest Service.

In the applications I have developed, end users typically call reports by selecting a report from a drop list. A report-specific form is displayed, allowing report parameters to be specified. When the end user has entered the desired parameter values, a button is pressed to initiate the report. The WHEN-BUTTON-PRESSED trigger calls a program unit to gather the report parameters and build the string that is eventually executed by the WEB.SHOW_DOCUMENT command below:


DECLARE
v_host varchar2(100);
v_port varchar2(10);
v_parameter_string varchar2(4000);

v_username varchar2(50);
v_password varchar2(50);
v_database varchar2(50);

BEGIN

-- Get the full domain name of the server from the formsweb.cfg
tool_env.getvar('SERVER_URL',v_host);

-- Get the username/password and database from the .env file
tool_env.getvar('APPLICATION_USERNAME',v_username);
tool_env.getvar('APPLICATION_PASSWORD',v_password);
tool_env.getvar('APPLICATION_DATABASE',v_database);

-- If it is localhost then we are running on windows and need to use the windows port, otherwise we
-- are on AIX and need to use the AIX port
IF v_host = 'localhost' THEN
v_port := '8889';
ELSE
v_port := '7778';
END IF;

-- Now build the parameter string for running the report
v_parameter_string := 'server=rep_fsprod';
v_parameter_string := v_parameter_string||'&report=AAP_TOTAL_CONTRACT_VALUE_RPT.rdf';
v_parameter_string := v_parameter_string||'&userid='||v_username||'/'||v_password||'@'||v_database;
v_parameter_string := v_parameter_string||'&execution_mode=batch';
v_parameter_string := v_parameter_string||'&comm_mode=synchronous';
v_parameter_string := v_parameter_string||'&destype=cache';
v_parameter_string := v_parameter_string||'&desformat=pdf';
v_parameter_string := v_parameter_string||'&p_region='||:rpt_parms.region;
v_parameter_string := v_parameter_string||'&p_province='||:rpt_parms.province;
v_parameter_string := v_parameter_string||'&p_forest='||:rpt_parms.forest;
v_parameter_string := v_parameter_string||'&p_district='||:rpt_parms.district;
v_parameter_string := v_parameter_string||'&p_fiscal_year='||:rpt_parms.fiscal_year;


-- Now run the report
web.show_document('http://'||v_host||':'||v_port||'/reports/rwservlet?'||v_parameter_string,'_blank');

END;

There are a couple of important items to note when reviewing this program unit:

1. Important data elements are not hard-coded! Instead, they are stored in the application.env file and referenced at runtime using tool_env.getvar. The username, password, database, and server are all referenced this way.
2. The program unit supports execution on Windows (my development platform) and AIX (the deployment platform) without requiring any code modifications.
a. By referencing the SERVER_URL environment variable in the application.env file, I am able to specify the server and port dynamically.
3. When building the string of parameters to be passed you need to:
a. Include the report name as a parameter
b. Prepend an '&' to the second parameter and any additional parameters thereafter.
c. Except for the report name, be sure that all additional parameters specified match a parameter name (system or user) in the Report definition.
4. Mandatory parameters for each report include:
a. Server, report, userid, execution_mode, comm_mode, destype and desformat.
i. Report is the only parameter that will change. The others (likely) will remain constant for all reports in your application.

When the WEB.SHOW_DOCUMENT command is issued from the Form, a new browser window is opened and the request is executed by the Report Server.
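For reference, the URL assembled by the program unit above looks roughly like this (the host, port, credentials, and parameter values are illustrative):

http://myhost.example.com:7778/reports/rwservlet?server=rep_fsprod&report=AAP_TOTAL_CONTRACT_VALUE_RPT.rdf&userid=appuser/apppass@proddb&execution_mode=batch&comm_mode=synchronous&destype=cache&desformat=pdf&p_region=01&p_province=02&p_forest=03&p_district=04&p_fiscal_year=2008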


Application Server Deployment
When you have completed development in OC4J and are ready to deploy to the Application Server, there is a short list of activities that must be completed to ensure success.

First, it is important to note that the Oracle reports in your application need to be deployed in the /reports subdirectory such as /fsapps/fsprod/leimars/reports.
Be sure to have your application.env file modified accordingly by having the REPORTS_PATH variable specify the proper location.

The second and most important item to note is that the rep_fsprod.conf file will not be modified to include each application's /reports directory. Instead, the reports in each application's /reports directory are linked to /fsapps/fsprod/fs_reports by an eDBA using a standard utility that has been distributed on each application server. Each time the script is run on a particular /reports directory, symbolic links are created for each report. If you add a new report, you need to have the links created for your application before it will be available.

The third and final thing to note is that report definitions do not need to be compiled when you deploy to the application server. Simply move the .rdf file to your application's /reports directory, have the links recreated, and you are good to go.

Conclusion
You now have the information you need to include Oracle Reports in your 10g Forms applications.
It is a fairly painless process to complete if your application calls for this functionality.
I hope that you found the contents of this paper useful and wish you good luck.



The SQL Optimizers

Whenever you execute a SQL statement, a component of the database known as the optimizer must decide how best to access the data operated on by that statement.
Oracle supports two optimizers: the rule-based optimizer (which was the original) and the cost-based optimizer.

To figure out the optimal execution path for a statement, the optimizers consider the following:
a. The syntax you've specified for the statement
b. Any conditions that the data must satisfy (the WHERE clauses)
c. The database tables your statement will need to access
d. All possible indexes that can be used in retrieving data from the table
e. The Oracle RDBMS version
f. The current optimizer mode
g. SQL statement hints
h. All available object statistics (generated via the ANALYZE command; see the example following this list)
i. The physical table location (distributed SQL)
j. INIT.ORA settings (parallel query, async I/O, etc.)
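For item (h), here is a sketch of how object statistics are typically generated with ANALYZE (the table name is illustrative):

ANALYZE TABLE emp COMPUTE STATISTICS;
ANALYZE TABLE emp ESTIMATE STATISTICS SAMPLE 20 PERCENT;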

Oracle gives you a choice of two optimizing alternatives:
the predictable rule-based optimizer and the more intelligent cost-based optimizer.

Understanding the Rule-Based Optimizer

The rule-based optimizer (RBO) uses a predefined set of precedence rules to figure out which path it will use to access the database.
The RDBMS kernel defaults to the rule-based optimizer under a number of conditions, including:

a. OPTIMIZER_MODE = RULE is specified in your INIT.ORA file
b. OPTIMIZER_MODE = CHOOSE is specified in your INIT.ORA file, and no statistics exist for any table involved in the statement
c. An ALTER SESSION SET OPTIMIZER_MODE = RULE command has been issued
d. An ALTER SESSION SET OPTIMIZER_MODE = CHOOSE command has been issued, and no statistics exist for any table involved in the statement
e. The rule hint (e.g., SELECT /*+ RULE */. . .) has been used in the statement
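For reference, the rule-based optimizer can be invoked at the session level or for a single statement; the following is a minimal sketch (table and column names are illustrative only):

-- Force the RBO for the current session
ALTER SESSION SET OPTIMIZER_MODE = RULE;

-- Force the RBO for one statement only
SELECT /*+ RULE */ emp_name
FROM   emp
WHERE  emp_no = 127;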

The rule-based optimizer is driven primarily by 20 condition rankings, or "golden rules."
These rules instruct the optimizer how to determine the execution path for a statement, when to choose one index over another,
and when to perform a full table scan.
These rules, shown in Table 1, are fixed, predetermined, and, in contrast with the cost-based optimizer, not influenced by outside sources (table volumes, index distributions, etc.).

Table 1: Rule-based optimizer condition rankings

Rank Condition

1 ROWID = constant
2 Cluster join with unique or primary key = constant
3 Hash cluster key with unique or primary key = constant
4 Entire Unique concatenated index = constant
5 Unique indexed column = constant
6 Entire cluster key = corresponding cluster key of another table in the same cluster
7 Hash cluster key = constant
8 Entire cluster key = constant
9 Entire non-UNIQUE CONCATENATED index = constant
10 Non-UNIQUE index merge
11 Entire concatenated index = lower bound
12 Most leading column(s) of concatenated index = constant
13 Indexed column between low value and high value or indexed column LIKE 'ABC%' (bounded range)
14 Non-UNIQUE indexed column between low value and high value or indexed column LIKE 'ABC%' (bounded range)
15 UNIQUE indexed column or constant (unbounded range)
16 Non-UNIQUE indexed column or constant (unbounded range)
17 Equality on non-indexed column or constant (sort/merge join)
18 MAX or MIN of single indexed columns
19 ORDER BY entire index
20 Full table scans



While knowing the rules is helpful, they alone do not tell you enough about how to tune for the rule-based optimizer. To overcome this deficiency, the following sections provide some information that the rules don't tell you.

What the RBO rules don't tell you #1
Only single column indexes are ever merged. Consider the following SQL and indexes:

SELECT col1, ...
FROM emp
WHERE emp_name = 'GURRY'
AND emp_no = 127
AND dept_no = 12

Index1 (dept_no)
Index2 (emp_no, emp_name)

The SELECT statement looks at all three indexed columns. Many people believe that Oracle will merge the two indexes, which involve those three columns, to return the requested data. In fact, only the two-column index is used; the single-column index is not used. While Oracle will merge two single-column indexes, it will not merge a multi-column index with another index.
There is one thing to be aware of with respect to this scenario. If the single-column index is a unique or primary key index, that would cause the single-column index to take precedence over the multi-column index. Compare rank 4 with rank 9 in Table 1.
NOTE: Oracle8i introduced a new hint, INDEX_JOIN, that allows you to join multi-column indexes.
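As a rough illustration (whether the index join is actually performed depends on your Oracle version, the available indexes, and statistics; the table alias below is only an example), the hint is written like this:

SELECT /*+ INDEX_JOIN(e) */ e.col1
FROM   emp e
WHERE  e.emp_no  = 127
AND    e.dept_no = 12;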

What the RBO rules don't tell you #2
If all columns in an index are specified in the WHERE clause, that index will be used in preference to other indexes for which some columns are referenced. For example:

SELECT col1, ...
FROM emp
WHERE emp_name = 'GURRY'
AND emp_no = 127
AND dept_no = 12

Index1 (emp_name)
Index2 (emp_no, dept_no, cost_center)

In this example, only Index1 is used, because the WHERE clause includes all columns for that index, but does not include all columns for Index2.

What the RBO rules don't tell you #3
If multiple indexes can be applied to a WHERE clause, and they all have an equal number of columns specified, only the index created last will be used. For example:

SELECT col1, ...
FROM emp
WHERE emp_name = 'GURRY'
AND emp_no = 127
AND dept_no = 12
AND emp_category = 'CLERK'

Index1 (emp_name, emp_category) Created 4pm Feb 11th 2002
Index2 (emp_no, dept_no) Created 5pm Feb 11th 2002

In this example, only Index2 is used, because it was created at 5 p.m. and the other index was created at 4 p.m. This behavior can pose a problem, because if you rebuild indexes in a different order than they were first created, a different index may suddenly be used for your queries. To deal with this problem, many sites have a naming standard requiring that indexes are named in alphabetical order as they are created. Then, if a table is rebuilt, the indexes can be rebuilt in alphabetical order, preserving the correct creation order. You could, for example, number your indexes. Each new index added to a table would then be given the next number.
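To check the order in which the indexes on a table were created (and therefore which one the rule-based optimizer will favor when the rankings tie), a simple data dictionary query such as the following sketch can be used; the table name is only an example:

SELECT object_name, created
FROM   user_objects
WHERE  object_type = 'INDEX'
AND    object_name IN (SELECT index_name
                       FROM   user_indexes
                       WHERE  table_name = 'EMP')
ORDER  BY created;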

What the RBO rules don't tell you #4
If multiple columns of an index are being accessed with an = operator, that will override other operators such as LIKE or BETWEEN. Two ='s will override two ='s and a LIKE. For example:

SELECT col1, ...
FROM emp
WHERE emp_name LIKE 'GUR%'
AND emp_no = 127
AND dept_no = 12
AND emp_category = 'CLERK'
AND emp_class = 'C1'

Index1 (emp_category, emp_class, emp_name)
Index2 (emp_no, dept_no)

In this example, only Index2 is utilized, despite Index1 having three columns accessed and Index2 having only two columns accessed.

What the RBO rules don't tell you #5
A higher percentage of columns accessed will override a lower percentage of columns accessed. So generally, the optimizer will choose to use the index from which you specify the highest percentage of columns. However, as stated previously, all columns specified in a unique or primary key index will override the use of all other indexes. For example:

SELECT col1, ...
FROM emp
WHERE emp_name = 'GURRY'
AND emp_no = 127
AND emp_class = 'C1'

Index1 (emp_name, emp_class, emp_category)
Index2 (emp_no, dept_no)

In this example, only Index1 is utilized, because 66% of its columns are accessed. Index2 is not used because only 50% of its indexed columns are accessed.

What the RBO rules don't tell you #6
If you join two tables, the rule-based optimizer needs to select a driving table. The table selected can have a significant impact on performance, particularly when the optimizer decides to use nested loops. A row will be returned from the driving table, and then the matching rows selected from the other table. It is important that as few rows as possible are selected from the driving table.

The rule-based optimizer uses the following rules to select the driving table:

1. A unique or primary key index will always cause the associated table to be selected as the driving table in front of a non-unique or non-primary key index.
2. An index for which you apply the equality operator (=) to all columns will take precedence over indexes from which you use only some columns, and will result in the underlying table being chosen as the driving table for the query.
3. The table that has a higher percentage of columns in an index will override the table that has a lesser percentage of columns indexed.
4. A table that satisfies one two-column index in the WHERE clause of a query will be chosen as the driving table in front of a table that satisfies two single-column indexes.
5. If two tables have the same number of index columns satisfied, the table that is listed last in the FROM clause will be the driving table. In the SQL below, the EMP table will be the driving table because it is listed last in the FROM clause.

SELECT ....
FROM DEPT d, EMP e
WHERE e.emp_name = 'GURRY'
AND d.dept_name = 'FINANCE'
AND d.dept_no = e.dept_no

What the RBO rules don't tell you #7
If a WHERE clause has a column that is the leading column on any index, the rule-based optimizer will use that index. The exception is if a function is placed on the leading index column in the WHERE clause. For example:

SELECT col1, ...
FROM emp
WHERE emp_name = 'GURRY'

Index1 (emp_name, emp_class, emp_category)
Index2 (emp_class, emp_name, emp_category)

Index1 will be used, because emp_name (used in the WHERE clause) is the leading column. Index2 will not be used, because emp_name is not the leading column.

The following example illustrates what happens when a function is applied to an indexed column:

SELECT col1, ...
FROM emp
WHERE LTRIM(emp_name) = 'GURRY'

In this case, because the LTRIM function has been applied to the column, no index will be used.
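A simple workaround, assuming the column data is stored without leading spaces, is to keep the indexed column clean on the left-hand side of the predicate:

-- The index on emp_name can be considered by the optimizer
SELECT col1
FROM   emp
WHERE  emp_name = 'GURRY';

-- Wrapping the indexed column in a function suppresses index use
-- under the rule-based optimizer:
--   WHERE LTRIM(emp_name) = 'GURRY'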

Understanding the Cost-Based Optimizer
The cost-based optimizer is a more sophisticated facility than the rule-based optimizer. To determine the best execution path for a statement, it uses database information such as table size, number of rows, key spread, and so forth, rather than rigid rules.

The information required by the cost-based optimizer is available once a table has been analyzed via the ANALYZE command, or via the DBMS_STATS facility. If a table has not been analyzed, the cost-based optimizer can use only rule-based logic to select the best access path. It is possible to run a schema with a combination of cost-based and rule-based behavior by having some tables analyzed and others not analyzed.

NOTE: The ANALYZE command and the DBMS_STATS functions collect statistics about tables, clusters, and indexes, and store those statistics in the data dictionary.

A SQL statement will default to the cost-based optimizer if any one of the tables involved in the statement has been analyzed. The cost-based optimizer then makes an educated guess as to the best access path for the other tables based on information in the data dictionary.

The RDBMS kernel defaults to using the cost-based optimizer under a number of situations, including the following:

1. OPTIMIZER_MODE = CHOOSE has been specified in the INIT.ORA file, and statistics exist for at least one table involved in the statement
2. An ALTER SESSION SET OPTIMIZER_MODE = CHOOSE command has been executed, and statistics exist for at least one table involved in the statement
3. An ALTER SESSION SET OPTIMIZER_MODE = FIRST_ROWS (or ALL_ROWS) command has been executed, and statistics exist for at least one table involved in the statement
4. A statement uses the FIRST_ROWS or ALL_ROWS hint (e.g., SELECT /*+ FIRST_ROWS */. . .)
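For example, the cost-based optimizer can be requested at the session level or per statement; the following is a minimal sketch (FIRST_ROWS and the table are used purely for illustration):

ALTER SESSION SET OPTIMIZER_MODE = FIRST_ROWS;

SELECT /*+ FIRST_ROWS */ emp_name
FROM   emp
WHERE  dept_no = 12;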

ANALYZE command
The way that you analyze your tables can have a dramatic effect on your SQL performance. If your DBA forgets to analyze tables or indexes after a table re-build, the impact on performance can be devastating. If your DBA analyzes each weekend, a new threshold may be reached and Oracle may change its execution plan. The new plan will more often than not be an improvement, but will occasionally be worse.
I cannot stress this enough: if every SQL statement has already been tuned, do not re-analyze just for the sake of it. One site that I tuned had a critical SQL statement that returned data in less than a second. The DBA analyzed each weekend, believing that the execution path would continue to improve. One Monday morning, I got a phone call telling me that the response time had risen to 310 seconds.
If you do want to analyze frequently, use DBMS_STATS.EXPORT_SCHEMA_STATS to back up the existing statistics prior to re-analyzing. This gives you the ability to revert to the previous statistics if things go wrong.
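A minimal sketch of that backup (the schema name and the statistics table name are examples only):

-- One-off: create a table to hold the saved statistics
EXECUTE DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SCOTT', stattab => 'SAVED_STATS');

-- Before re-analyzing, copy the current statistics into it
EXECUTE DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'SCOTT', stattab => 'SAVED_STATS');

-- If the new statistics cause problems, put the old ones back
EXECUTE DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'SCOTT', stattab => 'SAVED_STATS');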
When you analyze, you can have Oracle look at all rows in a table (ANALYZE COMPUTE) or at a sampling of rows (ANALYZE ESTIMATE). Typically, I use ANALYZE ESTIMATE for very large tables (1,000,000 rows or more), and ANALYZE COMPUTE for small to medium tables.
I strongly recommend that you analyze FOR ALL INDEXED COLUMNS for any table that can have severe data skewness. For example, if a large percentage of rows in a table has the same value in a given column, that represents skewness. The FOR ALL INDEXED COLUMNS option makes the cost-based optimizer aware of the skewness of a column's data in addition to the cardinality (number of distinct values) of that data.
When a table is analyzed using ANALYZE, all associated indexes are analyzed as well. If an index is subsequently dropped and recreated, it must be re-analyzed. Be aware that the procedures DBMS_STATS.GATHER_SCHEMA_STATS and GATHER_TABLE_STATS analyze only tables by default, not their indexes. When using those procedures, you must specify the CASCADE=>TRUE option for indexes to be analyzed as well.
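For example, a DBMS_STATS call that also analyzes the indexes might look like the following sketch (schema, table, and sample size are illustrative):

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SCOTT',
    tabname          => 'EMP',
    estimate_percent => 5,
    method_opt       => 'FOR ALL INDEXED COLUMNS',
    cascade          => TRUE);   -- analyze the table's indexes as well
END;
/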

Following are some sample ANALYZE statements:

ANALYZE TABLE EMP ESTIMATE STATISTICS SAMPLE 5 PERCENT FOR ALL INDEXED COLUMNS;

ANALYZE INDEX EMP_NDX1 ESTIMATE STATISTICS SAMPLE 5 PERCENT FOR ALL INDEXED COLUMNS;

ANALYZE TABLE EMP COMPUTE STATISTICS FOR ALL INDEXED COLUMNS;

If you analyze a table by mistake, you can delete the statistics. For example:

ANALYZE TABLE EMP DELETE STATISTICS;

Analyzing can take an excessive amount of time if you use the COMPUTE option on large objects. We find that on almost every occasion, ANALYZE ESTIMATE 5 PERCENT on a large table forces the optimizer to make the same decisions as ANALYZE COMPUTE.

Tuning prior to releasing to production
A major dilemma that exists with respect to the cost-based optimizer (CBO) is how to tune the SQL for production prior to it being released. Most development and test databases will contain substantially fewer rows than a production database. It is therefore highly likely that the CBO will make different decisions on execution plans. Many sites can't afford the cost and inconvenience of copying the production database into a pre-production database.

Oracle8i and later provides various features to overcome this problem, including DBMS_STATS and the outline facility. Each is explained in more detail later in this book.

Inner workings of the cost-based optimizer
Unlike the rule-based optimizer, the cost-based optimizer does not have hard and fast path evaluation rules. The cost-based optimizer is flexible and can adapt to its environment. This adaptation is possible only once the necessary underlying object statistics have been refreshed (re-analyzed). What is constant is the method by which the cost-based optimizer calculates each possible execution plan and evaluates its cost (efficiency).

The cost-based optimizer's functionality can be (loosely) broken into the following steps:

Parse the SQL (check syntax, object privileges, etc.).

Generate a list of all potential execution plans.

Calculate (estimate) the cost of each execution plan using all available object statistics.

Select the execution plan with the lowest cost.

The cost-based optimizer will be used only if at least one table within a SQL statement has statistics (table statistics for unanalyzed tables are estimated). If no statistics are available for any table involved in the SQL, the RDBMS will resort to the rule-based optimizer, unless the cost-based optimizer is forced via statement-level HINTS or by an optimizer goal of ALL_ROWS or FIRST_ROWS.

To understand how the cost-based optimizer works and, ultimately, how to exploit it, we need to understand how it thinks.

Primary key and/or UNIQUE index equality
A UNIQUE index's selectivity is recognized as 100%. No other indexed access method is more precise. For this reason, a unique index is always used when available.

Non-UNIQUE index equality
For non-UNIQUE indexes, index selectivity is calculated. The cost-based optimizer makes the assumption that the table (and subsequent indexes) have uniform data spread unless you use the FOR ALL INDEXED COLUMNS option of the ANALYZE command. That option will make the cost-based optimizer aware of how the data in the indexed columns is skewed.

Range evaluation
For index range execution plans, selectivity is evaluated. This evaluation is based on a column's most recent high-value and low-value statistics. Again, the cost-based optimizer makes the assumption that the table (and subsequent indexes) have uniform data spread unless you use the FOR ALL INDEXED COLUMNS option when analyzing the table.

Range evaluation over bind variables
For index range execution plans, selectivity is guessed. Prior to Oracle9i, because bind variable values are not available at parse time (values are passed to the cursor after the execution plan has been decided), the optimizer cannot make decisions based on bind variable values. The optimizer assumes a rule of thumb of 25% selectivity for unbounded bind variable ranges (e.g., WHERE dept_no = :b1) and 50% selectivity for bounded ranges (WHERE dept_no > :b1 AND dept_no < :b2). Beginning with Oracle9i, the cost-based optimizer obtains bind variable values prior to determining an execution plan.

Histograms
Prior to the introduction of histograms in Oracle 7.3, the cost-based optimizer could not distinguish grossly uneven key data spreads.

System resource usage
By default, the cost-based optimizer assumes that you are the only person accessing the database. Oracle9i gives you the ability to store information about system resource usage, and can make much better informed decisions based on workload (read up on the DBMS_STATS.GATHER_SYSTEM_STATS package).

Current statistics are important
The cost-based optimizer can make poor execution plan choices when a table has been analyzed but its indexes have not been, or when indexes have been analyzed but not the tables.

You should not force the database to use the cost-based optimizer via inline hints when no statistics are available for any table involved in the SQL.

Using old (obsolete) statistics can be more dangerous than estimating the statistics at runtime, but keep in mind that changing statistics frequently can also blow up in your face, particularly on a mission-critical system with lots of online users. Always back up your statistics before you re-analyze by using DBMS_STATS.EXPORT_SCHEMA_STATS.

Analyzing large tables and their associated indexes with the COMPUTE option will take a long, long time, requiring lots of CPU, I/O, and temporary tablespace resources. It is often overkill. Analyzing with a consistent value, for example ESTIMATE 3 PERCENT, will usually allow the cost-based optimizer to make optimal decisions.

Combining the information provided by the selectivity rules with other database I/O information allows the cost-based optimizer to calculate the cost of an execution plan.

EXPLAIN PLAN for the cost-based optimizer
Oracle provides information on the cost of query execution via the EXPLAIN PLAN facility. EXPLAIN PLAN can be used to display the calculated execution cost(s) via some extensions to the utility. In particular, the plan table's COST column returns a value that increases or decreases to show the relative cost of a query. For example:

EXPLAIN PLAN FOR
SELECT count(*)
FROM   winners, horses
WHERE  winners.owner = horses.owner
AND    winners.horse_name LIKE 'Mr %';

COLUMN "SQL" FORMAT a56

SELECT lpad(' ', 2*level) || operation || ' '
       || options || ' ' || object_name ||
       decode(object_type, '', '',
              '(' || object_type || ')') "SQL",
       cost "Cost", cardinality "Num Rows"
FROM   plan_table
CONNECT BY prior id = parent_id
START WITH id = 0;

SQL                                          Cost   Num Rows
------------------------------------------------------------
SELECT STATEMENT                               44          1
  SORT AGGREGATE
    HASH JOIN                                  44     100469
      INDEX RANGE SCAN MG1 (NON-UNIQUE)         2       1471
      INDEX FAST FULL SCAN OWNER_PK (UNIQUE)    4       6830

By manipulating the cost-based optimizer (i.e., via inline hints, by creating/removing indexes, or by adjusting the way that indexes or tables are analyzed), we can see the differences in the execution cost as calculated by the optimizer. Use EXPLAIN PLAN to look at different variations on a query, and choose the variation with the lowest relative cost.
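In Oracle9i Release 2 and later, the same plan_table contents can also be formatted with the DBMS_XPLAN package; this is simply an alternative presentation of the plan, shown here as a sketch:

EXPLAIN PLAN FOR
SELECT count(*)
FROM   winners, horses
WHERE  winners.owner = horses.owner
AND    winners.horse_name LIKE 'Mr %';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);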

For absolute optimal performance, many sites have the majority of the tables and indexes analyzed but a small number of tables that are used in isolation are not analyzed. This is usually to force rule-based behavior on the tables that are not analyzed. However, it is important that tables that have not been analyzed are not joined with tables that have been analyzed.

Some Common Optimizer Misconceptions
Let's clear up some common misconceptions regarding the optimizers:

Oracle8i and Oracle9i don't support the rule-based optimizer
This is totally false. Certain publications mentioned this some time ago, but Oracle now assures us that this is definitely not true.

Hints can't be used with the rule-based optimizer
The large majority of hints can indeed be applied to SQL statements using the rule-based optimizer.

SQL tuned for rule will run well in cost
If you are very lucky it may, but when you transfer to cost, expect a handful of SQL statements that require tuning. However, there is not a single site that I have transferred and been unable to tune.

SQL tuned for cost will run well in rule
This is highly unlikely, unless the code was written with knowledge of the rule-based optimizer.

You can't run rule and cost together
You can run both together by setting the INIT.ORA parameter OPTIMIZER_MODE to CHOOSE, and having some tables analyzed and others not. Be careful that you don't join tables that are analyzed with tables that are not analyzed.

Which Optimizer to Use?
If you are currently using the rule-based optimizer, I strongly recommend that you transfer to the cost-based optimizer. Here is a list of the reasons why:

1. The time spent coding is reduced.
2. Coders do not need to be aware of the rules.
3. There are more features, and far more tuning tools, available for the cost-based optimizer.
4. The chances of third-party packages performing well have improved considerably. Many third-party packages are written to run on DB2, Informix, and SQL Server, as well as on Oracle. The code has not been written to suit the rule-based optimizer; it has been written in a generic fashion.
5. End users can develop tuned code without having to learn a large set of optimizer rules.
6. The cost-based optimizer has improved dramatically from one version of Oracle to the next. Development of the rule-based optimizer is stalled.
7. There is less risk from adding new indexes.
8. There are many features that are available only with the cost-based optimizer. These features include recognition of materialized views, star transformation, the use of function indexes, and so on. The number of such features is huge, and as time goes on, the gap between cost and rule will widen.
9. Oracle has introduced features such as the DBMS_STATS package and outlines to get around known problems with the inconsistency of the cost-based optimizer across environments.


Oracle Export Options


In addition to export modes, the export utility enables the user to specify runtime parameters interactively, on the command line, or defined in a parameter file (PARFILE). These options include:

· buffer – Specifies the size, in bytes, of the buffer used to fetch the rows. If 0 is specified, only one row is fetched at a time. This parameter applies only to conventional (non-direct path) exports.

· compress – When “Y”, export will mark the table to be loaded as one extent for the import utility. If “N”, the current storage options defined for the table will be used. Although this option is only implemented on import, it can only be specified on export.

· consistent – [N] Specifies the set transaction read only statement for export, ensuring data consistency. This option should be set to “Y” if activity is anticipated while the exp command is executing. If “Y” is set, confirm that there is sufficient undo segment space to avoid the export session getting the ORA-1555 "Snapshot too old" error.

· constraints – [Y] Specifies whether table constraints should be exported with table data.

· direct – [N] Determines whether to use direct or conventional path export. Direct path exports bypass the SQL command-processing layer, thereby enhancing performance.

· feedback – [0] Determines how often feedback is displayed. A value of feedback=n displays a dot for every n rows processed. The feedback applies to the export as a whole, not to individual tables. In the output below, each of the 20 dots represents 50,000 rows, totaling 1 million rows for the table.

About to export specified tables via Direct Path ...

. . exporting table TABLE_WITH_ONE_MILLION_ROWS

....................

1000000 rows exported


· file – The name of the export file. Multiple files can be listed, separated by commas. When export fills the filesize, it will begin writing to the next file in the list.

· filesize – The maximum file size, specified in bytes.

· flashback_scn – The system change number (SCN) that export uses to enable flashback.

· flashback_time – Export will discover the SCN that is closest to the specified time. This SCN is used to enable flashback.

· full – The entire database is exported.

· grants – [Y] Specifies object grants to export.

· help – Shows command line options for export.

· indexes – [Y] Determines whether index definitions are exported. The index data is never exported.

· log – The filename used by export to write messages. The same messages that appear on the screen are written to this file:


Connected to: Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
About to export specified tables via Direct Path ...

. . exporting table TABLE_WITH_ONE_MILLION_ROWS 1000000 rows exported

Export terminated successfully without warnings.



· object_consistent – [N] Specifies whether export uses SET TRANSACTION READ ONLY to ensure that the data being exported is consistent.

· owner – Only the owner’s objects will be exported.

· parfile – The name of the file that contains the export parameter options. This file can be used instead of specifying all the options on the command line for each export.

· query – Allows a subset of rows from a table to be exported, based on a SQL where clause (discussed later in this chapter).

· recordlength – Specifies the length of the file record in bytes. This parameter affects the amount of data that accumulates before it is written to disk. If not specified, this parameter defaults to the value specific to that platform. The highest value is 64KB.

· resumable – [N] Enables and disables resumable space allocation. When “Y”, the parameters resumable_name and resumable_timeout are utilized.

· resumable_name – User defined string that helps identify a resumable statement that has been suspended. This parameter is ignored unless resumable = Y.

· resumable_timeout – [7200 seconds] The time period in which an export error must be fixed. This parameter is ignored unless resumable = Y.

· rows – [Y] Indicates whether or not the table rows should be exported.

· statistics – [ESTIMATE] Indicates the level of statistics generated when the data is imported. Other options include COMPUTE and NONE.

· tables – Indicates that the type of export is table-mode and lists the tables to be exported. Table partitions and subpartitions can also be specified.

· tablespaces – Indicates that the type of export is tablespace-mode, in which all tables assigned to the listed tablespaces will be exported. This option requires the EXP_FULL_DATABASE role.

· transport_tablespace – [N] Enables the export of metadata needed for transportable tablespaces.

· triggers – [Y] Indicates whether triggers defined on export tables will also be exported.

· tts_full_check – [FALSE] When TRUE, export will verify that when creating a transportable tablespace, a consistent set of objects is exported.

· userid – Specifies the userid/password of the user performing the export.

· volsize – Specifies the maximum number of bytes in an export file on each tape volume.

The functionality of the export utility has been significantly enhanced in recent versions of Oracle.
To check which options are available in any release use:
exp help=y
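For example, the runtime parameters described above can be collected in a parameter file instead of being typed on the command line for every run. A hypothetical parameter file, exp_emp.par, might contain:

userid=scott/tiger
tables=(emp)
direct=y
feedback=50000
file=exp_emp.dmp
log=exp_emp.log

and would be invoked with:

exp parfile=exp_emp.par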

Overview of Oracle JInitiator 1.1.7.11

Oracle JInitiator enables end users to run Oracle Developer Server applications directly within Netscape Navigator or Internet Explorer on the Windows 95 and Windows NT 4.0 platforms. Implemented as a plug-in (Netscape Navigator) or ActiveX object (Microsoft Internet Explorer), Oracle JInitiator allows you to specify the use of the Oracle certified Java Virtual Machine (JVM) on web clients instead of relying on the default JVM provided by the browser.

Oracle JInitiator delivers several key benefits to enterprises deploying Oracle Developer Server applications:

Full JDK 1.1 support: Oracle JInitiator allows enterprise developers to develop and deploy Oracle Developer applications taking full advantage of JDK 1.1 features and functionality.

Full Java Compatible support: Oracle JInitiator utilizes the latest release of the Oracle certified JDK (OJDK), which is fully compliant with the Java Compatibility Kit (JCK) test suite.

Future-ready JDK architecture: Oracle JInitiator features an architecture that makes it easy for Oracle to bring new JDK features, functionality, and bug fixes (including those contained in JDK 1.2 and the high-performance Java HotSpot virtual machine) to Windows desktops much more quickly than has been previously possible.

Easy install: Oracle JInitiator utilizes the plug-in/ActiveX installation mechanism provided by the browsers to install and update itself when necessary. Oracle JInitiator requires minimal intervention on the end user's behalf.
Supports several MIME types: Oracle JInitiator supports the following MIME types:
application/x-jinit-applet
application/x-jinit-applet;version=1.1.5.3
application/x-jinit-applet;version=1.1.5.21.1
application/x-jinit-applet;version=1.1.7.10
application/x-jinit-applet;version=1.1.7.11
JAR file caching: Oracle JInitiator caches the downloaded JAR files on the client machine's hard disk. Cache allocations can be modified on the client via the Oracle JInitiator control panel.

Supported configurations
Oracle JInitiator supports Internet Explorer 4.0, Navigator 4.0**, and Navigator 4.5 on the Windows 95* and Windows NT* platforms.
* Oracle JInitiator 1.1.7.11 does not currently support Microsoft's Active Desktop.
** Oracle JInitiator 1.1.7.11 does not support Netscape Navigator 4.04 with the JDK 1.1 Smart Update, due to a bug in the Netscape Smart Update. Please use Netscape Navigator 4.04 without the Smart Update, or use other versions of Netscape Navigator.
About using Oracle JInitiator
Oracle JInitiator does not replace or modify the default JVM provided by the browser. Rather, it provides an alternative JVM via the standard plug-in architecture of the browser, which allows web application developers to specify the use of the Oracle certified JVM instead. Oracle JInitiator is automatically downloaded to a client machine from the application server the first time the client web browser encounters an HTML file that specifies the use of Oracle JInitiator. The installation and updating of Oracle JInitiator is performed using the standard plug-in mechanism provided by the browser. The Oracle JInitiator installation performs the required steps to run Oracle Developer applications as trusted applets in the Oracle JInitiator environment.

About using with Netscape Navigator
Oracle JInitiator leverages the Netscape Navigator plug-in architecture in order to run inside the browser, in the same way other plug-ins such as QuickTime movies or Shockwave animations operate. Using the Netscape HTML (embed) tag, web application developers can specify plug-ins to be run as part of a web page. This is what makes it possible for Oracle JInitiator to run inside the web browser with minimal user intervention. When Navigator first encounters an HTML page that specifies the use of Oracle JInitiator, users will see a "Plug-in Not Loaded" dialog on the HTML page, which directs the user to the Oracle JInitiator download page. Users can then download the version of Oracle JInitiator for their operating system and install it. Once Oracle JInitiator is installed, users must shut down Navigator, restart it, and then revisit the original HTML page. Oracle JInitiator will then run and use the parameters in the (embed) tag to render the applet. The next time Navigator encounters a web page that specifies Oracle JInitiator, it will seamlessly load and run the plug-in from the local disk, requiring no user intervention.

About using with Microsoft Internet Explorer
Oracle JInitiator leverages the Microsoft Internet Explorer extension mechanism for downloading and caching ActiveX controls and COM components. Using the HTML (object) tag, web application developers can specify ActiveX controls or COM components, including Oracle JInitiator, to be run as part of a web page. When Internet Explorer first encounters an HTML file that has been modified to specify the use of Oracle JInitiator, it will ask the user if it is okay to download an ActiveX control signed with a VeriSign digital signature by Oracle Corporation. If the user clicks "Yes," Internet Explorer will begin downloading Oracle JInitiator. Oracle JInitiator will then run and use its parameters in the (object) tag to render the applet. The next time Internet Explorer encounters a web page modified to support Oracle JInitiator, it will seamlessly load and run Oracle JInitiator from the local disk, requiring no user intervention.

Oracle Java Runtime Environment version
Oracle JInitiator includes a version of the Oracle Java Runtime Environment (JRE), which is downloaded and installed together with Oracle JInitiator. The version of the Oracle JRE in this release of Oracle JInitiator is Oracle JRE 1.1.7.11o. This is based on Oracle JDK 1.1.7.11o, which is in turn based on JavaSoft JDK 1.1.7.

Desktop Considerations
Java applets are particularly CPU intensive since they are run locally on the user's PC; a faster CPU has a more noticeable effect on performance, all other things being equal. JInitiator 1.1.8.7 and certain other Oracle tools are incompatible with Pentium 4 processors (a minimum of 1.1.8.10 is required). Automatic download and installation of JInitiator requires temporary changes to IE security settings. Periodic "cleaning" of the cache and other temporary files is necessary to reduce browser/memory issues while accessing the Oracle Applications (see Clean Cache in the Free Tools section at http://www.solutionbeacon.com).

Display Iconic Images In Forms Using Oracle AS


PURPOSE
=======

This document describes how to implement gif/jpeg icons on buttons for Forms using Oracle Application Server. This is done by defining a virtual directory containing the icons and directing the Forms Runtime to use that directory.

SCOPE & APPLICATION
====================
Audience: Forms Developers and Application Architects responsible for the deployment of Forms applications over the web. This document ONLY describes the steps required for deploying icons in the Forms Services/Runtime environment within Oracle 9iAS Release 2. For information on deploying icons for Forms 9i/10g Builder/Runtime as part of 9i/10g Developer Suite, refer to:
Note 203846.1 How to Display Iconic Images Using Forms 9.0.x/ 10.x?
When running an Oracle Forms 9i/10g application, the icon files used must be in a web-enabled format such as JPG or GIF (GIF is the default format). This is unlike older versions of Forms running in client-server mode, where the file format is .ico.
Icon image files can either be retrieved by Forms as individual files on the filesystem or from a Java Archive (JAR file). If an application uses lots of icon images it is recommended that they are stored in a JAR file to reduce the number of HTTP round trips.
Reference: Note 232413.1 How to Deploy Iconic Images via a JAR File in Forms 9i/10g?
The example below details the steps to configure iconic images as individual gif or jpeg files. This is the recommended first approach for initial testing and understanding.
Example:
========
This document has been written for use with both Windows and UNIX operating systems: Windows will use the "\" character to delimit directories and UNIX uses the "/" character. Any other differences are noted in the document.
1) Determine the physical location of the icons on your web server. For this example, assume that the icons are stored in D:\Myfiles\icons.
2) Create the virtual directory in the forms90.conf file that points to the location of your Forms 9i/10g icons. This file is located in:
$ORACLE_HOME\forms90\server\forms90.conf
To define a virtual directory forms90/icons, use the following syntax:
# Virtual path for ICONS (used to show icons in a form)
AliasMatch ^/forms90/icons/(..*) "D:\Myfiles/icons/$1"
Below is an example of the forms90.conf after it has been modified to include the /forms90/icons virtual directory.
# Virtual path mapping for Forms Java jar and class files (codebase)
AliasMatch ^/forms90/java/(..*) "E:\IASR2\APP/forms90/java/$1"
# Virtual path for JInitiator downloadable executable and download page
AliasMatch ^/forms90/jinitiator/(..*) "E:\IASR2\APP/jinit/$1"
# Virtual path for runform.htm (used to run a form for testing purposes)
AliasMatch ^/forms90/html/(..*) "E:\IASR2\APP/tools/web90/html/$1"
# Virtual path for ICONS (used to show icons in a form)
AliasMatch ^/forms90/icons/(..*) "D:\Myfiles/icons/$1"
(For 10g, use "forms" instead of "forms90".)
3) Direct Forms to use the /forms90/icons or /forms/icons virtual directory when running on the web by modifying the Registry.dat file in the $ORACLE_HOME\forms90\java\oracle\forms\registry or $ORACLE_HOME\forms\java\oracle\forms\registry directory. Note that for UNIX, both the path name and the file name are case sensitive; you must specify "Registry.dat". Modify the default.icons.iconpath entry as follows:
default.icons.iconpath=http://<server:port>/forms90/icons/
or
default.icons.iconpath=http://<server:port>/forms/icons/

If the URL used to bring up forms on the web is: http://abc.oracle.com:7778/forms90/f90servlet or http://abc.oracle.com:7778/forms/frmservlet

Then the entries required for Registry.dat will be:
default.icons.iconpath=http://abc.oracle.com:7778/forms90/icons
or
default.icons.iconpath=http://abc.oracle.com:7778/forms/icons
default.icons.iconextension=gif
4) Now use the EM Website to:
a) Restart the OC4J-BI-Forms instance.
b) Restart the BI-FORMS HTTP Server instance.
5) Verify that the virtual directory has been defined properly and that the icons can be viewed in a browser. Assuming that exeqry.gif exists in the D:\Myfiles\icons directory, the following URL should show the icon:
http://abc.oracle.com:7778/forms90/icons/exeqry.gif or http://abc.oracle.com:7778/forms/icons/exeqry.gif

6) Check the form:
http://abc.oracle.com:7778/forms90/f90servlet?form=D:\Testcases\F90\ICON_CHECK_IASR2.fmx or http://abc.oracle.com:7778/forms/frmservlet?form=D:\Testcases\F10g\ICON_CHECK_IASR2.fmx

CHECKLISTS
=============
Icons do not appear in the browser using the URL:
o Verify that you are using the correct machine name and port for the OC4J_BI_Forms object.
o Check the virtual directory name to make sure that there are no typographical errors.
o Check for typographical errors in the forms90.conf or forms.conf entry for the virtual directory.
o Check the name of the icon on the file system. If the file is saved as Exeqry.gif, then it will not be found using the URL http://abc.oracle.com:7778/forms90/icons/exeqry.gif.
o Close all your browser sessions and open a new browser window, then execute the URL to bring up the form from the Form Builder runtime or from the browser.
(For 10g, use "forms" instead of "forms90".)
USEFUL REFERENCES
==================
[PDF] Oracle9iAS Forms Services Deployment Guide (P/N B10170-01), Configuring Oracle9iAS Forms Services -> Deploying Icons and Images Used by Oracle9iAS Forms Services
http://otn.oracle.com/products/forms/pdf/B10170_01.pdf
[PDF] Oracle9i Forms Services - How to Deploy Icons
http://otn.oracle.com/products/forms/pdf/webicons.pdf