Sunday, March 14, 2010

How to find whether an index is fragmented in Oracle Database?

First, analyze the index:

SQL>analyze index INDEX_NAME validate structure;

Then query the INDEX_STATS view and apply these rules of thumb:
1. If del_lf_rows/lf_rows is greater than 0.2 (more than 20% of leaf rows deleted), the index should be rebuilt.
2. If height is 4 or more, the index should be rebuilt.
3. If lf_rows is lower than lf_blks, the index should be rebuilt.

SQL> column status format a10
SQL> select trunc((del_lf_rows/lf_rows)*100,2)||'%' "status" from index_stats;
status
----------
21.83%
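
If you want to check all three criteria in one pass, here is a minimal sketch (remember that INDEX_STATS holds a row only for the index most recently analyzed in your session; the decode guards against dividing by zero on an empty index):

select name,
       height,
       lf_rows,
       lf_blks,
       del_lf_rows,
       decode(lf_rows, 0, 0,
              trunc((del_lf_rows/lf_rows)*100, 2)) pct_deleted
from index_stats;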

How to remove index fragmentation?

There are two ways to remove fragmentation:
1. index coalesce
2. index rebuild

For the difference between coalesce and rebuild, please go through the link below for more details:
http://download.oracle.com/docs/cd/B14117_01/server.101/b10739/indexes.htm#g1007548

SQL> alter index IDX_OBJ_ID coalesce;

SQL> alter index IDX_OBJ_ID rebuild;

SQL> alter index IDX_OBJ_ID rebuild online;

Note: If DML statements are running against the base table, the ONLINE keyword must be used when rebuilding the index.

SQL> analyze index idx_obj_id validate structure;
Index analyzed.

SQL> select trunc((del_lf_rows/lf_rows)*100,2)||'%' "status" from index_stats;
status
-------
40.85%

SQL> alter index IDX_OBJ_ID rebuild online;
Index altered.

SQL> analyze index idx_obj_id validate structure;
Index analyzed.

SQL> select trunc((del_lf_rows/lf_rows)*100,2)||'%' "status" from index_stats;
status
--------
0%

Original Export and Import Versus Data Pump Export and Import in Oracle Database

It is important to understand that many of the concepts behind original Export (exp) and Import (imp) do not apply to Data Pump Export (expdp) and Data Pump Import (impdp). In particular:

* Data Pump Export and Import operate on a group of files called a dump file set rather than on a single sequential dump file.

* Data Pump Export and Import access files on the server rather than on the client. This results in improved performance. It also means that directory objects are required when you specify file locations (see the sketch after this list).

* The Data Pump Export and Import modes operate symmetrically, whereas original export and import did not always exhibit this behavior.

For example, suppose you perform an export with FULL=Y, followed by an import using SCHEMAS=HR. This will produce the same results as if you performed an export with SCHEMAS=HR, followed by an import with FULL=Y.

* Data Pump Export and Import use parallel execution rather than a single stream of execution, for improved performance. This means that the order of data within dump file sets and the information in the log files is more variable.

* Data Pump Export and Import represent metadata in the dump file set as XML documents rather than as DDL commands. This provides improved flexibility for transforming the metadata at import time.

* Data Pump Export and Import are self-tuning utilities. Tuning parameters that were used in original Export and Import, such as BUFFER and RECORDLENGTH, are neither required nor supported by Data Pump Export and Import.

* At import time there is no option to perform interim commits during the restoration of a partition. This was provided by the COMMIT parameter in original Import.

* There is no option to merge extents when you re-create tables. In original Import, this was provided by the COMPRESS parameter. Instead, extents are reallocated according to storage parameters for the target table.

* Sequential media, such as tapes and pipes, are not supported.

* The Data Pump method for moving data between different database versions is different than the method used by original Export/Import. With original Export, you had to run an older version of Export (exp) to produce a dump file that was compatible with an older database version. With Data Pump, you can use the current Export (expdp) version and simply use the VERSION parameter to specify the target database version.

* When you are importing data into an existing table using either APPEND or TRUNCATE, if any row violates an active constraint, the load is discontinued and no data is loaded. This is different from original Import, which logs any rows that are in violation and continues with the load.

* Data Pump Export and Import consume more undo tablespace than original Export and Import. This is due to additional metadata queries during export and some relatively long-running master table queries during import. As a result, for databases with large amounts of metadata, you may receive an ORA-01555: snapshot too old error. To avoid this, consider adding additional undo tablespace or increasing the value of the UNDO_RETENTION initialization parameter for the database.

* If a table has compression enabled, Data Pump Import attempts to compress the data being loaded. By contrast, the original Import utility loaded data in such a way that even if a table had compression enabled, the data was not compressed upon import.

* Data Pump supports character set conversion for both direct path and external tables. Most of the restrictions that exist for character set conversions in the original Import utility do not apply to Data Pump. The one case in which character set conversions are not supported under the Data Pump is when using transportable tablespaces.
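
As an illustration of the directory object and VERSION points above, here is a minimal sketch; the directory name dpump_dir, the path, and the HR schema are assumptions for the example:

SQL> CREATE DIRECTORY dpump_dir AS '/u01/app/oracle/dpump';
SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir TO hr;

$ expdp hr/hr SCHEMAS=hr DIRECTORY=dpump_dir DUMPFILE=hr.dmp LOGFILE=hr_exp.log VERSION=10.2

The VERSION=10.2 parameter writes the dump file set so that a 10.2 database can import it, as described above.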

How to check Tablespace Fragmentation in Oracle Database?

Oracle Tablespace Fragmentation

Honeycomb fragmentation

This is easy to fix: all that needs to be done is to combine adjacent free extents into one by issuing a coalesce statement:

alter tablespace USERS coalesce;

Bubble fragmentation

This is more difficult to handle. Of course, the best course of action is to prevent it in the first place, and as discussed earlier, the best weapon for this is locally-managed tablespaces. It may sound too simple, but in reality, implementing these storage structures in your database can just about remove the need to perform full tablespace reorganizations.

However, what do you do if you are in a situation where you have many databases that were set up with dictionary-managed tablespaces? You have two options, at least if you are running Oracle 8i (the version that gave birth to locally-managed tablespaces). First, you can create new locally-managed tablespaces in your database and then perform one last, full tablespace reorganization of all the database objects into the new tablespaces.

Needless to say, this can be a difficult task if you have large databases and no third-party reorganization tool. However, it will be worth the effort as you will likely never have to reorganize your tablespaces again, and your objects should never again encounter a maximum extent limit.

The second option, if you are using Oracle 8.1.6 or higher, is to convert your current dictionary-managed tablespaces to locally-managed tablespaces. Buried in the Oracle documentation is a procedure for converting a tablespace's extent management from dictionary to local, or vice versa.

Convert DMT to LMT:

SQL> exec sys.dbms_space_admin.tablespace_migrate_to_local('USERS');

If you are afraid of how long this procedure might take on large tablespaces, do not be. It actually runs very fast.

Convert LMT to DMT:

SQL> exec sys.dbms_space_admin.tablespace_migrate_from_local('USERS');
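
To find out which tablespaces are still dictionary-managed, and are therefore candidates for migration, a quick check against DBA_TABLESPACES:

select tablespace_name, extent_management, allocation_type
from dba_tablespaces
where extent_management = 'DICTIONARY';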

Detecting Oracle Tablespace Fragmentation

How can you tell if your tablespaces are suffering from fragmentation problems, and how can you identify the type of fragmentation? Detection and diagnosis are not hard at all. To determine whether your tablespaces have a fragmentation problem, you can use the tsfrag.sql script:

select
   tablespace_name,
   count(*) free_chunks,
   decode(
      round((max(bytes) / 1024000),2),
      null, 0,
      round((max(bytes) / 1024000),2)) largest_chunk,
   nvl(round(sqrt(max(blocks)/sum(blocks)) *
       (100/sqrt(sqrt(count(blocks)))), 2), 0) fragmentation_index
from
   sys.dba_free_space
group by
   tablespace_name
order by
   2 desc, 1;


When you examine the script output, you need to home in on a couple of columns in particular. First, notice the fragmentation_index column. This gives your tablespace an overall ranking with respect to how badly it is actually fragmented. A 100% score indicates no fragmentation at all; lesser scores verify the presence of fragmentation.

The free_chunks count column tells you how many segments of free space are scattered throughout the tablespace. One thing to keep in mind is that tablespaces with multiple datafiles will always show a free chunk count greater than one, because each datafile will likely have at least one pocket of free space.

What is a Latch in Oracle Database?

A latch is a low-level locking mechanism provided by Oracle. In this article we will look at the latch concept and how to tune the server to reduce latch contention.

What is a Latch ?

A latch is a low-level serialization mechanism, released as quickly as it is acquired, that protects shared data structures. A process acquires and holds the latch for as long as the data structure is in use. The basic idea is to prevent concurrent access to shared data structures in the SGA. If a process dies without releasing the latch, the PMON process will clean up the lock on the data structure and release the latch.

If a process is not able to obtain a latch, it must wait for the latch to be freed up by the process holding it. This causes additional spinning (checking for availability at fixed intervals of time), thereby causing extra load on the CPU. The process will spin until the latch becomes available. A DBA has to monitor the latches for contention and make sure that CPU cycles are not being burnt on process spinning.

The Tunable Trio (the 3 latches I monitor)
You cannot monitor and tune all the available internal latches in Oracle. The few latches I monitor are the Redo Allocation latch, the Redo Copy latch and the Row Cache Objects latch.
• Redo Allocation Latch controls the allocation of space for redo entries in the redo log buffer.
• Redo Copy Latch is used when the size of a redo entry is greater than the value of LOG_SMALL_ENTRY_MAX_SIZE.
• Row Cache Latch is used when a process attempts to access the data dictionary information from the cache.

select c.name, a.gets, a.misses, a.sleeps,
       a.immediate_gets, a.immediate_misses, b.pid
from v$latch a, v$latchholder b, v$latchname c
where a.addr = b.laddr(+)
and a.latch# = c.latch#
and (c.name like 'redo%' or c.name like 'row%')
order by a.latch#;

When you execute the above script, the output will display the name of the latch and various other statistics about the redo and row cache latches. The gets, misses and sleeps columns reflect "willing to wait" requests, i.e., if the latch is busy, the requesting process waits a short time and then requests the latch again. The immediate_gets and immediate_misses columns represent the number of successful and unsuccessful immediate requests for the latches.

To make the database perform well, try to adjust parameters so that the miss ratio [(misses/gets)*100] for each of these latches stays at a minimum (below 1%).
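
A minimal sketch of that miss-ratio calculation for the latches above (the decode guards against dividing by zero on idle latches):

select c.name,
       a.gets,
       a.misses,
       round(decode(a.gets, 0, 0, (a.misses/a.gets)*100), 3) miss_ratio_pct
from v$latch a, v$latchname c
where a.latch# = c.latch#
and (c.name like 'redo%' or c.name like 'row%')
order by a.latch#;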

Tuning Process (what to do if I have contention)
In case of redo allocation latch contention, try decreasing log_small_entry_max_size so that the redo copy latch is used more.

If the query shows heavy redo copy latch contention, set the value of log_simultaneous_copies to the number of CPUs. If it still does not help and the contention on the allocation latch is low, then try increasing log_small_entry_max_size to shift some load back to the allocation latch.

In order to decrease the contention on the row cache latch, increase the shared_pool_size since the data dictionary is a part of the shared pool.

When tuning Oracle, one has to remember that there are no absolute values for tuning; the best balance between the values has to be obtained by incremental tuning.

How to check locks and kill sessions in Oracle Database?

The below script will tell you which session is holding a lock:

select
   c.owner,
   c.object_name,
   c.object_type,
   b.sid,
   b.serial#,
   b.status,
   b.osuser,
   b.machine
from
   v$locked_object a,
   v$session b,
   dba_objects c
where
   b.sid = a.session_id
and
   a.object_id = c.object_id
/


The below command is used to kill a session in Oracle:

ALTER SYSTEM KILL SESSION 'SID,Serial#' IMMEDIATE ;

ALTER SYSTEM KILL SESSION '1626,19894' IMMEDIATE ;

ALTER SYSTEM DISCONNECT SESSION '13, 8' IMMEDIATE ;
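
If several sessions are holding locks, you can generate the kill commands instead of typing them by hand; a minimal sketch built on the same views as the lock query above (review the output before running any of it):

select 'ALTER SYSTEM KILL SESSION '''||b.sid||','||b.serial#||''' IMMEDIATE;' kill_command
from v$locked_object a, v$session b
where b.sid = a.session_id;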

How to compile all the invalid objects in Oracle Database?

Below is a script that generates a dynamic script to compile all the invalid objects in the Oracle Database.

set echo off
set feed off
set sqlp ''
set head off
set pages 0
spool spool_output.sql
select 'ALTER '||
DECODE(object_type, 'VIEW', 'VIEW', 'PACKAGE') || ' '||
LOWER(object_name)|| ' COMPILE '||
DECODE(object_type, 'PACKAGE', 'SPECIFICATION', 'PACKAGE BODY', 'BODY', '')|| ' ; ' from user_objects
where status = 'INVALID'
order by DECODE(object_type, 'VIEW', 1, 'PACKAGE', 2, 'PACKAGE BODY', 3), object_name ;
spool off
set echo on
set feed on
set sqlp 'SQL>'
set head on

A simpler one-liner that covers other object types as well (note the decode that generates the valid "alter package ... compile body" syntax for package bodies):

select 'alter '||decode(object_type, 'PACKAGE BODY', 'PACKAGE', object_type)||
       ' '||object_name||' compile'||
       decode(object_type, 'PACKAGE BODY', ' body', '')||';'
from user_objects
where status = 'INVALID'
order by object_type, object_name;
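
If you use the spooled version, remember to execute the generated file afterwards (@spool_output.sql). On 10g and later you can also let Oracle recompile everything with the stock utlrp.sql script or the UTL_RECOMP package:

SQL> @?/rdbms/admin/utlrp.sql

SQL> exec utl_recomp.recomp_serial();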

How to back up the control file to trace in Oracle Database?

Below is a script to back up the control file to trace on Unix and copy the resulting trace file to a user-defined location.

===========================================================
#------------------------------------------------------------
# backup controlfile
#------------------------------------------------------------
#------------------------------------------------------------
# find the user_dump_dest directory through a sqlplus call
# and assign it to an environment variable
#------------------------------------------------------------
ORA_DUMP=`${ORACLE_HOME}/bin/sqlplus -s << EOF
/ as sysdba
set heading off
set feedback off
SELECT value
FROM v\\$parameter
WHERE name='user_dump_dest';
exit
EOF`
#------------------------------------------------------------
# create backup controlfile
#------------------------------------------------------------
${ORACLE_HOME}/bin/sqlplus -s << EOF > /dev/null 2>&1
/ as sysdba
set heading off
set feedback off
ALTER DATABASE BACKUP CONTROLFILE TO trace;
exit
EOF
#------------------------------------------------------------
# find the trace file that the backup controlfile is in
# this should be the last trace file generated
# validate it is a controlfile and copy to backup_control.sql
#------------------------------------------------------------
DATETIME="`date '+%d%m%y'`"
CONTROL=`ls -t $ORA_DUMP/*.trc | head -1`
if [ ! -z "$CONTROL" ]; then
grep 'CONTROL' ${CONTROL} 1> /dev/null
if test $? -eq 0; then
cp ${CONTROL} /home/oracle/backup_control_file_$DATETIME.sql
fi
fi
#end
=========================================================
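
Note that the BACKUP CONTROLFILE command can also write straight to a named file, which avoids the trace-file hunting done by the script above (the target path here is just an example):

SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE AS '/home/oracle/backup_control_file.sql';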

How to check database size in Oracle Database?

Below is a script to check the total size of the Oracle Database (datafiles + tempfiles + online redo logs; control files are not included):

select (a.data_size+b.temp_size+c.redo_size)/(1024*1024*1024) "total_size GB"
from ( select sum(bytes) data_size
from dba_data_files ) a,
( select nvl(sum(bytes),0) temp_size
from dba_temp_files ) b,
( select sum(bytes) redo_size
from sys.v_$log ) c;
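
If you also want the control files counted, a variant sketch (assuming 10g or later, where v$controlfile exposes the BLOCK_SIZE and FILE_SIZE_BLKS columns):

select (a.data_size+b.temp_size+c.redo_size+d.ctl_size)/(1024*1024*1024) "total_size GB"
from ( select sum(bytes) data_size
       from dba_data_files ) a,
     ( select nvl(sum(bytes),0) temp_size
       from dba_temp_files ) b,
     ( select sum(bytes) redo_size
       from sys.v_$log ) c,
     ( select sum(block_size*file_size_blks) ctl_size
       from v$controlfile ) d;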

How to check datafile usage in Oracle Database?

Below is a script which will list, datafile-wise, the allocated size, used size and free size.
Size details are displayed in MB (megabytes).

SELECT SUBSTR (df.NAME, 1, 40) file_name, df.bytes / 1024 / 1024 "Allocated Size(MB)",
((df.bytes / 1024 / 1024) - NVL (SUM (dfs.bytes) / 1024 / 1024, 0)) "Used Size (MB)",
NVL (SUM (dfs.bytes) / 1024 / 1024, 0) "Free Size(MB)"
FROM v$datafile df, dba_free_space dfs
WHERE df.file# = dfs.file_id(+)
GROUP BY dfs.file_id, df.NAME, df.file#, df.bytes
ORDER BY file_name;

How to check the tablespace usage in Oracle Database?

Below is a script which will list, tablespace-wise, the free space and used space as well as the total space.
The size details are displayed in MB.

SELECT Total.name "Tablespace Name",
nvl(Free_space, 0) "Free Size(MB)",
nvl(total_space-Free_space, 0) "Used Size(MB)",
total_space "Total Size(MB)"
FROM
(select tablespace_name, sum(bytes/1024/1024) free_space
from sys.dba_free_space
group by tablespace_name
) Free,
(select b.name, sum(bytes/1024/1024) total_space
from sys.v_$datafile a, sys.v_$tablespace B
where a.ts# = b.ts#
group by b.name
) Total
WHERE Free.Tablespace_name(+) = Total.name
ORDER BY Total.name;

How to check which segments cannot extend, causing a tablespace full error?

Below is the script to check which segments cannot allocate their next extent, causing a "tablespace full" error:

select a.owner||'.'||a.segment_name "Segment Name",
a.segment_type "Segment Type",
a.bytes/1024/1024 "Size(MB)",
a.next_extent/1024/1024 "Next Extent",
a.tablespace_name "Tablespace Name"
from sys.dba_segments a
where a.tablespace_name not like 'T%MP%' -- Exclude TEMP tablespaces
and next_extent * 1 > ( -- Cannot extend 1x, can change to 2x...
select max(b.bytes)
from dba_free_space b
where a.tablespace_name = b.tablespace_name)
order by 3 desc;
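
To see the largest contiguous free chunk each tablespace has left (the value the query above compares next_extent against), a quick complementary check:

select tablespace_name,
       max(bytes)/1024/1024 "Largest Free Chunk(MB)",
       sum(bytes)/1024/1024 "Total Free(MB)"
from dba_free_space
group by tablespace_name
order by 1;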

How to check the locked objects in the Oracle Database?

Below is the script to check the locked objects in the Oracle Database

SELECT oracle_username USERNAME,
owner OBJECT_OWNER,
object_name, object_type, s.osuser,
s.SID SID,
s.SERIAL# SERIAL,
DECODE(l.block,
0, 'Not Blocking',
1, 'Blocking',
2, 'Global') STATUS,
DECODE(v.locked_mode,
0, 'None',
1, 'Null',
2, 'Row-S (SS)',
3, 'Row-X (SX)',
4, 'Share',
5, 'S/Row-X (SSX)',
6, 'Exclusive', TO_CHAR(v.locked_mode)
) MODE_HELD
FROM gv$locked_object v, dba_objects d,
gv$lock l, gv$session s
WHERE v.object_id = d.object_id
AND (v.object_id = l.id1)
and v.session_id = s.sid
ORDER BY oracle_username, session_id;

Note: Run the above select statement as the SYS user.

Tuesday, March 9, 2010

How to change the SSO Login page to a Custom Login Page in Oracle Application Server (OAS)?

To change the SSO Login page, follow the steps below:

1) The login page is a JSP file, "login.jsp", which can be found in $INFRA_HOME/j2ee/OC4J_SECURITY/applications/sso/web/jsp. You can create your own login file, say "my_login_page.jsp", and place it in that directory.

2) Then go to $INFRA_HOME/sso/conf and modify the "policy.properties" file (back up the original one first). You have to find the line:
loginPageUrl = /sso/jsp/login.jsp

Replace "login.jsp" with your new login file name "my_login_page.jsp"
loginPageUrl = /sso/jsp/my_login_page.jsp

3) Restart OC4J_SECURITY and HTTP_Server.
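
On a typical 10g infrastructure install this can be done with opmnctl; a sketch (paths and component names may differ on your install):

$INFRA_HOME/opmn/bin/opmnctl restartproc process-type=OC4J_SECURITY
$INFRA_HOME/opmn/bin/opmnctl restartproc ias-component=HTTP_Server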

You can, however, customize further; check Chapter 12, "Creating Deployment-Specific Pages", in the SSO Administration Guide for more options: http://download.oracle.com/docs/cd/B14099_19/idmanage.1012/b14078/custom.htm#i1015535