Saturday, July 24, 2010

How to change Oracle Forms and Reports standalone configuration to a custom directory???

Recently I ran into an issue configuring Oracle Forms and Reports Standalone to point to a custom directory.

Below is the solution.

Go to the httpd.conf file, which is located in the ORACLE_HOME/Apache/Apache/conf directory.

Before changing anything in httpd.conf, take a backup of the file.

Change the DocumentRoot value to your directory as shown below:

DocumentRoot "your custom directory"
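If you prefer to script the edit, it can be sketched with sed. Everything below is illustrative: the scratch file location and the custom directory (/u01/app/custom_docs) are made-up stand-ins, not values from this post:

```shell
# Work on a scratch copy standing in for ORACLE_HOME/Apache/Apache/conf/httpd.conf
conf=/tmp/httpd.conf.demo
printf 'DocumentRoot "/u01/app/oracle/Apache/Apache/htdocs"\n' > "$conf"

# Take a backup first, then point DocumentRoot at the custom directory
cp "$conf" "$conf.bak"
sed -i 's|^DocumentRoot .*|DocumentRoot "/u01/app/custom_docs"|' "$conf"

grep '^DocumentRoot' "$conf"
```

Restart the HTTP Server afterwards so it picks up the new DocumentRoot.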

Hope this helps!!!

How to change the logo on the connection page of discoverer portal

Below is a way to change the logo on the connection page of the Discoverer portal.

Go to
E:\Oracle\10gAS_BI\j2ee\OC4J_BI_Forms\applications\discoverer\discoverer\images

Rename logo_plus.gif to logo_plus_bak.gif, then place your logo in the same directory and rename it to logo_plus.gif.

Now log in to the Discoverer portal again and you will see your logo there.
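The rename steps can be sketched in the shell. The directory here is a scratch stand-in for the discoverer images path, and the file contents are placeholders:

```shell
img=/tmp/discoverer_images_demo   # stand-in for .../discoverer/images
mkdir -p "$img"
printf 'ORIGINAL' > "$img/logo_plus.gif"   # stands in for the shipped logo
printf 'CUSTOM'   > "$img/my_logo.gif"     # stands in for your own logo

mv "$img/logo_plus.gif" "$img/logo_plus_bak.gif"   # keep the original as a backup
mv "$img/my_logo.gif"   "$img/logo_plus.gif"       # your logo takes its place
ls "$img"
```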

Enjoy!!!

How to extract the DB Links DDL with the encrypted password???

Below is a simple script to extract the DB Links DDL with the encrypted password.

SET LONG 9000 -- to print the complete string

SELECT DBMS_METADATA.GET_DDL('DB_LINK', a.db_link, a.owner) FROM dba_db_links a;

Hope this helps, as it has helped me a lot of times.

Monday, June 7, 2010

How to change OAS port to 80???

If you are on a UNIX system and you are changing the Listen port to a number less than 1024, perform these steps before you change the Oracle HTTP Server Listen port.

By default, Oracle HTTP Server runs as a non-root user (the user that installed Oracle Application Server). On UNIX systems, if you change the Oracle HTTP Server Listen port number to a value less than 1024, you must enable Oracle HTTP Server to run as root, as follows:



Execute the commands below as root:

cd ORACLE_HOME/Apache/Apache/bin
chown root .apachectl
chmod 6750 .apachectl

Change the values of Port and Listen in httpd.conf to 80 and restart OAS as the Oracle user.

For more information refer to the link below.
http://download.oracle.com/docs/cd/B32110_01/core.1013/b32196/ports.htm#CIHJEEJH

How to assign 2 ports to the same website on the same server (OAS)

I recently had to assign 2 ports to the same portal on the same OAS server.

Below is an example of how to do this.

Add the entries below, according to your requirements, to httpd.conf:

Listen 80
Listen 8080

NameVirtualHost 172.20.30.40:80
NameVirtualHost 172.20.30.40:8080


<VirtualHost 172.20.30.40:80>
ServerName www.example1.com
DocumentRoot /www/domain-80
</VirtualHost>

<VirtualHost 172.20.30.40:8080>
ServerName www.example1.com
DocumentRoot /www/domain-8080
</VirtualHost>

<VirtualHost 172.20.30.40:80>
ServerName www.example2.org
DocumentRoot /www/otherdomain-80
</VirtualHost>

<VirtualHost 172.20.30.40:8080>
ServerName www.example2.org
DocumentRoot /www/otherdomain-8080
</VirtualHost>

The above information has been collected from the link below:
http://httpd.apache.org/docs/2.0/vhosts/examples.html

Thursday, June 3, 2010

orcladmin password expired???

Beginning with OID version 9.0.4, the default value for Password Expiry Time is 5184000 seconds (60 days). Unless you have changed the default password policies from the installation defaults, exactly 60 days from the date of installation all passwords will automatically expire. Furthermore, if you keep trying to log in using the old password, you may exceed the maximum grace logins, and consequently the account will become locked.

To understand why an ldapbind as the superuser, cn=orcladmin, still succeeds, you need to know the following:

1. There are two orcladmin accounts: the superuser account, cn=orcladmin, and the REALM administrator, cn=orcladmin. For example, cn=orcladmin,cn=users,dc=oracle,dc=com.

2. The superuser account is hard-coded and you cannot see it in Oracle Directory Manager, nor in Oracle Directory Services Manager in 11g.

3. When you login to OIDDAS the REALM administrator account is the one that is being used. Since this user account exists within the REALM it is also under control of the default password policies.

Solution:

Step 1. If the superuser, cn=orcladmin account is locked, use the oidpasswd utility to unlock the super user orcladmin account, for example:
$ oidpasswd connect=asdb unlock_su_acct=true

This unlocks the OID Super User account, cn=orcladmin ONLY. Do not confuse this account with the default realm cn=orcladmin,cn=users,dc=xxxxx,dc=yyyyy. Again, they are two separate accounts. After resetting the orcladmin super user account, you will still not be able to login to SSO using the orcladmin account until you complete all steps in this document. Do not reset any other account passwords at this time (on this step).

Step 2. Launch the Oracle Directory Manager Console (or the ODSM for 11g).

Pre-11g:
on Unix, type oidadmin at the unix prompt
on Windows, Start->Programs->Oracle - SID->Integrated Management Tools/Oracle Internet Directory

11g:
Open a browser to the Oracle Fusion Middleware Enterprise Manager.
Select the OID instance
Click the OID dropdown list in the right frame
Select Oracle Directory Services Manager

Step 3. Login as "cn=orcladmin" and navigate to Password Policy Management area.
Edit each of these and change the pwdmaxage to an appropriate value:
5184000 = 60 days (default)
7776000 = 90 days
10368000 = 120 days
15552000 = 180 days
31536000 = 1 year
999999999 = never expire
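The values above are just days converted to seconds (one day = 86400 seconds), which you can verify in the shell:

```shell
# pwdmaxage is expressed in seconds; convert common policy lengths
for days in 60 90 120 180 365; do
  echo "$days days = $(( days * 86400 )) seconds"
done
# 60 days = 5184000 seconds, matching the default above
```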

Note: Realm policies can be different from the root policy, so if in doubt change both policies the first time, then change the realm policy to different values as desired.

Step 4. Launch Oracle Directory Manager (or ODSM if 11g as previously described) and navigate to Entry Management and the Users container under your default realm (for example:
cn=orcladmin,cn=users,dc=oracle,dc=com)
Reset the UserPassword field by entering a new value.

After you have changed the cn=orcladmin for the REALM, you should be able to login to SSO/OIDDAS and reset/change any user passwords that have expired or are locked.

Note: in 11g there is no SSO or OIDDAS, but you can still use ODSM to reset user passwords.

Pre-11g Only
Rerun the odisrvreg utility to reset the DIP randomly generated password, for example:
odisrvreg -D cn=orcladmin -w -p 3060
Already Registered...Updating DIS password...
DIS registration successful

If you are unsure of your OID Port, check $ORACLE_HOME/install/portlist.ini file, which shows the ports at install time.

The above information has been collected from metalink.oracle.com.

Wednesday, April 21, 2010

How to unzip cpio files on IBM AIX???

I faced a lot of problems extracting .cpio files on IBM AIX. The errors I used to get were "out of phase" and "the file is not proper". After doing some R&D I found that the command below should be used to extract cpio archives on IBM AIX.

cpio -idcmv [cpio_filename]

It has worked for me. Hope it helps you too.

Sunday, March 14, 2010

How to find whether an index is fragmented in Oracle Database?

First, analyze the index:

SQL>analyze index INDEX_NAME validate structure;

Then query the INDEX_STATS view:
1. If del_lf_rows/lf_rows is > 0.2 then the index should be rebuilt.
2. If height is 4 then the index should be rebuilt.
3. If lf_rows is lower than lf_blks then the index should be rebuilt.

SQL> column status format a10
SQL> select trunc((del_lf_rows/lf_rows)*100,2)||'%' "status" from index_stats;
status
----------
21.83%
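Rule 1 above can be sanity-checked outside the database. The INDEX_STATS numbers below are hypothetical, chosen only to reproduce a 21.83% ratio like the output shown:

```shell
# Hypothetical INDEX_STATS values: 4366 deleted leaf rows out of 20000
awk -v del_lf_rows=4366 -v lf_rows=20000 'BEGIN {
  pct = (del_lf_rows / lf_rows) * 100
  printf "status: %.2f%%\n", pct
  if (pct > 20) print "rebuild recommended"   # the 0.2 threshold from rule 1
  else          print "ok"
}'
```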

How to remove index fragmentation?

There are two ways to remove fragmentation:
1. index coalesce
2. index rebuild

For the difference between coalesce and rebuild, please go through the link below for more details:
http://download.oracle.com/docs/cd/B14117_01/server.101/b10739/indexes.htm#g1007548

SQL> alter index IDX_OBJ_ID coalesce;

SQL> alter index IDX_OBJ_ID rebuild;

SQL> alter index IDX_OBJ_ID rebuild online;

Note: If any DML statements are running on the base table, then we have to use the ONLINE keyword when rebuilding the index.

SQL> analyze index idx_obj_id validate structure;
Index analyzed.

SQL> select trunc((del_lf_rows/lf_rows)*100,2)||'%' "status" from index_stats;
status
-------
40.85%

SQL> alter index IDX_OBJ_ID rebuild online;
Index altered.

SQL> analyze index idx_obj_id validate structure;
Index analyzed.

SQL> select trunc((del_lf_rows/lf_rows)*100,2)||'%' "status" from index_stats;
status
--------
0%

Original Export and Import Versus Data Pump Export and Import in Oracle Database

It is important to understand that many of the concepts behind original Export (exp) and Import (imp) do not apply to Data Pump Export (expdp) and Data Pump Import (impdp). In particular:

* Data Pump Export and Import operate on a group of files called a dump file set rather than on a single sequential dump file.

* Data Pump Export and Import access files on the server rather than on the client. This results in improved performance. It also means that directory objects are required when you specify file locations.

* The Data Pump Export and Import modes operate symmetrically, whereas original export and import did not always exhibit this behavior.

For example, suppose you perform an export with FULL=Y, followed by an import using SCHEMAS=HR. This will produce the same results as if you performed an export with SCHEMAS=HR, followed by an import with FULL=Y.

* Data Pump Export and Import use parallel execution rather than a single stream of execution, for improved performance. This means that the order of data within dump file sets and the information in the log files is more variable.

* Data Pump Export and Import represent metadata in the dump file set as XML documents rather than as DDL commands. This provides improved flexibility for transforming the metadata at import time.

* Data Pump Export and Import are self-tuning utilities. Tuning parameters that were used in original Export and Import, such as BUFFER and RECORDLENGTH, are neither required nor supported by Data Pump Export and Import.

* At import time there is no option to perform interim commits during the restoration of a partition. This was provided by the COMMIT parameter in original Import.

* There is no option to merge extents when you re-create tables. In original Import, this was provided by the COMPRESS parameter. Instead, extents are reallocated according to storage parameters for the target table.

* Sequential media, such as tapes and pipes, are not supported.

* The Data Pump method for moving data between different database versions is different than the method used by original Export/Import. With original Export, you had to run an older version of Export (exp) to produce a dump file that was compatible with an older database version. With Data Pump, you can use the current Export (expdp) version and simply use the VERSION parameter to specify the target database version.

* When you are importing data into an existing table using either APPEND or TRUNCATE, if any row violates an active constraint, the load is discontinued and no data is loaded. This is different from original Import, which logs any rows that are in violation and continues with the load.

* Data Pump Export and Import consume more undo tablespace than original Export and Import. This is due to additional metadata queries during export and some relatively long-running master table queries during import. As a result, for databases with large amounts of metadata, you may receive an ORA-01555: snapshot too old error. To avoid this, consider adding additional undo tablespace or increasing the value of the UNDO_RETENTION initialization parameter for the database.

* If a table has compression enabled, Data Pump Import attempts to compress the data being loaded. The original Import utility, by contrast, loaded data in such a way that even if a table had compression enabled, the data was not compressed upon import.

* Data Pump supports character set conversion for both direct path and external tables. Most of the restrictions that exist for character set conversions in the original Import utility do not apply to Data Pump. The one case in which character set conversions are not supported under the Data Pump is when using transportable tablespaces.

How to check Tablespace Fragmentation in Oracle Database?

Oracle Tablespace Fragmentation

Honeycomb fragmentation

This is easy to fix. All that needs to be done is to combine adjacent free segments into one by issuing a coalesce statement:

alter tablespace USERS coalesce;

Bubble fragmentation

This is more difficult to handle. Of course, the best course of action is to prevent it in the first place, and as discussed earlier, the best weapon for this is to use locally-managed tablespaces. It may sound too simple, but in reality, implementing these storage structures in your database can just about remove the need to perform full tablespace reorganizations.

However, what do you do if you are in a situation where you have many databases that were set up with dictionary-managed tablespaces? You have two options, at least if you are running Oracle 8i (the version that gave birth to locally-managed tablespaces). First, you can create new locally-managed tablespaces in your database and then perform one last, full tablespace reorganization of all the database objects into the new tablespaces.

Needless to say, this can be a difficult task if you have large databases and no third-party reorganization tool. However, it will be worth the effort as you will likely never have to reorganize your tablespaces again, and your objects should never again encounter a maximum extent limit.

If you are using Oracle 8.1.6 or higher, you can convert any current dictionary-managed tablespaces to locally-managed tablespaces. Buried in the Oracle documentation is a procedure for converting a tablespace's extent management from dictionary to local or vice-versa.

convert DMT to LMT:

exec sys.dbms_space_admin.tablespace_migrate_to_local('USERS');

If you are afraid of how long this procedure might take on large tablespaces, do not be. It actually runs very fast.

convert LMT to DMT:

exec sys.dbms_space_admin.tablespace_migrate_from_local('USERS');

Detecting Oracle Tablespace Fragmentation

How can you tell if your tablespaces are suffering from fragmentation problems, and identify the type of fragmentation? The detection and diagnosis are not hard to make at all. To determine whether your tablespaces have a fragmentation problem, you can use the tsfrag.sql script:

select
tablespace_name,
count(*) free_chunks,
decode(
round((max(bytes) / 1024000),2),
null,0,
round((max(bytes) / 1024000),2)) largest_chunk,
nvl(round(sqrt(max(blocks)/sum(blocks))*(100/sqrt(sqrt(count(blocks)) )),2),
0) fragmentation_index
from
sys.dba_free_space
group by
tablespace_name
order by
2 desc, 1;


When you examine the script output, you need to home in on a couple of columns in particular. First, notice the fragmentation_index column. This gives your tablespace an overall ranking with respect to how badly it is actually fragmented. A score of 100% indicates no fragmentation at all; lower scores indicate the presence of fragmentation.

The free_chunks column tells you how many segments of free space are scattered throughout the tablespace. One thing to keep in mind is that tablespaces with multiple datafiles will always show a free chunk count greater than one, because each datafile will likely have at least one pocket of free space.
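To see how the fragmentation_index expression behaves, it can be reproduced outside SQL. The dba_free_space figures below are invented (largest free chunk of 1280 blocks out of 2560 total, spread over 16 chunks):

```shell
awk -v max_blocks=1280 -v sum_blocks=2560 -v free_chunks=16 'BEGIN {
  # same expression as the script: sqrt(max/sum) * (100 / sqrt(sqrt(count)))
  idx = sqrt(max_blocks / sum_blocks) * (100 / sqrt(sqrt(free_chunks)))
  printf "fragmentation_index = %.2f\n", idx
}'
```

With these made-up numbers the index comes out around 35, i.e. well below the 100% that indicates no fragmentation.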

What is a Latch in Oracle Database?

A latch is a low-level locking mechanism provided by Oracle. In this article we will look at the latch concept and how to tune the server to reduce latch contention.

What is a Latch ?

A latch is a low-level serialization mechanism that protects shared data structures (it is released as quickly as it is acquired). A process acquires and holds the latch as long as the data structure is in use. The basic idea is to prevent concurrent access to shared data structures in the SGA. If a process dies without releasing the latch, the PMON process cleans up the lock on the data structure and releases the latch.

If a process is not able to obtain a latch, it must wait for the latch to be freed by the process holding it. This causes additional spinning (checking for availability at fixed intervals of time), thereby putting extra load on the CPU. The process will spin until the latch is available. A DBA has to monitor the latches for contention and make sure that CPU cycles are not being burnt on process spinning.

The Tunable Trio ( .. 3 latches I monitor )
You cannot monitor and tune all the available internal latches in Oracle. The few latches I monitor are the Redo Allocation latch, the Redo Copy latch and the Row Cache Objects latch.
• Redo Allocation Latch controls the allocation of space for redo entries in the redo log buffer.
• Redo Copy Latch is used when the size of a redo entry is greater than the value of LOG_SMALL_ENTRY_MAX_SIZE.
• Row Cache Latch is used when a process attempts to access the data dictionary information from the cache.

select c.name,a.gets,a.misses,a.sleeps, a.immediate_gets,a.immediate_misses,b.pid
from v$latch a, v$latchholder b, v$latchname c
where a.addr = b.laddr(+)
and a.latch# = c.latch#
and (c.name like 'redo%' or c.name like 'row%')
order by a.latch#;

When you execute the above script, the output displays the name of each latch and various other statistics about the redo and row cache latches. The gets, misses and sleeps values count willing-to-wait requests, i.e., the requesting process waits a short time if the latch is busy and then requests the latch again. The immediate_gets and immediate_misses columns represent the number of successful and unsuccessful immediate requests for the latches.

To make the database perform well, try to adjust parameters to keep the miss ratio [(misses/gets)*100] for each of these latches to a minimum (< 1%).
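The miss-ratio arithmetic can be checked quickly in the shell; the gets/misses figures below are hypothetical, not from any real v$latch output:

```shell
awk -v gets=500000 -v misses=2500 'BEGIN {
  ratio = (misses / gets) * 100        # (misses/gets)*100 as above
  printf "miss ratio = %.2f%%\n", ratio
  if (ratio < 1) print "acceptable"
  else           print "worth tuning for contention"
}'
```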

Tuning Process ( What to do If I have Contention )
In case of redo allocation latch contention, try to decrease the log_small_entry_max_size so that the redo copy latch is used more.

If the query shows heavy redo copy latch contention, set the value of log_simultaneous_copies to the number of cpu's. If it still does not help and the contention on the allocation latch is low, then try increasing the log_small_entry_max_size to shift some load to the allocation latch.

In order to decrease the contention on the row cache latch, increase the shared_pool_size since the data dictionary is a part of the shared pool.

When tuning Oracle, one has to remember that there are no absolute values for tuning and the best balance between the values has to be obtained by incremental tuning.

How to check locks and kill sessions in Oracle Database?

The script below will tell you which session is holding the lock:

select
c.owner,
c.object_name,
c.object_type,
b.sid,
b.serial#,
b.status,
b.osuser,
b.machine
from
v$locked_object a ,
v$session b,
dba_objects c
where
b.sid = a.session_id
and
a.object_id = c.object_id
/


The commands below are used to kill a session from Oracle:

ALTER SYSTEM KILL SESSION 'SID,Serial#' IMMEDIATE ;

ALTER SYSTEM KILL SESSION '1626,19894' IMMEDIATE ;

ALTER SYSTEM DISCONNECT SESSION '13, 8' IMMEDIATE ;

How to compile all the invalid objects in Oracle Database?

Below is a script that generates a dynamic script to compile all the invalid objects in an Oracle Database.

set echo off
set feed off
set sqlp ''
set head off
set pages 0
spool spool_output.sql
select 'ALTER '||
DECODE(object_type, 'VIEW', 'VIEW', 'PACKAGE') || ' '||
LOWER(object_name)|| ' COMPILE '||
DECODE(object_type, 'PACKAGE', 'SPECIFICATION', 'PACKAGE BODY', 'BODY', '')|| ' ; ' from user_objects
where status = 'INVALID'
order by DECODE(object_type, 'VIEW', 1, 'PACKAGE', 2, 'PACKAGE BODY', 3), object_name ;
spool off
set echo on
set feed on
set sqlp 'SQL>'
set head on

select 'alter '||object_type||' '||object_name||' compile;' from user_objects where status='INVALID' order by object_type,object_name;

How to backup control file to trace in Oracle Database?

Below is a script to back up the control file to trace on Unix and copy it to a user-defined location.

===========================================================
#------------------------------------------------------------
# backup controlfile
#------------------------------------------------------------
#------------------------------------------------------------
# find the user_dump_dest directory through a sqlplus call
# and assign it to an environment variable
#------------------------------------------------------------
ORA_DUMP=`${ORACLE_HOME}/bin/sqlplus -s << EOF
/ as sysdba
set heading off
set feedback off
SELECT value
FROM v\\$parameter
WHERE name='user_dump_dest';
exit
EOF`
#------------------------------------------------------------
# create backup controlfile
#------------------------------------------------------------
${ORACLE_HOME}/bin/sqlplus -s << EOF > /dev/null 2>&1
/ as sysdba
set heading off
set feedback off
ALTER DATABASE BACKUP CONTROLFILE TO trace;
exit
EOF
#------------------------------------------------------------
# find the trace file that the backup controlfile is in
# this should be the last trace file generated
# validate it is a controlfile and copy to backup_control.sql
#------------------------------------------------------------
DATETIME="`date '+%d%m%y'`"
CONTROL=`ls -t $ORA_DUMP/*.trc | head -1`
if [ ! -z "$CONTROL" ]; then
grep 'CONTROL' ${CONTROL} 1> /dev/null
if test $? -eq 0; then
cp ${CONTROL} /home/oracle/backup_control_file_$DATETIME.sql
fi
fi
#end
=========================================================

How to check database size in Oracle Database?

Below is the script to check the size of the Oracle Database

select (a.data_size+b.temp_size+c.redo_size)/(1024*1024*1024) "total_size GB"
from ( select sum(bytes) data_size
from dba_data_files ) a,
( select nvl(sum(bytes),0) temp_size
from dba_temp_files ) b,
( select sum(bytes) redo_size
from sys.v_$log ) c;
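The unit conversion in the query is just bytes divided by 1024^3. A quick check with made-up component sizes (50 GB of datafiles, 2 GB of temp, 1 GB of redo):

```shell
awk -v data=53687091200 -v temp=2147483648 -v redo=1073741824 'BEGIN {
  printf "total_size = %.2f GB\n", (data + temp + redo) / (1024*1024*1024)
}'
# prints total_size = 53.00 GB
```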

How to check datafile usage in Oracle Database?

Below is a script which lists, datafile by datafile, the allocated size, used size and free size.
Size details are displayed in MB (megabytes).

SELECT SUBSTR (df.NAME, 1, 40) file_name, df.bytes / 1024 / 1024 "Allocated Size(MB)",
((df.bytes / 1024 / 1024) - NVL (SUM (dfs.bytes) / 1024 / 1024, 0)) "Used Size (MB)",
NVL (SUM (dfs.bytes) / 1024 / 1024, 0) "Free Size(MB)"
FROM v$datafile df, dba_free_space dfs
WHERE df.file# = dfs.file_id(+)
GROUP BY dfs.file_id, df.NAME, df.file#, df.bytes
ORDER BY file_name;

How to check the tablespace usage in Oracle Database?

Below is the script which will list tablespace wise free space and used space as well as total space.
The Size details are displayed in MB.

SELECT Total.name "Tablespace Name",
nvl(Free_space, 0) "Free Size(MB)",
nvl(total_space-Free_space, 0) "Used Size(MB)",
total_space "Total Size(MB)"
FROM
(select tablespace_name, sum(bytes/1024/1024) free_space
from sys.dba_free_space
group by tablespace_name
) Free,
(select b.name, sum(bytes/1024/1024) total_space
from sys.v_$datafile a, sys.v_$tablespace B
where a.ts# = b.ts#
group by b.name
) Total
WHERE Free.Tablespace_name(+) = Total.name
ORDER BY Total.name;

How to check which tablespace cannot extend, causing a tablespace full error?

Below is the script to check which tablespace cannot extend, causing the "tablespace full" error.

select a.owner||'.'||a.segment_name "Segment Name",
a.segment_type "Segment Type",
a.bytes/1024/1024 "Size(MB)",
a.next_extent/1024/1024 "Next Extent",
a.tablespace_name "Tablespace Name"
from sys.dba_segments a
where a.tablespace_name not like 'T%MP%' -- Exclude TEMP tablespaces
and next_extent * 1 > ( -- Cannot extend 1x, can change to 2x...
select max(b.bytes)
from dba_free_space b
where a.tablespace_name = b.tablespace_name)
order by 3 desc;

How to check the locked objects in the Oracle Database?

Below is the script to check the locked objects in the Oracle Database

SELECT oracle_username USERNAME,
owner OBJECT_OWNER,
object_name, object_type, s.osuser,
s.SID SID,
s.SERIAL# SERIAL,
DECODE(l.block,
0, 'Not Blocking',
1, 'Blocking',
2, 'Global') STATUS,
DECODE(v.locked_mode,
0, 'None',
1, 'Null',
2, 'Row-S (SS)',
3, 'Row-X (SX)',
4, 'Share',
5, 'S/Row-X (SSX)',
6, 'Exclusive', TO_CHAR(v.locked_mode)
) MODE_HELD
FROM gv$locked_object v, dba_objects d,
gv$lock l, gv$session s
WHERE v.object_id = d.object_id
AND (v.object_id = l.id1)
and v.session_id = s.sid
ORDER BY oracle_username, session_id;

Note: Run the above select statement as sys user.

Tuesday, March 9, 2010

How to change SSO Login page to Custom Login Page in Oracle Application server (OAS) ?

To change SSO Login page follow the below steps:

1) The Login page is a jsp file "login.jsp" which can be found in $INFRA_HOME/j2ee/OC4J_SECURITY/applications/sso/web/jsp, you can create your own login file say "my_login_page.jsp" and place it into that directory.

2) Then go to $INFRA_HOME/sso/conf and modify "policy.properties" file (backup the original one first). You have to find the line:
loginPageUrl = /sso/jsp/login.jsp

Replace "login.jsp" with your new login file name "my_login_page.jsp"
loginPageUrl = /sso/jsp/my_login_page.jsp

3) Restart OC4J_SECURITY and HTTP_Server

You can customize further; check chapter 12, "Creating Deployment-Specific Pages" (http://download.oracle.com/docs/cd/B14099_19/idmanage.1012/b14078/custom.htm#i1015535), in the SSO Administration Guide for more options.

Saturday, February 27, 2010

How to Change / Reset ias_admin Password???

— ias_admin is the user required to use the Enterprise Manager Web site (aka Application Server Console - iASConsole).

— ias_admin password is set during Installation of Oracle Application Server (902, 904, 10.1.2.X) or Oracle Identity Management (Infra Tier 10.1.4.X)

— ias_admin account is NOT stored in OID (Oracle Internet Directory); it is stored in an XML file (JAZN-XML - Java AuthoriZatioN). (More on JAZN-XML & JAZN-LDAP coming soon..)


You can reset/change the ias_admin password in the following ways:

1. Using Enterprise Manager (Application Server Control) Web Site
–Login to Instance Home Page
–Click on Preferences on top right
–In new screen, click on “Change Password” on left menu
–Enter current password and New Password

2. Using Command line tool
emctl set password <old_password> <new_password>
like
emctl set password welcome1 welcome2
(Here welcome1 is the current ias_admin password and welcome2 is the new password which you wish to set)


If you don't know the current ias_admin password, then change it in the configuration file:

3. Change ias_admin password directly in configuration file
–Backup $ORACLE_HOME/sysman/j2ee/config/jazn-data.xml
–Search for entry like below

<user>
<name>ias_admin</name>
<credentials>{903}8QkQ/crno3lX0f3+67dj6WxW9KJMXaCu</credentials>
</user>

and update it with the new password (e.g. welcome1):

<user>
<name>ias_admin</name>
<credentials>!welcome1</credentials>
</user>

Note the ! (exclamation mark) in front of the password. It signifies that the password is stored in clear text.

Sunday, February 21, 2010

Managing Ports in Oracle Application Server (OAS)

Changing ports in Oracle Application Server (OAS) is commonly required when doing a new installation or reconfiguring an existing one. It is a tricky task and must be done very carefully.

Below is the link that I generally use whenever I need to change any port in Oracle Application Server (OAS).
Note: The below link applies to Oracle Application Server (OAS) 10g.

Managing Ports in Oracle Application Server (OAS)

If you have any queries/doubts, please leave a comment and I'll reply ASAP.

Friday, January 22, 2010

Difference between Physical and Logical Standby Database

Physical Standby Database:
A physical standby database is an exact copy of the primary database. Oracle uses the primary database's archive log files to recover the physical standby database.

We can open a physical standby database in read-only mode, but while it is open in read-only mode, the received logs will not be applied.

When the logs are being applied, the database is not accessible.

Logical Standby Database:
Logical standby databases are opened in read/write mode, even while they are in apply mode. That is, they can be used to generate reports and the like. A logical standby is indeed a fully functional database.

Bulk Delete Users From Oracle Internet Directory (OID)

Using ldapsearch and ldapdelete commands to remove OID users

1. First, you need to know the directory tree of users that you want to perform the deletion on (s_UserContainerDN), the source subscriber DN.

2. Get all the user entries first using the ldapsearch command and spool the output to a file:

$ ldapsearch -h [oid_hostname] -p [oid_ldap_port] -D "cn=orcladmin" -w [password] -s one -b "cn=Users,[source subscriber DN]" "(objectclass=orcluserv2)" "cn=*" > users.ldif
Example:

$ ldapsearch -h [oid_hostname] -p [oid_ldap_port] -D "cn=orcladmin" -w [password] -s one -b "cn=Users,dc=spain,dc=oracle,dc=com" "(objectclass=orcluserv2)" "cn=*" > users.ldif

3. Edit the spooled output ldif file, users.ldif, to remove entries which you want to preserve, such as orcladmin, Portal and PUBLIC. You may want to preserve the following seeded users:

cn=orcladmin,cn=Users,[source subscriber DN]
cn=ias_admin,cn=Users,[source subscriber DN]
cn=PUBLIC,cn=Users,[source subscriber DN]
cn=Portal<*>,cn=Users,[source subscriber DN]
cn=sample<*>,cn=Users,[source subscriber DN]
cn=synd<*>,cn=Users,[source subscriber DN]
cn=uddi<*>,cn=Users,[source subscriber DN]

(Note: the example above includes users/entries that could be seeded by certain midtier installs, so they may or may not exist in a particular OID.)

4. Use the ldapdelete command along with the edited output ldif file to delete all the users:

$ ldapdelete -h [oid_hostname] -p [oid_ldap_port] -D "cn=orcladmin" -w [password] -c -v -f users.ldif

deleting entry cn=UNCLE_SAM,cn=Users,dc=spain,dc=oracle,dc=com
entry removed
deleting entry cn=LARRY_ELISSON,cn=users,dc=spain,dc=oracle,dc=com
entry removed

Note: Before removing users, remove from users.ldif any users which you don't want to delete.

There is one more command, given below, which might be helpful in some cases (ldapadd, for re-adding entries from an ldif file):
ldapadd -h myhost -p 389 -D "cn=orcladmin" -w welcome -f users.ldif

Sunday, January 10, 2010

Identifying unused indexes in Oracle9i

PURPOSE
-------

The purpose of this document is to explain how to find unused indexes
using the new feature in Oracle9: "Identifying Unused Indexes" via
ALTER INDEX MONITORING USAGE, as mentioned in Oracle9i Database
Administrator's Guide, Chapter 11.

The clause MONITORING / NOMONITORING USAGE is useful in determining
whether an index is being used.


SCOPE & APPLICATION
-------------------

This article is intended for database Administrators who want to
identify unused indexes in their database.


IDENTIFYING UNUSED INDEXES
--------------------------

You can find indexes that are not being used by using the ALTER INDEX
MONITORING USAGE functionality over a period of time that is
representative of your workload.

PART 1 will demonstrate the new feature using a simple example.

PART 2 will give a detailed instruction how to identify all unused
indexes in the database.


PART 1 - Monitoring usage of indexes - a simple example
---------------------------------------------------------

To demonstrate the new feature, you can use the following example:
(a) Create and populate a small test table
(b) Create Primary Key index on that table
(c) Query v$object_usage: the monitoring has not started yet
(d) Start monitoring of the index usage
(e) Query v$object_usage to see the monitoring in progress
(f) Issue the SELECT statement which uses the index
(g) Query v$object_usage again to see that the index has been used
(h) Stop monitoring of the index usage
(i) Query v$object_usage to see that the monitoring stopped


Detailed steps:

(a) Create and populate a small test table

create table products
(prod_id number(3),
prod_name_code varchar2(5));

insert into products values(1,'aaaaa');
insert into products values(2,'bbbbb');
insert into products values(3,'ccccc');
insert into products values(4,'ddddd');
commit;


(b) Create Primary Key index on that table

alter table products
add (constraint products_pk primary key (prod_id));


(c) Query v$object_usage: the monitoring has not started yet

column index_name format a12
column monitoring format a10
column used format a4
column start_monitoring format a19
column end_monitoring format a19
select index_name,monitoring,used,start_monitoring,end_monitoring
from v$object_usage;

no rows selected


(d) Start monitoring of the index usage

alter index products_pk monitoring usage;

Index altered.


(e) Query v$object_usage to see the monitoring in progress

select index_name,monitoring,used,start_monitoring,end_monitoring
from v$object_usage;

INDEX_NAME   MONITORING USED START_MONITORING    END_MONITORING
------------ ---------- ---- ------------------- -------------------
PRODUCTS_PK  YES        NO   04/25/2001 15:43:13

Note: Column MONITORING='YES', START_MONITORING gives the timestamp.


(f) Issue the SELECT statement which uses the index

First, make sure that index will be used for this statement.
Create plan_table in your schema, as required by Oracle Autotrace
utility:

@$ORACLE_HOME/rdbms/admin/utlxplan

Table created.

Use Oracle Autotrace utility to obtain the execution plan:

set autotrace on explain
select * from products where prod_id = 2;
.
.
Execution Plan
------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'PRODUCTS'
2 1 INDEX (UNIQUE SCAN) OF 'PRODUCTS_PK' (UNIQUE)

set autotrace off

Now, since you know the index will be used for this query, issue the
actual SELECT statement:

select * from products where prod_id = 2;

   PROD_ID PROD_
---------- -----
         2 bbbbb


(g) Query v$object_usage again to see that the index has been used

select index_name,monitoring,used,start_monitoring,end_monitoring
from v$object_usage;

INDEX_NAME   MONITORING USED START_MONITORING    END_MONITORING
------------ ---------- ---- ------------------- -------------------
PRODUCTS_PK  YES        YES  04/25/2001 15:43:13

Note: Column USED='YES'.


(h) Stop monitoring of the index usage

alter index products_pk nomonitoring usage;

Index altered.


(i) Query v$object_usage to see that the monitoring stopped

select index_name,monitoring,used,start_monitoring,end_monitoring
from v$object_usage;

INDEX_NAME   MONITORING USED START_MONITORING    END_MONITORING
------------ ---------- ---- ------------------- -------------------
PRODUCTS_PK  NO         YES  04/25/2001 15:43:13 04/25/2001 15:48:44

Note: Column MONITORING='NO', END_MONITORING gives the timestamp.



PART 2 - How to identify all unused indexes in the database
-------------------------------------------------------------

To identify all unused indexes in the database, you can do the
following:
(a) Create a SQL script to start monitoring all indexes except those
owned by users SYS and SYSTEM
(b) Create another script to stop monitoring all indexes except those
owned by users SYS and SYSTEM
(c) Connect as a user with ALTER ANY INDEX system privilege and run
the start monitoring script
(d) Perform normal activities in your database
(e) After a period of time that is representative of your workload,
run the stop monitoring script
(f) Query v$object_usage to see what indexes have not been used


Detailed steps:

(a) Create a SQL script to start monitoring all indexes except those
owned by users SYS and SYSTEM

set heading off
set echo off
set feedback off
set pages 10000
spool startmonitor.sql
select 'alter index '||owner||'.'||index_name||' monitoring usage;'
from dba_indexes
where owner not in ('SYS','SYSTEM');
spool off


(b) Create another script to stop monitoring all indexes except those
owned by users SYS and SYSTEM

set heading off
set echo off
set feedback off
set pages 10000
spool stopmonitor.sql
select 'alter index '||owner||'.'||index_name||' nomonitoring usage;'
from dba_indexes
where owner not in ('SYS','SYSTEM');
spool off
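Note that the spooled files will also capture the SELECT statement itself and the `spool off` line, so they usually need light editing before being run. The statement-generation step itself is simple enough to sketch outside the database; the index list below is hypothetical (in practice it comes from dba_indexes):

```python
# Sketch: generate ALTER INDEX ... [NO]MONITORING USAGE statements for a list
# of (owner, index_name) pairs, mimicking what the spooled SQL*Plus scripts
# produce. The index list is hypothetical, for illustration only.
indexes = [("SCOTT", "EMP_PK"), ("HR", "EMP_NAME_IX")]

def monitoring_statements(pairs, start=True):
    action = "monitoring" if start else "nomonitoring"
    return [f"alter index {owner}.{name} {action} usage;" for owner, name in pairs]

for stmt in monitoring_statements(indexes, start=True):
    print(stmt)
```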


(c) Connect as a user with ALTER ANY INDEX system privilege and run
the newly created script to start monitoring.

@startmonitor

(d) Perform normal activities in your database


(e) After a period of time that is representative of your workload,
connect as a user with ALTER ANY INDEX system privilege and run
the script to stop monitoring.

@stopmonitor


(f) Query v$object_usage in join with dba_indexes, to see what indexes
have not been used

select d.owner, v.index_name
from dba_indexes d, v$object_usage v
where v.used='NO' and d.index_name=v.index_name;



RELATED DOCUMENTS
-----------------

Note 1033478.6 Script to Monitor the Usage of Indexes (prior to Oracle9i)
Note 1015945.102 How to Monitor Most Recently Accessed Indexes using a script

Oracle9i Database Administrator's Guide

Saturday, January 9, 2010

SQL scripts for checking transaction status....

select * from v$transaction;

--All transactions + sid and username
select s.sid,s.username, t.start_time, t.used_ublk, t.used_urec
from v$transaction t, v$session s
where t.ses_addr=s.saddr;

--All transactions + sid and username + first 64 bytes of SQL
select s.sid,s.username, t.start_time, t.used_ublk, t.used_urec,sql.sql_text
from v$transaction t, v$session s, v$sql sql
where t.ses_addr=s.saddr
and s.sql_address=sql.address and s.sql_hash_value=sql.hash_value;

How to do object-level tuning in Oracle???

Object-level tuning

1. Check the last-analyzed date of all objects (tables as well as indexes).
Commands to analyze:
Analyze table table_name compute statistics; /* computes exact statistics by scanning 100% of the rows */
Analyze table table_name estimate statistics; /* estimates statistics from a small default sample (1064 rows) */
Analyze table table_name estimate statistics for table sample 20 PERCENT; /* estimates statistics from a 20% sample of the rows */

Alternatively, use the DBMS_STATS.GATHER_TABLE_STATS procedure, e.g.

EXEC DBMS_STATS.GATHER_TABLE_STATS
(OWNNAME => NULL,
TABNAME => 'DW_CSC_COLLN',
CASCADE => TRUE,
METHOD_OPT => 'FOR ALL COLUMNS SIZE AUTO',
DEGREE => 8,
ESTIMATE_PERCENT => 100);

==> GATHER_INDEX_STATS Index statistics
==> GATHER_TABLE_STATS Table, column, and index statistics
==> GATHER_SCHEMA_STATS Statistics for all objects in a schema
==> GATHER_DATABASE_STATS Statistics for all objects in a database

Questions:
1. How do you check what sample percentage a particular table was analyzed with?
==> See the sample_size column of dba_tables.

2. How do you know when to analyze an object?
==> A common rule of thumb is to re-analyze when about 10% of the rows have been changed by DML in a given period. You can query user_tab_modifications to see the DML activity since the object was last analyzed; for rows to appear there, monitoring must be enabled (alter table table_name monitoring;).
==> Another indicator: when there is more than about a 10% difference between the num_rows column of dba_tables (the data dictionary's recorded row count) and the actual output of "select count(*) from table_name;".

3. What does analyzing an object do?
Analyzing an object collects the latest information about it (listed below) and stores or updates it in the data dictionary. In other words, it gathers or refreshes the statistics the optimizer needs to choose the best execution plan.
==> Table Statistics
==> Number of rows
==> Number of blocks that have been used
==> Number of blocks never used
==> Average available free space
==> Number of chained rows
==> Average row length
==> Number of distinct values in a column
==> The low value and high value in a column
==> Index Statistics
==> Index level
==> Number of leaf blocks
==> Number of distinct keys

2. Check the blevel of indexes from dba_indexes.
If the blevel of an index is >= 3 then
==> rebuild the index with: alter index index_name rebuild online;

3. Check the type of index
Based on cardinality, check whether the existing index on a column is of the right type.
==> Compare distinct_keys against count(*)/num_rows: when the distinct keys are less than about 5% of the total rows, a bitmap index is usually the better choice.
==> If the cardinality is low (< 5%), the preferred index type is BITMAP.
==> If queries apply a single-row function to the column (e.g. UPPER, LOWER, SUBSTR), the preferred index type is a FUNCTION-BASED INDEX.
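The cardinality rule of thumb above reduces to a simple ratio check. A sketch, where the 5% threshold follows the text and the sample numbers are made up:

```python
# Sketch of the cardinality rule of thumb from the text: when distinct keys
# are less than ~5% of total rows, a bitmap index is usually preferred over
# a b-tree. The threshold and sample figures are illustrative assumptions.
def suggest_index_type(distinct_keys, num_rows, threshold=0.05):
    if num_rows == 0:
        return "B-TREE"  # no data to judge by; default suggestion
    cardinality = distinct_keys / num_rows
    return "BITMAP" if cardinality < threshold else "B-TREE"

print(suggest_index_type(3, 1_000_000))        # e.g. a status column
print(suggest_index_type(950_000, 1_000_000))  # a near-unique column
```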

4. Check the status of indexes
==> The status of indexes should be VALID (see dba_indexes.status).

5. Check if the tables and indexes are in different tablespaces.

6. Check if there are any chained rows
How to check?
==> See the chain_cnt column of dba_tables.
How to solve?
==> Reorganize the table by exporting it and importing it back (optionally via a different tablespace).
==> Or move the table: alter table table_name move;

7. Check for table fragmentation.
How to check?
==> Compare the blocks and empty_blocks columns of dba_tables; if the difference is very high, apply the solution below. These commands work much like defragmentation does on Windows.
How to solve?
Alter table table_name shrink space cascade;
Before this, we have to enable row movement to allow shrinking:
Alter table table_name enable row movement;
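The blocks vs. empty_blocks comparison can be sketched as a simple ratio check. The 25% threshold below is an illustrative assumption, not an Oracle-defined value:

```python
# Sketch: flag a table as a shrink candidate when a large share of its
# allocated blocks are empty, per the blocks vs. empty_blocks comparison
# described above. The 25% threshold is an assumption; tune it per system.
def is_fragmented(blocks, empty_blocks, threshold=0.25):
    total = blocks + empty_blocks
    if total == 0:
        return False  # nothing allocated, nothing to shrink
    return empty_blocks / total > threshold

print(is_fragmented(blocks=1000, empty_blocks=600))  # heavily fragmented
print(is_fragmented(blocks=1000, empty_blocks=10))   # mostly full
```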


Query to check all of the above in one go:
select a.table_name "Table_name",
a.index_name "Index_name",
c.tablespace_name "Table's tablespace",
a.tablespace_name "Index Tablespace",
a.index_type,
a.blevel,
c.num_rows,
distinct_keys,
b.column_name,
a.status,
a.last_analyzed "index_lastanalyzed",
c.last_analyzed "table_lastanalyzed",
a.logging "index logging",
c.logging "table logging",
c.chain_cnt
from dba_indexes a, dba_ind_columns b, dba_tables c
where a.index_name=b.index_name
and a.table_name=c.table_name
and a.table_name = '' /* fill in the table name to check */

How to encrypt password when connecting to the server???

QUESTIONS & ANSWERS
-------------------
1. Is my password encrypted when I logon?

Answer
------
Yes. The Oracle password protocol provides security for client-server and
server-server password communication by encrypting passwords passed over a
network. It uses a session key, valid for a single database connection
attempt, to encrypt the user's password. Each connection attempt uses a
separate key for encryption, making the encryption more difficult to decipher.

As of version 10.2, password encryption during session authentication is based
on the AES-128 standard, except for JDBC thin clients, which support AES as of
11g. Older clients use a modified Data Encryption Standard (DES) algorithm;
for reasons of backward compatibility, the best algorithm available in both
the server and the client is negotiated during the session handshake.

2. What about parameters ora_encrypt_login and dblink_encrypt_login ?

Do not use these parameters as of version 9.2, as they have become obsolete.
Some documentation may refer to them as being necessary, but the problems
with clear-text passwords were all fixed as of Oracle version 7.1.

The password during a logon is ALWAYS encrypted; Oracle has done this since
version 7.1. The parameters addressed the issue that when a 7.1 client
connected to a 7.0 server, which did no credential encryption, a second
unencrypted transfer was done by default. To specify whether this second
unencrypted transfer should happen, ora_encrypt_login was used for logins by a
client and dblink_encrypt_login for logins over a database link. So the
parameters today would make sense only if a newer client connected to a 7.0
database. Since a 9.2 client cannot even connect to a 7.3 or lower database,
the parameters are not needed; the second unencrypted transfer does not happen.

3. What encryption does Oracle use to store the passwords?

After the key-encrypted password is passed to the server, the server decrypts it,
then re-encrypts it using a hashing algorithm and then compares it with the password
stored in the data dictionary table USER$. If they match, the user successfully
connects to the database. Before 11g, a proprietary Data Encryption Standard
(DES) based one-way (irreversible) algorithm was used. In 11g, the
case-insensitive password hashing algorithm was replaced by the 160-bit SHA-1
hashing algorithm, in better alignment with industry standards.



goal:- How To Ensure Oracle Encrypts Passwords Transferred from a Remote Client
fix:

Passwords sent over Net8 are encrypted by default. Password encryption was
introduced in release 7.1; prior to Oracle 7.1, passwords could be passed in
clear text. To ensure this encryption is used, set ORA_ENCRYPT_LOGIN=true.

To set ora_encrypt_login=true:
- On MS Windows, put an entry in the registry.
- On UNIX boxes, create an environment variable.

The parameters ora_encrypt_login and dblink_encrypt_login are not applicable
from 9i onwards, so setting them on 9i and later is not supported.

The above information is collected from Oracle Metalink.

Friday, January 8, 2010

Creating a Data Guard Configuration

1) Ensure the Primary database is in ARCHIVELOG mode:

SQL> archive log list
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination /export/home/oracle/temp/oracle/arch
Oldest online log sequence 7
Current log sequence 9
SQL> alter database close;

Database altered.

SQL> alter database archivelog;

Database altered.

SQL> shutdown immediate
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.

Modify the Primary database init.ora so that log_archive_start=true and restart
the instance. Verify that database is in archive log mode and that automatic
archiving is enabled.

SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /export/home/oracle/temp/oracle/arch
Oldest online log sequence 7
Next log sequence to archive 9
Current log sequence 9

2) Create a backup of the Primary database:

You can use an existing backup of the Primary database as long as you have the
archive logs that have been generated since that backup. You may also take a
hot backup as long as you have all archive logs through the end of the backup
of the last tablespace. To create a cold backup do the following:

SQL>
SQL> select name from v$datafile;

NAME
----------------------------------------------------------------------
/export/home/oracle/temp/oracle/data/sys.dbf

SQL>
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit

Create a backup of all datafiles and online redo logs using an OS command or
utility. A backup of the online redo logs is necessary to facilitate switchover.

Once complete startup the instance:

SQL> startup
ORACLE instance started.

Total System Global Area 80512156 bytes
Fixed Size 279708 bytes
Variable Size 71303168 bytes
Database Buffers 8388608 bytes
Redo Buffers 540672 bytes
Database mounted.
Database opened.

3) Connect to the primary database and create the standby control file:

SQL> alter database create standby controlfile as
'/export/home/oracle/temp/oracle/data/backup/standby.ctl';

Database altered.

4) Copy files to the Standby host:

Copy the backup datafiles, standby controlfile, all available archived redo logs,
and online redo logs from the primary site to the standby site. Copying of the
online redo logs is necessary to facilitate switchover.

If the standby is on a separate site with the same directory structure as the
primary database then you can use the same path names for the standby files as
the primary files. In this way, you do not have to rename the primary datafiles
in the standby control file. If the standby is on the same site as the primary
database, or the standby database is on a separate site with a different
directory structure, then you must rename the primary datafiles in the standby
control file after copying them to the standby site. This can be done using
the db_file_name_convert and log_file_name_convert parameters or manually
using alter database rename file statements.

5) Set the initialization parameters for the primary database:

It is suggested that you maintain two init.ora files on both the primary and
the standby. This allows you to facilitate role reversal during switchover
operations more easily.

Primary init.ora on Primary host:

log_archive_dest_1='LOCATION=/export/home/oracle/temp/oracle/arch'
log_archive_dest_2='SERVICE=DGD01_hasunclu2 reopen=60'
log_archive_dest_state_1=enable
log_archive_dest_state_2=enable
log_archive_format=%t_%s.dbf
log_archive_start=true
remote_archive_enable=true

Create the standby initialization parameter file and set the initialization
parameters for the standby database. Depending on your configuration, you may
need to set filename conversion parameters.

Standby init.ora on Primary host:

log_archive_dest_1='LOCATION=/export/home/oracle/temp/oracle/arch'
log_archive_dest_state_1=enable
log_archive_format=%t_%s.dbf
log_archive_start=true
standby_archive_dest=/export/home/oracle/temp/oracle/arch
standby_file_management=auto
fal_server=DGD01_hasunclu2
fal_client=DGD01_hasunclu1
remote_archive_enable=true

NOTE: In the above example db_file_name_convert and log_file_name_convert are
not needed as the directory structure on the two hosts are the same. If the
directory structure is not the same then setting of these parameters is
recommended. Please reference notes 47325.1 and 47343.1 for further
information.

Copy the two init.ora files from the Primary host to the Standby host. You must
modify the Primary init.ora on the Standby host to have log_archive_dest_2 use
the alias that points to the Primary host (ie DGD01_hasunclu1). You must
modify the Standby init.ora on the standby host to have fal_server and
fal_client use the aliases when standby is running on the Primary host
(ie fal_server=DGD01_hasunclu1 and fal_client=DGD01_hasunclu2).

6) Configure networking components:

On the Primary host create a net service name that the Primary database can
use to connect to the Standby database. On the Primary host create a net
service name that Standby, when running on the Primary host, can use to
connect to the Primary, when it is running on the Standby host. Example from
the Primary host's tnsnames.ora:

DGD01_hasunclu1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = hasunclu1)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SID = DGD01)
      (SERVER = DEDICATED)
    )
  )

DGD01_hasunclu2 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = hasunclu2)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SID = DGD01)
      (SERVER = DEDICATED)
    )
  )

The above two net service names must exist in the Standby hosts tnsnames.ora
also.

You must also configure a listener on the standby database. If you plan to
manage this standby database using the Data Guard broker, you must configure
the listener to use the TCP/IP protocol and statically register the standby
database service using its SID.


7) Start the standby instance and mount the standby database.

oracle@hasunclu2:/export/home/oracle/temp/oracle> sqlplus "/ as sysdba"

SQL*Plus: Release 9.0.1.0.0 - Production on Thu Mar 14 18:00:57 2002

(c) Copyright 2001 Oracle Corporation. All rights reserved.

Connected to an idle instance.

SQL> startup nomount pfile=?/dbs/initDGD.ora
ORACLE instance started.

Total System Global Area 80512156 bytes
Fixed Size 279708 bytes
Variable Size 71303168 bytes
Database Buffers 8388608 bytes
Redo Buffers 540672 bytes
SQL> alter database mount standby database;

Database altered.

SQL>

8) Create standby redo log files, if necessary:

Standby redo logs are necessary for the higher protection levels such as
Guaranteed, Instant, and Rapid. In these protection modes LGWR from the
Primary host writes transactions directly to the standby redo logs.
This enables no data loss solutions and reduces the amount of data loss
in the event of failure. Standby redo logs are not necessary if you are using
the delayed protection mode.

If you configure standby redo on the standby then you should also configure
standby redo logs on the primary database. Even though the standby redo logs
are not used when the database is running in the primary role, configuring
the standby redo logs on the primary database is recommended in preparation
for an eventual switchover operation.

Standby redo logs must be archived before the data can be applied to the
standby database. The standby archival operation occurs automatically, even if
the standby database is not in ARCHIVELOG mode. However, the archiver process
must be started on the standby database. Note that the use of the archiver
process (ARCn) is a requirement for selection of a standby redo log.

You must have the same number of standby redo logs on the standby as you have
online redo logs on production. They must also be exactly the same size.
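The rule above, matching both count and size, is easy to verify mechanically. A sketch, with invented log sizes in MB:

```python
# Sketch: check that standby redo logs match the primary's online redo logs
# in both count and size, per the rule stated above. Sizes are in MB and
# are invented for illustration.
def standby_logs_ok(online_log_sizes_mb, standby_log_sizes_mb):
    return sorted(online_log_sizes_mb) == sorted(standby_log_sizes_mb)

print(standby_logs_ok([20, 20, 20], [20, 20, 20]))  # matching config
print(standby_logs_ok([20, 20, 20], [20, 20]))      # count differs
```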

The following syntax is used to create standby redo logs:

SQL> alter database add standby logfile
2 '/export/home/oracle/temp/oracle/data/srl_1.dbf' size 20m;

Database altered.
SQL> alter database add standby logfile
2 '/export/home/oracle/temp/oracle/data/srl_2.dbf' size 20m;

Database altered.

SQL> alter database add standby logfile
2 '/export/home/oracle/temp/oracle/data/srl_3.dbf' size 20m;

Database altered.

SQL>

9) Manually change the names of the primary datafiles and redo logs in the
standby control file for all files not automatically renamed using
DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT as noted in step 5. Datafile
names can be changed on the standby at a mounted state with the following
syntax:

SQL> alter database rename file
2 '/export/home/oracle/temp/oracle/data/sys.dbf'
3 to
4 '/export/home/oracle/temp/oracle/data2/sys.dbf';

10) Stop and restart the listeners:

Stop and restart the listener on the primary database, and start the listener
on the standby database, so that the changes made to listener.ora take effect.

11) Activate parameter changes:

Manually enable initialization parameter changes on the primary database
so that it can initiate archiving to the standby site.

At runtime, the LOG_ARCHIVE_DEST_n initialization parameter can be changed
using ALTER SYSTEM and ALTER SESSION statements. Activate the changes made to
these parameters by either bouncing the instance or activating via alter system.
For example:

SQL> alter system set log_archive_dest_2='SERVICE=DGD01_hasunclu2 reopen=60';

System altered.

12) Verify that automatic archiving is occurring:

On the Primary database, switch a log and verify that it has been shipped
properly using the v$archive_dest view.

SQL> alter system switch logfile;

System altered.

SQL> select status, error from v$archive_dest where dest_id=2;

STATUS ERROR
--------- -------------------------------------------------------
VALID

SQL>

13) Optionally place Standby database in managed recovery:

SQL> recover managed standby database disconnect;
Media recovery complete.
SQL> exit

How to automate the calculation of database growth and schedule it in DBMS_JOB....

1. Create a table named db_growth with the following structure:

Name Null? Type
----------------------------------------- -------- ----------------------------
DAY DATE
DATABASE_SIZE_MB NUMBER
DAILY_GROWTH_MB NUMBER


2. Create the procedure:

create or replace PROCEDURE database_growth
AS
today_size NUMBER;
yesterday_size NUMBER;
growth_size NUMBER;
cnt NUMBER;
BEGIN
SELECT sum(bytes)/(1024*1024) INTO today_size FROM SM$TS_USED;
SELECT COUNT(1) INTO cnt FROM db_growth ;
IF cnt > 0
THEN
SELECT database_size_mb INTO yesterday_size FROM db_growth WHERE to_date(day,'dd-mon-yy')=to_date(SYSDATE -1,'dd-mon-yy');
ELSE
yesterday_size:=today_size;
END IF;
growth_size := today_size - yesterday_size;
INSERT INTO db_growth VALUES(sysdate,today_size,growth_size);
EXCEPTION
WHEN no_data_found THEN
INSERT INTO db_growth VALUES(sysdate,today_size,0);
DBMS_OUTPUT.PUT_LINE(SQLERRM);
END;

3. Submit it via DBMS_JOB:

variable jobno number;
begin
dbms_job.submit(
:jobno,
'database_growth ;',
trunc(sysdate+1) + 4/24,
'trunc(sysdate+1) + 4/24'
);
commit;
end;
/
print :jobno
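The procedure's growth calculation reduces to "today's size minus yesterday's recorded size, or zero on the first run". A sketch of that logic, with made-up sizes in MB:

```python
# Sketch of the db_growth logic above: compute daily growth in MB from
# today's total used size and yesterday's recorded size (None when there is
# no prior row, i.e. the first run). The sample sizes are invented.
def daily_growth_mb(today_size_mb, yesterday_size_mb=None):
    if yesterday_size_mb is None:  # first run: no prior row in db_growth
        return 0.0
    return today_size_mb - yesterday_size_mb

print(daily_growth_mb(10240.0, 10112.5))  # normal daily run
print(daily_growth_mb(10240.0))           # first run
```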

How to change hostname on Linux (Red Hat or Oracle)

This post is for those who wish to change the hostname.

How to find the hostname on Linux/OEL?
Use the command "hostname" to find the host name of the Linux box.

How to change the hostname on Linux/OEL?

1. Open the file /etc/sysconfig/network using an editor like vi, nano, or ed

2. Change entry HOSTNAME=XXXXXX to HOSTNAME=New_Host_Name

3. Restart the xinetd service or reboot Linux/OEL
Restart the xinetd service: service xinetd restart
Reboot Linux: reboot

4. If you have defined hostname resolution in the hosts file, then change the entry in /etc/hosts as well.