Friday, January 22, 2010

Difference between Physical and Logical Standby Database

Physical Standby Database:
A physical standby database is an exact, block-for-block copy of the primary database. Oracle uses the archived redo logs from the primary database to recover the physical standby database.

We can open a physical standby database in read-only mode, but while it is open read-only, the redo logs received from the primary are not applied.

Conversely, while the logs are being applied, the database is not accessible for queries.

Logical Standby Database:
A logical standby database is open in read/write mode even while SQL apply is running, so it can be used to generate reports and the like. It is indeed a fully functional database.
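As a quick sketch of the commands involved (standard physical-standby syntax in 9i/10g; treat this as an illustration, not a full procedure), switching a physical standby between log apply and read-only access looks like this:

```sql
-- Stop managed recovery so the standby can be opened read only:
alter database recover managed standby database cancel;
alter database open read only;

-- Return to applying redo (the database is then no longer queryable):
alter database recover managed standby database disconnect;
```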

Bulk Delete Users From Oracle Internet Directory (OID)

Using ldapsearch and ldapdelete commands to remove OID users

1. First, determine the directory subtree containing the users you want to delete (s_UserContainerDN), i.e. the source subscriber DN.

2. Get all the user entries first using ldapsearch command and spool it to a file

$ ldapsearch -h [oid_hostname] -p [oid_ldap_port] -D "cn=orcladmin" -w [password] -s one -b "cn=Users,[source subscriber DN]" "(objectclass=orcluserv2)" "cn=*" > users.ldif
Example:

$ ldapsearch -h [oid_hostname] -p [oid_ldap_port] -D "cn=orcladmin" -w [password] -s one -b "cn=Users,dc=spain,dc=oracle,dc=com" "(objectclass=orcluserv2)" "cn=*" > users.ldif

3. Edit the spooled LDIF file, users.ldif, to remove entries you want to preserve, such as orcladmin, Portal, and PUBLIC. You may want to preserve the following seeded users:

cn=orcladmin,cn=Users,[source subscriber DN]
cn=ias_admin,cn=Users,[source subscriber DN]
cn=PUBLIC,cn=Users,[source subscriber DN]
cn=Portal<*>,cn=Users,[source subscriber DN]
cn=sample<*>,cn=Users,[source subscriber DN]
cn=synd<*>,cn=Users,[source subscriber DN]
cn=uddi<*>,cn=Users,[source subscriber DN]

(Note: the example above includes users/entries that may have been seeded by certain mid-tier installs, so they may or may not exist in a particular OID.)
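Step 3 can also be done non-interactively. The sketch below filters the seeded users out of users.ldif with grep; the pattern list and the output file name users_to_delete.ldif are assumptions based on the entries above, so adjust them to what actually exists in your OID:

```shell
# Hypothetical sample of what users.ldif might contain (the real file
# comes from the ldapsearch in step 2):
cat > users.ldif <<'EOF'
cn=orcladmin,cn=Users,dc=spain,dc=oracle,dc=com
cn=PUBLIC,cn=Users,dc=spain,dc=oracle,dc=com
cn=Portal30,cn=Users,dc=spain,dc=oracle,dc=com
cn=UNCLE_SAM,cn=Users,dc=spain,dc=oracle,dc=com
EOF

# Drop the seeded users (case-insensitive), keeping only entries
# that are safe to delete:
grep -iv -e '^cn=orcladmin,' -e '^cn=ias_admin,' -e '^cn=PUBLIC,' \
  -e '^cn=Portal' -e '^cn=sample' -e '^cn=synd' -e '^cn=uddi' \
  users.ldif > users_to_delete.ldif

cat users_to_delete.ldif
# -> cn=UNCLE_SAM,cn=Users,dc=spain,dc=oracle,dc=com
```

The filtered file is then passed to ldapdelete with -f users_to_delete.ldif in step 4.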

4. Use the ldapdelete command along with the edited output ldif file to delete all the users:

$ ldapdelete -h [oid_hostname] -p [oid_ldap_port] -D "cn=orcladmin" -w [password] -c -v -f users.ldif

deleting entry cn=UNCLE_SAM,cn=Users,dc=spain,dc=oracle,dc=com
entry removed
deleting entry cn=LARRY_ELISSON,cn=users,dc=spain,dc=oracle,dc=com
entry removed

Note: before running ldapdelete, remove from users.ldif any users you do not want to delete.

One more command, given below, may be helpful in some cases (e.g. to re-add entries from an LDIF file):
ldapadd -h myhost -p 389 -D "cn=orcladmin" -w welcome -f users.ldif

Sunday, January 10, 2010

Identifying unused indexes in Oracle9i

PURPOSE
-------

The purpose of this document is to explain how to find unused indexes
using the new Oracle9i feature "Identifying Unused Indexes" via
ALTER INDEX ... MONITORING USAGE, as described in the Oracle9i Database
Administrator's Guide, Chapter 11.

The clause MONITORING / NOMONITORING USAGE is useful in determining
whether an index is being used.


SCOPE & APPLICATION
-------------------

This article is intended for database Administrators who want to
identify unused indexes in their database.


IDENTIFYING UNUSED INDEXES
--------------------------

You can find indexes that are not being used by using the ALTER INDEX
MONITORING USAGE functionality over a period of time that is
representative of your workload.

PART 1 will demonstrate the new feature using a simple example.

PART 2 will give a detailed instruction how to identify all unused
indexes in the database.


PART 1 - Monitoring usage of indexes - a simple example
---------------------------------------------------------

To demonstrate the new feature, you can use the following example:
(a) Create and populate a small test table
(b) Create Primary Key index on that table
(c) Query v$object_usage: the monitoring has not started yet
(d) Start monitoring of the index usage
(e) Query v$object_usage to see the monitoring in progress
(f) Issue the SELECT statement which uses the index
(g) Query v$object_usage again to see that the index has been used
(h) Stop monitoring of the index usage
(i) Query v$object_usage to see that the monitoring stopped


Detailed steps:

(a) Create and populate a small test table

create table products
(prod_id number(3),
prod_name_code varchar2(5));

insert into products values(1,'aaaaa');
insert into products values(2,'bbbbb');
insert into products values(3,'ccccc');
insert into products values(4,'ddddd');
commit;


(b) Create Primary Key index on that table

alter table products
add (constraint products_pk primary key (prod_id));


(c) Query v$object_usage: the monitoring has not started yet

column index_name format a12
column monitoring format a10
column used format a4
column start_monitoring format a19
column end_monitoring format a19
select index_name,monitoring,used,start_monitoring,end_monitoring
from v$object_usage;

no rows selected


(d) Start monitoring of the index usage

alter index products_pk monitoring usage;

Index altered.


(e) Query v$object_usage to see the monitoring in progress

select index_name,monitoring,used,start_monitoring,end_monitoring
from v$object_usage;

INDEX_NAME   MONITORING USED START_MONITORING    END_MONITORING
------------ ---------- ---- ------------------- -------------------
PRODUCTS_PK  YES        NO   04/25/2001 15:43:13

Note: Column MONITORING='YES', START_MONITORING gives the timestamp.


(f) Issue the SELECT statement which uses the index

First, make sure that the index will be used for this statement.
Create the plan_table in your schema, as required by the Oracle Autotrace
utility:

@$ORACLE_HOME/rdbms/admin/utlxplan

Table created.

Use Oracle Autotrace utility to obtain the execution plan:

set autotrace on explain
select * from products where prod_id = 2;
.
.
Execution Plan
------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'PRODUCTS'
2 1 INDEX (UNIQUE SCAN) OF 'PRODUCTS_PK' (UNIQUE)

set autotrace off

Now, since you know the index will be used for this query, issue the
actual SELECT statement:

select * from products where prod_id = 2;

PROD_ID PROD_
---------- -----
2 bbbbb


(g) Query v$object_usage again to see that the index has been used

select index_name,monitoring,used,start_monitoring,end_monitoring
from v$object_usage;

INDEX_NAME   MONITORING USED START_MONITORING    END_MONITORING
------------ ---------- ---- ------------------- -------------------
PRODUCTS_PK  YES        YES  04/25/2001 15:43:13

Note: Column USED='YES'.


(h) Stop monitoring of the index usage

alter index products_pk nomonitoring usage;

Index altered.


(i) Query v$object_usage to see that the monitoring stopped

select index_name,monitoring,used,start_monitoring,end_monitoring
from v$object_usage;

INDEX_NAME   MONITORING USED START_MONITORING    END_MONITORING
------------ ---------- ---- ------------------- -------------------
PRODUCTS_PK  NO         YES  04/25/2001 15:43:13 04/25/2001 15:48:44

Note: Column MONITORING='NO', END_MONITORING gives the timestamp.



PART 2 - How to identify all unused indexes in the database
-------------------------------------------------------------

To identify all unused indexes in the database, you can do the
following:
(a) Create a SQL script to start monitoring all indexes except those
owned by users SYS and SYSTEM
(b) Create another script to stop monitoring all indexes except those
owned by users SYS and SYSTEM
(c) Connect as a user with ALTER ANY INDEX system privilege and run
the start monitoring script
(d) Perform normal activities in your database
(e) After a period of time that is representative of your workload,
run the stop monitoring script
(f) Query v$object_usage to see what indexes have not been used


Detailed steps:

(a) Create a SQL script to start monitoring all indexes except those
owned by users SYS and SYSTEM

set heading off
set echo off
set feedback off
set pages 10000
spool startmonitor.sql
select 'alter index '||owner||'.'||index_name||' monitoring usage;'
from dba_indexes
where owner not in ('SYS','SYSTEM');
spool off


(b) Create another script to stop monitoring all indexes except those
owned by users SYS and SYSTEM

set heading off
set echo off
set feedback off
set pages 10000
spool stopmonitor.sql
select 'alter index '||owner||'.'||index_name||' nomonitoring usage;'
from dba_indexes
where owner not in ('SYS','SYSTEM');
spool off


(c) Connect as a user with ALTER ANY INDEX system privilege and run
the newly created script to start monitoring.

@startmonitor

(d) Perform normal activities in your database


(e) After a period of time that is representative of your workload,
connect as a user with ALTER ANY INDEX system privilege and run
the script to stop monitoring.

@stopmonitor


(f) Query v$object_usage joined with dba_indexes to see which indexes
have not been used

select d.owner, v.index_name
from dba_indexes d, v$object_usage v
where v.used='NO' and d.index_name=v.index_name;
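One caveat: v$object_usage only returns rows for indexes owned by the connected schema, so the join above should be run from each schema of interest. A minimal per-schema variant (a sketch):

```sql
-- Run while connected as the schema owner; lists that schema's
-- monitored-but-unused indexes.
select index_name, start_monitoring, end_monitoring
from   v$object_usage
where  used = 'NO';
```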



RELATED DOCUMENTS
-----------------

Note 1033478.6   Script for Monitoring the Usage of Indexes (prior to Oracle9i)
Note 1015945.102 How to Monitor Most Recently Accessed Indexes using a script

Oracle9i Database Administrator's Guide

Saturday, January 9, 2010

SQL scripts for checking transaction status

select * from v$transaction;

--All transactions + sid and username
select s.sid,s.username, t.start_time, t.used_ublk, t.used_urec
from v$transaction t, v$session s
where t.ses_addr=s.saddr;

--All transactions + sid and username + first 64 bytes of SQL
select s.sid,s.username, t.start_time, t.used_ublk, t.used_urec,sql.sql_text
from v$transaction t, v$session s, v$sql sql
where t.ses_addr=s.saddr
and s.sql_address=sql.address and s.sql_hash_value=sql.hash_value;

How to do object-level tuning in Oracle?

Object-level tuning

1. Check the last-analyzed date of all objects (tables as well as indexes).
Commands to analyze:
Analyze table table_name compute statistics; /* computes exact statistics by scanning 100% of the rows */
Analyze table table_name estimate statistics; /* estimates from a small default sample (1064 rows) */
Analyze table table_name estimate statistics for table sample 20 percent; /* estimates from a 20% sample of the rows */

Alternatively, we may use the DBMS_STATS.GATHER_TABLE_STATS procedure, e.g.

EXEC DBMS_STATS.GATHER_TABLE_STATS
(OWNNAME => null,
TABNAME => 'DW_CSC_COLLN',
CASCADE => TRUE,
METHOD_OPT => 'FOR ALL COLUMNS SIZE AUTO',
DEGREE => 8,
ESTIMATE_PERCENT => 100);

==> GATHER_INDEX_STATS Index statistics
==> GATHER_TABLE_STATS Table, column, and index statistics
==> GATHER_SCHEMA_STATS Statistics for all objects in a schema
==> GATHER_DATABASE_STATS Statistics for all objects in a database

Questions:
1. How to check what sample percentage a particular table was analyzed with?
==> sample_size column of dba_tables (rows sampled; compare with num_rows for the percentage).

2. How to know when to analyze an object?
==> The rule of thumb is 10% DML activity in a given period. You may query user_tab_modifications to find the number of DML operations since statistics were last gathered on the object; for rows to appear there, monitoring must be enabled (alter table table_name monitoring;).
==> Or when there is a 10% difference between the num_rows column of dba_tables (the data dictionary's recorded total row count for the table) and the output of "select count(*) from table_name;".
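The 10% rule of thumb above can be checked directly against user_tab_modifications. A sketch, assuming table monitoring is enabled and the 9i/10g column names:

```sql
-- Tables whose tracked DML since the last analyze exceeds ~10% of num_rows.
select m.table_name,
       m.inserts + m.updates + m.deletes as dml_since_analyze,
       t.num_rows
from   user_tab_modifications m, user_tables t
where  m.table_name = t.table_name
and    m.inserts + m.updates + m.deletes > 0.1 * nvl(t.num_rows, 0);
```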

3. What does analyzing an object do?
Analyzing an object collects the latest information about it (listed below) and stores/updates it in the data dictionary. It essentially gathers or refreshes statistics about the object so that the optimizer can choose the best execution plan.
==> Table Statistics
==> Number of rows
==> Number of blocks that have been used
==> Number of blocks never used
==> Average available free space
==> Number of chained rows
==> Average row length
==> Number of distinct values in a column
==> The low value and high value in a column
==> Index Statistics
==> Index level
==> Number of leaf blocks
==> Number of distinct keys

2. Check the blevel of indexes in dba_indexes.
If the blevel of an index is >= 3, then
==> Rebuild the index with: alter index index_name rebuild online;

3. Check the type of index.
Based on cardinality, check whether the existing index on a particular column is of the appropriate type.
==> If the cardinality is low (distinct_keys is less than about 5% of the row count), the preferred index type is BITMAP.
==> If queries apply a single-row function to the column (e.g. UPPER, LOWER, SUBSTR), the preferred index type is a FUNCTION-BASED INDEX.

4. Check the Status of Index
==> Status of indexes should be valid.

5. Check if the tables and indexes are in different tablespaces.

6. Check if there are any chained rows.
How to check?
==> chain_cnt column of dba_tables.
How to solve?
==> Reorganize the table by exporting it to a different tablespace and importing it back into the original tablespace.
==> Or move the table: alter table table_name move; (its indexes become UNUSABLE and must be rebuilt afterwards).

7. Check for table fragmentation.
How to check?
==> Compare the blocks and empty_blocks columns of dba_tables; if the difference is very high, apply the solution below. These commands work much the same way as defragmentation does on Windows.
How to solve?
First enable row movement to allow shrinking:
Alter table table_name enable row movement;
Then:
Alter table table_name shrink space cascade;


Query to check all of the above in one go:
select a.table_name "Table_name",
a.index_name "Index_name",
c.tablespace_name "Table's tablespace",
a.tablespace_name "Index Tablespace",
a.index_type,
a.blevel,
c.num_rows,
distinct_keys,
b.column_name,
a.status,
a.last_analyzed "index_lastanalyzed",
c.last_analyzed "table_lastanalyzed",
a.logging "index logging",
c.logging "table logging",
c.chain_cnt
from dba_indexes a, dba_ind_columns b, dba_tables c
where a.owner=b.index_owner
and a.index_name=b.index_name
and a.owner=c.owner
and a.table_name=c.table_name
and a.table_name ='';

How is the password encrypted when connecting to the server?

QUESTIONS & ANSWERS
-------------------
1. Is my password encrypted when I logon?

Answer
------
Yes. The Oracle password protocol provides security for client-server and
server-server password communication by encrypting passwords passed over a
network. It uses a session key, valid for a single database connection
attempt, to encrypt the user's password. Each connection attempt uses a
separate encryption key, making the encryption more difficult to decipher.

As of version 10.2, password encryption during session authentication is based
on the AES 128 standard, except for JDBC thin clients, which support AES as of
11g. Older clients use a modified Data Encryption Standard (DES) algorithm;
for backward compatibility, the best algorithm available in both the server
and the client is negotiated during the session handshake.

2. What about parameters ora_encrypt_login and dblink_encrypt_login ?

Do not use these parameters as of version 9.2, as they have become obsolete.
Some documentation may refer to them as necessary, but the problems with
clear-text passwords were all fixed as of Oracle version 7.1.

The password sent during a logon is ALWAYS encrypted, and has been since
Oracle 7.1. The parameters addressed the case of a 7.1 client connecting to a
7.0 server that did no credential encryption: by default a second,
unencrypted transfer was then done. ora_encrypt_login specified whether this
second unencrypted transfer should happen for a client login, and
dblink_encrypt_login for a login over a database link. Today the parameters
would only make sense if a newer client connected to a 7.0 database. Since a
9.2 client cannot even connect to a 7.3 or lower database, the parameters are
not needed; the second unencrypted transfer does not happen.

3. What encryption does Oracle use to store the passwords?

After the key-encrypted password is passed to the server, the server decrypts it,
then re-encrypts it using a hashing algorithm and then compares it with the password
stored in the data dictionary table USER$. If they match, the user successfully
connects to the database. Before 11g a proprietary Data Encryption Standard (DES)
based one-way (irreversible) algorithm was used. In 11g the case insensitive
password hashing algorithm was replaced by the 160 bit SHA-1 hashing algorithm in
better alignment with industry standards.



Goal: How to ensure Oracle encrypts passwords transferred from a remote client
Fix:

Passwords sent over Net8 are encrypted by default. Password encryption was
introduced in release 7.1; prior to Oracle 7.1, passwords could be passed in
clear text. To enforce this behavior explicitly, set ORA_ENCRYPT_LOGIN=true.

To set ora_encrypt_login=true:
- On MS Windows, put an entry in the registry.
- On UNIX boxes, create an environment variable.

The parameters ora_encrypt_login and dblink_encrypt_login are not
applicable from 9i onwards, so setting them in 9i or later is not supported.

The above information is collected from oracle metalink.

Friday, January 8, 2010

Creating a Data Guard Configuration

1) Ensure the Primary database is in ARCHIVELOG mode:

SQL> archive log list
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination /export/home/oracle/temp/oracle/arch
Oldest online log sequence 7
Current log sequence 9
SQL> alter database close;

Database altered.

SQL> alter database archivelog;

Database altered.

SQL> shutdown immediate
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.

Modify the Primary database init.ora so that log_archive_start=true and restart
the instance. Verify that database is in archive log mode and that automatic
archiving is enabled.

SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /export/home/oracle/temp/oracle/arch
Oldest online log sequence 7
Next log sequence to archive 9
Current log sequence 9

2) Create a backup of the Primary database:

You can use an existing backup of the Primary database as long as you have the
archive logs that have been generated since that backup. You may also take a
hot backup as long as you have all archive logs through the end of the backup
of the last tablespace. To create a cold backup do the following:

SQL>
SQL> select name from v$datafile;

NAME
----------------------------------------------------------------------
/export/home/oracle/temp/oracle/data/sys.dbf

SQL>
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit

Create a backup of all datafiles and online redo logs using an OS command or
utility. A backup of the online redo logs is necessary to facilitate switchover.

Once complete startup the instance:

SQL> startup
ORACLE instance started.

Total System Global Area 80512156 bytes
Fixed Size 279708 bytes
Variable Size 71303168 bytes
Database Buffers 8388608 bytes
Redo Buffers 540672 bytes
Database mounted.
Database opened.

3) Connect to the primary database and create the standby control file:

SQL> alter database create standby controlfile as
'/export/home/oracle/temp/oracle/data/backup/standby.ctl';

Database altered.

4) Copy files to the Standby host:

Copy the backup datafiles, standby controlfile, all available archived redo logs,
and online redo logs from the primary site to the standby site. Copying of the
online redo logs is necessary to facilitate switchover.

If the standby is on a separate site with the same directory structure as the
primary database, then you can use the same path names for the standby files
as for the primary files. In this way, you do not have to rename the primary
datafiles in the standby control file. If the standby is on the same site as
the primary database, or on a separate site with a different directory
structure, then you must rename the primary datafiles in the standby control
file after copying them to the standby site. This can be done using the
db_file_name_convert and log_file_name_convert parameters, or manually with
ALTER DATABASE RENAME FILE statements.

5) Set the initialization parameters for the primary database:

It is suggested that you maintain two init.ora files on both the primary and
the standby host. This makes it easier to reverse roles during switchover
operations.

Primary init.ora on Primary host:

log_archive_dest_1='LOCATION=/export/home/oracle/temp/oracle/arch'
log_archive_dest_2='SERVICE=DGD01_hasunclu2 reopen=60'
log_archive_dest_state_1=enable
log_archive_dest_state_2=enable
log_archive_format=%t_%s.dbf
log_archive_start=true
remote_archive_enable=true

Create the standby initialization parameter file and set the initialization
parameters for the standby database. Depending on your configuration, you may
need to set filename conversion parameters.

Standby init.ora on Primary host:

log_archive_dest_1='LOCATION=/export/home/oracle/temp/oracle/arch'
log_archive_dest_state_1=enable
log_archive_format=%t_%s.dbf
log_archive_start=true
standby_archive_dest=/export/home/oracle/temp/oracle/arch
standby_file_management=auto
fal_server=DGD01_hasunclu2
fal_client=DGD01_hasunclu1
remote_archive_enable=true

NOTE: In the above example db_file_name_convert and log_file_name_convert are
not needed as the directory structure on the two hosts are the same. If the
directory structure is not the same then setting of these parameters is
recommended. Please reference notes 47325.1 and 47343.1 for further
information.

Copy the two init.ora files from the Primary host to the Standby host. You
must modify the Primary init.ora on the Standby host so that
log_archive_dest_2 uses the alias that points to the Primary host
(i.e. DGD01_hasunclu1). You must modify the Standby init.ora on the Standby
host so that fal_server and fal_client use the appropriate aliases when the
standby is running on the Primary host (i.e. fal_server=DGD01_hasunclu1 and
fal_client=DGD01_hasunclu2).

6) Configure networking components:

On the Primary host, create a net service name that the Primary database can
use to connect to the Standby database. Also on the Primary host, create a
net service name that the Standby, when running on the Primary host, can use
to connect to the Primary when it is running on the Standby host. Example
from the Primary host's tnsnames.ora:

DGD01_hasunclu1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = hasunclu1)(PORT = 1521))
)
(CONNECT_DATA =
(SID = DGD01)
(SERVER = DEDICATED)
)
)
DGD01_hasunclu2 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = hasunclu2)(PORT = 1521))
)
(CONNECT_DATA =
(SID = DGD01)
(SERVER = DEDICATED)
)
)

The above two net service names must exist in the Standby hosts tnsnames.ora
also.

You must also configure a listener on the standby database. If you plan to
manage this standby database using the Data Guard broker, you must configure
the listener to use the TCP/IP protocol and statically register the standby
database service using its SID.


7) Start the standby instance and mount the standby database.

oracle@hasunclu2:/export/home/oracle/temp/oracle> sqlplus "/ as sysdba"

SQL*Plus: Release 9.0.1.0.0 - Production on Thu Mar 14 18:00:57 2002

(c) Copyright 2001 Oracle Corporation. All rights reserved.

Connected to an idle instance.

SQL> startup nomount pfile=?/dbs/initDGD.ora
ORACLE instance started.

Total System Global Area 80512156 bytes
Fixed Size 279708 bytes
Variable Size 71303168 bytes
Database Buffers 8388608 bytes
Redo Buffers 540672 bytes
SQL> alter database mount standby database;

Database altered.

SQL>

8) Create standby redo log files, if necessary:

Standby redo logs are necessary for the higher protection levels such as
Guaranteed, Instant, and Rapid. In these protection modes LGWR from the
Primary host writes transactions directly to the standby redo logs.
This enables no data loss solutions and reduces the amount of data loss
in the event of failure. Standby redo logs are not necessary if you are using
the delayed protection mode.

If you configure standby redo on the standby then you should also configure
standby redo logs on the primary database. Even though the standby redo logs
are not used when the database is running in the primary role, configuring
the standby redo logs on the primary database is recommended in preparation
for an eventual switchover operation.

Standby redo logs must be archived before the data can be applied to the
standby database. The standby archival operation occurs automatically, even if
the standby database is not in ARCHIVELOG mode. However, the archiver process
must be started on the standby database. Note that the use of the archiver
process (ARCn) is a requirement for selection of a standby redo log.

You must have the same number of standby redo logs on the standby as you have
online redo logs on production. They must also be exactly the same size.

The following syntax is used to create standby redo logs:

SQL> alter database add standby logfile
2 '/export/home/oracle/temp/oracle/data/srl_1.dbf' size 20m;

Database altered.
SQL> alter database add standby logfile
2 '/export/home/oracle/temp/oracle/data/srl_2.dbf' size 20m;

Database altered.

SQL> alter database add standby logfile
2 '/export/home/oracle/temp/oracle/data/srl_3.dbf' size 20m;

Database altered.

SQL>

9) Manually change the names of the primary datafiles and redo logs in the
standby control file for all files not automatically renamed using
DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT as noted in step 5. Datafile
names can be changed on the standby at a mounted state with the following
syntax:

SQL> alter database rename file
2 '/export/home/oracle/temp/oracle/data/sys.dbf'
3 to
4 '/export/home/oracle/temp/oracle/data2/sys.dbf';

10) Stop and restart the listeners:

Stop and restart the listener on the primary database, and start the listener
on the standby database, so that changes made to the listener.ora files take
effect.

11) Activate parameter changes:

Manually enable initialization parameter changes on the primary database
so that it can initiate archiving to the standby site.

At runtime, the LOG_ARCHIVE_DEST_n initialization parameter can be changed
using ALTER SYSTEM and ALTER SESSION statements. Activate the changes made to
these parameters by either bouncing the instance or activating via alter system.
For example:

SQL> alter system set log_archive_dest_2='SERVICE=DGD01_hasunclu2 reopen=60';

System altered.

12) Verify that automatic archiving is occurring:

On the Primary database, switch a log file and verify that it has been
shipped properly using the v$archive_dest view.

SQL> alter system switch logfile;

System altered.

SQL> select status, error from v$archive_dest where dest_id=2;

STATUS ERROR
--------- -------------------------------------------------------
VALID

SQL>

13) Optionally place Standby database in managed recovery:

SQL> recover managed standby database disconnect;
Media recovery complete.
SQL> exit
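To confirm that managed recovery is actually running on the standby, the recovery and transport processes can be checked in v$managed_standby (a sketch; run on the standby instance):

```sql
-- MRP0 should appear once managed recovery has been started.
select process, status, sequence#
from   v$managed_standby;
```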

How to automate the calculation of database growth and schedule it with DBMS_JOB

1. Create a table named db_growth, structured as shown below:

Name Null? Type
----------------------------------------- -------- ----------------------------
DAY DATE
DATABASE_SIZE_MB NUMBER
DAILY_GROWTH_MB NUMBER
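A DDL sketch matching the structure described above:

```sql
create table db_growth
( day              date,
  database_size_mb number,
  daily_growth_mb  number
);
```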


2. Create a procedure to record the size and the daily growth:

create or replace PROCEDURE database_growth
AS
today_size NUMBER;
yesterday_size NUMBER;
growth_size NUMBER;
cnt NUMBER;
BEGIN
SELECT sum(bytes)/(1024*1024) INTO today_size FROM SM$TS_USED;
SELECT COUNT(1) INTO cnt FROM db_growth ;
IF cnt > 0
THEN
SELECT database_size_mb INTO yesterday_size FROM db_growth WHERE to_date(day,'dd-mon-yy')=to_date(SYSDATE-1,'dd-mon-yy');
ELSE
yesterday_size:=today_size;
END IF;
growth_size := today_size - yesterday_size;
INSERT INTO db_growth VALUES(sysdate,today_size,growth_size);
EXCEPTION
WHEN no_data_found THEN
INSERT INTO db_growth VALUES(sysdate,today_size,0);
DBMS_OUTPUT.PUT_LINE(SQLERRM);
END;
/

3. Submit it as a job via DBMS_JOB (runs daily at 04:00):

variable jobno number;
begin
dbms_job.submit(
:jobno,
'database_growth ;',
trunc(sysdate+1) + 4/24,
'trunc(sysdate+1) + 4/24'
);
commit;
end;
/
print :jobno

How to change hostname on Linux (Red Hat or Oracle)

This post is for those who wish to change the hostname.

How to find the hostname on Linux/OEL?
Use the command "hostname" to find the host name of the Linux box.

How to change the hostname on Linux/OEL?

1. Open the file /etc/sysconfig/network using an editor like vi, nano, or ed

2. Change the entry HOSTNAME=XXXXXX to HOSTNAME=New_Host_Name

3. Restart the xinetd service or reboot Linux/OEL
Restart the xinetd service: service xinetd restart
Reboot Linux: reboot

4. If you have defined hostname resolution in the hosts file, also change the entry in /etc/hosts
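Step 2 can also be done non-interactively with sed. A sketch, using a hypothetical local copy of the file and made-up host names (on a real system, edit /etc/sysconfig/network as root):

```shell
# Hypothetical copy of /etc/sysconfig/network:
cat > network <<'EOF'
NETWORKING=yes
HOSTNAME=oldhost.example.com
EOF

# Replace the HOSTNAME entry with the new host name:
sed -i 's/^HOSTNAME=.*/HOSTNAME=newhost.example.com/' network

cat network
# NETWORKING=yes
# HOSTNAME=newhost.example.com
```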