Thursday, July 23, 2009

query to find the total database size

The query below returns the total size of an Oracle database in GB, adding up its data files, temp files, and redo logs.

select round((a.data_size + b.temp_size + c.redo_size)/1024/1024/1024, 2) "total_size"
from (select sum(bytes) data_size from dba_data_files) a,
     (select nvl(sum(bytes), 0) temp_size from dba_temp_files) b,
     (select sum(bytes) redo_size from sys.v_$log) c;
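The query above counts data files, temp files, and redo logs. If you also want to include the control files, a sum like the one below can be added to the total; this is a sketch assuming Oracle 10g or later, where v$controlfile exposes the block_size and file_size_blks columns:

select round(sum(block_size*file_size_blks)/1024/1024/1024, 2) "control_size"
from sys.v_$controlfile;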

script for tablespace usage

Below is a script that reports the total, used, and free space (in MB) for every tablespace in a database.

SELECT Total.name "Tablespace Name",
total_space "Total Size(MB)",
nvl(total_space-Free_space, 0) "Used Size(MB)",
nvl(Free_space, 0) "Free Size(MB)"
FROM
(select tablespace_name, sum(bytes/1024/1024) free_space
from sys.dba_free_space
group by tablespace_name
) Free,
(select b.name, sum(bytes/1024/1024) total_space
from sys.v_$datafile a, sys.v_$tablespace B
where a.ts# = b.ts#
group by b.name
) Total
WHERE Free.Tablespace_name(+) = Total.name
ORDER BY Total.name;
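If a percentage figure is also useful to you, the same idea can be written against the standard dba_data_files and dba_free_space views with an extra "Used(%)" column. This is a variant sketch, not part of the original script:

select df.tablespace_name "Tablespace Name",
       round(df.total_mb, 2) "Total Size(MB)",
       round(df.total_mb - nvl(fs.free_mb, 0), 2) "Used Size(MB)",
       round((df.total_mb - nvl(fs.free_mb, 0)) / df.total_mb * 100, 2) "Used(%)"
from (select tablespace_name, sum(bytes)/1024/1024 total_mb
      from dba_data_files
      group by tablespace_name) df,
     (select tablespace_name, sum(bytes)/1024/1024 free_mb
      from dba_free_space
      group by tablespace_name) fs
where fs.tablespace_name(+) = df.tablespace_name
order by df.tablespace_name;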

oracle kernel parameters for Solaris 10

Steps for setting up the Oracle kernel parameters on Solaris 10.

1. Log in as root.

2. To add the project, run the following command:
# projadd user.oracle

3. To view the current values of the resource controls, enter the following commands:

# id -p    // to verify the project id
uid=0(root) gid=0(root) projid=1 (user.root)

4. # prctl -n project.max-shm-memory -i project user.root

5. # prctl -n project.max-sem-ids -i project user.root


If you must change any of the current values, then:
a. To modify the value of max-shm-memory to 6 GB:

# prctl -n project.max-shm-memory -v 6gb -r -i project user.root


b. To modify the value of max-sem-ids to 256:

# prctl -n project.max-sem-ids -v 256 -r -i project user.root


To make these values persistent for the user.oracle project, run the following commands:
# projmod -a -K "project.max-shm-memory=(priv,6gb,deny)" user.oracle
# projmod -a -K "project.max-sem-ids=(priv,256,deny)" user.oracle

-------------------------------------------------------------------------------------------------
The Solaris 10 Operating System introduces the following new resource controls to replace the old /etc/system tunables:

Resource control          Old tunable      Old default  Max value   New default
------------------------  ---------------  -----------  ----------  -----------
process.max-msg-qbytes    msginfo_msgmnb   4096         ULONG_MAX   65536
process.max-msg-messages  msginfo_msgtql   40           UINT_MAX    8192
process.max-sem-ops       seminfo_semopm   10           INT_MAX     512
process.max-sem-nsems     seminfo_semmsl   25           SHRT_MAX    512
project.max-shm-memory    shminfo_shmmax   0x800000     UINT64_MAX  1/4 physmem
project.max-shm-ids       shminfo_shmmni   100          2**24       128
project.max-msg-ids       msginfo_msgmni   50           2**24       128
project.max-sem-ids       seminfo_semmni   10           2**24       128
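As a quick sanity check, prctl can also report the value currently in effect for a running process; here $$ stands for the PID of the current shell:

# prctl -n project.max-shm-memory -i process $$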

Data Pump.....New Feature in Oracle 10g

Data Pump:
Data Pump is a server-side utility, so the dump files are generated on the database server. It is about 5-6 times faster than the traditional export/import utilities.

Steps for configuration of Data Pump:

1. Log in as sysdba.

2. Create a directory at the OS level and a corresponding directory object at the database level:
mkdir d:\datapump
SQL> CREATE DIRECTORY test_dir AS 'd:\datapump';

3. Grant read and write access on the directory to the user who will run Data Pump (the directory created above is test_dir; scott is used as an example user here):
SQL> GRANT READ, WRITE ON DIRECTORY test_dir TO scott;

4. Ensure that the user has the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles. To see a list of all roles assigned to you within your security domain, connect as that user and issue the following command:
SQL> SELECT * FROM SESSION_ROLES;

Now the required setup for Data Pump is done.
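To double-check the setup, the directory object can be verified through the standard DBA_DIRECTORIES dictionary view:

SQL> SELECT directory_name, directory_path FROM dba_directories WHERE directory_name = 'TEST_DIR';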

The following are some of the different options that we can use in Data Pump.

Table Exports/Imports:

The TABLES parameter is used to specify the tables that are to be exported. The following is an example of the table export and import syntax:

expdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=expdpEMP_DEPT.log

impdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=impdpEMP_DEPT.log

The TABLE_EXISTS_ACTION=APPEND parameter allows data to be imported into existing tables.
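For example, to load the exported rows into tables that already exist in the target database, the import above can be rerun with that option (a sketch; TABLE_EXISTS_ACTION also accepts SKIP, TRUNCATE, and REPLACE):

impdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp table_exists_action=APPEND logfile=impdpEMP_DEPT.log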

Schema Exports/Imports:

The OWNER parameter of exp has been replaced by the SCHEMAS parameter which is used to specify the schemas to be exported. The following is an example of the schema export and import syntax:

expdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log

impdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=impdpSCOTT.log

Database Exports/Imports:

The FULL parameter indicates that a complete database export is required. The following is an example of the full database export and import syntax:

expdp system/password@db10g full=Y directory=TEST_DIR dumpfile=DB10G.dmp logfile=expdpDB10G.log

impdp system/password@db10g full=Y directory=TEST_DIR dumpfile=DB10G.dmp logfile=impdpDB10G.log

Data Pump performance can be improved by using the PARALLEL parameter. This should be used in conjunction with the "%U" wildcard in the DUMPFILE parameter to allow multiple dump files to be created or read:

expdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR parallel=4 dumpfile=SCOTT_%U.dmp logfile=expdpSCOTT.log

impdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR parallel=4 dumpfile=SCOTT_%U.dmp logfile=impdpSCOTT.log

The INCLUDE and EXCLUDE parameters can be used to limit the export/import to specific objects. When the INCLUDE parameter is used, only those objects specified by it will be included in the export. When the EXCLUDE parameter is used all objects except those specified by it will be included in the export:

expdp scott/tiger@db10g schemas=SCOTT include=TABLE:"IN ('EMP', 'DEPT')" directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log

expdp scott/tiger@db10g schemas=SCOTT exclude=TABLE:"= 'BONUS'" directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log
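On most operating systems the quotes in INCLUDE/EXCLUDE clauses have to be escaped on the command line, so it is often easier to place these parameters in a parameter file and pass it with PARFILE. A minimal sketch, where the file name exp.par is just an example; it contains:

schemas=SCOTT
include=TABLE:"IN ('EMP', 'DEPT')"
directory=TEST_DIR
dumpfile=SCOTT.dmp
logfile=expdpSCOTT.log

and is run with:

expdp scott/tiger@db10g parfile=exp.par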

Suspending and Resuming Export Jobs (Attaching and Re-Attaching to the Jobs):

You can suspend running export jobs with Data Pump Export and later resume or kill them. You can start a job on one client machine, suspend it, and afterwards resume it either from the same client or from another client machine.
For example, suppose a DBA starts a full database export by typing the following command on client machine CLNT1:

$ expdp scott/tiger@mydb FULL=y DIRECTORY=data_pump_dir DUMPFILE=full.dmp LOGFILE=myfullexp.log JOB_NAME=myfullJob

After some time, the DBA wants to stop this job temporarily, so he presses CTRL+C to enter interactive mode. At the Export> prompt he can type interactive commands; to stop the export job he types the following:
Export> STOP_JOB=IMMEDIATE
Are you sure you wish to stop this job ([y]/n): y
The job is placed in a stopped state and the client exits.

After finishing his other work, the DBA wants to resume the export job, but the client machine from which he originally started it is locked (its user has locked the cabin). So the DBA goes to another client machine and reattaches to the job by typing the following command:
$ expdp scott/tiger@mydb ATTACH=myfulljob

After the job status is displayed, he can issue the CONTINUE_CLIENT command to resume logging mode and restart the myfulljob job.
Export> CONTINUE_CLIENT
A message is displayed that the job has been reopened, and processing status is output to the client.

Note: After reattaching to the job, the DBA can also kill it by typing KILL_JOB if he does not want to continue with the export.

There is also a useful impdp parameter, SQLFILE: if you use it, impdp does not import anything into the database; instead it writes the DDL statements that the import would have executed to a SQL file.
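A minimal sketch reusing the schema dump from the examples above (the file name scott_ddl.sql is just a placeholder):

impdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp sqlfile=scott_ddl.sql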

Tuesday, July 21, 2009

copy command in oracle!!!

Suppose you have to make exact replica of a table from one database to another.....

How will you do it?
The solution that first comes to mind is exp/imp or Data Pump (an Oracle 10g feature), but that can be cumbersome: you have to move the .dmp file from one server to another and take care of other settings such as TNS entries. A simpler alternative is the SQL*Plus COPY command.

Lets understand it through an example
Suppose you want to make exact replica of "emp" table from database A to database B.

Steps:
1. Login on database B.

2. Create structure of emp table in database B same as in database A.

3. Now execute the copy command below from SQL*Plus while connected to database B:
copy from username/password@A insert emp using select * from emp;

Now you have exact replica of table emp in database B also.
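As a side note, COPY also accepts CREATE, APPEND, and REPLACE in place of INSERT. With CREATE, SQL*Plus creates the target table for you, so step 2 can be skipped; a sketch with placeholder credentials:

copy from username/password@A create emp using select * from emp;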

Please let me know if you have any queries.

Sunday, July 19, 2009

How to change the datatype of a column when data exists in that column?

Many of you might have faced this question: how do you change the datatype of a column when the column already contains data?

Here is the solution.
1. Suppose, for example, the table looks like this:
desc emp
id number
name varchar2(50)

Now you want to change the datatype of id to varchar2(20).
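If you try to modify the datatype directly while the column still contains data, Oracle rejects it, which is why the workaround below is needed. The attempt would fail with output like this:

alter table emp modify id varchar2(20);
ERROR at line 1:
ORA-01439: column to be modified must be empty to change datatype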

2. Add a dummy column, say xyz, to the table emp.
alter table emp
add xyz varchar2(20);

3. Copy the data from the id column to the xyz column
update emp
set xyz=id;
Crosscheck the data and commit the transaction.
commit;

4. Update all the rows of the id column to NULL values.
update emp
set id=NULL;
Crosscheck and commit the transaction.
commit;

5. Now the id column has no data, so change its datatype.
alter table emp
modify id varchar2(20);

6. Now copy the data back from xyz column to id column.
update emp
set id=xyz;
Crosscheck the data and commit the transaction.
commit;

7. Now drop the column xyz, as it is no longer needed.
alter table emp
drop column xyz;

Hope this helps.....