About Me

Bangalore, India
I am an Oracle Certified Professional working at SAP Labs. Everything written on my blog has been tested in my local environment; please test before implementing or running it in production. You can contact me at amit.rath0708@gmail.com.

Saturday, August 31, 2013

How to implement Change Data Capture in Oracle

Change Data Capture (CDC) is a feature of Oracle Database that identifies data changes and keeps a record of them, whether the change is an insert, update or delete. The recorded change data is then made available to applications or individuals.

Prior to Change Data Capture, change data could be captured in other ways, such as Table Differencing and Change Value Selection. With Table Differencing, however, the computational cost of generating the change data is very high.

Change Data Capture can capture change data in the following ways :-

1. Synchronous :- In this mode, whenever any DML (insert, update or delete) is performed on the source table, triggers on the source database capture the change data immediately. The capture is part of the transaction, which does not complete until the change data has been captured.

2. Asynchronous :- In this mode, change data is captured after the transaction has completed, i.e. capture is not part of the transaction. Change data is captured with the help of the redo logs.

There are three modes of asynchronous change data capture : Hot Log, Distributed Hot Log and AutoLog. PFB details :-

a. Hot Log :- In this mode, change data is captured from the online redo log files. There is a time lag between the commit of a source table transaction and the arrival of the change data in the change tables. In this mode the change tables have to reside in the source database.

b. Distributed Hot Log :- In Distributed Hot Log mode, the source tables are in the source database and the change tables are in a staging database. Here too, change data is captured from the online redo logs. The source and staging databases can be on different platforms and run different operating systems.

c. AutoLog :- In AutoLog mode, change data is captured from a set of redo log files managed by redo transport services, which control the automatic transfer of redo log files from the source database to the staging database. Change data can be captured from the online logs as well as from the archived logs; these variants are called Asynchronous AutoLog Online and Asynchronous AutoLog Archive.

Change Data Capture has one publisher, who publishes change data from the source tables to the change tables. The applications and individuals that access that change data are called subscribers.

PFB steps to implement Change Data Capture in Asynchronous Hot Log mode in Oracle :-

1. Create a tablespace for Change Data Capture

SQL> create tablespace ts_cdcpub datafile '/amit/TEST/ts_cdcpub01.dbf' size 300m;

Tablespace created.

2. Create a publisher user 

SQL> CREATE USER cdcpub IDENTIFIED by cdcpub DEFAULT TABLESPACE ts_cdcpub QUOTA UNLIMITED ON SYSTEM QUOTA UNLIMITED ON SYSAUX;

User created.

3. Grant privileges to Publisher user :-

 SQL> GRANT CREATE SESSION TO cdcpub;

Grant succeeded.

SQL> GRANT CREATE TABLE TO cdcpub;

Grant succeeded.

SQL> GRANT CREATE TABLESPACE TO cdcpub;

Grant succeeded.

SQL> GRANT UNLIMITED TABLESPACE TO cdcpub;

Grant succeeded.

SQL> GRANT SELECT_CATALOG_ROLE TO cdcpub;

Grant succeeded.

SQL> GRANT EXECUTE_CATALOG_ROLE TO cdcpub;

Grant succeeded.

SQL> GRANT CREATE SEQUENCE TO cdcpub;

Grant succeeded.

SQL> GRANT DBA TO cdcpub;

Grant succeeded.

SQL> GRANT EXECUTE on DBMS_CDC_PUBLISH TO cdcpub;

Grant succeeded.

SQL> EXECUTE DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(GRANTEE => 'cdcpub');

PL/SQL procedure successfully completed.

4. Prepare the source table for Change Data Capture :-

 SQL> BEGIN
               DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(TABLE_NAME => 'amit.testing');
           END;
          /
PL/SQL procedure successfully completed.
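NOTE :- Asynchronous HotLog capture mines the redo stream, so the source database must run in archivelog mode and have supplemental logging enabled (a prerequisite from the CDC documentation). A minimal sketch; the table-level command is illustrative for our source table :-

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

SQL> ALTER TABLE amit.testing ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;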

5. Create a Change set using DBMS_CDC_PUBLISH package

SQL> BEGIN
              DBMS_CDC_PUBLISH.CREATE_CHANGE_SET(
              change_set_name => 'AMIT_TEST',
              description => 'TESTING_PURPOSE',
              change_source_name => 'HOTLOG_SOURCE',
              stop_on_ddl => 'y',
              begin_date => sysdate,
             end_date => sysdate+100);
          END;
         /
PL/SQL procedure successfully completed.

This change set captures changes starting from today and stops capturing after 100 days.
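To verify that the change set was created, the publisher can query the CDC dictionary view (assuming access to ALL_CHANGE_SETS) :-

SQL> SELECT set_name, change_source_name FROM all_change_sets WHERE set_name = 'AMIT_TEST';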

SQL> conn amit/amit
Connected.
SQL> desc testing
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID                                                 NUMBER(12)
 NAME                                               VARCHAR2(23)

6. Create the change table that will contain the changes to the source table. Here the source table is TESTING :-

SQL> conn /as sysdba
Connected.

SQL> BEGIN
        DBMS_CDC_PUBLISH.CREATE_CHANGE_TABLE(
        owner              => 'cdcpub',
        change_table_name  => 'test_ct',
        change_set_name    => 'AMIT_TEST',
        source_schema      => 'AMIT',
        source_table       => 'TESTING',
        column_type_list   => 'ID NUMBER(12), NAME VARCHAR2(23)',
        capture_values     => 'both',
        rs_id              => 'y',
        row_id             => 'n',
        user_id            => 'n',
        timestamp          => 'n',
        object_id          => 'n',
        source_colmap      => 'n',
        target_colmap      => 'y',
        options_string     => 'TABLESPACE TS_CDCPUB');
     END;
     /
PL/SQL procedure successfully completed.

7. Enable the Change set :-

SQL> BEGIN
             DBMS_CDC_PUBLISH.ALTER_CHANGE_SET(
             change_set_name => 'AMIT_TEST',
             enable_capture => 'y');
          END;
         /
 PL/SQL procedure successfully completed.

SQL> grant select on cdcpub.test_ct to amit;

Grant succeeded.

8. Find the source tables for which subscribers have access privileges :-

SQL> conn amit/amit
Connected.
SQL> SELECT * FROM ALL_SOURCE_TABLES;

no rows selected

SQL> conn cdcpub/cdcpub
Connected.

9. Find the change sets and columns for which subscribers have access privileges :-

SQL> SELECT UNIQUE CHANGE_SET_NAME, COLUMN_NAME, PUB_ID
     FROM ALL_PUBLISHED_COLUMNS
     WHERE SOURCE_SCHEMA_NAME = 'AMIT' AND SOURCE_TABLE_NAME = 'TESTING';

CHANGE_SET_NAME                COLUMN_NAME                        PUB_ID
------------------------------ ------------------------------ ----------
AMIT_TEST                      NAME                                75247
AMIT_TEST                      ID                                  75247

10. Create a subscription which can access the change data in the Change tables :-

SQL>     BEGIN
                 DBMS_CDC_SUBSCRIBE.CREATE_SUBSCRIPTION(
                change_set_name   => 'AMIT_TEST',
                description       => 'Change data for test',
                subscription_name => 'SUBS_TEST');
              END;
             /
PL/SQL procedure successfully completed.

11. Subscribe the subscriber to the source table and its columns :-

 SQL>  BEGIN
  2          DBMS_CDC_SUBSCRIBE.SUBSCRIBE(
  3          subscription_name => 'SUBS_TEST',
  4          source_schema     => 'AMIT',
  5          source_table      => 'TESTING',
  6          column_list       => 'ID,NAME',
  7          subscriber_view   => 'SUBS_VIEW');
  8        END;
  9       /
PL/SQL procedure successfully completed.

12. Activate the subscription 

SQL> BEGIN
            DBMS_CDC_SUBSCRIBE.ACTIVATE_SUBSCRIPTION(
           subscription_name => 'SUBS_TEST');
          END;
         /
PL/SQL procedure successfully completed.

13. Extend the subscription window. Whenever data is changed in the source table and captured in the change table, this PL/SQL block brings the next set of change data into the subscription window.

 SQL> BEGIN
             DBMS_CDC_SUBSCRIBE.EXTEND_WINDOW(
             subscription_name => 'SUBS_TEST');
           END;
           /
PL/SQL procedure successfully completed.

14. Check that the changes made to the source table are reflected in the subscription window :-

SQL> SELECT id,name FROM SUBS_VIEW;

no rows selected

SQL> conn amit/amit
Connected.
SQL>
SQL> insert into testing values('1','AMITRATH');

1 row created.

SQL> insert into testing values('2','ORACLE');

1 row created.

SQL> insert into TESTING values('3','CHANGEDATACPATURE');

1 row created.

SQL> commit;

Commit complete.

SQL> conn cdcpub/cdcpub;
Connected.

SQL> SELECT id,name FROM SUBS_VIEW;

no rows selected

SQL> BEGIN
            DBMS_CDC_SUBSCRIBE.EXTEND_WINDOW(
            subscription_name => 'SUBS_TEST');
          END;
          /
 PL/SQL procedure successfully completed.

SQL> select  OPERATION$,to_char(COMMIT_TIMESTAMP$,'dd-mon-yyyy hh24:mi:ss'),ID,NAME from subs_view;

OP TO_CHAR(COMMIT_TIMESTAMP$,'DD         ID NAME
-- ----------------------------- ---------- -----------------------
I  28-aug-2013 10:56:30                   1 AMITRATH
I  28-aug-2013 10:56:30                   2 ORACLE
I  28-aug-2013 10:56:30                   3 CHANGEDATACPATURE

SQL> update testing set name='AMIT_RATH' where id=1;

1 row updated.

SQL> commit;

Commit complete.

SQL>
SQL> conn cdcpub/cdcpub
Connected.

SQL>  BEGIN
             DBMS_CDC_SUBSCRIBE.EXTEND_WINDOW(
             subscription_name => 'SUBS_TEST');
           END;
          /
PL/SQL procedure successfully completed.

SQL> select  OPERATION$,to_char(COMMIT_TIMESTAMP$,'dd-mon-yyyy hh24:mi:ss'),ID,NAME from subs_view;

OP TO_CHAR(COMMIT_TIMESTAMP$,'DD         ID NAME
-- ----------------------------- ---------- --------------------------------------------------
I  28-aug-2013 10:56:30                   1 AMITRATH
I  28-aug-2013 10:56:30                   2 ORACLE
I  28-aug-2013 10:56:30                   3 CHANGEDATACPATURE
UO 28-aug-2013 15:12:55                   1 AMITRATH
UN 28-aug-2013 15:12:55                   1 AMIT_RATH

Whatever changes we make to the source tables are reflected in the change table and listed in the subscription window.
If we insert data into the source table, the details of all inserted records appear in the change table. If we update, both the data before the update (operation UO) and the data after the update (operation UN) are recorded in the change table and reflected in the subscription window.
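A subscriber typically repeats this cycle: EXTEND_WINDOW to bring in new change data, query the subscriber view, then PURGE_WINDOW so the publisher can remove the consumed rows from the change table. A minimal sketch of the purge step :-

SQL> BEGIN
            DBMS_CDC_SUBSCRIBE.PURGE_WINDOW(
            subscription_name => 'SUBS_TEST');
          END;
         /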

I hope this article helped you.

Regards,
Amit Rath

Table_Exists_Action parameter in IMPDP

Sometimes we need to import data into a table that already exists in our database. If we import it normally, the import fails because the table already exists. If we need to preserve the old data and append the new data, we have to use the TABLE_EXISTS_ACTION parameter of the IMPDP utility.

Valid keywords for TABLE_EXISTS_ACTION are [SKIP | APPEND | TRUNCATE | REPLACE]. Details :-

1. SKIP :- This is the default value; it leaves the table as it is and moves on to the next object. This option does not help if we want to add data to an existing table. Since it is the default, we do not have to set it explicitly.

2. APPEND :- We use this value when we want to add data to an existing table. It appends data from the dump file to the table and leaves the existing data unchanged. It only imports the data into the existing table and skips the indexes.

impdp amit/amit directory=datapump dumpfile=expdp_table.dmp remap_schema=amit:test remap_tablespace=amit:test table_exists_action=APPEND &

After importing the table using table_exists_action=APPEND, we have to rebuild the indexes, for example as shown below.
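One way to find and rebuild any indexes left unusable is to generate the statements from the data dictionary. A hedged sketch; TEST is the remapped schema from the example above :-

SQL> select 'alter index '||owner||'.'||index_name||' rebuild;'
     from dba_indexes
     where table_owner = 'TEST' and status = 'UNUSABLE';

Run the generated statements to rebuild the indexes.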

3. TRUNCATE :- This value truncates all the existing data from the table and loads the new data from the dump file. The metadata of the table is left untouched.

impdp amit/amit directory=datapump dumpfile=expdp_table.dmp remap_schema=amit:test remap_tablespace=amit:test table_exists_action=TRUNCATE &

4. REPLACE :- This value internally drops the table, recreates it from the dump file and loads the new data from the dump file. The existing metadata of the table is also dropped and recreated from the dump file.

impdp amit/amit directory=datapump dumpfile=expdp_table.dmp remap_schema=amit:test remap_tablespace=amit:test table_exists_action=REPLACE &

If your dump file does not contain the table metadata, TABLE_EXISTS_ACTION=REPLACE will not import the data from the dump file.

I hope this article helped you.

Regards,
Amit Rath

EXPDP with multiple dump files across multiple directories

Sometimes we have to take a logical backup of our database using the EXPDP utility. Consider a database of 400 GB. If no single filesystem has that much free space, we cannot write the backup to one filesystem; we have to spread it across multiple filesystems.

In such situations we use the multiple-directories feature of the EXPDP utility. Using this feature we can create dump files in multiple directories and distribute our logical backup across them.

Example :-

Consider two directories named DATAPUMP and TESTING in Oracle Database :-

SQL > create or replace directory DATAPUMP as '/amit/datapump';

Directory created.

SQL > create or replace directory TESTING as '/amit/testing';

Directory created.
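If the export runs as a user without DBA privileges, that user also needs read/write access on both directory objects (standard grants, shown here for completeness) :-

SQL> grant read, write on directory DATAPUMP to amit;

SQL> grant read, write on directory TESTING to amit;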

SQL >  select * from dba_directories;

OWNER                     DIRECTORY_NAME                      DIRECTORY_PATH
------------------------- ----------------------------------- ----------------------------------------
SYS                       TESTING                             /amit/testing
SYS                       DATAPUMP                            /amit/datapump

Now take logical backup using multiple directories.

$ expdp amit/amit dumpfile=datapump:expdp_datapump_%U.dmp,TESTING:expdp_testing_%U.dmp compression=all parallel=2 &

We can verify that a dump file was created in each directory.

bash-3.2$ pwd
/amit/datapump
bash-3.2$ ls -ltr|tail
-rw-r--r--    1 oracle   oinstall      89785 Aug 30 19:45 import.log
-rw-r-----    1 oracle   oinstall    7680000 Aug 31 02:34 expdp_datapump_01.dmp
bash-3.2$ cd -
/amit/testing
bash-3.2$ ls -ltr|tail
total 731952
-rw-r--r--    1 oracle   oinstall     185540 May 31 17:40 export.log
-rw-r-----    1 oracle   oinstall  229363712 Aug 31 02:34 expdp_testing_01.dmp
 
The PARALLEL parameter must be used with multiple directories; otherwise expdp will not fail, but it will write to the first directory only. The number of directories should match the PARALLEL value; only then will all directories be used for writing.

%U is used to create multiple files in the same directory. For example, expdp_datapump_%U.dmp will create expdp_datapump_01.dmp, expdp_datapump_02.dmp and so on. If we want expdp to write to multiple %U files at the same time, we have to use the PARALLEL parameter.

We can also set the FILESIZE parameter with EXPDP; no dump file will then grow beyond the allocated size.

$expdp amit/amit dumpfile=datapump:expdp_datapump_%U.dmp,TESTING:expdp_testing_%U.dmp compression=all parallel=2 filesize=2g &

The above example will create dump files like expdp_datapump_01.dmp, expdp_testing_01.dmp, expdp_datapump_02.dmp and expdp_testing_02.dmp, each with a maximum size of 2 GB.

I hope this article helped you.

Regards,
Amit Rath

Tuesday, August 27, 2013

Top SQLs in Oracle Database session

Finding the top resource-consuming SQL statements is a key step when working on database performance. We can find these SQLs in an AWR report, but we should still know which views to query to find them ourselves. Note that ROWNUM is evaluated before ORDER BY, so each query below sorts in a subquery first and applies ROWNUM < 11 on the outside. PFB the SQLs :-
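For reference, the AWR report mentioned above can be generated with the standard script shipped with the database; it prompts for the snapshot range and report format :-

SQL> @?/rdbms/admin/awrrpt.sql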

1. TOP 10 SQL STATEMENTS WITH LARGE NO. OF DISK READS :-

set lin 400
column "SQL_TEXT" format a100
column EXECUTIONS format 9999999
column DISK_READS format 999999999
column USER FORMAT a10
select * from (select substr(sql_text,0,100) as "SQL_TEXT",PARSING_SCHEMA_NAME as "USER",executions,disk_reads from v$sqlarea where decode(executions,0,disk_reads,disk_reads/executions)
> (select avg(decode(executions,0,disk_reads,disk_reads/executions)) + stddev(decode(executions,0,disk_reads,disk_reads/executions)) from v$sqlarea)
order by disk_reads desc) where ROWNUM < 11
/

2. TOP 10 SQL STATEMENTS WITH FULL TABLE SCANS :-

column "SQL_TEXT" format a100
column OPERATION format a12
column OPTIONS format a12
column USER FORMAT a10
select * from (select substr(t.SQL_TEXT,0,100) as "SQL_TEXT",t.PARSING_SCHEMA_NAME as "USER",p.operation,p.options from v$sqlarea t, v$sql_plan p where t.hash_value=p.hash_value and p.operation='TABLE ACCESS'
and p.options='FULL' and p.object_owner not in ('SYS','SYSTEM') order by t.DISK_READS DESC, t.EXECUTIONS DESC) where ROWNUM < 11
/

3. TOP 10 SQL STATEMENTS WITH MOST CPU UTILIZATION :-

column "SQL_TEXT" format a100
column EXECUTIONS format 9999999
column CPU_TIME format 999999999
column USER FORMAT a10
select * from (select substr(t.SQL_TEXT,0,100) as "SQL_TEXT",t.PARSING_SCHEMA_NAME as "USER",t.EXECUTIONS,t.CPU_TIME from v$sqlarea t order by CPU_TIME DESC,EXECUTIONS DESC) where ROWNUM < 11
/

4. TOP 10 SQL STATEMENTS WITH MOST BUFFER GETS :-

column SQL_TEXT format a100
column EXECUTIONS format 9999999
column BUFFER_GETS format 999999999
column USER FORMAT a10
select * from (select substr(t.SQL_TEXT,0,100) as "SQL_TEXT",t.PARSING_SCHEMA_NAME as "USER",t.EXECUTIONS,t.BUFFER_GETS from v$sqlarea t order by BUFFER_GETS DESC,EXECUTIONS DESC) where ROWNUM < 11
/

5. TOP 10 SQL STATEMENTS WITH MOST NO. OF EXECUTIONS :-


column SQL_TEXT format a100
column EXECUTIONS format 9999999
column USER FORMAT a10
select * from (select substr(t.SQL_TEXT,0,100) as "SQL_TEXT",t.PARSING_SCHEMA_NAME as "USER",t.EXECUTIONS from v$sqlarea t order by EXECUTIONS DESC) where ROWNUM < 11
/

6. TOP 10 SQL STATEMENTS WITH MOST NO. OF SORTS :-

column SQL_TEXT format a100
column EXECUTIONS format 9999999
column SORTS format 99999999
column USER FORMAT a10
select * from (select substr(t.SQL_TEXT,0,100) as "SQL_TEXT",t.PARSING_SCHEMA_NAME as "USER",t.EXECUTIONS,t.SORTS from v$sqlarea t order by SORTS DESC,EXECUTIONS DESC) where ROWNUM < 11
/

7. TOP 10 SQL STATEMENTS WITH MOST SHARABLE MEMORY :-


column SQL_TEXT format a100
column EXECUTIONS format 9999999
column SHARABLE_MEM format 99999999
column USER FORMAT a10
select * from (select substr(t.SQL_TEXT,0,100) as "SQL_TEXT",t.PARSING_SCHEMA_NAME as "USER",t.EXECUTIONS,t.SHARABLE_MEM from v$sqlarea t order by SHARABLE_MEM DESC,EXECUTIONS DESC) where ROWNUM < 11
/

8. TOP 10 SQL STATEMENTS WITH MOST NO. OF PARSE CALLS :-

column SQL_TEXT format a100
column EXECUTIONS format 9999999
column PARSE_CALLS format 99999999
column USER FORMAT a10
select * from (select substr(t.SQL_TEXT,0,100) as "SQL_TEXT",t.PARSING_SCHEMA_NAME as "USER",t.EXECUTIONS,t.PARSE_CALLS from v$sqlarea t order by PARSE_CALLS DESC,EXECUTIONS DESC) where ROWNUM < 11
/

I hope this article helped you.

Regards,
Amit Rath

Wednesday, August 21, 2013

Local Listener vs Remote Listener in Oracle RAC

The concept of a Local Listener and a Remote Listener comes with an Oracle RAC database, where a single database has multiple instances. PFB an example to understand the concept.

Consider a two node RAC database. It has following components :-

1. Two nodes, named NODE1 and NODE2.
2. Two listeners, LISTENER1 on NODE1 and LISTENER2 on NODE2.
3. Two database instances, ORCL1 on NODE1 and ORCL2 on NODE2; the database name is ORCL.

Now Listener1 runs on Node1, where database instance Orcl1 is present. Listener1 is considered the local listener for the Orcl1 instance because it is on the same node where Orcl1 is running, and Listener2 is considered the remote listener for Orcl1 because it runs on a different node than Orcl1.

Similarly, Listener2 is considered the local listener for the Orcl2 instance because it is on the same node where Orcl2 is running, and Listener1 is considered the remote listener for Orcl2 because it runs on a different node than Orcl2.

The purpose of the remote listener is to connect all instances with all listeners, so the instances can propagate their load-balancing advisories to every listener. A listener uses the advisories to decide which instance should service a client request. If the advisories tell a listener that its local instance is the least loaded, the listener passes the client request to the local instance. If the local instance is overloaded, the listener can use a TNS redirect to send the client request to a less loaded instance, i.e. a remote instance. This phenomenon is also called Server-Side Load Balancing.

PFB the Local and Remote Listener configuration for both instances (a concrete sketch follows the Node2 settings) :-

Node1 :-

orcl1.local_listener=(address of listener_host1)
orcl1.remote_listener=(addresses of both listener_host1 and listener_host2)

Node2 :-

orcl2.local_listener=(address of listener_host2)
orcl2.remote_listener=(addresses of both listener_host1 and listener_host2)
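For example, on Node1 these parameters could be set as follows. This is a minimal sketch: node1-vip, node2-vip and port 1521 are hypothetical values, adjust them to your environment :-

SQL> alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=node1-vip)(PORT=1521))' scope=both sid='orcl1';

SQL> alter system set remote_listener='(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=node1-vip)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=node2-vip)(PORT=1521)))' scope=both sid='orcl1';

Run the equivalent statements on Node2 with sid='orcl2' and the Node2 listener address as local_listener.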

Without the remote listener configuration, each listener knows only about its local instance and has no knowledge of the load on the remote instance. The listener will direct client requests to the local instance regardless of its load. Without this configuration we are left with only Client-Side Load Balancing.

I hope this article helped you.

Regards,
Amit Rath

Monday, August 19, 2013

How to insert a LOB (pdf, doc, jpeg etc.) in an Oracle Database

Generally LOBs (BLOBs, CLOBs) are not inserted into a table with a normal INSERT statement. To insert a LOB object we have to follow certain steps. PFB the steps to insert a LOB object into an Oracle table :-

1. Create a Table having a LOB column.

SQL> create table LOB_INSERT
     (
        LOB_id     number(12),
        LOB_name   varchar2(80),
        file_size  number,
        LOB_file   blob
     );

Table created.

2. Create a directory object so that Oracle can access files in that filesystem directory.

SQL> create or replace directory LOB as '/home/oracle';

Directory created.

NOTE :- The /home/oracle directory has to physically exist in the filesystem.

3. Now copy the files you want to insert into the LOB_INSERT table into the /home/oracle directory, then run the script below to insert the LOB objects.

SQL> declare
         l_size      number;
         l_file_ptr  bfile;
         l_blob      blob;
     begin
         -- point to the source file via the LOB directory object
         l_file_ptr := bfilename('LOB', 'health_check.html');
         dbms_lob.fileopen(l_file_ptr);
         l_size := dbms_lob.getlength(l_file_ptr);
         for ctr in 1 .. 3 loop
             -- insert a row with an empty BLOB locator
             insert into LOB_INSERT
             (
                 LOB_id,
                 LOB_name,
                 file_size,
                 LOB_file
             )
             values
             (
                 ctr,
                 'filenumber '||ctr,
                 null,          -- left null here; l_size could be stored instead
                 empty_blob()
             )
             returning LOB_file into l_blob;
             -- load the file contents into the BLOB through the returned locator
             dbms_lob.loadfromfile(l_blob, l_file_ptr, l_size);
         end loop;
         commit;
         dbms_lob.fileclose(l_file_ptr);
     end;
     /

PL/SQL procedure successfully completed.

This block inserts three records into the LOB_INSERT table, each loading the health_check.html file.

SQL>  select * from LOB_INSERT;

    LOB_ID LOB_NAME                        FILE_SIZE LOB_FILE
---------- ------------------------------ ---------- ----------------------------------------
         1 filenumber 1                              3C68746D6C3E0A3C686561643E0A3C6D65746120
                                                     687474702D65717569763D22436F6E74656E742D
                                                     547970652220636F6E74656E743D22746578742F
                                                     68746D6C3B20636861727365743D5554462D3822

         2 filenumber 2                              3C68746D6C3E0A3C686561643E0A3C6D65746120
                                                     687474702D65717569763D22436F6E74656E742D
                                                     547970652220636F6E74656E743D22746578742F
                                                     68746D6C3B20636861727365743D5554462D3822

         3 filenumber 3                              3C68746D6C3E0A3C686561643E0A3C6D65746120

    LOB_ID LOB_NAME                        FILE_SIZE LOB_FILE
---------- ------------------------------ ---------- ----------------------------------------
                                                     687474702D65717569763D22436F6E74656E742D
                                                     547970652220636F6E74656E743D22746578742F
                                                     68746D6C3B20636861727365743D5554462D3822

This way we can insert LOB files into an Oracle table.
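To verify the loads, the stored LOB sizes can be read back with DBMS_LOB.GETLENGTH and compared with the size of the source file :-

SQL> select LOB_id, LOB_name, dbms_lob.getlength(LOB_file) as lob_bytes from LOB_INSERT;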

I hope this article helped you.

Regards,
Amit Rath

Friday, August 9, 2013

Difference between Physical Standby and Logical Standby Database.

Standby databases act as a failover for our production databases. If the production database crashes, running applications can quickly switch over to the standby database.

There are two types of standby database: Physical Standby and Logical Standby.

1. Physical Standby :-  

  a. The physical standby's schema structure matches the primary database exactly.

  b. Archived redo logs are moved directly from the primary database to the physical standby, which runs in recovery mode. As archived logs arrive, they are applied directly to the standby database.

2. Logical Standby :-

  a. The logical standby's schema structure does not have to match the primary database; it can have a different structure.

  b. A logical standby uses LogMiner technology to transform the redo data from the primary into native DML statements, i.e. insert, update and delete. These DMLs are then applied to the logical standby.

  c. A logical standby can have additional indexes and materialized views that the primary does not have, which can improve query performance.

  d. In a logical standby, the maintained standby tables are read-only, while the rest of the database is open for updates.

Advantages of having Physical Standby Database :-

1. Identical copy of primary database.
2. Disaster Recovery, High availability and High Data Protection.
3. Reduces the primary database workload, as the standby can be opened in read-only mode for querying.

Advantages of having Logical Standby Database :-

1. Can be used simultaneously for querying, reporting and summation.
2. Reduction in primary database workload.

Note :- A logical standby has some limitations: certain datatypes are not supported.
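To check which role a given database currently has (primary, physical standby or logical standby), query V$DATABASE :-

SQL> select database_role, open_mode from v$database;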

I hope this article helped you.

Regards,
Amit Rath

Wednesday, August 7, 2013

How to perform Tablespace point in time recovery (TSPITR)

TSPITR stands for Tablespace Point-In-Time Recovery. Unlike DBPITR, it rewinds only the specified tablespace to an earlier time, leaving the rest of the database at the current time. All changes made to the tablespace after the recovery time will be lost.

While doing TSPITR you have to be very cautious if you are using the control file rather than a recovery catalog: once you have recovered your tablespace to an earlier time, if that time does not fulfil your requirement, there is no second chance. All backups related to that tablespace become unusable after TSPITR when a control file is used instead of a recovery catalog.

PFB an example of recovering a table with fully automated TSPITR after a truncate operation :-

Prerequisites :-

1. To recover to a particular time, a valid backup taken before that time must be available.

2. Database has to be in Archive Log mode
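To confirm the second prerequisite, check the log mode of the database :-

SQL> archive log list;

SQL> select log_mode from v$database;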

C:\Users\NewAdmin>sqlplus

SQL*Plus: Release 11.2.0.1.0 Production on Tue Aug 6 23:43:01 2013

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Enter user-name: /as sysdba

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> conn amit/amit
Connected.
SQL> create table amit
  2  (
  3  id number(12),
  4  name varchar2(12));

Table created.

SQL> insert into amit values ('&id','&name');
Enter value for id: 3242
Enter value for name: ffsdf
old   1: insert into amit values ('&id','&name')
new   1: insert into amit values ('3242','ffsdf')

1 row created.
.........

Insert a few more rows into the table in the same way.

SQL> commit;

Commit complete.

SQL> select * from amit;

        ID NAME
---------- ------------
      3242 ffsdf
      4324 fdfdf
    324324 ssdfdsf
     43242 fsfdf/
     43242 ffewtew
   4324234 fsfegg

6 rows selected.

SQL> alter system switch logfile;

System altered.
SQL> select systimestamp from dual;

SYSTIMESTAMP
---------------------------------------------------------------------------
06-AUG-13 11.46.38.945000 PM +05:30

SQL> truncate table amit;

Table truncated.

SQL> commit;

Commit complete.

SQL> alter system switch logfile;

System altered.

SQL> select systimestamp from dual;

SYSTIMESTAMP
---------------------------------------------------------------------------
06-AUG-13 11.47.05.185000 PM +05:30

SQL>
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

C:\Users\NewAdmin>rman target /

Recovery Manager: Release 11.2.0.1.0 - Production on Tue Aug 6 23:47:21 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to target database: TEST (DBID=2120650031)

RMAN> recover tablespace amit until time "to_date('06-AUG-13 23:46:38','dd-mon-yy hh24:mi:ss')" auxiliary destination 'D:\backup';

Starting recover at 06-AUG-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=198 device type=DISK
RMAN-05026: WARNING: presuming following set of tablespaces applies to specified point-in-time

List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace UNDOTBS1

Creating automatic instance, with SID='ADAF'

initialization parameters used for automatic instance:
db_name=TEST
db_unique_name=ADAF_tspitr_TEST
compatible=11.2.0.0.0
db_block_size=8192
db_files=200
sga_target=280M
processes=50
db_create_file_dest=D:\backup
log_archive_dest_1='location=D:\backup'
#No auxiliary parameter file used

starting up automatic instance TEST

Oracle instance started

Total System Global Area     292278272 bytes

Fixed Size                     2175128 bytes
Variable Size                100667240 bytes
Database Buffers             184549376 bytes
Redo Buffers                   4886528 bytes
Automatic instance created
Running TRANSPORT_SET_CHECK on recovery set tablespaces
TRANSPORT_SET_CHECK completed successfully

contents of Memory Script:
{
# set requested point in time
set until  time "to_date('06-AUG-13 23:46:38','dd-mon-yy hh24:mi:ss')";
# restore the controlfile
restore clone controlfile;
# mount the controlfile
sql clone 'alter database mount clone database';
# archive current online log
sql 'alter system archive log current';
# avoid unnecessary autobackups for structural changes during TSPITR
sql 'begin dbms_backup_restore.AutoBackupFlag(FALSE); end;';
}
executing Memory Script

executing command: SET until clause

Starting restore at 06-AUG-13
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=81 device type=DISK

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece G:\APP\NEWADMIN\FLASH_RECOVERY_AREA\TEST\AUTOBACKUP\2013_08_06\O1_MF_S_822784043_902F2NHM_.BKP
channel ORA_AUX_DISK_1: piece handle=G:\APP\NEWADMIN\FLASH_RECOVERY_AREA\TEST\AUTOBACKUP\2013_08_06\O1_MF_S_822784043_902F2NHM_.BKP tag=TAG20130806T230723
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
output file name=D:\BACKUP\TEST\CONTROLFILE\O1_MF_902HTHRJ_.CTL
Finished restore at 06-AUG-13

sql statement: alter database mount clone database

sql statement: alter system archive log current

sql statement: begin dbms_backup_restore.AutoBackupFlag(FALSE); end;

contents of Memory Script:
{
# set requested point in time
set until  time "to_date('06-AUG-13 23:46:38','dd-mon-yy hh24:mi:ss')";
plsql <<<-- tspitr_2
declare
  sqlstatement       varchar2(512);
  offline_not_needed exception;
  pragma exception_init(offline_not_needed, -01539);
begin
  sqlstatement := 'alter tablespace '||  'AMIT' ||' offline immediate';
  krmicd.writeMsg(6162, sqlstatement);
  krmicd.execSql(sqlstatement);
exception
  when offline_not_needed then
    null;
end; >>>;
# set destinations for recovery set and auxiliary set datafiles
set newname for clone datafile  1 to new;
set newname for clone datafile  3 to new;
set newname for clone datafile  2 to new;
set newname for clone tempfile  1 to new;
set newname for datafile  5 to
 "G:\APP\NEWADMIN\ORADATA\TEST\DATAFILE\AMIT01.DBF";
# switch all tempfiles
switch clone tempfile all;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile  1, 3, 2, 5;
switch clone datafile all;
}
executing Memory Script

executing command: SET until clause

sql statement: alter tablespace AMIT offline immediate

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

renamed tempfile 1 to D:\BACKUP\TEST\DATAFILE\O1_MF_TEMP_%U_.TMP in control file

Starting restore at 06-AUG-13
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to D:\BACKUP\TEST\DATAFILE\O1_MF_SYSTEM_%U_.DBF
channel ORA_AUX_DISK_1: restoring datafile 00003 to D:\BACKUP\TEST\DATAFILE\O1_MF_UNDOTBS1_%U_.DBF
channel ORA_AUX_DISK_1: restoring datafile 00002 to D:\BACKUP\TEST\DATAFILE\O1_MF_SYSAUX_%U_.DBF
channel ORA_AUX_DISK_1: restoring datafile 00005 to G:\APP\NEWADMIN\ORADATA\TEST\DATAFILE\AMIT01.DBF
channel ORA_AUX_DISK_1: reading from backup piece G:\APP\NEWADMIN\FLASH_RECOVERY_AREA\TEST\BACKUPSET\2013_08_06\O1_MF_NNNDF_TAG20130806T230534_902DZ6YP_.BKP
channel ORA_AUX_DISK_1: piece handle=G:\APP\NEWADMIN\FLASH_RECOVERY_AREA\TEST\BACKUPSET\2013_08_06\O1_MF_NNNDF_TAG20130806T230534_902DZ6YP_.BKP tag=TAG20130806T23053
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:02:25
Finished restore at 06-AUG-13

datafile 1 switched to datafile copy
input datafile copy RECID=4 STAMP=822787012 file name=D:\BACKUP\TEST\DATAFILE\O1_MF_SYSTEM_902HTTV8_.DBF
datafile 3 switched to datafile copy
input datafile copy RECID=5 STAMP=822787012 file name=D:\BACKUP\TEST\DATAFILE\O1_MF_UNDOTBS1_902HTV2W_.DBF
datafile 2 switched to datafile copy
input datafile copy RECID=6 STAMP=822787012 file name=D:\BACKUP\TEST\DATAFILE\O1_MF_SYSAUX_902HTTX3_.DBF

contents of Memory Script:
{
# set requested point in time
set until  time "to_date('06-AUG-13 23:46:38','dd-mon-yy hh24:mi:ss')";
# online the datafiles restored or switched
sql clone "alter database datafile  1 online";
sql clone "alter database datafile  3 online";
sql clone "alter database datafile  2 online";
sql clone "alter database datafile  5 online";
# recover and open resetlogs
recover clone database tablespace  "AMIT", "SYSTEM", "UNDOTBS1", "SYSAUX" delete archivelog;
alter clone database open resetlogs;
}
executing Memory Script

executing command: SET until clause

sql statement: alter database datafile  1 online

sql statement: alter database datafile  3 online

sql statement: alter database datafile  2 online

sql statement: alter database datafile  5 online

Starting recover at 06-AUG-13
using channel ORA_AUX_DISK_1

starting media recovery

archived log for thread 1 with sequence 12 is already on disk as file G:\APP\NEWADMIN\FLASH_RECOVERY_AREA\TEST\ARCHIVELOG\2013_08_06\O1_MF_1_12_902F2JG8_.ARC
archived log for thread 1 with sequence 13 is already on disk as file G:\APP\NEWADMIN\FLASH_RECOVERY_AREA\TEST\ARCHIVELOG\2013_08_06\O1_MF_1_13_902HC6KM_.ARC
archived log for thread 1 with sequence 14 is already on disk as file G:\APP\NEWADMIN\FLASH_RECOVERY_AREA\TEST\ARCHIVELOG\2013_08_06\O1_MF_1_14_902HC88F_.ARC
archived log for thread 1 with sequence 15 is already on disk as file G:\APP\NEWADMIN\FLASH_RECOVERY_AREA\TEST\ARCHIVELOG\2013_08_06\O1_MF_1_15_902HCFM2_.ARC
archived log for thread 1 with sequence 16 is already on disk as file G:\APP\NEWADMIN\FLASH_RECOVERY_AREA\TEST\ARCHIVELOG\2013_08_06\O1_MF_1_16_902HDWNH_.ARC
archived log file name=G:\APP\NEWADMIN\FLASH_RECOVERY_AREA\TEST\ARCHIVELOG\2013_08_06\O1_MF_1_12_902F2JG8_.ARC thread=1 sequence=12
archived log file name=G:\APP\NEWADMIN\FLASH_RECOVERY_AREA\TEST\ARCHIVELOG\2013_08_06\O1_MF_1_13_902HC6KM_.ARC thread=1 sequence=13
archived log file name=G:\APP\NEWADMIN\FLASH_RECOVERY_AREA\TEST\ARCHIVELOG\2013_08_06\O1_MF_1_14_902HC88F_.ARC thread=1 sequence=14
archived log file name=G:\APP\NEWADMIN\FLASH_RECOVERY_AREA\TEST\ARCHIVELOG\2013_08_06\O1_MF_1_15_902HCFM2_.ARC thread=1 sequence=15
archived log file name=G:\APP\NEWADMIN\FLASH_RECOVERY_AREA\TEST\ARCHIVELOG\2013_08_06\O1_MF_1_16_902HDWNH_.ARC thread=1 sequence=16
media recovery complete, elapsed time: 00:00:06
Finished recover at 06-AUG-13

database opened

contents of Memory Script:
{
# make read only the tablespace that will be exported
sql clone 'alter tablespace  AMIT read only';
# create directory for datapump import
sql "create or replace directory TSPITR_DIROBJ_DPDIR as ''
D:\backup''";
# create directory for datapump export
sql clone "create or replace directory TSPITR_DIROBJ_DPDIR as ''
D:\backup''";
}
executing Memory Script

sql statement: alter tablespace  AMIT read only

sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''D:\backup''

sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''D:\backup''

Performing export of metadata...
   EXPDP> Starting "SYS"."TSPITR_EXP_ADAF":
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
   EXPDP> Master table "SYS"."TSPITR_EXP_ADAF" successfully loaded/unloaded
   EXPDP> ******************************************************************************
   EXPDP> Dump file set for SYS.TSPITR_EXP_ADAF is:
   EXPDP>   D:\BACKUP\TSPITR_ADAF_54021.DMP
   EXPDP> ******************************************************************************
   EXPDP> Datafiles required for transportable tablespace AMIT:
   EXPDP>   G:\APP\NEWADMIN\ORADATA\TEST\DATAFILE\AMIT01.DBF
   EXPDP> Job "SYS"."TSPITR_EXP_ADAF" successfully completed at 23:59:45
Export completed

contents of Memory Script:
{
# shutdown clone before import
shutdown clone immediate
# drop target tablespaces before importing them back
sql 'drop tablespace  AMIT including contents keep datafiles';
}
executing Memory Script

database closed
database dismounted
Oracle instance shut down

sql statement: drop tablespace  AMIT including contents keep datafiles

Performing import of metadata...
   IMPDP> Master table "SYS"."TSPITR_IMP_ADAF" successfully loaded/unloaded
   IMPDP> Starting "SYS"."TSPITR_IMP_ADAF":
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
   IMPDP> Job "SYS"."TSPITR_IMP_ADAF" successfully completed at 00:00:17
Import completed

contents of Memory Script:
{
# make read write and offline the imported tablespaces
sql 'alter tablespace  AMIT read write';
sql 'alter tablespace  AMIT offline';
# enable autobackups after TSPITR is finished
sql 'begin dbms_backup_restore.AutoBackupFlag(TRUE); end;';
}
executing Memory Script

sql statement: alter tablespace  AMIT read write

sql statement: alter tablespace  AMIT offline

sql statement: begin dbms_backup_restore.AutoBackupFlag(TRUE); end;

Removing automatic instance
Automatic instance removed
auxiliary instance file D:\BACKUP\TEST\DATAFILE\O1_MF_TEMP_902J05F7_.TMP deleted
auxiliary instance file D:\BACKUP\TEST\ONLINELOG\O1_MF_3_902HZY45_.LOG deleted
auxiliary instance file D:\BACKUP\TEST\ONLINELOG\O1_MF_2_902HZVTN_.LOG deleted
auxiliary instance file D:\BACKUP\TEST\ONLINELOG\O1_MF_1_902HZSHD_.LOG deleted
auxiliary instance file D:\BACKUP\TEST\DATAFILE\O1_MF_SYSAUX_902HTTX3_.DBF deleted
auxiliary instance file D:\BACKUP\TEST\DATAFILE\O1_MF_UNDOTBS1_902HTV2W_.DBF deleted
auxiliary instance file D:\BACKUP\TEST\DATAFILE\O1_MF_SYSTEM_902HTTV8_.DBF deleted
auxiliary instance file D:\BACKUP\TEST\CONTROLFILE\O1_MF_902HTHRJ_.CTL deleted
Finished recover at 07-AUG-13

RMAN> exit

Recovery Manager complete.

C:\Users\NewAdmin>sqlplus

SQL*Plus: Release 11.2.0.1.0 Production on Wed Aug 7 00:05:18 2013

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Enter user-name: /as sysdba

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> conn amit/amit
Connected.

SQL> select count(1) from amit;
select count(1) from amit
                     *
ERROR at line 1:
ORA-00376: file 5 cannot be read at this time
ORA-01110: data file 5: 'G:\APP\NEWADMIN\ORADATA\TEST\DATAFILE\AMIT01.DBF'

############## After TSPITR you have to bring your tablespace online manually ##############

SQL> alter tablespace amit online;

Tablespace altered.

SQL> select * from amit;

        ID NAME
---------- ------------
      3242 ffsdf
      4324 fdfdf
    324324 ssdfdsf
     43242 fsfdf/
     43242 ffewtew
   4324234 fsfegg

6 rows selected.

Table has been successfully recovered using TSPITR. 

Please note :- Use TSPITR for recovery in a production database only if you have hands-on experience with it.

If possible, always prefer the Flashback feature over TSPITR.

Related Articles :-
How to perform DBPITR

I hope this article helped you.

Regards,
Amit Rath

Sunday, August 4, 2013

How to perform Database Point in Time Recovery(DBPITR)

DBPITR stands for Database Point-In-Time Recovery. When we want to rewind our entire database to a previous time, we can use DBPITR.

When we have made unwanted changes in our database, we can perform DBPITR to undo them.

Only perform DBPITR when you are not able to undo the unwanted change with Flashback technology.
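For comparison, if Flashback Database was enabled before the unwanted change, the same rewind can be done with a single RMAN command. A hedged sketch; the timestamp is illustrative and the database must be mounted :-

RMAN> flashback database to time "to_date('04-AUG-13 02:45:00','dd-mon-yy hh24:mi:ss')";

RMAN> alter database open resetlogs;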

Disadvantages :-

1. It is a time-consuming process, as the whole database has to be restored from backup and recovered to an earlier time using archived logs and incremental backups.

2. Unlike TSPITR, in which only the objects of a particular tablespace are inaccessible, the whole database is unavailable during the entire process.

3. All changes made after the time, SCN or log sequence used for DBPITR will be lost.

Requirements :-

1. Database has to be in archive log mode.
2. A valid full database backup is needed, along with all archived logs or incremental backups taken after that backup.

##NOTE :- When we have to undo changes that are present in a different incarnation, DBPITR has to be used, because we cannot rewind particular objects to a different incarnation; we have to rewind the whole database.
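You can check the existing incarnations of the database from RMAN; every OPEN RESETLOGS creates a new incarnation :-

RMAN> list incarnation;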

Example :-

Consider that we create tables named AMIT and AMIT_BCKUP and insert some records into them. Now we accidentally drop both tables and want to rewind our database to a time before the drop operation. PFB the steps :-

SQL> select table_name from user_tables where table_name like 'AM%';

TABLE_NAME
------------------------------
AMIT
AMIT_BCKUP

SQL> drop table AMIT;

Table dropped.

SQL> drop table AMIT_BCKUP;

Table dropped.

SQL> shu immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 1503199232 bytes
Fixed Size                  2175928 bytes
Variable Size             872418376 bytes
Database Buffers          620756992 bytes
Redo Buffers                7847936 bytes
Database mounted.
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

C:\Users\NewAdmin>rman target /

Recovery Manager: Release 11.2.0.1.0 - Production on Sun Aug 4 02:52:36 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to target database: TEST (DBID=2120650031, not open)

RMAN> run
2> {
3> set until time "sysdate-1/24/60*6";   ########### We rewind our database to 6 minutes earlier ###########
#set until scn 10023;
#set until sequence 345 thread 2;
#set until restore point AMIT_RESTORE; #######AMIT_RESTORE point already created##########
4> restore database;
5> recover database;
6> }

executing command: SET until clause

Starting restore at 04-AUG-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=63 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to G:\APP\NEWADMIN\ORADATA\TEST\DATAFILE\O1_MF_SYSTEM_8ZSV29QW_.DBF
channel ORA_DISK_1: restoring datafile 00002 to G:\APP\NEWADMIN\ORADATA\TEST\DATAFILE\O1_MF_SYSAUX_8ZSV29Y3_.DBF
channel ORA_DISK_1: restoring datafile 00003 to G:\APP\NEWADMIN\ORADATA\TEST\DATAFILE\O1_MF_UNDOTBS1_8ZSV2B0R_.DBF
channel ORA_DISK_1: restoring datafile 00004 to G:\APP\NEWADMIN\ORADATA\TEST\DATAFILE\O1_MF_USERS_8ZSV2B72_.DBF
channel ORA_DISK_1: reading from backup piece G:\APP\NEWADMIN\FLASH_RECOVERY_AREA\TEST\BACKUPSET\2013_08_03\O1_MF_NNNDF_TAG20130803T191402_8ZT292Z5_.BKP
channel ORA_DISK_1: piece handle=G:\APP\NEWADMIN\FLASH_RECOVERY_AREA\TEST\BACKUPSET\2013_08_03\O1_MF_NNNDF_TAG20130803T191402_8ZT292Z5_.BKP tag=TAG20130803T191402
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:01:56
Finished restore at 04-AUG-13

Starting recover at 04-AUG-13
using channel ORA_DISK_1

starting media recovery

archived log for thread 1 with sequence 7 is already on disk as file G:\APP\NEWADMIN\FLASH_RECOVERY_AREA\TEST\ARCHIVELOG\2013_08_03\O1_MF_1_7_8ZT2C6SZ_.ARC
archived log for thread 1 with sequence 8 is already on disk as file G:\APP\NEWADMIN\FLASH_RECOVERY_AREA\TEST\ARCHIVELOG\2013_08_03\O1_MF_1_8_8ZT2GG5P_.ARC
archived log for thread 1 with sequence 1 is already on disk as file G:\APP\NEWADMIN\FLASH_RECOVERY_AREA\TEST\ARCHIVELOG\2013_08_04\O1_MF_1_1_8ZTWR0B7_.ARC
archived log for thread 1 with sequence 2 is already on disk as file G:\APP\NEWADMIN\FLASH_RECOVERY_AREA\TEST\ARCHIVELOG\2013_08_04\O1_MF_1_2_8ZTX0MNK_.ARC
archived log file name=G:\APP\NEWADMIN\FLASH_RECOVERY_AREA\TEST\ARCHIVELOG\2013_08_03\O1_MF_1_7_8ZT2C6SZ_.ARC thread=1 sequence=7
archived log file name=G:\APP\NEWADMIN\FLASH_RECOVERY_AREA\TEST\ARCHIVELOG\2013_08_03\O1_MF_1_8_8ZT2GG5P_.ARC thread=1 sequence=8
archived log file name=G:\APP\NEWADMIN\FLASH_RECOVERY_AREA\TEST\ARCHIVELOG\2013_08_04\O1_MF_1_1_8ZTWR0B7_.ARC thread=1 sequence=1
media recovery complete, elapsed time: 00:00:06
Finished recover at 04-AUG-13

RMAN>

RMAN> alter database open resetlogs;

database opened

RMAN>

RMAN> exit

Recovery Manager complete.

C:\Users\NewAdmin>
C:\Users\NewAdmin>sqlplus

SQL*Plus: Release 11.2.0.1.0 Production on Sun Aug 4 02:58:18 2013

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Enter user-name: /as sysdba

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select table_name from user_tables where table_name like 'AM%';

TABLE_NAME
------------------------------
AMIT
AMIT_BCKUP

We have successfully rewound our database to a point before the drop.

############# Before doing any important change it is better to create a restore point; it helps a lot during recovery since there is no need to find the earlier time, SCN or log sequence ##############

SQL> create restore point amit_restore;

Restore point created.

Related Articles :-
 How to perform TSPITR

I hope this article helped you.

Regards,
Amit Rath