About Me

My photo
Bangalore, India
I am an Oracle Certified Professional working in SAP Labs. Everything written on my blog has been tested in my local environment; please test before implementing or running it in production. You can contact me at amit.rath0708@gmail.com.

Saturday, December 17, 2016

RMAN restore fails with RMAN-06023 but backups are available

Today, while restoring an RMAN backup to a new host, I got an unusual error even though the backup pieces were available and also cataloged. PFB the error details :-

Error:-

creating datafile file number=1 name=/u01/oracle/amit/system01.dbf
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 07/27/2015 03:19:59
ORA-01180: can not create datafile 1
ORA-01110: data file 1: '/u01/oracle/amit/system01.dbf'

I checked that a backup was also available for datafile 1:

RMAN> list backup of datafile 1;


List of Backup Sets

===================
.
.
   List of Backup Pieces for backup set 30299 Copy #1
    BP Key  Pc# Status      Piece Name
    ------- --- ----------- ----------
    30299   1   EXPIRED     /u00/oracle/admin/amit/backup/hot_amit_30306_1_20140701

  Backup Set Copy #2 of backup set 30299
  Device Type Elapsed Time Completion Time     Compressed Tag
  ----------- ------------ ------------------- ---------- ---
  DISK        00:52:02     2015-07-27 03:18:48 YES        HOT_DISK_amit

    List of Backup Pieces for backup set 30299 Copy #2
    BP Key  Pc# Status      Piece Name
    ------- --- ----------- ----------
    30310   1   AVAILABLE   /ora_backup/test/amit/hot_amit_30306_1_20140701

Cause :-

When we restore using a backup controlfile and run the catalog command to catalog backup pieces, Oracle does two things: first it does an implicit crosscheck of the files available in the recovery area, and then it does an explicit crosscheck of the location you mentioned while cataloging the pieces.

During the implicit crosscheck, if any file (archive log, online log, or datafile) from a different incarnation than the controlfile is available, Oracle registers that file's details in the controlfile and changes the controlfile's incarnation value to match the file found in the recovery area.

Now when we try to restore our datafiles from the backup pieces, the controlfile is not able to recognize them, as they belong to a different incarnation.

So we get "ORA-01180: can not create datafile 1" even though the backup pieces are available.
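To confirm an incarnation mismatch, a quick check (a sketch; run it on the instance you are restoring) is:

```sql
-- From SQL*Plus, list the incarnations the controlfile knows about;
-- compare RESETLOGS_CHANGE# with the incarnation the backup was taken under.
SELECT incarnation#, resetlogs_change#, status
FROM   v$database_incarnation;
-- The same information is available at the RMAN prompt with: LIST INCARNATION;
```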

Solution :-

1. Clean up your recovery area before cataloging the backup pieces: run a crosscheck and delete the expired backup details.
2. Disable the recovery area before running the catalog command; you can do this by unsetting the db_recovery_file_dest and db_recovery_file_dest_size parameters in the parameter file.
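Step 1 can be sketched from the RMAN prompt like this (review what the DELETE command lists before confirming):

```sql
-- RMAN prompt:
CROSSCHECK BACKUP;
DELETE EXPIRED BACKUP;
-- If stale archived logs from another incarnation sit in the recovery area:
CROSSCHECK ARCHIVELOG ALL;
DELETE EXPIRED ARCHIVELOG ALL;
```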

After doing either of the above, try the restore again; it should complete without any issues.

I hope this article helped you.

Thanks
Amit Rath

Thursday, December 15, 2016

ORA-00060: deadlock detected while waiting for resource ( Deadlock due to Bitmap Indexes)

Yesterday I was testing the below scenario with the development team using bitmap indexes.

Concept :-

Generally, bitmap indexes are created in data warehouse environments where database activity is mostly reads and writes are comparatively rare.

Bitmap indexes are created on low-cardinality columns, so one entry of a bitmap index points to multiple rowids in the table, unlike a B-tree index entry, which points to only one rowid.

Now any DML on a column having a bitmap index ends up locking many rows through the index.

For every distinct value in a column having a bitmap index, there is one index entry that records which rows in the table hold that value.

For example, a Gender column has 2 distinct values, Male and Female. If we put a bitmap index on the Gender column, the index will have two entries, Male and Female, and each entry will have a bit for every row in the table.

If the column's 1st, 5th and 9th rows have the value Male, then the Male entry will have '1' in the 1st, 5th and 9th positions and '0' in the others.

The same goes for the Female entry: if the 2nd, 3rd, 4th and 6th rows have the value Female, then that entry will have '1' in the 2nd, 3rd, 4th and 6th positions and '0' in the others.

Concurrent single-row DML operations, in particular, tend to end in a deadlock.

What happens when we try to insert the same values into a column having a bitmap index from different sessions? PFB the details :-

When two sessions insert the same value, both try to acquire a lock on the same region of the bitmap index to update that value's index entry; when each session also holds a bitmap piece the other needs (as in the demo below, where each session inserts the other's value second), the waits turn into a deadlock.

Below is a demo for that :-

SQL> create table test_bitmap ( name varchar(10),age number);

Table created.

SQL> create bitmap index test_bitmap_idx on test_bitmap(age);

Index created.

Session 1:-

SQL> insert into test_bitmap values ('&name',&age);
Enter value for name: rick
Enter value for age: 34
old   1: insert into test_bitmap values ('&name',&age)
new   1: insert into test_bitmap values ('rick',34)

1 row created.

SQL> insert into test_bitmap values ('&name',&age);
Enter value for name: Ramse
Enter value for age: 24
old   1: insert into test_bitmap values ('&name',&age)
new   1: insert into test_bitmap values ('Ramse',24)
Waiting ……………………


Session 2:-

SQL> insert into test_bitmap values ('&name',&age);
Enter value for name: Ramse
Enter value for age: 24
old   1: insert into test_bitmap values ('&name',&age)
new   1: insert into test_bitmap values ('Ramse',24)

1 row created.

SQL> insert into test_bitmap values ('&name',&age);
Enter value for name: rick
Enter value for age: 34
old   1: insert into test_bitmap values ('&name',&age)
new   1: insert into test_bitmap values ('rick',34)
insert into test_bitmap values ('rick',34)
            *
ERROR at line 1:
ORA-00060: deadlock detected while waiting for resource

Generally this is not the case with a normal (B-tree) index during inserts or other DML, because each B-tree entry covers only one rowid.

Solution :-

Prefer not to do insert/update/delete on columns having bitmap indexes; if the business requires frequent DML on such a column, remove the bitmap index from it.
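Using the demo objects from this article, removing the bitmap index looks like the sketch below (the replacement B-tree index name is illustrative; create it only if queries still need an index on the column):

```sql
DROP INDEX test_bitmap_idx;
-- A normal B-tree index locks only the affected entry, so concurrent
-- single-row DML does not deadlock the way the bitmap index did:
CREATE INDEX test_age_idx ON test_bitmap(age);
```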

I hope this article helped you.

Thanks
Amit Rath

How to purge/flush a single SQL PLAN from shared pool in Oracle

Purging a SQL plan from the shared pool is not a frequent activity; we generally do it when a query is constantly picking a bad plan and we want the SQL to go for a hard parse the next time it runs in the database.

Obviously we can pass a hint in the query to force a hard parse, but that requires a change in the query, and indirectly a change in the application code, which is generally not possible in a business-critical application.

We can flush the entire shared pool, but that will invalidate all the SQL plans in the database and all SQL queries will go for a hard parse. Flushing the shared pool can have an adverse effect on your database performance.

Flush the entire shared pool :-

Alter system flush shared_pool;

Flushing a single SQL plan from the database requires certain details of that SQL statement: the address of the handle and the hash value of the cursor holding the SQL plan.

Steps to Flush/purge a particular sql plan from Shared pool :-

SQL>  select ADDRESS, HASH_VALUE from GV$SQLAREA where SQL_ID like '';

ADDRESS HASH_VALUE
---------------- ----------
000000085FD77CF0  808321886

Now that we have the address of the handle and the hash value of the cursor holding the SQL, flush it from the shared pool:

SQL> exec DBMS_SHARED_POOL.PURGE ('000000085FD77CF0, 808321886', 'C');

PL/SQL procedure successfully completed.

SQL>  select ADDRESS, HASH_VALUE from GV$SQLAREA where SQL_ID like '';

no rows selected

The SQL plan is now flushed for the above particular SQL; the next time it runs, the query will go for a hard parse in the database.

I hope this article helped you.

Thanks
Amit Rath

Wednesday, December 14, 2016

How to migrate SQL PLAN BASELINE from one database to another

We can migrate the execution plan for a particular query from one database to another. This article provides the details on how and when to migrate the plan of a query.

When to migrate the plan of a query :-

We come across this kind of scenario when a query is running fine in pre-prod/UAT but is not running properly in production.

Everything related to the query is the same in both environments, but the plan the query is using in production is not good enough and is taking a lot of time.

The best approach in this kind of scenario is first to purge the bad plan from the shared pool. PFB how to purge a SQL plan from the shared pool :-

SQL> exec DBMS_SHARED_POOL.PURGE ('0000000410E16308, 3635099784', 'C');

PL/SQL procedure successfully completed.

Now when we run the query again in production, it will go for a hard parse and Oracle will automatically select the best execution plan.

If the execution plan Oracle selects is still not as good as the one in UAT/pre-prod, then we can decide to migrate the plan from UAT/pre-prod to production.

How to migrate the plan of a query to another database :-

Gather the sql_id and plan_hash_value for the query; we can get these values from gv$sqlarea:

select sql_id, plan_hash_value, sql_text from gv$sqlarea where sql_text like '%...%';

For this article :-

sql_id :- 45g4pghgh9rt
plan_hash_value :- 2904123173

1. Load the plan from shared pool

DECLARE
  my_plans pls_integer;
BEGIN
  my_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => '45g4pghgh9rt', plan_hash_value => 2904123173);
END;
/

PL/SQL procedure successfully completed.

2. Check the details of the baseline in dba_sql_plan_baselines :-

select sql_handle, sql_text, plan_name,PARSING_SCHEMA_NAME,enabled,accepted,fixed,to_char(last_executed,'dd-mon-yy HH24:MI') last_executed from dba_sql_plan_baselines;

3. Create a staging table (it cannot be created in the SYS schema) :-

SQL> exec DBMS_SPM.CREATE_STGTAB_BASELINE('STGTAB', 'AMIT');

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.16
SQL> select table_name,owner from dba_tables where table_name='STGTAB';

TABLE_NAME                     OWNER
------------------------------ ------------------------------
STGTAB                         AMIT

Elapsed: 00:00:00.07

4. Pack the baseline for the sql in the table :-

SQL> var x number;
SQL> begin
  2    :x := DBMS_SPM.PACK_STGTAB_BASELINE('STGTAB', 'AMIT', sql_handle => 'SQL_27c1b5a44008ee73', plan_name => 'SQL_PLAN_2ghdpnj00jvmm46bd3d4a');
  3  end;
  4  /

PL/SQL procedure successfully completed.

5. Take an export backup of the STGTAB table, import it into the target database, and then unpack the baseline.
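Step 5 can be sketched with Data Pump; the directory object, dump file name and credentials below are illustrative:

```shell
# On the source host (OS prompt):
expdp amit/password directory=DATA_PUMP_DIR dumpfile=stgtab.dmp tables=AMIT.STGTAB
# Copy stgtab.dmp to the target host's directory path, then:
impdp amit/password directory=DATA_PUMP_DIR dumpfile=stgtab.dmp tables=AMIT.STGTAB
```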

6. Unpack the baseline in the target database; before this, purge the previous/bad SQL plan available in the target database.

SQL> var x number;
SQL> begin
  2    :x := DBMS_SPM.UNPACK_STGTAB_BASELINE('STGTAB', 'AMIT');
  3  end;
  4  /

PL/SQL procedure successfully completed.

7. The SQL plan baseline migration is done; you will see that the plan hash value for the respective SQL has changed.

I hope this article helped you.

Thanks
Amit Rath

Saturday, December 3, 2016

Relocating Service in Oracle RAC Database

Relocating services in Oracle RAC is mainly done for maintenance purposes.

Maintenance purpose :- When patching (OS/Oracle) in a rolling fashion, one node at a time, because we don't have downtime for the application, we have to relocate services temporarily to the other nodes of the cluster.

We can also permanently move a service from one node to another. We will show both methods in this article.

Temporary relocation of a Service in Oracle :-

==>srvctl config service -d AMIT -s app_prod
Service name: app_prod
Service is enabled
Server pool: amit_app_prd
Cardinality: 1
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: SESSION
Failover method: BASIC
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Preferred instances: AMIT1
Available instances: AMIT2

==>srvctl status service -d AMIT
Service app_prod is running on instance(s) AMIT1

We can see from the above that the app_prod service is running on the AMIT1 instance; now we want to do patching on the AMIT1 node.

We will relocate the service temporarily to the AMIT2 instance, and once our maintenance work is done we will relocate it back.

Relocating a service does not modify the preferred instance and available instance details for the service; it is a temporary change.

We can relocate service temporarily using below methods :-

Using Database instance :-

==>srvctl relocate service -d AMIT -s app_prod -i AMIT1 -t AMIT2

Using Node details :-

==>srvctl relocate service -d AMIT -s app_prod -c NODE1 -n NODE2

If you want to disconnect all sessions while relocating the service, we can use the -f option.

Now check the configuration of the service; it's the same :-

==>srvctl config service -d AMIT -s app_prod
Service name: app_prod
.......
TAF policy specification: NONE
Edition:
Preferred instances: AMIT1
Available instances: AMIT2

Once the maintenance job is done, relocate the service back as it was before, using the same command.
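Using this article's names, moving the service back after patching AMIT1 is just the earlier command with the instances reversed (a sketch):

```shell
srvctl relocate service -d AMIT -s app_prod -i AMIT2 -t AMIT1
```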

Permanent relocation of a service :-

When we have to permanently relocate a service, we have to modify the configuration parameters of the service; in this case the preferred instance and available instance details will be changed.

==>srvctl config service -d AMIT -s app_prod
Service name: app_prod
Service is enabled
Server pool: amit_app_prd
Cardinality: 1
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: SESSION
Failover method: BASIC
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Preferred instances: AMIT1
Available instances: AMIT2

Now modify the service to make AMIT2 the preferred instance :-

=> srvctl modify service -d AMIT -s app_prod -i AMIT2 -a AMIT1

If there are multiple instances, we can separate them with commas (,) :-

=>  srvctl modify service -d AMIT -s app_prod -i AMIT2,AMIT3 -a AMIT1,AMIT4

We can use -f to disconnect all active sessions during the relocation.

Let's check whether the configuration was modified :-

==>srvctl config service -d AMIT -s app_prod
Service name: app_prod
..................
Edition:
Preferred instances: AMIT2
Available instances: AMIT1

It's modified permanently.


I hope this article helped you.

Thanks
Amit Rath

Thursday, December 1, 2016

ORA-65114: space usage in container is too high

Yesterday I was doing an import into a pluggable database and it ran out of space, so I tried to resize one of the datafiles and got the below error while doing it :-

SQL> ALTER DATABASE DATAFILE '+DATA1/AMIT_CON/3933D120863A61B7E0531609900AE4EE/DATAFILE/AMIT.786.919009661' RESIZE 100g;
ALTER DATABASE DATAFILE '+DATA1/AMIT_CON/3933D120863A61B7E0531609900AE4EE/DATAFILE/AMIT.786.919009661' RESIZE 100g
*
ERROR at line 1:
ORA-65114: space usage in container is too high

Cause :-

This error generally comes when the total size of the PDB after the datafile resize would be more than the storage you allocated to the PDB while creating it.
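We can check the limit currently in effect from inside the PDB; the MAX_PDB_STORAGE property holds the value set by the STORAGE clause (a sketch):

```sql
-- Run while connected to the PDB:
SELECT property_value
FROM   database_properties
WHERE  property_name = 'MAX_PDB_STORAGE';
```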

Solution :-

Increase the storage allocation for the PDB and then try to resize the datafile.

Note :- while changing the storage for a PDB, check whether space is available in the ASM disk groups.
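A quick way to check free space in the ASM disk groups (a sketch; run from an instance where the ASM views are available):

```sql
SELECT name, total_mb, free_mb
FROM   v$asm_diskgroup;
```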


The storage allocation of a PDB can be changed by logging in to the PDB :-

SQL> alter session set container=AMIT;

Session altered.

Either change the storage allocation to the required value or change it to unlimited.

SQL> ALTER PLUGGABLE DATABASE STORAGE UNLIMITED;

Pluggable database altered.

OR

SQL> alter pluggable database storage (MAXSIZE 120G);

Pluggable database altered.

Now try to resize the datafile :-

SQL> ALTER DATABASE DATAFILE '+DATA1/AMIT_CON/3933D120863A61B7E0531609900AE4EE/DATAFILE/amit.786.919009661' RESIZE 100556633088;

Database altered.

I hope this article helped you.

Thanks
Amit Rath