About Me

Bangalore, India
I am an Oracle Certified Professional working at SAP Labs. Everything written on this blog has been tested in my local environment; please test before implementing or running it in production. You can contact me at amit.rath0708@gmail.com.

Saturday, December 17, 2016

RMAN restore fails with RMAN-06023 but backups are available

Today, while restoring an RMAN backup to a new host, I got an unusual error even though the backup pieces were available and cataloged. PFB the error details :-

Error:-

creating datafile file number=1 name=/u01/oracle/amit/system01.dbf
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 07/27/2015 03:19:59
ORA-01180: can not create datafile 1
ORA-01110: data file 1: '/u01/oracle/amit/system01.dbf'

I checked that a backup was also available for datafile 1:

RMAN> list backup of datafile 1;


List of Backup Sets

===================
.
.
   List of Backup Pieces for backup set 30299 Copy #1
    BP Key  Pc# Status      Piece Name
    ------- --- ----------- ----------
    30299   1   EXPIRED     /u00/oracle/admin/amit/backup/hot_amit_30306_1_20140701

  Backup Set Copy #2 of backup set 30299
  Device Type Elapsed Time Completion Time     Compressed Tag
  ----------- ------------ ------------------- ---------- ---
  DISK        00:52:02     2015-07-27 03:18:48 YES        HOT_DISK_amit

    List of Backup Pieces for backup set 30299 Copy #2
    BP Key  Pc# Status      Piece Name
    ------- --- ----------- ----------
    30310   1   AVAILABLE   /ora_backup/test/amit/hot_amit_30306_1_20140701

Cause :-

When we restore using a backup controlfile and run the CATALOG command to catalog backup pieces, Oracle does two things: first it performs an implicit crosscheck of the files available in the recovery area, and then it crosschecks the location you specified in the CATALOG command.

During the implicit crosscheck, if any file available in the recovery area (archived log, online log, or datafile) belongs to a different incarnation than the controlfile, Oracle registers that file's details in the controlfile and changes the controlfile's incarnation value to match that file.

Now when we try to restore our datafiles from the backup pieces, the controlfile cannot recognize them, as they belong to a different incarnation.

So we get "ORA-01180: can not create datafile 1" and "ORA-01110: data file 1: '/u01/oracle/amit/system01.dbf'" even though the backup pieces are available.

Solution :-

1. Clean up your recovery area before cataloging the backup pieces: run a crosscheck and delete the expired backup details.
2. Disable the recovery area before running the CATALOG command; you can do this by unsetting the db_recovery_file_dest and db_recovery_file_dest_size parameters in the parameter file.
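The cleanup in option 1 can be sketched with the RMAN commands below (illustrative only; run them before the CATALOG command and adjust to your environment):

RMAN> crosscheck backup;
RMAN> crosscheck archivelog all;
RMAN> delete expired backup;
RMAN> delete expired archivelog all;

DELETE EXPIRED prompts for confirmation; add the NOPROMPT keyword to skip the prompt.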

After doing either of the above, try the restore again; it should complete without issues.

I hope this article helped you.

Thanks
Amit Rath

Thursday, December 15, 2016

ORA-00060: deadlock detected while waiting for resource ( Deadlock due to Bitmap Indexes)

Yesterday I was testing the below scenario with the development team using bitmap indexes.

Concept :-

Generally, bitmap indexes are created in data warehouse environments, where database activity is mostly reads and writes are comparatively rare.

Bitmap indexes are created on low-cardinality columns, so one bitmap index entry points to multiple rowids in the table, unlike a B-tree index entry, which points to a single rowid.

As a result, any DML on a column with a bitmap index ends up locking all the rows covered by the affected bitmap piece.

For every distinct value of a column with a bitmap index, there is one index entry, and that entry covers all the rows holding that value.

For example, a Gender column has two distinct values, Male and Female. A bitmap index on the Gender column has two index entries, Male and Female, and each entry carries a bit for every row in the table.

If the 1st, 5th, and 9th rows have the value Male, the Male entry has a '1' in bit positions 1, 5, and 9, and '0' elsewhere.

The same goes for the Female entry: if the 2nd, 3rd, 4th, and 6th rows have the value Female, that entry has a '1' in those positions and '0' elsewhere.

In particular, concurrent single-row DML operations can run into a deadlock.

What happens when we try to insert the same values into a column with a bitmap index from different sessions? PFB the details :-

When we insert the same values from different sessions, both sessions try to acquire a lock on the same region of the bitmap index to update the index entry; when each session ends up waiting for a lock the other already holds, the result is a deadlock.

Below is a demo for that :-

SQL> create table test_bitmap ( name varchar(10),age number);

Table created.

SQL> create bitmap index test_bitmap_idx on test_bitmap(age);

Index created.

Session 1:-

SQL> insert into test_bitmap values ('&name',&age);
Enter value for name: rick
Enter value for age: 34
old   1: insert into test_bitmap values ('&name',&age)
new   1: insert into test_bitmap values ('rick',34)

1 row created.

SQL> insert into test_bitmap values ('&name',&age);
Enter value for name: Ramse
Enter value for age: 24
old   1: insert into test_bitmap values ('&name',&age)
new   1: insert into test_bitmap values ('Ramse',24)
Waiting ……………………


Session 2:-

SQL> insert into test_bitmap values ('&name',&age);
Enter value for name: Ramse
Enter value for age: 24
old   1: insert into test_bitmap values ('&name',&age)
new   1: insert into test_bitmap values ('Ramse',24)

1 row created.

SQL> insert into test_bitmap values ('&name',&age);
Enter value for name: rick
Enter value for age: 34
old   1: insert into test_bitmap values ('&name',&age)
new   1: insert into test_bitmap values ('rick',34)
insert into test_bitmap values ('rick',34)
            *
ERROR at line 1:
ORA-00060: deadlock detected while waiting for resource

Generally this does not happen with a normal B-tree index during inserts or other DML.

Solution :-

Avoid insert/update/delete on columns with bitmap indexes; if the business requires frequent DML on such a column, drop the bitmap index from it.

I hope this article helped you.

Thanks
Amit Rath

How to purge/flush a single SQL PLAN from shared pool in Oracle

Purging a SQL plan from the shared pool is not a frequent activity; we generally do it when a query is constantly picking a bad plan and we want the SQL to go for a hard parse the next time it runs in the database.

Obviously we can add a hint to the query to force a hard parse, but that requires changing the query, and indirectly the application code, which is generally not possible in a business-critical application.

We can flush the entire shared pool, but that will invalidate all the SQL plans in the database and all SQL queries will go for a hard parse. Flushing the shared pool can have an adverse effect on database performance.

Flush the entire shared pool :-

Alter system flush shared_pool;

Flushing a single SQL plan requires certain details of that SQL statement: the address of the handle and the hash value of the cursor holding the plan.

Steps to Flush/purge a particular sql plan from Shared pool :-

SQL>  select ADDRESS, HASH_VALUE from GV$SQLAREA where SQL_ID like '';

ADDRESS HASH_VALUE
---------------- ----------
000000085FD77CF0  808321886

Now that we have the address of the handle and the hash value of the cursor holding the SQL, flush it from the shared pool:

SQL> exec DBMS_SHARED_POOL.PURGE ('000000085FD77CF0, 808321886', 'C');

PL/SQL procedure successfully completed.

SQL>  select ADDRESS, HASH_VALUE from GV$SQLAREA where SQL_ID like '';

no rows selected

The SQL plan for the above statement is now flushed; the next time it runs, the query will go for a hard parse.
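The lookup and the purge can also be combined into one anonymous block. This is only a sketch; the &sql_id substitution variable is a placeholder for the SQL_ID you want to purge:

DECLARE
  l_name VARCHAR2(64);
BEGIN
  -- DBMS_SHARED_POOL.PURGE expects 'address,hash_value'; 'C' means cursor
  SELECT address || ',' || hash_value
    INTO l_name
    FROM v$sqlarea
   WHERE sql_id = '&sql_id';
  DBMS_SHARED_POOL.PURGE(l_name, 'C');
END;
/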

I hope this article helped you.

Thanks
Amit Rath

Wednesday, December 14, 2016

How to migrate SQL PLAN BASELINE from one database to another

We can migrate the execution plan of a particular query from one database to another. This article provides details on how and when to migrate the plan of a query.

When to migrate the plan of a query :-

We come across this kind of scenario when a query runs fine in pre-prod/UAT but not in production.

Everything related to the query is the same in both environments, but the plan the query uses in production is not good and takes a lot of time.

The best first step in this scenario is to purge the bad plan from the shared pool (see my earlier post on how to purge a SQL plan from the shared pool):

SQL> exec DBMS_SHARED_POOL.PURGE ('0000000410E16308, 3635099784', 'C');

PL/SQL procedure successfully completed.

Now when we run the query again in production, it will go for a hard parse and Oracle will automatically select the best execution plan.

What if the execution plan Oracle selects is still not as good as the one in UAT/pre-prod? Then we decide to migrate the plan from UAT/pre-prod to production.

How to migrate the plan of a query to another database :-

Gather the sql_id and plan_hash_value for the query; we can get these values from gv$sqlarea:

select sql_id, plan_hash_value, sql_text from gv$sqlarea where sql_text like '%...%';

For this article :-

Sql_id :- 45g4pghgh9rt
Plan_hash_value :- 2904123173

1. Load the plan from shared pool

DECLARE
  my_plans PLS_INTEGER;
BEGIN
  my_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => '45g4pghgh9rt', plan_hash_value => 2904123173);
END;
/

PL/SQL procedure successfully completed.

2. Check the details of the baseline in dba_sql_plan_baselines :-

select sql_handle, sql_text, plan_name,PARSING_SCHEMA_NAME,enabled,accepted,fixed,to_char(last_executed,'dd-mon-yy HH24:MI') last_executed from dba_sql_plan_baselines;

3. Create a staging table (it cannot be created in the SYS schema):

SQL> exec DBMS_SPM.CREATE_STGTAB_BASELINE('STGTAB', 'AMIT');

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.16
SQL> select table_name,owner from dba_tables where table_name='STGTAB';

TABLE_NAME                     OWNER
------------------------------ ------------------------------
STGTAB                         AMIT

Elapsed: 00:00:00.07

4. Pack the baseline for the sql in the table :-

var x number;
begin
  :x := DBMS_SPM.PACK_STGTAB_BASELINE('STGTAB', 'AMIT', sql_handle => 'SQL_27c1b5a44008ee73', plan_name => 'SQL_PLAN_2ghdpnj00jvmm46bd3d4a');
end;
/

PL/SQL procedure successfully completed.

5. Take an export backup of the STGTAB table, import it into the target database, and then unpack the baseline.
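Step 5 can be done with Data Pump. A sketch, where the directory and dump file names are examples only:

expdp \"/ as sysdba\" tables=AMIT.STGTAB directory=DATA_PUMP_DIR dumpfile=stgtab.dmp
impdp \"/ as sysdba\" tables=AMIT.STGTAB directory=DATA_PUMP_DIR dumpfile=stgtab.dmp

Run the expdp on the source database, copy the dump file to the target server, and run the impdp there.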

6. Unpack the baseline in the target database; before this, purge the previous/bad SQL plan from the target database.

var x number;
begin
  :x := DBMS_SPM.UNPACK_STGTAB_BASELINE('STGTAB', 'AMIT');
end;
/

PL/SQL procedure successfully completed.

7. The SQL plan baseline migration is done; you will see the plan hash value of the respective SQL change on its next execution.

I hope this article helped you.

Thanks
Amit Rath

Saturday, December 3, 2016

Relocating Service in Oracle RAC Database

Relocating Services in Oracle RAC is mainly used for maintenance purpose.

Maintenance purpose :- when patching (OS/Oracle) in a rolling fashion, one node at a time because we don't have downtime for the application, we have to relocate services temporarily to the other nodes of the cluster.

We can also permanently move a service from one node to another. We will show both methods in this article.

Temporary relocation of a Service in Oracle :-

==>srvctl config service -d AMIT -s app_prod
Service name: app_prod
Service is enabled
Server pool: amit_app_prd
Cardinality: 1
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: SESSION
Failover method: BASIC
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Preferred instances: AMIT1
Available instances: AMIT2

==>srvctl status service -d AMIT
Service app_prod is running on instance(s) AMIT1

We can see from the above that the app_prod service is running on instance AMIT1; now we want to do patching on the AMIT1 node.

We will relocate service temporarily to AMIT2 instance and once our maintenance work is done we will relocate it back. 

Relocating a service does not modify its preferred and available instance settings; the move is temporary.

We can relocate service temporarily using below methods :-

Using Database instance :-

==>srvctl relocate service -d AMIT -s app_prod -i AMIT1 -t AMIT2

Using Node details :-

==>srvctl relocate service -d AMIT -s app_prod -c NODE1 -n NODE2

If you want to disconnect all sessions while relocating the service, use the -f option.

Now check the configuration of the service , it's the same :-

==>srvctl config service -d AMIT -s app_prod
Service name: app_prod
.......
TAF policy specification: NONE
Edition:
Preferred instances: AMIT1
Available instances: AMIT2

Once the maintenance job is done, relocate the service back using the same command.

Permanent relocation of a service :-

To permanently relocate a service, we modify its configuration parameters; in this case the preferred and available instance details are changed.

==>srvctl config service -d AMIT -s app_prod
Service name: app_prod
Service is enabled
Server pool: amit_app_prd
Cardinality: 1
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: SESSION
Failover method: BASIC
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Preferred instances: AMIT1
Available instances: AMIT2

Now modify the service to make AMIT2 the preferred instance and AMIT1 the available instance:

=> srvctl modify service -d AMIT -s app_prod -i AMIT2 -a AMIT1

With multiple instances, separate them with commas:

=>  srvctl modify service -d AMIT -s app_prod -i AMIT2,AMIT3 -a AMIT1,AMIT4

We can use -f to disconnect all active sessions during the relocation.

Let's check whether the configuration was modified:

==>srvctl config service -d AMIT -s app_prod
Service name: app_prod
..................
Edition:
Preferred instances: AMIT2
Available instances: AMIT1

It's modified permanently.


I hope this article helped you.

Thanks
Amit Rath

Thursday, December 1, 2016

ORA-65114: space usage in container is too high

Yesterday I was doing an import into a pluggable database and it ran out of space, so I tried to resize one of the datafiles and got the below error:

SQL> ALTER DATABASE DATAFILE '+DATA1/AMIT_CON/3933D120863A61B7E0531609900AE4EE/DATAFILE/AMIT.786.919009661' RESIZE 100g;
ALTER DATABASE DATAFILE '+DATA1/AMIT_CON/3933D120863A61B7E0531609900AE4EE/DATAFILE/AMIT.786.919009661' RESIZE 100g
*
ERROR at line 1:
ORA-65114: space usage in container is too high

Cause :-

This error generally comes when the total size of the PDB after the datafile resize would exceed the storage limit allocated to the PDB at creation time.

Solution :-

Change the storage allocation for the PDB and then resize the datafile.

Note :- before increasing the storage for a PDB, check that space is available in the ASM disks.


The storage allocation of a PDB can be changed by logging in to the PDB:

SQL> alter session set container=AMIT;

Session altered.

Either set the storage limit to the required value or make it unlimited.

SQL> ALTER PLUGGABLE DATABASE STORAGE UNLIMITED;

Pluggable database altered.

OR

SQL> alter pluggable database storage (MAXSIZE 120G);

Pluggable database altered.
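To confirm the limit now in force, the MAX_PDB_STORAGE property can be queried from inside the PDB (a sketch; the property shows up once a storage clause has been set):

SQL> select property_value from database_properties where property_name = 'MAX_PDB_STORAGE';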

Now try to resize the datafile:

SQL> ALTER DATABASE DATAFILE '+DATA1/AMIT_CON/3933D120863A61B7E0531609900AE4EE/DATAFILE/amit.786.919009661' RESIZE 100556633088;

Database altered.

I hope this article helped you.

Thanks
Amit Rath

Friday, August 5, 2016

ORA-31623: a job is not attached to this session via the specified handle

Today I was doing an import and got the below error while the import was initializing:

Issue :-

Import: Release 11.2.0.4.0 - Production on Thu Aug 4 22:49:29 2016

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Username: /as sysdba

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options

UDI-31623: operation generated ORACLE error 31623
ORA-31623: a job is not attached to this session via the specified handle
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3326
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4551
ORA-06512: at line 1

Cause :-

The above issue can be caused if any registry component is invalid, or if Oracle cannot allocate sufficient memory to the Data Pump job.

Solution :-

Crosschecked DBA_REGISTRY; all components were in VALID state.

Checked the SGA parameters of the database and saw that only SGA_MAX_SIZE is set and SGA_TARGET is 0. PFB below :-

Data Pump uses the streams pool to allocate memory to its jobs; if the SGA is manually sized, allocate some memory to streams_pool_size:

SQL> alter system set streams_pool_size=128M scope=both;

IF ASMM/AMM is used then perform below steps :-

SQL> show parameter sga

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
lock_sga                             boolean     FALSE
pre_page_sga                         boolean     FALSE
sga_max_size                         big integer 1280M
sga_target                           big integer 0

Increased the SGA to 4 GB: first raise sga_max_size in the spfile and restart the instance, then set sga_target.

SQL> alter system set sga_max_size=4g scope=spfile;

System altered.

SQL> shu immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 4275781632 bytes
Fixed Size                  2260088 bytes
Variable Size            4043309960 bytes
Database Buffers          201326592 bytes
Redo Buffers               28884992 bytes
Database mounted.
Database opened.

SQL> alter system set sga_target=4g;

SQL> show parameter sga

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
lock_sga                             boolean     FALSE
pre_page_sga                         boolean     FALSE
sga_max_size                         big integer 4G
sga_target                           big integer 4G

Tried the import after this and it completed without any errors.

I hope this article helped you.

Thanks
Amit Rath

Monday, March 14, 2016

How to recreate Oracle Inventory

Today I was applying patch on one of my development server and I got below error :-

[oracle@omr oraInventory]$ ./opatch lsinventory
Oracle Interim Patch Installer version 11.2.0.3.12
Copyright (c) 2016, Oracle Corporation.  All rights reserved.


Oracle Home       : /opt/oracle/product/database/11.2.0.4
Central Inventory : /opt/orainv/oraInventory
   from           : /opt/oracle/product/database/11.2.0.4/oraInst.loc
OPatch version    : 11.2.0.3.12
OUI version       : 11.2.0.4.0
Log file location : /opt/oracle/product/database/11.2.0.4/cfgtoollogs/opatch/opatch2016-03-11_04-33-59AM_1.log

Lsinventory Output file location : /opt/oracle/product/database/11.2.0.4/cfgtoollogs/opatch/lsinv/lsinventory2016-03-11_04-33-59AM.txt

OPatch failed to locate Central Inventory.
Possible causes are: 
    The Central Inventory is corrupted
    The oraInst.loc file specified is not valid.
LsInventorySession failed: OPatch failed to locate Central Inventory.
Possible causes are: 
    The Central Inventory is corrupted
    The oraInst.loc file specified is not valid.


OPatch failed with error code 73

Cause :- 

Checked the inventory location, /opt/orainv/oraInventory; it was empty. It seems someone accidentally deleted its contents.

Solution :-

We have to recreate the inventory with all the Oracle home details that existed on the server before this blunder happened.

To crosscheck which products are installed, check the oratab file:

grid:/opt/grid/11.2.0.4:N
agent:/opt/oracle/product/agent/core/12.1.0.4.0:N
+ASM:/opt/grid/11.2.0.4:N
amit:/opt/oracle/product/database/11.2.0.4:N        
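The distinct home paths can also be pulled out of oratab with a quick shell sketch (the oratab location varies by platform; /etc/oratab is assumed here):

```shell
# List the distinct Oracle home paths recorded in oratab,
# skipping comment and blank lines
grep -v '^#' /etc/oratab | awk -F: 'NF >= 2 {print $2}' | sort -u
```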

We can see that we have a grid home, a database home, and an agent home installed on this server.

Recreating the Oracle Inventory :-

1. Attach Oracle Home

Oracle Home :- /opt/oracle/product/database/11.2.0.4
Oracle Home Name :- Db11g_home1

$ ./runInstaller -silent -attachHome ORACLE_HOME="/opt/oracle/product/database/11.2.0.4" -invPtrLoc /etc/oraInst.loc ORACLE_HOME_NAME="Db11g_home1"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 16383 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /opt/orainv/oraInventory
'AttachHome' was successful.

$ ./opatch lsinventory -oh /opt/grid/11.2.0.4
Oracle Interim Patch Installer version 11.2.0.3.6
Copyright (c) 2013, Oracle Corporation.  All rights reserved.


Oracle Home       : /opt/grid/11.2.0.4
Central Inventory : /opt/orainv/oraInventory
   from           : /opt/grid/11.2.0.4/oraInst.loc
OPatch version    : 11.2.0.3.6
OUI version       : 11.2.0.4.0
Log file location : /opt/grid/11.2.0.4/cfgtoollogs/opatch/opatch2016-03-10_23-34-48PM_1.log

List of Homes on this system:

  Home name= OraDb11g_home1, Location= "/opt/oracle/product/database/11.2.0.4"
Inventory load failed... OPatch cannot load inventory for the given Oracle Home.
Possible causes are:
   Oracle Home dir. path does not exist in Central Inventory
   Oracle Home is a symbolic link
   Oracle Home inventory is corrupted
LsInventorySession failed: OracleHomeInventory gets null oracleHomeInfo

OPatch failed with error code 73

We have to attach grid home also.

2. Attach Grid Home to the Inventory

Grid Home :- /opt/grid/11.2.0.4
Grid Home Name :- 11g_gridinfrahome1

$ ./runInstaller -silent -attachHome ORACLE_HOME="/opt/grid/11.2.0.4" -invPtrLoc /etc/oraInst.loc ORACLE_HOME_NAME="11g_gridinfrahome1"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 16383 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /opt/orainv/oraInventory
'AttachHome' was successful.

3. Attach Agent Home to the Inventory

Agent Home :- /opt/oracle/product/agent/core/12.1.0.4.0
Agent Home Name :- agent12c1

$ ./runInstaller -silent -attachHome ORACLE_HOME="/opt/oracle/product/agent/core/12.1.0.4.0" -invPtrLoc /etc/oraInst.loc ORACLE_HOME_NAME="agent12c1"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 16383 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /opt/orainv/oraInventory
'AttachHome' was successful.

All homes present on the server are now added to the central inventory.

I hope this article helped you.

Thanks
Amit Rath

Monday, February 29, 2016

OMS 12.1.0.4 installation failed with "The database details which you have provided doesn't contain valid Management Repository. Provide correct database details and retry"

Yesterday I was installing OMS 12.1.0.4 and got the below error while attempting to connect to the database:

OMS 12.1.0.4 installation failed with "The database details which you have provided doesn't contain valid Management Repository. Provide correct database details and retry"

Cause :-

Generally this error comes when the database we are using for the OMS repository was already used for an OMS installation before, or a failed OMS installation happened against this database.

Solution :-

We have to properly clean the database before using it again as an OMS repository database.

Below are the steps I did to clean my database :-

1. Drop sysman related schemas.

DROP USER SYSMAN CASCADE;  -- this can take time
DROP USER SYSMAN_OPSS CASCADE;
DROP USER SYSMAN_MDS CASCADE;
DROP USER SYSMAN_APM CASCADE;
DROP USER SYSMAN_RO CASCADE;

2. Remove Synonyms related to sysman accounts :-

DECLARE
  CURSOR l_syn_csr IS
    SELECT 'DROP ' ||
      CASE owner
        WHEN 'PUBLIC'
          THEN 'PUBLIC SYNONYM '
        ELSE 'SYNONYM ' || owner || '.'
      END ||
      synonym_name AS cmd
    FROM
      dba_synonyms 
    WHERE
      table_owner IN (
        'SYSMAN',
        'SYSMAN_MDS',
        'MGMT_VIEW',
        'SYSMAN_BIP',
        'SYSMAN_APM',
        'BIP',
        'SYSMAN_OPSS',
        'SYSMAN_RO'
      );
BEGIN
  FOR l_syn_rec IN l_syn_csr LOOP
    BEGIN
      EXECUTE IMMEDIATE l_syn_rec.cmd;
    EXCEPTION
      WHEN OTHERS THEN
        dbms_output.put_line( '===> ' || l_syn_rec.cmd );
        dbms_output.put_line( sqlerrm );
    END;
  END LOOP;
END;
/

3. Removing remaining Objects and tablespaces :-

DROP USER mgmt_view CASCADE;
DROP TABLESPACE mgmt_ecm_depot_ts INCLUDING CONTENTS AND DATAFILES CASCADE CONSTRAINTS;
DROP TABLESPACE mgmt_tablespace   INCLUDING CONTENTS AND DATAFILES CASCADE CONSTRAINTS;
DROP TABLESPACE mgmt_ad4j_ts      INCLUDING CONTENTS AND DATAFILES CASCADE CONSTRAINTS;

4. As a proper database cleanup using RepManager dropall didn't happen, we have to clean up the schema version registry details :-

DELETE
  FROM
    schema_version_registry
  WHERE
    (comp_name,owner) IN (
      ('Authorization Policy Manager','SYSMAN_APM'),
      ('Metadata Services','SYSMAN_MDS'),
      ('Oracle Platform Security Services','SYSMAN_OPSS')
    );
commit;

5. The database cleanup is now complete; continue with the installation. I tried it and it completed successfully.

I hope this article helped you. Please also go through the below article for complete error details :-
https://sites.google.com/site/yetanotherocm/home/system-builds/cronulla/07-complete-oms-12-1-0-3-1-installation

Thanks
Amit Rath