About Me

Bangalore, India
I am an Oracle Certified Professional working at SAP Labs. Everything written on my blog has been tested in my local environment; please test before implementing or running it in production. You can contact me at amit.rath0708@gmail.com.

Friday, October 6, 2017

ORA-28017: The password file is in the legacy format.

Today I was trying to change the password for a user in one of my 12c databases and got the below error :-

SQL> alter user dbsnmp identified by "*************"
*
ERROR at line 1:
ORA-28017: The password file is in the legacy format.

Cause :- 

The update to the password file failed because the file is still in the pre-12c (legacy) format.

Administrative privileges like SYSBACKUP, SYSKM and SYSDG cannot be granted unless the password file is in the 12c format.

Solution :-

This error generally comes up after migrating a database from 11g to 12c while keeping the old password file.

Regenerating the password file in the 12c format will resolve this error.

PFB steps :-

1. Check which users have entries in the password file before regenerating it.

SQL> select * from v$pwfile_users;

USERNAME                       SYSDB SYSOP SYSAS SYSBA SYSDG SYSKM     CON_ID
------------------------------ ----- ----- ----- ----- ----- ----- ----------
SYS                            TRUE  TRUE  FALSE FALSE FALSE FALSE          0
DBSNMP                         TRUE  FALSE FALSE FALSE FALSE FALSE          0

2. Recreate your password file :-

==>orapwd file=$ORACLE_HOME/dbs/orapwamit entries=5 force=y

Enter password for SYS:
[Target_server][oracle][amit]
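
To confirm the regenerated file is no longer in the legacy format, orapwd can describe it; a quick check (the exact output wording varies by version, but it should report the 12 format rather than LEGACY) :-

==>orapwd describe file=$ORACLE_HOME/dbs/orapwamit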

3. By default, only SYS now has an entry in the new password file :-

SQL> select * from v$pwfile_users;

USERNAME                       SYSDB SYSOP SYSAS SYSBA SYSDG SYSKM     CON_ID
------------------------------ ----- ----- ----- ----- ----- ----- ----------
SYS                            TRUE  TRUE  FALSE FALSE FALSE FALSE          0

4. Grant the SYSDBA privilege to the DBSNMP user so that it gets an entry in the password file again :-

SQL> grant sysdba to dbsnmp;

Grant succeeded.

SQL> select * from v$pwfile_users;

USERNAME                       SYSDB SYSOP SYSAS SYSBA SYSDG SYSKM     CON_ID
------------------------------ ----- ----- ----- ----- ----- ----- ----------
SYS                            TRUE  TRUE  FALSE FALSE FALSE FALSE          0
DBSNMP                         TRUE  FALSE FALSE FALSE FALSE FALSE          0

5. Now retry the password change; this time it will be successful :-

SQL> alter user dbsnmp identified by "*************";

User altered.

I hope this article helped you.

Thanks
Amit Rath

Thursday, October 5, 2017

How to Clone Grid Infrastructure binaries for Oracle Server(Standalone)

Yesterday I was applying a patch on one of our development servers; the patch on the Grid home failed due to a space issue and the binaries got corrupted.

I decided to reinstall the Grid binaries by cloning them from a fresh development server. PFB steps for that :-

1. Stop the Grid stack on the source (fresh) server. Hostname is Source_server.
2. Tar the Grid home binaries.
3. Scp the tarball to the target (corrupted) server and extract it there. Hostname is Target_server. (A sketch of these three steps follows.)
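
A minimal sketch of steps 1-3, assuming the Grid home is /opt/grid/12.1.0.2 on both servers and this is an Oracle Restart (standalone) stack; I moved the corrupted home on the target aside before extracting. Adjust paths and hostnames to your environment :-

[Source_server][oracle]
==>/opt/grid/12.1.0.2/bin/crsctl stop has
==>cd /opt/grid
==>tar -cpf grid_12102.tar 12.1.0.2
==>scp grid_12102.tar oracle@Target_server:/opt/grid/

[Target_server][oracle]
==>cd /opt/grid
==>tar -xpf grid_12102.tar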

4. Detach the Grid home from the central inventory on the corrupted server :-

==>./runInstaller.sh -silent -detachHome ORACLE_HOME=/opt/grid/12.1.0.2 ORACLE_HOME_NAME=" Ora11g_gridinfrahome1"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 16383 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'DetachHome' was successful.
[target_server][oracle][]

5. Clone the Grid home so that the binaries are relinked and the home is attached to the central inventory :-

./runInstaller -clone -waitForCompletion  "ORACLE_HOME=/opt/grid/12.1.0.2" "ORACLE_HOME_NAME=OraHome1Grid" "INVENTORY_LOCATION=/opt/orainv/oraInventory" "LOCAL_NODE=target_server" "CRS=FALSE" "ORACLE_BASE=/opt/oracle" -silent -paramFile /opt/grid/12.1.0.2/clone/clone_oraparam.ini
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 500 MB.   Actual 233206 MB    Passed
Checking swap space: must be greater than 500 MB.   Actual 16383 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2017-10-03_05-47-08AM. Please wait ...You can find the log of this install session at:
 /opt/orainv/oraInventory/logs/cloneActions2017-10-03_05-47-08AM.log
..................................................   5% Done.
..................................................   10% Done.
..................................................   15% Done.
..................................................   20% Done.
..................................................   25% Done.
..................................................   30% Done.
..................................................   35% Done.
..................................................   40% Done.
..................................................   45% Done.
..................................................   50% Done.
..................................................   55% Done.
..................................................   60% Done.
..................................................   65% Done.
..................................................   70% Done.
..................................................   75% Done.
..................................................   80% Done.
..................................................   85% Done.
..........Could not backup file /opt/grid/12.1.0.2/root.sh to /opt/grid/12.1.0.2/root.sh.ouibak
Could not backup file /opt/grid/12.1.0.2/rootupgrade.sh to /opt/grid/12.1.0.2/rootupgrade.sh.ouibak

Copy files in progress.

Copy files successful.

Link binaries in progress.

Link binaries successful.

Setup files in progress.

Setup files successful.

Setup Inventory in progress.

Setup Inventory successful.

Finish Setup successful.
The cloning of OraHome1Grid was successful.
Please check '/opt/orainv/oraInventory/logs/cloneActions2017-10-03_05-47-08AM.log' for more details.

Setup Oracle Base in progress.

Setup Oracle Base successful.
..................................................   95% Done.

As a root user, execute the following script(s):
        1. /opt/grid/12.1.0.2/root.sh


..................................................   100% Done.
[target_server][oracle][]

6. Execute the root.sh script
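
As the installer output above notes, run it as the root user on the target server :-

[root@target_server]# /opt/grid/12.1.0.2/root.sh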

7. Configure the Oracle Restart (HAS) stack using the roothas.pl script.

While configuring, I got the below error :-

[root@target_server grid]# /opt/grid/12.1.0.2/perl/bin/perl -I/opt/grid/12.1.0.2/perl/lib -I/opt/grid/12.1.0.2/crs/install /opt/grid/12.1.0.2/crs/install/roothas.pl
Using configuration parameter file: /opt/grid/12.1.0.2/crs/install/crsconfig_params
2017/06/03 06:00:59 CLSRSC-350: Cannot configure two CRS instances on the same cluster

2017/10/03 06:00:59 CLSRSC-352: CRS is already configured on this node for the CRS home location /opt/grid/12.1.0.2

We have to deconfigure the existing configuration before configuring it again.

8. Deconfigure the existing CRS configuration :-

[root@target_server install]# ./roothas.pl -deconfig -force
Using configuration parameter file: ./crsconfig_params
2017/10/03 06:13:02 CLSRSC-337: Successfully deconfigured Oracle Restart stack

9. Configure it again :-

[root@target_server install]# /opt/grid/12.1.0.2/perl/bin/perl -I/opt/grid/12.1.0.2/perl/lib -I/opt/grid/12.1.0.2/crs/install /opt/grid/12.1.0.2/crs/install/roothas.pl
Using configuration parameter file: /opt/grid/12.1.0.2/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'oracle', privgrp 'sigdba'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node target_server successfully pinned.
2017/10/03 06:15:36 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'


target_server 2017/10/03 06:16:02 /opt/grid/12.1.0.2/cdata/target_server/backup_20170603_061602.olr 3924561976
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'target_server'
CRS-2673: Attempting to stop 'ora.evmd' on 'target_server'
CRS-2677: Stop of 'ora.evmd' on 'target_server' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'target_server' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2017/10/03 06:16:33 CLSRSC-327: Successfully configured Oracle Restart for a standalone server

10. Add the ASM details and start ASM :-

==>srvctl add asm -d /dev/mapper/
[target_server][oracle][+ASM]
/dev/mapper$
==>
[target_server][oracle][+ASM]
/dev/mapper$
==>srvctl start asm
[target_server][oracle][+ASM]
/dev/mapper$

[target_server][oracle][+ASM]
/dev/mapper$
==>sqlplus  /as sysasm

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Automatic Storage Management option

SQL> alter diskgroup AMIT_GROUP mount;

Diskgroup altered.
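
As a quick sanity check after mounting, confirm the ASM resource and diskgroup state; a sketch :-

==>srvctl status asm

SQL> select name, state from v$asm_diskgroup;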

11. Add the database to CRS and open it :-

==>srvctl add database -d amit -o /opt/oracle/product/database/12.1.0.2/
[target_server][oracle][amit]

/dev/mapper$
==>sqlplus /as sysdba

Connected to an idle instance.

SQL> startup mount
ORACLE instance started.

Total System Global Area 8589934592 bytes
Fixed Size                  7663544 bytes
Variable Size            3288334408 bytes
Database Buffers         3137339392 bytes
Redo Buffers                9113600 bytes
In-Memory Area           2147483648 bytes
Database mounted.
SQL> alter database open;

Database altered.
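
Optionally, verify the database resource is now registered with Oracle Restart; a sketch using the name added above :-

==>srvctl status database -d amit
==>srvctl config database -d amit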

Cloning of Grid home binaries completed.

I hope this article helped you

Thanks
Amit Rath

Wednesday, June 14, 2017

OMS decided to shutdown the agent because of the following reason sent from OMS: AGENT_TZ_MISMATCH

Today I was trying to start my OEM agent on one of my dev servers and got the below error while starting it :-

==>./emctl start agent
Oracle Enterprise Manager Cloud Control 12c Release 4
Copyright (c) 1996, 2014 Oracle Corporation.  All rights reserved.
Starting agent ...........failed.
Consult emctl.log and emagent.nohup in: /u01/oracle/product/agent/agent_inst/sysman/log

Cause :-

I crosschecked the logs mentioned above and found the below in the emagent.nohup logfile.

The AGENT_TZ_MISMATCH line is the one of interest :-

 --- EMState agent
----- Wed Jun 14 11:58:16 2017::20552::Auto tuning the agent at time Wed Jun 14 11:58:16 2017 -----
----- Wed Jun 14 11:58:17 2017::20552::Finished auto tuning the agent at time Wed Jun 14 11:58:17 2017 -----
----- Wed Jun 14 11:58:17 2017::20552::Launching the JVM with following options: -Xmx128M -server -Djava.security.egd=file:///dev/./urandom -Dsun.lang.ClassLoader.allowArraySyntax=true -XX:+UseLinuxPosixThreadCPUClocks -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+UseCompressedOops -----
----- Wed Jun 14 11:58:17 2017::20552::Agent Launched with PID 20672 at time Wed Jun 14 11:58:17 2017 -----
----- Wed Jun 14 11:58:17 2017::20552::Previous Thrash State(-1,-1) -----
----- Wed Jun 14 11:58:17 2017::20672::Time elapsed between Launch of Watchdog process and execing EMAgent is 1 secs -----
2017-06-14 11:58:17,396 [1:main] WARN - Missing filename for log handler 'wsm'
2017-06-14 11:58:17,402 [1:main] WARN - Missing filename for log handler 'opss'
2017-06-14 11:58:17,403 [1:main] WARN - Missing filename for log handler 'opsscfg'
OMS decided to shutdown the agent because of the following reason sent from OMS:  AGENT_TZ_MISMATCH
----- Wed Jun 14 11:58:26 2017::20552::Checking status of EMAgent : 20672 -----
----- Wed Jun 14 11:58:26 2017::20552::EMAgent exited at Wed Jun 14 11:58:26 2017 with return value 0. -----
----- Wed Jun 14 11:58:26 2017::20552::writeAbnormalExitTimestampToAgntStmp: exitCause=NORMAL : restartRequired=0 -----
----- Wed Jun 14 11:58:26 2017::20552::EMAgent was shutdown normally. -----
----- Wed Jun 14 11:58:26 2017::20552::Exiting watchdog loop

We can clearly see that there is a timezone mismatch between the agent and the OMS repository server.

Solution :-

As the above error states, there is a timezone mismatch between the agent server and the OMS repository server, or the OMS repository database is holding incorrect timezone details for the agent server.
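
A quick way to confirm this is to compare the agentTZRegion property in the agent's emd.properties with the OS timezone on the agent server; a sketch (emd.properties path as in the resetTZ output further down) :-

==>grep agentTZRegion /u01/oracle/product/agent/agent_inst/sysman/config/emd.properties
==>date
==>echo $TZ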

Below steps resolved my error:-

[Server1][oracle][amit1]
EM_HOME/bin$
==>./emctl resetTZ agent
Oracle Enterprise Manager Cloud Control 12c Release 4
Copyright (c) 1996, 2014 Oracle Corporation.  All rights reserved.
Updating /u01/oracle/product/agent/agent_inst/sysman/config/emd.properties...
Successfully updated /u01/oracle/product/agent/agent_inst/sysman/config/emd.properties.
Login as the em repository user and run the  script:
exec mgmt_target.set_agent_tzrgn('Server1.test.com:9901','US/Eastern')
and commit the changes
This can be done for example by logging into sqlplus and doing
SQL> exec mgmt_target.set_agent_tzrgn('Server1.test.com:9901','US/Eastern')
SQL> commit

Log in to the OMS repository database, connect as the SYSMAN user and execute the below procedure (do not forget to commit, as the resetTZ output above notes) :-

exec mgmt_target.set_agent_tzrgn('Server1.test.com:9901','US/Eastern')
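
A minimal sketch of that session (OMSREPO is just a placeholder for your repository connect string) :-

==>sqlplus sysman@OMSREPO

SQL> exec mgmt_target.set_agent_tzrgn('Server1.test.com:9901','US/Eastern')
SQL> commit;
SQL> exit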

Do a clearstate of the agent, secure it again and then start it :-

==>./emctl clearstate agent
Oracle Enterprise Manager Cloud Control 12c Release 4
Copyright (c) 1996, 2014 Oracle Corporation.  All rights reserved.
EMD clearstate completed successfully
[Server1][oracle][amit1]
EM_HOME/bin$
==>./emctl secure agent
Oracle Enterprise Manager Cloud Control 12c Release 4
Copyright (c) 1996, 2014 Oracle Corporation.  All rights reserved.
Agent is already stopped...   Done.
Securing agent...   Started.
Enter Agent Registration Password :
Securing agent...   Successful.
[Server1][oracle][amit1]
EM_HOME/bin$
==>./emctl start agent
Oracle Enterprise Manager Cloud Control 12c Release 4
Copyright (c) 1996, 2014 Oracle Corporation.  All rights reserved.
Starting agent ..... started.

Agent started successfully

I hope this article helped you

Thanks 
Amit Rath

Friday, January 13, 2017

Could not read EMD_URL from agent configuration files in 12c

Today, after OS patching, I got the below error when trying to check the status of the agent :-

==>./emctl status agent
Oracle Enterprise Manager Cloud Control 12c Release 4
Copyright (c) 1996, 2014 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
EMD_URL is null.
Could not read EMD_URL from agent configuration files

==>./emctl stop agent
Oracle Enterprise Manager Cloud Control 12c Release 4
Copyright (c) 1996, 2014 Oracle Corporation.  All rights reserved.
EMD_URL is null.
Could not read EMD_URL from agent configuration files

Cause :-

The agent configuration files were corrupted or accidentally deleted.

Solution :-

Check the emd.properties file in the EM_AGENT_HOME/agent_inst/sysman/config folder
or the targets.xml file in the EM_AGENT_HOME/agent_inst/sysman/emd folder.

In my scenario, my emd.properties got corrupted :-

==>ls -ltr emd*
-rw------- 1 oracle sigdba 7709 Jan 19  2016 emd.properties.tzbak
-rw-r----- 1 oracle sigdba 7709 Jan 19  2016 emd.properties.2016-01-19-21-51-32
-rw-r----- 1 oracle sigdba 7734 Jan 19  2016 emd.properties.2016_01_19_09_51_36
-rw-r----- 1 oracle sigdba 7861 Oct 29 01:59 emd.properties.bak
-rw-r----- 1 oracle sigdba    0 Dec 16 01:01 emd.properties
[Hostname][oracle][AMIT2]

We can see that the size of the file is zero. By default, Oracle keeps a backup of the emd.properties file in the same directory, so I restored the file from that backup.
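
A minimal sketch of the restore, assuming the .bak copy is still valid (EM_AGENT_HOME is a placeholder for your agent base); the zero-byte file is kept aside with a date suffix first :-

==>cd EM_AGENT_HOME/agent_inst/sysman/config
==>mv emd.properties emd.properties_20170113
==>cp -p emd.properties.bak emd.properties

After the restore the directory looked like this :-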

-rw-r----- 1 oracle sigdba  7861 Oct 29 01:59 emd.properties
-rw-r----- 1 oracle sigdba   155 Oct 29 01:59 private.properties
-rw-r----- 1 oracle sigdba   265 Dec 16 01:01 autotune.properties
-rw-r----- 1 oracle sigdba     0 Dec 16 01:01 emd.properties_20170113

Now I tried restarting the agent again, and it worked :-

==>./emctl stop agent
Oracle Enterprise Manager Cloud Control 12c Release 4
Copyright (c) 1996, 2014 Oracle Corporation.  All rights reserved.
Agent is Not Running
[Hostname][oracle][amit2]
/opt/oracle/product/agent/core/12.1.0.4.0/bin$
==>./emctl start agent
Oracle Enterprise Manager Cloud Control 12c Release 4
Copyright (c) 1996, 2014 Oracle Corporation.  All rights reserved.
Starting agent ..................... started.

If your emd.properties file is fine but targets.xml is corrupted, Oracle keeps a backup of that file in the same folder as well, and we can restore it from there.

I hope this article helped you.

Thanks
Amit Rath

Wednesday, January 11, 2017

MRP0: Background Media Recovery terminated with error 1111

Today, while troubleshooting a DR database sync issue, I got the below error in the alert log file :-

MRP0: Background Media Recovery terminated with error 1111
Errors in file /opt/oracle/diag/rdbms/amit/amit1/trace/amit1_pr00_1890.trc:
ORA-01111: name for data file 34 is unknown - rename to correct file
ORA-01110: data file 34: '/opt/oracle/product/database/11.2.0.4/dbs/UNNAMED00034'
ORA-01157: cannot identify/lock data file 34 - see DBWR trace file
ORA-01111: name for data file 34 is unknown - rename to correct file
ORA-01110: data file 34: '/opt/oracle/product/database/11.2.0.4/dbs/UNNAMED00034'

Cause :-

The above issue can occur in a DR database in the below scenarios :-

1. A datafile is added in the primary database while there is a space issue in the DR database.
2. Standby file management is set to MANUAL in the standby database.
3. The db_create_file_dest or db_file_name_convert parameter is not set in the standby database.

These settings can be checked quickly on the standby, as sketched below.
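
A sketch of the checks in SQL*Plus on the standby instance :-

SQL> show parameter standby_file_management
SQL> show parameter db_file_name_convert
SQL> show parameter db_create_file_dest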

Troubleshoot :-

When I crosschecked the sync details between the primary and standby databases, I saw that Redo Apply had stopped.

I checked the standby database alert log and found the above-mentioned error.

Checked the datafile details in the DR database :-

File_id       name
34             /opt/oracle/product/database/11.2.0.4/dbs/UNNAMED00034
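
The detail above can be pulled on the standby with a query along these lines (file 34 is the number reported in the error) :-

SQL> select file#, name from v$datafile where file#=34;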

I checked the primary alert log file and found that a datafile addition had happened in the primary database.

I crosschecked all the above-mentioned parameters and found that space had been exhausted in the DR (BCP) database; because of that, Oracle created an UNNAMED placeholder file under the ORACLE_HOME location, and Redo Apply stopped.

Solution :-

Checked the detail of the added file in the production database :-

File_id       name
34            +DATA/amit/datafile/data.321.931222251

Log in to the standby database server :-

1. As space was exhausted at the diskgroup level, add space by logging in to the ASM instance.
2. Log in to the DR database instance :-

a. Change the standby_file_management parameter to MANUAL :-

SQL> show parameter standby

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
standby_archive_dest                 string      ?/dbs/arch
standby_file_management              string      AUTO
SQL> alter system set standby_file_management=MANUAL sid='*';

SQL> show parameter standby

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
standby_archive_dest                 string      ?/dbs/arch
standby_file_management              string      MANUAL
SQL>

b. Confirm the unnamed file and re-create it in the proper location :-
SQL> select name from v$datafile where file#=34;

NAME
--------------------------------------------------------------------------------
/opt/oracle/product/database/11.2.0.4/dbs/UNNAMED00034

SQL> alter database create datafile '/opt/oracle/product/database/11.2.0.4/dbs/UNNAMED00034' as '+DATA' size 10g;

Database altered.

After creating the datafile in the DR database, check the file detail again in the DR database :-

File_id       name
34             +DATA/amit_dr/datafile/data.278.293456881

Set standby_file_management back to AUTO, then check the Redo Apply status; if it has not resumed, start it manually using DGMGRL or SQL*Plus, as sketched below.
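
A minimal sketch of resuming Redo Apply, assuming the DR db_unique_name is amit_dr (taken from the file path above; substitute your own) :-

SQL> alter system set standby_file_management=AUTO sid='*';
SQL> alter database recover managed standby database using current logfile disconnect from session;

or, through the broker :-

DGMGRL> edit database 'amit_dr' set state='APPLY-ON';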

I hope this article helped you.

Thanks
Amit Rath