Restart Exadata storage cell service without affecting ASM

Brief history:

About a week ago, the cell service on our DR Exadata hung, which caused all databases located on that Exadata to become inaccessible.

CellCLI> LIST ALERTHISTORY
9 2017-10-13T11:56:05+04:00 critical “RS-7445 [Serv CELLSRV hang detected] [It will be restarted] [] [] [] [] [] [] [] [] [] []”

The cell's alert history said the service would be restarted automatically, but it was not, so I restarted it the following way:

CellCLI> ALTER CELL RESTART SERVICES CELLSRV

The databases started to work correctly.

Today the same problem happened on the HQ side, which of course stopped everything for a while until I restarted the service.

But identifying which cell was problematic was a bit difficult, because this time there was no error in the alert history.

But when I ran the following command on the third cell node, it hung; the other cells were fine.

CellCLI> LIST ACTIVEREQUEST

So I restarted the same service on that node and the problem was resolved.
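For a similar situation in the future, a quick way to find the hung cell is to run LIST ACTIVEREQUEST against every cell with a timeout and see which one does not answer. Below is a minimal sketch, not a tested tool: it assumes passwordless SSH from the DB node to the cells and a /root/cell_group file with one cell hostname per line (adjust to your environment).

# Flag the cell where CellCLI hangs; a cell that does not answer within 30 seconds is the suspect
for cell in $(cat /root/cell_group); do
  echo "=== $cell ==="
  timeout 30 ssh "$cell" "cellcli -e LIST ACTIVEREQUEST" > /dev/null 2>&1 \
    || echo "$cell did not respond within 30s - check CELLSRV on this cell"
done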

CellCLI> ALTER CELL RESTART SERVICES CELLSRV

Of course, this is not a real solution and the cell service must not hang, but it is a simple workaround when your PRODUCTION databases are down.

I have created an SR and am waiting for an answer; if there is any useful news, I will update this post.

===================================================================================================

Here are the correct steps for restarting cell services without affecting ASM:

1.  Run the following command to check if there are offline disks on other cells that are mirrored with disks on this cell:

CellCLI> LIST GRIDDISK ATTRIBUTES name WHERE asmdeactivationoutcome != 'Yes'

Warning : If any grid disks are listed in the returned output, then it is not safe to stop or re-start the CELLSRV process because proper Oracle ASM disk group redundancy will not be intact and will cause Oracle ASM to dismount the affected disk group, causing the databases to shut down abruptly.

If no grid disks are listed in the returned output, you can safely restart cellsrv or all services in step #2 below.
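If you want to run this redundancy check against every cell at once instead of logging in to each one, the following sketch assumes dcli is configured on the DB node with a /root/cell_group file listing the cells (adjust the group file and paths to your environment):

# Run the step 1 check on all cells; empty output everywhere means redundancy is intact
dcli -g /root/cell_group -l root "cellcli -e LIST GRIDDISK ATTRIBUTES name, asmmodestatus, asmdeactivationoutcome WHERE asmdeactivationoutcome != 'Yes'"

If any cell returns grid disks, wait until they come back ONLINE before restarting anything.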

2.  Re-start the cell services using either of the following commands:

CellCLI> ALTER CELL RESTART SERVICES CELLSRV

CellCLI> ALTER CELL RESTART SERVICES ALL

The good news is that the cell has self-defence against reduced redundancy: if you try to restart it when the redundancy check is not satisfied, you get:

CellCLI> ALTER CELL RESTART SERVICES ALL;

Stopping the RS, CELLSRV, and MS services…
The SHUTDOWN of ALL services was not successful.
CELL-01548: Unable to shut down CELLSRV because disk group DATA, RECO may be forced to dismount due to reduced redundancy.
Getting the state of CELLSRV services… running
Getting the state of MS services… running
Getting the state of RS services… running

 


Rebuild RAC clusterware without deleting data

As I mentioned in my previous posts, I was applying an interim patch on a database, which had a post-installation script (# <GI_HOME>/crs/install/rootcrs.pl -postpatch).
The post script failed with a permission-denied error on the ohasd file and left the clusterware in a messy state.

I opened an SR on My Oracle Support, and after a huge amount of time spent talking and troubleshooting together, one of the support engineers said:

“We do not know what happened or what steps you have taken to reach this situation. You should open an SR with us before you deconfigure the node.
Please, do bare metal restore as it is recommended by previous engineer.
Bare Metal Restore Procedure for Compute Nodes on an Exadata Environment ( Doc ID 1084360.1 )”

A Bare Metal Restore means wiping everything, after which I would have had to configure RAC, Data Guard and everything else from scratch. I don't like such solutions; this is like "if your Windows works slowly, reinstall it"… for Windows this might really be true 🙂 nothing helps but a reinstall 😀 but on Linux/Oracle you must troubleshoot first.
So I created another SR for another error (there were a lot of errors at that point), and this second time I was lucky.
I was working 24/7 with support and the engineers worked in shifts; three different engineers handled this SR at different times.
I want to mention one Oracle support engineer, "Venkata Pradeep Kumar": he is so clever, he helped me a lot, and together we rescued the system! :)

I want to share the steps with you; they should be interesting.

Problem:

After running the patch post script on the first node (where it failed), clusterware on the first node would not start. At that point the second node was fine.
I deconfigured clusterware on the first node (the command is in the solution section) and it started, but with some problems with the OC4J service.

2016/09/27 06:56:15 CLSRSC-1003: Failed to start resource OC4J
2016/09/27 06:56:16 CLSRSC-287: FirstNode configuration failed

I also deconfigured clusterware on the second node and tried to run root.sh, but it said that root.sh could not be run because it had not completed successfully on the first node. 😦

So, until the root.sh script completes successfully on the first node, you should not deconfigure clusterware on the second one. But if you already did, do not panic as long as you have an OCR backup.

Solution:

# Deconfigure CRS on the problematic node. Note that a smaller fix may be enough for you, reconfiguring just one node; in my situation all nodes became problematic.
# Also be careful: the steps below assume that you have a separate disk group for OCR. The datafiles must be in a different disk group, otherwise that disk group will be wiped.

# As root, on both nodes (node1, node2)

/u01/app/12.1.0.2/grid/crs/install/rootcrs.sh -deconfig -force

# Run root.sh on node1; it may not complete successfully

/u01/app/12.1.0.2/grid/root.sh

# We need to find a good OCR backup; for me it is week.ocr, which was taken automatically on 2016/09/15 09:12:28.
# The patch was applied at 10:00 AM on 2016/09/25, so we need week.ocr because it was taken before patching.

[root@lbdm01-dr-adm grid]# ocrconfig -showbackup

lbdm02-dr-adm 2016/09/27 02:35:23 /u01/app/12.1.0.2/grid/cdata/lbank-clus-dr/backup00.ocr 3351897854
lbdm02-dr-adm 2016/09/26 15:44:53 /u01/app/12.1.0.2/grid/cdata/lbank-clus-dr/backup01.ocr 3351897854
lbdm02-dr-adm 2016/09/26 11:44:52 /u01/app/12.1.0.2/grid/cdata/lbank-clus-dr/backup02.ocr 3351897854
lbdm02-dr-adm 2016/09/27 02:35:23 /u01/app/12.1.0.2/grid/cdata/lbank-clus-dr/day.ocr 3351897854
lbdm01-dr-adm 2016/09/15 09:12:28 /u01/app/12.1.0.2/grid/cdata/lbank-clus-dr/week.ocr 854493477
lbdm02-dr-adm 2016/09/25 15:29:18 /u01/app/12.1.0.2/grid/cdata/lbank-clus-dr/backup_20160925_152918.ocr 3351897854
lbdm02-dr-adm 2016/09/25 10:34:56 /u01/app/12.1.0.2/grid/cdata/lbank-clus-dr/backup_20160925_103456.ocr 2725022894
lbdm01-dr-adm 2015/07/29 19:46:28 /u01/app/12.1.0.2/grid/cdata/lbank-clus-dr/backup_20150729_194628.ocr 854493477
lbdm01-dr-adm 2015/07/29 19:46:27 /u01/app/12.1.0.2/grid/cdata/lbank-clus-dr/backup_20150729_194627.ocr 854493477

# Ensure that no processes are left
# node 1

crsctl stop crs -f
ps -ef | grep "/u01/app"

# If anything is still running, kill it!
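A small sketch for this cleanup (double-check the process list before killing anything; the grep pattern assumes all Oracle software lives under /u01/app, as in my environment):

# List leftovers from the GI/DB homes
ps -ef | grep "/u01/app" | grep -v grep
# If you are sure nothing legitimate is left, kill the remaining PIDs
ps -ef | grep "/u01/app" | grep -v grep | awk '{print $2}' | xargs -r kill -9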

# Start clusterware in exclusive mode without crsd on node 1

crsctl start crs -excl -nocrs

# Restore OCR on node 1

ocrconfig -restore /u01/app/12.1.0.2/grid/cdata/lbank-clus-dr/week.ocr
ocrcheck

# Stop crs on node 1

crsctl stop crs -f
crsctl start crs

# Check the status

crsctl status res -t

# It should be OK

# Do the same steps on node 2 from root, but it may fail

/u01/app/12.1.0.2/grid/root.sh

# Failed

ORA-15160: rolling migration internal fatal error in module SKGXP,valNorm:not-native
. For details refer to “(:CLSN00107:)” in “/u01/app/oracle/diag/crs/lbdm02-dr-adm/crs/trace/ohasd_oraagent_oracle.trc”.
CRS-2883: Resource ‘ora.asm’ failed during Clusterware stack start.
CRS-4406: Oracle High Availability Services synchronous start failed.
CRS-4000: Command Start failed, or completed with errors.
2016/09/28 09:11:00 CLSRSC-117: Failed to start Oracle Clusterware stack

# Deconfigure CRS again on both nodes
# node1, node2

 /u01/app/12.1.0.2/grid/crs/install/rootcrs.sh -deconfig -force

# and run root.sh again
# node 1

/u01/app/12.1.0.2/grid/root.sh

# This time it completed successfully.

# On the second node there is still a problem

# Read the following document: ORA-15160: rolling migration internal fatal error in module SKGXP,valNorm:not-native (NOTE 1682591.1)

# Here the problem was with the protocols used by ASM and the RDBMS.
# The RDBMS was using the RDS protocol and ASM was using UDP; see Oracle Clusterware and RAC Support for RDS Over Infiniband (NOTE 751343.1).
# The problem was in the libraries, and we had to relink them with the right protocols.
# As the ORACLE_HOME/GI_HOME owner, stop all resources (database, listener, ASM, etc.) running from the home. When stopping the database, use the NORMAL or IMMEDIATE option.

# From the problematic node, where ASM or the database is not starting.

crsctl stop crs
ps -ef|grep d.bin
ps -ef | grep "/u01/app"

# Kill any processes that are left

# If relinking Grid Infrastructure (GI) home, as root, unlock GI home: <GI_HOME>/crs/install/rootcrs.pl -unlock

/u01/app/12.1.0.2/grid/crs/install/rootcrs.sh -unlock

# As the ORACLE_HOME/GI_HOME owner, go to ORACLE_HOME/GI_HOME and cd to rdbms/lib
# As the ORACLE_HOME/GI_HOME owner, issue "make -f ins_rdbms.mk <protocol> ioracle"
# For the RDBMS home

[root@lbdm02-dr-adm lib]# su - oracle
[oracle@lbdm02-dr-adm ~]$ cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk ipc_rds ioracle

# For the ASM (Grid Infrastructure) home

. oraenv
+ASM2
[oracle@lbdm02-dr-adm ~]$ cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk ipc_g ioracle
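To double-check which IPC protocol a home is actually linked with, you can use the skgxpinfo utility under the home's bin directory (assuming it is present in your version; it simply prints the protocol):

# As the RDBMS home owner
$ORACLE_HOME/bin/skgxpinfo      # expected: rds
# As the GI home owner (after . oraenv -> +ASM2)
$ORACLE_HOME/bin/skgxpinfo      # expected: udp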

# From root

/u01/app/12.1.0.2/grid/crs/install/rootcrs.sh -patch

# The last step should also configure the clusterware. After that everything should be fine, and you can sleep now. 🙂

 

RAC: root.sh | CRS-2672: Attempting to start ‘ora.storage’ | ORA-01017: invalid username/password

I was configuring clusterware on node1 and got the following error:

CRS-2672: Attempting to start ‘ora.storage’ on ‘node1’
ORA-01017: invalid username/password; logon denied
CRS-5017: The resource action “ora.storage start” encountered the following error:
Storage agent start action aborted. For details refer to “(:CLSN00107:)” in “/u01/app/oracle/diag/crs/node1/crs/trace/ohasd_orarootagent_root.trc”.
CRS-2883: Resource ‘ora.storage’ failed during Clusterware stack start.
CRS-4406: Oracle High Availability Services synchronous start failed.
CRS-4000: Command Start failed, or completed with errors.
2016/09/27 05:41:01 CLSRSC-117: Failed to start Oracle Clusterware stack

Died at /u01/app/12.1.0.2/grid/crs/install/crsinstall.pm line 930.
The command '/u01/app/12.1.0.2/grid/perl/bin/perl -I/u01/app/12.1.0.2/grid/perl/lib -I/u01/app/12.1.0.2/grid/crs/install /u01/app/12.1.0.2/grid/crs/install/rootcrs.pl ' execution failed

 

/u01/app/oracle/diag/crs/node1/crs/trace/ohasd_orarootagent_root.trc file says:

2016-09-27 05:40:56.787330*:kgfn.c@6018: kgfnConnect2Int: sysasm=0 envflags=0x10 srvrflags=0x3 unam=NULL password is NULL pstr=_ocr
2016-09-27 05:40:56.787330*:kgfn.c@6194: kgfnConnect2Int: cstr=(DESCRIPTION=(ADDRESS=(PROTOCOL=beq)(PROGRAM=/u01/app/12.1.0.2/grid/bin/oracle)(ARGV0=oracle+ASM1_ocr)(ENVS=’ORACLE_HOME=/u01/app/12.1.0.2/grid,ORACLE_SID=+ASM1′)(ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))’)(PRIVS=(USER=root)(GROUP=root)))(enable=setuser))
2016-09-27 05:40:57.273302 : AGENT:2583111424: {0:9:3} {0:9:3} Created alert : (:CRSAGF00113:) : Aborting the command: start for resource: ora.storage 1 1

 

So why user root???

See, when I connect as root I get ORA-01017:

[root@node1 ~]# . oraenv
ORACLE_SID = [+ASM1] ? +ASM1
The Oracle base has been set to /u01/app/oracle
[root@node1 ~]# sqlplus / as sysasm

SQL*Plus: Release 12.1.0.2.0 Production on Tue Sep 27 05:59:01 2016
Copyright (c) 1982, 2014, Oracle. All rights reserved.

ERROR:
ORA-01017: invalid username/password; logon denied

If I connect as the oracle user, it is OK:

su - oracle

[oracle@node1 ~]$ . oraenv
ORACLE_SID = [LBTCI1] ? +ASM1

[oracle@node1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Tue Sep 27 05:59:45 2016
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL>

 

Look at the connection string again: there is "PROGRAM=/u01/app/12.1.0.2/grid/bin/oracle", so let's check the file permissions.

[oracle@node1 ~]$ ll /u01/app/12.1.0.2/grid/bin/oracle
-rwsr-s--x 1 root root 295054213 Sep 27 05:26 /u01/app/12.1.0.2/grid/bin/oracle

It must be owned by oracle:oinstall, not root:root:

chown oracle:oinstall /u01/app/12.1.0.2/grid/bin/oracle
chmod 6751 /u01/app/12.1.0.2/grid/bin/oracle
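After the chown/chmod, the binary should look like this (the size and date will of course differ in your environment):

ls -l /u01/app/12.1.0.2/grid/bin/oracle
# -rwsr-s--x 1 oracle oinstall ... /u01/app/12.1.0.2/grid/bin/oracle  <-- setuid/setgid bits kept, owner oracle:oinstall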

 

Then deconfigure CRS (rootcrs.pl -deconfig -verbose) and reconfigure it (run root.sh) again.
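For reference, these are the commands (as root; the GI home path is from my environment, adjust it to yours):

# Deconfigure CRS on the node
/u01/app/12.1.0.2/grid/crs/install/rootcrs.pl -deconfig -force -verbose
# Reconfigure by running root.sh again
/u01/app/12.1.0.2/grid/root.sh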

 

error: package cvuqdisk is not installed

I applied a patch on RAC, and after I ran the post-install script on the first node, it failed because of a file permission issue and the problems started…

I could not start up clusterware on the first node.

I deconfigured clusterware with:

[root@lbdm01-dr-adm ~]# $ORACLE_HOME/crs/install/rootcrs.pl -deconfig -force -verbose

And here I got an error (cvuqdisk):

PRCR-1070 : Failed to check if resource ora.net1.network is registered
CRS-0184 : Cannot communicate with the CRS daemon.
PRCR-1070 : Failed to check if resource ora.helper is registered
CRS-0184 : Cannot communicate with the CRS daemon.
PRCR-1070 : Failed to check if resource ora.ons is registered
CRS-0184 : Cannot communicate with the CRS daemon.

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on ‘lbdm01-dr-adm’
CRS-2679: Attempting to clean ‘ora.cssd’ on ‘lbdm01-dr-adm’
CRS-2680: Clean of ‘ora.cssd’ on ‘lbdm01-dr-adm’ failed
CRS-2799: Failed to shut down resource ‘ora.cssd’ on ‘lbdm01-dr-adm’
CRS-2795: Shutdown of Oracle High Availability Services-managed resources on ‘lbdm01-dr-adm’ has failed
CRS-4687: Shutdown command has completed with errors.
CRS-4000: Command Stop failed, or completed with errors.
2016/09/26 19:54:12 CLSRSC-463: The deconfiguration or downgrade script could not stop current Oracle Clusterware stack.

2016/09/26 19:54:12 CLSRSC-4006: Removing Oracle Trace File Analyzer (TFA) Collector.

2016/09/26 19:54:26 CLSRSC-4007: Successfully removed Oracle Trace File Analyzer (TFA) Collector.

error: package cvuqdisk is not installed
2016/09/26 19:54:26 CLSRSC-557: Oracle Clusterware stack on this node has been successfully deconfigured. There were some errors which can be ignored.

 

I searched for information about this package in the documentation and found the following:

Link: https://docs.oracle.com/database/121/LADBI/pre_install.htm#LADBI7632

Installing the cvuqdisk RPM for Linux

If you do not use an Oracle Preinstallation RPM, then you must install the cvuqdisk RPM. Without cvuqdisk, the Cluster Verification Utility cannot find shared disks, and you receive a “Package cvuqdisk not installed” error when you run the Cluster Verification Utility. Use the cvuqdisk RPM for your hardware (for example, x86_64, or i386).

To install the cvuqdisk RPM, complete the following procedure:

  1. Locate the cvuqdisk RPM package, which is in the directory rpm on the Oracle Database installation media. If you installed Oracle Grid Infrastructure, then it is in the directory oracle_home1/cv/rpm.
  2. Log in as root.
  3. Use the following command to find if you have an existing version of the cvuqdisk package:

    # rpm -qi cvuqdisk

  4. If you have an existing version, then enter the following command to deinstall the existing version:

    # rpm -e cvuqdisk

  5. Set the environment variable CVUQDISK_GRP to point to the group that owns cvuqdisk, typically oinstall, for example:
    # CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
    
  6. In the directory where you have saved the cvuqdisk RPM, use the following command to install the cvuqdisk package:
    rpm -iv package
    

    For example:

    # rpm -iv cvuqdisk-1.0.9-1.rpm

So I found the mentioned package in the following directory and installed it.

cd /u01/app/12.1.0.2/grid/cv/rpm

yum install cvuqdisk-1.0.9-1.rpm
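If you install the RPM directly with rpm instead of yum, remember to set CVUQDISK_GRP first, as the documentation above says; in either case you can verify the installation afterwards. A small sketch:

# Only needed when installing with rpm directly
export CVUQDISK_GRP=oinstall
rpm -iv cvuqdisk-1.0.9-1.rpm

# Verify that the package is now installed
rpm -qi cvuqdisk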

Note: The problem is strange, but I am not writing about why it happened in this post, because I don't know yet 🙂
The aim of this post is to show you where to find the cvuqdisk package and what it is for 🙂

Good Luck!

Oracle Clusterware is restarting database “” | shut down instance “” of database “” | start up instance “” of database “”

When I used Data Guard broker to switch over the primary database to the standby, sometimes the broker wrote "Oracle Clusterware is restarting database" … then hung and timed out.

The problem was that the database was not registered with srvctl.

srvctl add database -d db_unique_name -o oracle_home
srvctl add instance -d db_unique_name -i instance_name1 -n node_name1
srvctl add instance -d db_unique_name -i instance_name2 -n node_name2

Instead of db_unique_name, oracle_home, instance_name1, node_name1, instance_name2 and node_name2, enter the values for your own database.
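As a purely hypothetical example (the database name ORCL, instance names ORCL1/ORCL2, node names node1/node2 and the Oracle home path below are placeholders, not values from my system):

srvctl add database -d ORCL -o /u01/app/oracle/product/12.1.0.2/dbhome_1
srvctl add instance -d ORCL -i ORCL1 -n node1
srvctl add instance -d ORCL -i ORCL2 -n node2

# Check the registration
srvctl config database -d ORCL
srvctl status database -d ORCL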

SESSIONS WAITING ON INACTIVE TRANSACTION BRANCH, GLOBAL HASH COLLISION 12c, 11g Oracle RAC (distributed transaction)

When performing XA transactions against a multi-node Oracle RAC configuration, some branches of the transaction may not commit. This is a known bug, but to tell the truth no bug fix helped me solve the problem until I came across this IBM technote: https://www-304.ibm.com/support/docview.wss?uid=swg21460967

There are several workarounds, but I prefer workaround 1; I have used it and it works perfectly.

1. If using pfile (init.ora) files, add the following line to the file:

_clusterwide_global_transactions=false

2. If using an spfile, issue the following command from SQL*Plus:

alter system set "_clusterwide_global_transactions"=false scope=spfile;

3. Restart the database (you can restart the instances one by one).

The problem should disappear.
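Before the restart in step 3, if you want to confirm that the parameter from step 2 really landed in the spfile, one way (a sketch, run as SYSDBA) is to query v$spparameter:

sqlplus -s / as sysdba <<'EOF'
-- underscore parameters set with scope=spfile show up here
select name, value from v$spparameter where name = '_clusterwide_global_transactions';
EOF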

 

Add/Drop ASM disks to a DISKGROUP on RAC (or Standalone)

Note: The steps are described for RAC, but you can easily work out the corresponding steps for a standalone database.

1. First of all, find the disk or partition name that should be added to ASM.

fdisk -l

My disk partition name is /dev/sdi1.

2. Assign the disk to ORACLEASM.

-- On node1

/etc/init.d/oracleasm createdisk DISK7 /dev/sdi1

3. Scan the disks on ALL NODES and list them to check that the new disk is present.

-- On node1

/etc/init.d/oracleasm scandisks
/etc/init.d/oracleasm listdisks

-- On node2

/etc/init.d/oracleasm scandisks
/etc/init.d/oracleasm listdisks

4. Switch the environment to the Grid Infrastructure home by setting ORACLE_SID to +ASM1 and so on:

$ . oraenv
ORACLE_SID = [media1] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/oracle

# Connect as SYSASM

sqlplus / as sysasm

Note: If you don't remember the SYSASM password, see How to reset SYSASM password.

# Find the diskgroup name

SQL> select name from v$asm_diskgroup;

NAME
——————————
DATA01

# Increase the power limit, if you want, to complete the rebalance operation faster.

SQL> alter system set asm_power_limit=10;

# Point ASM to the disk location with the asm_diskstring parameter

SQL> alter system set asm_diskstring='ORCL:DISK*';
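Before adding the disk, you can optionally confirm that ASM now sees it as a candidate; a small check run as SYSASM (a disk that is not yet in any disk group should show header_status CANDIDATE or PROVISIONED):

sqlplus -s / as sysasm <<'EOF'
-- the new disk should be listed here before you add it
select path, header_status from v$asm_disk where header_status in ('CANDIDATE','PROVISIONED');
EOF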

SQL> alter diskgroup DATA01 add disk 'ORCL:DISK7';

It will do the rebalance automatically.

# To drop the disk, do the following:

SQL> alter diskgroup DATA01 drop disk DISK7;

It will rebalance first and then drop the disk automatically.

You can see the current operation in v$asm_operation view.
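A small sketch for monitoring the rebalance as SYSASM (EST_MINUTES gives a rough estimate of the remaining time; no rows means the operation has finished):

sqlplus -s / as sysasm <<'EOF'
select group_number, operation, state, power, sofar, est_work, est_minutes
  from v$asm_operation;
EOF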

Note: While the view v$asm_operation still contains a record for the operation, you can undrop the disks like this:

SQL> alter diskgroup DATA01 undrop disks;

If the operation has already completed, you can no longer undrop the disk, but you can re-add it if you want.

That is all.