Switchover: convert primary database to standby
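
Before issuing any of the commands below, verify on the primary that it is actually ready to change roles; this pre-check is my addition, not part of the original steps:

SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;

-- TO STANDBY (or SESSIONS ACTIVE) means a switchover can proceed.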

-- On the primary

-- Convert the primary database to a standby

CONNECT / AS SYSDBA
ALTER DATABASE COMMIT TO SWITCHOVER TO STANDBY;

-- Shut down the primary database

SHUTDOWN IMMEDIATE;

-- Mount the old primary as a standby database, open it, and enable real-time apply

STARTUP NOMOUNT;
ALTER DATABASE MOUNT STANDBY DATABASE;
ALTER DATABASE OPEN;

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;

-- On the standby

-- Convert the standby database to the primary

CONNECT / AS SYSDBA
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

-- Shut down the standby database

SHUTDOWN IMMEDIATE;

-- Open the old standby as the primary

STARTUP;
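
Once both databases are back up, a quick check on each side confirms that the roles have actually swapped; this verification is my addition:

SQL> SELECT NAME, DATABASE_ROLE, OPEN_MODE FROM V$DATABASE;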

Enable real-time apply on standby (standalone, RAC)

In this post we will discuss how to enable the real-time apply feature on standalone and Real Application Clusters (RAC) standby databases.

On standalone standby:

1. Check the current recovery mode:

SQL> select dest_name,status,type,recovery_mode
from v$archive_dest_status
where dest_id=1;

DEST_NAME               STATUS    TYPE           RECOVERY_MODE
----------------------- --------- -------------- -------------------------
LOG_ARCHIVE_DEST_1      VALID     LOCAL          MANAGED

2. Enable real-time apply

On physical:

SQL> recover managed standby database cancel;
Media recovery complete.

SQL> recover managed standby database using current logfile disconnect from session;
Media recovery complete.

On logical:

SQL> alter database start logical standby apply immediate;

3. Check

SQL>  select dest_name,status,type,recovery_mode
from v$archive_dest_status
where dest_id=1;

DEST_NAME               STATUS    TYPE           RECOVERY_MODE
----------------------- --------- -------------- -------------------------
LOG_ARCHIVE_DEST_1      VALID     LOCAL          MANAGED REAL TIME APPLY
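
Real-time apply requires standby redo logs on the standby; if the recovery mode stays MANAGED after the commands above, that is the first thing to verify (this check is my addition):

SQL> select group#, thread#, bytes/1024/1024 mb, status from v$standby_log;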

On RAC standby:

1. Check the current recovery mode:

SQL> select dest_name,status,type,recovery_mode
from v$archive_dest_status
where dest_id=1;

DEST_NAME               STATUS    TYPE           RECOVERY_MODE
----------------------- --------- -------------- -------------------------
LOG_ARCHIVE_DEST_1      VALID     LOCAL          MANAGED

2. Enable real-time apply

On node1 (in a RAC standby, managed recovery runs on only one instance):

SQL> recover managed standby database cancel;
Media recovery complete.

SQL> recover managed standby database using current logfile disconnect from session;
Media recovery complete.

3. Check

SQL>  select dest_name,status,type,recovery_mode
from v$archive_dest_status
where dest_id=1;

DEST_NAME               STATUS    TYPE           RECOVERY_MODE
----------------------- --------- -------------- -------------------------
LOG_ARCHIVE_DEST_1      VALID     LOCAL          MANAGED REAL TIME APPLY
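
You can also confirm from the process side that MRP0 is applying in real time (not shown in the original post):

SQL> select process, status, thread#, sequence# from v$managed_standby where process like 'MRP%';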

Remove remembered password in SQL Server Management Studio

After trying to connect to a SQL Server database via SSMS with a remembered password, the following error window appeared:

[image: login error window]

Solution:

Note that this method clears all remembered passwords, after which the connect window will look like this:

[image]

[image]

If this is what you want, follow these steps:

1. Close SQL Server Management Studio.
2. Delete C:\Users\mkupatadze\AppData\Roaming\Microsoft\Microsoft SQL Server\100\Tools\Shell\SqlStudio.bin (adjust the user name and the version folder, here 100, to your installation).
3. Restart SQL Server Management Studio.
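
The same deletion expressed with the %APPDATA% variable, so it works regardless of the user name (the version folder, 100 for SSMS 2008, is an assumption you may need to adjust):

del "%APPDATA%\Microsoft\Microsoft SQL Server\100\Tools\Shell\SqlStudio.bin"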

Installing standalone standby database for RAC

In this post we will install a standalone standby database for a 2-node RAC. I assume that RAC is already configured; if not, see the instructions here.

RAC description:

Instance name: orcl1
Hostname: node1

Instance name: orcl2
Hostname: node2

Standby description:

Instance name will be: stbydb
Hostname: stbynode

All of these instances are using ASM.

So let’s start…

On primary database:

0. Add entries to the hosts files.

On the standby:

127.0.0.1 localhost

192.168.34.150 node1
192.168.34.151 node2

10.10.2.50 node1-priv
10.10.2.51 node2-priv

192.168.34.154 node1-vip
192.168.34.155 node2-vip

192.168.34.160 rac-scan

192.168.34.156 stbynode

On node1 and node2:

127.0.0.1               localhost.localdomain localhost

192.168.34.150 node1
192.168.34.151 node2

10.10.2.50 node1-priv
10.10.2.51 node2-priv

192.168.34.154 node1-vip
192.168.34.155 node2-vip

192.168.34.160 rac-scan

192.168.34.156 stbynode

1. Enable archiving.

On node1:

[oracle@node1 ~]$ export ORACLE_SID=orcl1

sqlplus / as sysdba

SQL> select log_mode from v$database;

LOG_MODE
------------
NOARCHIVELOG

SQL> shutdown immediate;

SQL> startup mount;

On node2:

export ORACLE_SID=orcl2

sqlplus / as sysdba

SQL> shutdown immediate;

SQL> startup mount;

Note: in a RAC environment, to switch the database into archivelog mode you must first bring all instances into the mount state.
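
On 11.2 you can also stop and start all RAC instances in one go with srvctl instead of doing it per node; a sketch, assuming the database resource is registered as orcl:

[oracle@node1 ~]$ srvctl stop database -d orcl -o immediate
[oracle@node1 ~]$ srvctl start database -d orcl -o mount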

On node1:

SQL> alter database archivelog;

SQL> alter database open;

On node2:

SQL> alter database open;

On any node:

SQL> select log_mode from v$database;

LOG_MODE
------------
ARCHIVELOG

2. Enable force logging.

On node1:

SQL> select force_logging from v$database;

FOR
---
NO

SQL> alter database force logging;

SQL> select force_logging from v$database;

FOR
---
YES

3. Modify the following parameters:

On node1:

SQL> show parameter pfile

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      +DATA/orcl/spfileorcl.ora

SQL> create pfile from spfile;

Add the following parameters to initorcl1.ora (note that fal_client is set per instance):

log_file_name_convert='+DATA/stbydb','+DATA/orcl'
db_file_name_convert='+DATA/stbydb','+DATA/orcl'
orcl1.fal_client='orcl1'
orcl2.fal_client='orcl2'
fal_server='stbydb'
log_archive_config='dg_config=(orcl,stbydb)'
log_archive_dest_state_1='enable'
log_archive_dest_state_2='enable'
log_archive_dest_state_3='enable'
log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=orcl'
log_archive_dest_2='SERVICE=stbydb LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stbydb'
log_archive_dest_3='LOCATION=+DATA/orcl/STANDBYLOG VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=orcl'
log_archive_max_processes=7
log_archive_min_succeed_dest=2
remote_login_passwordfile='EXCLUSIVE'
standby_file_management='auto'
sec_case_sensitive_logon=FALSE

My initorcl1.ora looks like this:

orcl1.__db_cache_size=503316480
orcl2.__db_cache_size=486539264
orcl1.__java_pool_size=16777216
orcl2.__java_pool_size=16777216
orcl1.__large_pool_size=16777216
orcl2.__large_pool_size=16777216
orcl1.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
orcl2.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
orcl1.__pga_aggregate_target=587202560
orcl2.__pga_aggregate_target=587202560
orcl1.__sga_target=872415232
orcl2.__sga_target=872415232
orcl1.__shared_io_pool_size=0
orcl2.__shared_io_pool_size=0
orcl1.__shared_pool_size=318767104
orcl2.__shared_pool_size=335544320
orcl1.__streams_pool_size=0
orcl2.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/orcl/adump'
*.audit_trail='db'
*.cluster_database=true
*.compatible='11.2.0.0.0'
*.control_files='+DATA/orcl/controlfile/current.260.793127259'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain=''
*.db_name='orcl'
*.db_recovery_file_dest='+DATA'
*.db_recovery_file_dest_size=4039114752
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=orclXDB)'
orcl2.instance_number=2
orcl1.instance_number=1
*.log_archive_format='%t_%s_%r.dbf'
*.memory_target=1449132032
*.open_cursors=300
*.processes=150
*.remote_listener='rac-scan:1521'
*.remote_login_passwordfile='exclusive'
orcl1.thread=1
orcl2.thread=2
orcl2.undo_tablespace='UNDOTBS1'
orcl1.undo_tablespace='UNDOTBS2'
log_file_name_convert='+DATA/stbydb','+DATA/orcl'
db_file_name_convert='+DATA/stbydb','+DATA/orcl'
orcl1.fal_client='orcl1'
orcl2.fal_client='orcl2'
fal_server='stbydb'
log_archive_config='dg_config=(orcl,stbydb)'
log_archive_dest_state_1='enable'
log_archive_dest_state_2='enable'
log_archive_dest_state_3='enable'
log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=orcl'
log_archive_dest_2='SERVICE=stbydb LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stbydb'
log_archive_dest_3='LOCATION=+DATA/orcl/STANDBYLOG VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=orcl'
log_archive_max_processes=7
log_archive_min_succeed_dest=2
remote_login_passwordfile='EXCLUSIVE'
standby_file_management='auto'
*.sec_case_sensitive_logon=FALSE

SQL> shutdown immediate;

SQL> startup nomount pfile='/u01/app/oracle/product/11.2.0/db_1/dbs/initorcl1.ora'

On node2:

[oracle@node2 ~]$ export ORACLE_SID=orcl2
[oracle@node2 ~]$ sqlplus / as sysdba

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

On node1:

SQL> create spfile='+DATA/ORCL/spfileorcl.ora' from pfile='/u01/app/oracle/product/11.2.0/db_1/dbs/initorcl1.ora';

Edit the initorcl1.ora file so that it contains just the following entry:

SPFILE='+DATA/ORCL/spfileorcl.ora'

And start the instance.

SQL> shutdown immediate;

SQL> startup

On node2:

If you try to open node2 now, it will hang, because it will try to ship archive logs to the standby destination, which is not fully configured yet. So leave node2 down for now.

4. Create password files for all instances (the passwords must be identical).

On node1:

[oracle@node1 dbs]$ orapwd file=$ORACLE_HOME/dbs/orapworcl1 password=oracle force=y entries=5 ignorecase=y

On node2:

[oracle@node2 dbs]$ orapwd file=$ORACLE_HOME/dbs/orapworcl2 password=oracle force=y entries=5 ignorecase=y

On stbynode:

[oracle@stbynode ~]$ orapwd file=$ORACLE_HOME/dbs/orapwstbydb password=oracle entries=5 force=y  ignorecase=y
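
Once the instances are up, you can confirm that a password file is being picked up by querying v$pwfile_users; this verification step is my addition:

SQL> select username, sysdba, sysoper from v$pwfile_users;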

5. Create standby logfiles on the primary.

On node1:

[oracle@node1 ~]$ sqlplus / as sysdba

SQL> select max(length(member)) from v$logfile;

MAX(LENGTH(MEMBER))
-------------------
42

SQL> col member for a42
SQL> select group#,member,type from v$logfile order by 1;

GROUP# MEMBER                                     TYPE
---------- ------------------------------------------ -------
1 +DATA/orcl/onlinelog/group_1.261.793127267 ONLINE
2 +DATA/orcl/onlinelog/group_2.262.793127267 ONLINE
3 +DATA/orcl/onlinelog/group_3.266.793128605 ONLINE
4 +DATA/orcl/onlinelog/group_4.267.793128605 ONLINE

SQL> alter database add standby logfile thread 1;

SQL> alter database add standby logfile thread 2;

SQL> select group#,member,type from v$logfile order by 1;

GROUP# MEMBER                                     TYPE
---------- ------------------------------------------ -------
1 +DATA/orcl/onlinelog/group_1.261.793127267 ONLINE
2 +DATA/orcl/onlinelog/group_2.262.793127267 ONLINE
3 +DATA/orcl/onlinelog/group_3.266.793128605 ONLINE
4 +DATA/orcl/onlinelog/group_4.267.793128605 ONLINE
5 +DATA/orcl/onlinelog/group_5.273.793983817 STANDBY
5 +DATA/orcl/onlinelog/group_5.274.793983819 STANDBY
6 +DATA/orcl/onlinelog/group_6.271.793983823 STANDBY
6 +DATA/orcl/onlinelog/group_6.272.793983823 STANDBY
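
Each of the two commands above creates one standby group per thread with the default size; the usual guideline is one more standby group per thread than there are online groups, so you may want extras. If you prefer to control group numbers and sizes explicitly, the syntax looks like this (the group numbers and the 50M size here are assumptions, not taken from this setup):

SQL> alter database add standby logfile thread 1 group 7 size 50m;
SQL> alter database add standby logfile thread 2 group 8 size 50m;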

6. Create a parameter file for the standby database.

SQL> create pfile='$ORACLE_HOME/dbs/initstbydb.ora' from spfile;

Edit the required parameters. (The original post marked entries to delete in red and corrected entries in green; in plain text, the changes are summarized below, and the resulting file follows.)

- Delete every orcl2.* entry and strip the orcl1. prefixes from the remaining __* memory parameters.
- Delete the *.cluster_database, instance_number, thread, and undo_tablespace entries.
- Point audit_file_dest, control_files, and dispatchers at stbydb instead of orcl.
- Add *.db_unique_name='stbydb'.
- Reverse db_file_name_convert and log_file_name_convert to '+DATA/orcl','+DATA/stbydb'.
- Set fal_client='stbydb' and fal_server='orcl', and swap the orcl/stbydb roles in log_archive_dest_1/2/3.

-- initstbydb.ora

db_cache_size=503316480
java_pool_size=16777216
large_pool_size=16777216
pga_aggregate_target=587202560
sga_target=872415232
shared_pool_size=318767104
streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/stbydb/adump'
*.audit_trail='db'
*.compatible='11.2.0.0.0'
*.control_files='+DATA/stbydb/controlfile/control01.ctl'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain=''
*.db_file_name_convert='+DATA/orcl','+DATA/stbydb'
*.db_name='orcl'
*.db_unique_name='stbydb'
*.db_recovery_file_dest='+DATA'
*.db_recovery_file_dest_size=4039114752
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=stbydbXDB)'
*.fal_client='stbydb'
*.fal_server='orcl'
*.log_archive_config='dg_config=(stbydb,orcl)'
*.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=stbydb'
*.log_archive_dest_2='SERVICE=orcl LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=orcl'
*.log_archive_dest_3='LOCATION=+DATA/stbydb/STANDBYLOG VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=stbydb'
*.log_archive_dest_state_1='enable'
*.log_archive_dest_state_2='enable'
*.log_archive_dest_state_3='enable'
*.log_archive_format='%t_%s_%r.dbf'
*.log_archive_max_processes=7
*.log_archive_min_succeed_dest=2
*.log_file_name_convert='+DATA/orcl','+DATA/stbydb'
*.memory_target=1449132032
*.open_cursors=300
*.processes=150
*.remote_listener='rac-scan:1521'
*.remote_login_passwordfile='EXCLUSIVE'
*.standby_file_management='auto'
*.sec_case_sensitive_logon=FALSE

Copy the parameter file to the standby:

[oracle@node1 dbs]$ scp initstbydb.ora oracle@stbynode:$ORACLE_HOME/dbs

The authenticity of host 'stbynode (192.168.34.156)' can't be established.
RSA key fingerprint is 73:fa:1f:a0:04:41:68:98:30:25:28:55:59:9c:07:9b.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'stbynode,192.168.34.156' (RSA) to the list of known hosts.
oracle@stbynode's password:
initstbydb.ora                                100% 1479     1.4KB/s   00:00

7. Create the required directories on the primary and the standby.

On node1:

mkdir -p /u01/app/oracle/backup

On stbynode:

mkdir -p  /u01/app/oracle/admin/stbydb/adump
mkdir -p /u01/app/oracle/backup

su - grid

[grid@stbynode ~]$ asmcmd
ASMCMD> ls
CRS/
DATA/
ASMCMD> cd data
ASMCMD> ls
ASMCMD> mkdir stbydb
ASMCMD> cd stbydb
ASMCMD> mkdir ARCHIVELOG CONTROLFILE DATAFILE ONLINELOG PARAMETERFILE STANDBYLOG TEMPFILE

8. Add TNS entries to the tnsnames.ora files, and a listener entry to the listener.ora file on the standby (a sketch of the latter follows the tnsnames entries):

-- tnsnames.ora

On node1, node2 and stbynode:

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )

stbydb =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = stbynode)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = stbydb)
    )
  )
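
The post does not show the listener.ora entry itself. For the primary to reach the standby instance while it is in NOMOUNT (and for remote restarts generally), the standby listener typically carries a static registration along these lines; this is a sketch based on the paths used in this setup, not the author's actual file:

# listener.ora on stbynode
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = stbydb)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/db_1)
      (SID_NAME = stbydb)
    )
  )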

On node1, node2 and stbynode:

[oracle@node2 dbs]$ tnsping orcl

TNS Ping Utility for Linux: Version 11.2.0.1.0 - Production on 14-SEP-2012 17:04:13

Copyright (c) 1997, 2009, Oracle.  All rights reserved.

Used parameter files:

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = orcl)))
OK (20 msec)
[oracle@node2 dbs]$ tnsping stbydb

TNS Ping Utility for Linux: Version 11.2.0.1.0 - Production on 14-SEP-2012 17:04:20

Copyright (c) 1997, 2009, Oracle.  All rights reserved.

Used parameter files:

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = stbynode)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = stbydb)))
OK (20 msec)

9. Add an entry to oratab.

On stbynode:

vi /etc/oratab

+ASM:/u01/app/11.2.0/grid:N
stbydb:/u01/app/oracle/product/11.2.0/db_1:N

10. Take backups on the primary (node1) and copy them to the standby.

On node1:

export ORACLE_SID=orcl1
rman target /
RMAN> configure channel device type disk format '/u01/app/oracle/backup/%U';
RMAN> backup database plus archivelog;
RMAN> backup current controlfile for standby;
RMAN> backup archivelog all;

[oracle@node1 ~]$ scp /u01/app/oracle/backup/* oracle@stbynode:/u01/app/oracle/backup/

11. Duplicate the database.

On stbynode:

[oracle@stbynode ~]$ export ORACLE_SID=stbydb
[oracle@stbynode ~]$ sqlplus / as sysdba

SQL> startup nomount;

[oracle@stbynode ~]$ export ORACLE_SID=stbydb
[oracle@stbynode ~]$ rman target sys/oracle@orcl auxiliary /

Recovery Manager: Release 11.2.0.1.0 - Production on Fri Sep 14 19:50:02 2012

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCL (DBID=1320907995)
connected to auxiliary database: ORCL (not mounted)

RMAN> duplicate target database for standby nofilenamecheck;

[oracle@stbynode ~]$ export ORACLE_SID=stbydb
[oracle@stbynode ~]$ sqlplus / as sysdba

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;
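
At this point managed recovery is running; a quick way to confirm that redo from both RAC threads is arriving and being applied (this check is my addition):

SQL> select thread#, max(sequence#) from v$archived_log where applied='YES' group by thread#;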

12. Start everything up.

On node2:

SQL> startup;

On stbynode:

SQL> alter database open;

13. Test.

On node1:

SQL> create table b(b number);
SQL> insert into b values(1);
SQL> commit;
SQL> alter system switch logfile;

On node2:

SQL> alter system switch logfile;

On stbynode:

SQL> select * from b;

B
----------
1

On node1:

ALTER SESSION SET nls_date_format='DD-MON-YYYY HH24:MI:SS';

SELECT sequence#, first_time, next_time
FROM v$archived_log
ORDER BY sequence#;

ALTER SYSTEM SWITCH LOGFILE;

SELECT sequence#, first_time, next_time
FROM v$archived_log
ORDER BY sequence#;

On node2:

SQL> alter system switch logfile;

On stbynode:

ALTER SESSION SET nls_date_format='DD-MON-YYYY HH24:MI:SS';

SELECT sequence#, first_time, next_time, applied
FROM v$archived_log
ORDER BY sequence#;

ORA-15045: ASM file name '+DATA' is not in reference form

Error Message:

RMAN-06136: ORACLE error from auxiliary database: ORA-00200: control file could not be created
ORA-00202: control file: '+DATA'
ORA-15045: ASM file name '+DATA' is not in reference form
ORA-17502: ksfdcre:5 Failed to create file +DATA
ORA-15081: failed to submit an I/O operation to a disk

Cause:

There is a permissions problem: the oracle binary has the wrong group and setuid/setgid bits, so the database processes cannot access the ASM devices.

Solution:

The following solution helped me; I hope it does the same for you.

[oracle@node2 ~]$ ls -la $ORACLE_HOME/bin/oracle

-r-sr-s--x 1 oracle oinstall /u01/app/oracle/product/11.2.0/db_1/bin/oracle

[oracle@node2 ~]$ su -
Password:

[root@node2 ~]# chown oracle:asmadmin /u01/app/oracle/product/11.2.0/db_1/bin/oracle

[root@node2 ~]# ls -la /u01/app/oracle/product/11.2.0/db_1/bin/oracle

-r-xr-x--x 1 oracle asmadmin /u01/app/oracle/product/11.2.0/db_1/bin/oracle

[root@node2 ~]# chmod a+s /u01/app/oracle/product/11.2.0/db_1/bin/oracle

[root@node2 ~]# ls -la /u01/app/oracle/product/11.2.0/db_1/bin/oracle

-r-sr-s--x 1 oracle asmadmin /u01/app/oracle/product/11.2.0/db_1/bin/oracle

Note:

[root@node1 ~]# id oracle
uid=501(oracle) gid=500(oinstall) groups=500(oinstall),501(dba),503(asmdba),505(oper)
[root@node1 ~]# id grid
uid=500(grid) gid=500(oinstall) groups=500(oinstall),501(dba),502(asmadmin),503(asmdba),504(asmoper)

The oracle user owns the database software and the grid user owns the Grid Infrastructure software, so there are two different software owners here (role separation). Notice that oracle is not a member of the asmadmin group, which is why the binary's group has to be changed to asmadmin with the setgid bit set.

How to restore an installed package's file permissions to their defaults on Linux?

Hello all,

By mistake, I ran the following command in the / directory:

chmod -R 777 *

I stopped this command, but unfortunately some file permissions were changed.

Because of that, I was not able to connect to the server with WinSCP and ssh was not working.

I looked into /var/log/messages and found the following entry:

Sep  6 15:02:37 stbynode sshd[24226]: fatal: /var/empty/sshd must be owned by root and not group or world-writable.

I ran the following command on another server:

ls -la /var/empty/sshd
total 16
drwx--x--x 3 root root 4096 Sep 3 21:07 .
drwxr-xr-x 3 root root 4096 Sep 3 21:07 ..
drwxr-xr-x 2 root root 4096 Sep 3 21:36 etc

On my server it was:

ls -la /var/empty/sshd
total 16
drwxrwxrwx 3 root root 4096 Sep 3 14:00 .
drwxrwxrwx 3 root root 4096 Sep 3 14:00 ..
drwxrwxrwx 2 root root 4096 Sep 3 14:00 etc

You could fix these permissions by hand (tedious if more than one file's permissions were changed), but there is one very useful command:

rpm --setperms {packagename}

This resets the package's file permissions to their defaults, because the RPM database stores the original permission information.
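
If ownerships were clobbered as well, rpm can restore those too; --setugids is the companion option to --setperms (the openssh-server example here is my addition):

rpm --setugids openssh-server
rpm --setperms openssh-server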

So I ran the following:

for p in $(rpm -qa); do rpm --setperms $p; done

Note that this script resets permissions only for files installed from packages, not user-created files…

After the command completed, I was able to connect to the server with WinSCP, and the /var/empty/sshd permissions were set to drwxr-xr-x.

To avoid this kind of situation, my advice is to back up permission information every day by running the following command from crontab (${MPOINT} is an optional mount-point prefix; leave it unset to start from /):

find / -exec stat --format "chmod %a ${MPOINT}%n" {} \; > /tmp/permissionsbackup.sh
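
A sketch of the corresponding crontab entry plus the restore step; the 02:00 schedule is an assumption:

# /etc/crontab: back up file permissions every day at 02:00
0 2 * * * root find / -exec stat --format "chmod %a %n" {} \; > /tmp/permissionsbackup.sh

# to restore after an accident:
# sh /tmp/permissionsbackup.sh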

Good luck, I hope the post was helpful…

locate: command not found

Error message:

-bash: locate: command not found

Cause:

The locate utility (from the mlocate package) is not installed.

Solution:

Find your OS installation disc and, in the Server folder, find the mlocate-* package (for me it is mlocate-0.15-1.el5.2.x86_64.rpm) and install it:

[root@node1 Server]# rpm -Uvh mlocate-0.15-1.el5.2.x86_64.rpm
warning: mlocate-0.15-1.el5.2.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing… ########################################### [100%]
1:mlocate ########################################### [100%]

If you try to run locate right away, you will receive the following error:

locate: can not open `/var/lib/mlocate/mlocate.db': No such file or directory

To solve, run:

updatedb
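
updatedb builds the file-name database that locate searches, so the first run may take a while; after that, queries work as expected (the file name below is just an example):

[root@node1 ~]# locate sshd_config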

Congratulations, you can now run locate without any error!