Scenario – 1
In my environment, I recently implemented a TEST environment using the Oracle ASM filesystem. It was working fine, but the next day, when I restarted the services, I found that none of the diskgroups was mounted. Below is the solution I followed to overcome the issue. Note that what I followed here is a temporary workaround; for the permanent fix, see the solution in Scenario 2. As you can see, none of the diskgroups got mounted even after the ASM instance started successfully.
Since the diskgroup DATA was already enabled, I tried to mount it using both the server control utility (srvctl) and SQL*Plus as the sysasm user, but both attempts ended with the same issue.
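For reference, the two attempts looked roughly like the sketch below. The diskgroup name DATA comes from this environment, and the exact srvctl syntax varies by Grid Infrastructure version, so treat it as illustrative:
[oragrid@r12smokedb ~]$ srvctl start diskgroup -diskgroup DATA
[oragrid@r12smokedb ~]$ sqlplus / as sysasm
SQL> alter diskgroup DATA mount;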
When checking the state of the diskgroup, the query returned no rows, so I suspected that either the disks were not present under the expected path or they did not have the correct ownership.
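A quick way to check both at once, assuming the disks are presented through ASMLib under its default path (/dev/oracleasm/disks; adjust if your disks live elsewhere):
[oragrid@r12smokedb ~]$ ls -l /dev/oracleasm/disks/
The owner and group columns should show the Grid Infrastructure OS user, not root.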
On checking the ownership, I found the disks were owned by the root user rather than the respective OS user.
I changed the ownership of the disks and mounted the diskgroups. This fixed the issue for the time being.
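The temporary workaround itself, as a sketch; the oragrid:dba owner/group pair and the disk path are from this environment's ASMLib setup, so substitute your own:
# as root
chown oragrid:dba /dev/oracleasm/disks/*
# then, as sysasm
SQL> alter diskgroup DATA mount;
A manual chown like this does not survive a reboot or an ASMLib rescan, which is exactly why it is only a workaround (see Scenario 2).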
Scenario – 2
After adding disks to the diskgroups, none of the diskgroups would mount when I tried to mount them.
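For context, the disks had been added along these lines; the label DATA5 and the device name below are placeholders, not the actual names used:
# as root: stamp the new device for ASMLib
/etc/init.d/oracleasm createdisk DATA5 /dev/sdf1
# as sysasm: add it to the diskgroup
SQL> alter diskgroup DATA add disk 'ORCL:DATA5';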
While checking the cluster status, all the diskgroup resources showed a state of OFFLINE.
[oragrid@r12smokedb ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  OFFLINE      r12smokedb               STABLE
ora.FRA.dg
               ONLINE  OFFLINE      r12smokedb               STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       r12smokedb               STABLE
ora.REDO.dg
               ONLINE  OFFLINE      r12smokedb               STABLE
ora.asm
               ONLINE  ONLINE       r12smokedb               Started,STABLE
ora.ons
               OFFLINE OFFLINE      r12smokedb               STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       r12smokedb               STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       r12smokedb               STABLE
--------------------------------------------------------------------------------
[oragrid@r12smokedb ~]$
After some time, I tried to mount the diskgroups manually, and this time they mounted. I did nothing here to solve the issue, yet the mount succeeded; still, we could not see any details in the view v$asm_diskgroup, although we could see details in the view v$asm_disk.
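The manual mount was a single statement from the ASM instance; a minimal version, assuming all diskgroups should be mounted:
SQL> alter diskgroup all mount;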
SQL> select NAME,STATE,TYPE from v$asm_diskgroup;
no rows selected
SQL> select MOUNT_STATUS,STATE,NAME from v$asm_disk;
MOUNT_S STATE NAME
------- -------- ------------------------------
CACHED NORMAL DATA_0004
CACHED NORMAL DATA_0003
CACHED NORMAL DATA_0000
CACHED NORMAL DATA_0002
CACHED NORMAL FRA_0000
CACHED NORMAL REDO_0000
CACHED NORMAL DATA_0001
7 rows selected.
On checking the cluster status after the diskgroups were mounted, everything looked fine.
[oragrid@r12smokedb ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       r12smokedb               STABLE
ora.FRA.dg
               ONLINE  ONLINE       r12smokedb               STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       r12smokedb               STABLE
ora.REDO.dg
               ONLINE  ONLINE       r12smokedb               STABLE
ora.asm
               ONLINE  ONLINE       r12smokedb               Started,STABLE
ora.ons
               OFFLINE OFFLINE      r12smokedb               STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       r12smokedb               STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       r12smokedb               STABLE
--------------------------------------------------------------------------------
[oragrid@r12smokedb ~]$
Here the problem is that I had changed the ownership of the ASM disks manually from root to oragrid. The ownership should not be changed manually; instead, it should be set to the corresponding owner automatically through the ASMLib configuration file.
On investigating the ASMLib configuration file, we found the required entries were missing. After correcting those entries, our issue was fixed permanently.
Configuration file before change:
[oracle@r12smokedb ~]$ cat /etc/sysconfig/oracleasm
#
# This is a configuration file for automatic loading of the Oracle
# Automatic Storage Management library kernel driver.  It is generated
# By running /etc/init.d/oracleasm configure.  Please use that method
# to modify this file
#

# ORACLEASM_ENABLED: 'true' means to load the driver on boot.
ORACLEASM_ENABLED=true

# ORACLEASM_UID: Default user owning the /dev/oracleasm mount point.
ORACLEASM_UID=

# ORACLEASM_GID: Default group owning the /dev/oracleasm mount point.
ORACLEASM_GID=

# ORACLEASM_SCANBOOT: 'true' means scan for ASM disks on boot.
ORACLEASM_SCANBOOT=true

# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER=""

# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE=""

# ORACLEASM_SCAN_DIRECTORIES: Scan disks under these directories
ORACLEASM_SCAN_DIRECTORIES=""

# ORACLEASM_USE_LOGICAL_BLOCK_SIZE: 'true' means use the logical block size
# reported by the underlying disk instead of the physical. The default
# is 'false'
ORACLEASM_USE_LOGICAL_BLOCK_SIZE=false
[oracle@r12smokedb ~]$
Configuration file after change:
[oracle@r12smokedb ~]$ cat /etc/sysconfig/oracleasm
#
# This is a configuration file for automatic loading of the Oracle
# Automatic Storage Management library kernel driver.  It is generated
# By running /etc/init.d/oracleasm configure.  Please use that method
# to modify this file
#

# ORACLEASM_ENABLED: 'true' means to load the driver on boot.
ORACLEASM_ENABLED=true

# ORACLEASM_UID: Default user owning the /dev/oracleasm mount point.
ORACLEASM_UID=oracle

# ORACLEASM_GID: Default group owning the /dev/oracleasm mount point.
ORACLEASM_GID=dba

# ORACLEASM_SCANBOOT: 'true' means scan for ASM disks on boot.
ORACLEASM_SCANBOOT=true

# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER=""

# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE=""

# ORACLEASM_SCAN_DIRECTORIES: Scan disks under these directories
ORACLEASM_SCAN_DIRECTORIES=""

# ORACLEASM_USE_LOGICAL_BLOCK_SIZE: 'true' means use the logical block size
# reported by the underlying disk instead of the physical. The default
# is 'false'
ORACLEASM_USE_LOGICAL_BLOCK_SIZE=false
[oracle@r12smokedb ~]$
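As the file header itself says, the supported way to maintain this file is the oracleasm configure script rather than a manual edit. Assuming the standard ASMLib tooling, it can be run interactively, or non-interactively on versions whose oracleasm binary accepts these flags:
# as root: interactive prompts for the owning user, group, and boot options
/etc/init.d/oracleasm configure -i
# or, where supported, non-interactively
/usr/sbin/oracleasm configure -e -u oracle -g dba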
Once the changes were made, run the below steps as root so that ASMLib rescans the disks and recreates the device entries with the configured ownership.
oracleasm scandisks
oracleasm listdisks
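A quick verification pass afterwards, assuming the default ASMLib disk path: the disks should now be owned by oracle:dba (per the configuration above) without any manual chown, and the diskgroup resources should come online after a mount.
# as root
ls -l /dev/oracleasm/disks/
# as the grid user
crsctl stat res -t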