Include the new shell script in /etc/rc.local so that it runs on
each boot. Do this on all Oracle RAC nodes in the cluster:
[root@racnode1 ~]# echo "/usr/local/bin/setup_raw_devices.sh" >> /etc/rc.local
[root@racnode2 ~]# echo "/usr/local/bin/setup_raw_devices.sh" >> /etc/rc.local
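The echo above appends unconditionally, so running the setup twice leaves duplicate entries in /etc/rc.local. A minimal idempotent variant is sketched below; append_once is a hypothetical helper, not part of the original setup:

```shell
# Sketch: append a line to a file only if it is not already present.
# append_once is a hypothetical helper name, not an Oracle utility.
append_once() {
    local line=$1 file=$2
    grep -qxF -- "$line" "$file" 2>/dev/null || echo "$line" >> "$file"
}
# On each RAC node (as root):
# append_once "/usr/local/bin/setup_raw_devices.sh" /etc/rc.local
```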
- Step 6: Once the raw devices are created, use the dd
command to zero out the devices and ensure no residual data
remains on them. Perform this action from only one of the Oracle
RAC nodes in the cluster:
[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw1
dd: writing to '/dev/raw/raw1': No space left on device
1048516+0 records in
1048515+0 records out
536839680 bytes (537 MB) copied, 773.145 seconds, 694 kB/s
[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw2
dd: writing to '/dev/raw/raw2': No space left on device
1048516+0 records in
1048515+0 records out
536839680 bytes (537 MB) copied, 769.974 seconds, 697 kB/s
[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw3
dd: writing to '/dev/raw/raw3': No space left on device
65505+0 records in
65504+0 records out
33538048 bytes (34 MB) copied, 47.9176 seconds, 700 kB/s
[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw4
dd: writing to '/dev/raw/raw4': No space left on device
65505+0 records in
65504+0 records out
33538048 bytes (34 MB) copied, 47.9915 seconds, 699 kB/s
[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw5
dd: writing to '/dev/raw/raw5': No space left on device
65505+0 records in
65504+0 records out
33538048 bytes (34 MB) copied, 48.2684 seconds, 695 kB/s
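The five dd invocations above can be collapsed into a short loop. This is a sketch rather than part of the original procedure: dd exits non-zero with "No space left on device" once each raw device is full, which is the expected end state here, so that error is tolerated (bs=1M is added here purely to speed up the copy):

```shell
# Sketch: zero out each raw device in turn; run from ONE node only.
# "No space left on device" from dd is expected and tolerated.
zero_devices() {
    local dev
    for dev in "$@"; do
        dd if=/dev/zero of="$dev" bs=1M 2>/dev/null || true
    done
}
# On racnode1 (as root):
# zero_devices /dev/raw/raw1 /dev/raw/raw2 /dev/raw/raw3 /dev/raw/raw4 /dev/raw/raw5
```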
Administering the OCR File
View OCR Configuration Information
Two methods exist to verify how many OCR files are configured for
the cluster as well as their location. If the cluster is up and
running, use the ocrcheck utility as either the oracle or root user
account:
[oracle@racnode1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 262120
Used space (kbytes) : 4660
Available space (kbytes) : 257460
ID : 1331197
Device/File Name : /u02/oradata/racdb/OCRFile <-- OCR (primary)
Device/File integrity check succeeded
Device/File not configured <-- OCR Mirror (not configured)
Cluster registry integrity check succeeded
If CRS is down, you can still determine the location and number
of OCR files by viewing the file ocr.loc, whose location is somewhat
platform dependent. For example, on the Linux platform it is located
in /etc/oracle/ocr.loc while on Sun Solaris it is located at /var/opt/oracle/ocr.loc:
[root@racnode1 ~]# cat /etc/oracle/ocr.loc
ocrconfig_loc=/u02/oradata/racdb/OCRFile
local_only=FALSE
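When scripting around this, the OCR locations can be pulled out of ocr.loc directly. The sketch below runs against a sample file that mirrors the contents shown above; on a live node you would point it at /etc/oracle/ocr.loc (or /var/opt/oracle/ocr.loc on Solaris):

```shell
# Sketch: extract the configured OCR location(s) from an ocr.loc file.
ocr_locations() {
    awk -F= '/^ocr(mirror)?config_loc=/ { print $2 }' "$1"
}

# Sample file matching the single-OCR output shown above.
cat > /tmp/ocr.loc.sample <<'EOF'
ocrconfig_loc=/u02/oradata/racdb/OCRFile
local_only=FALSE
EOF

ocr_locations /tmp/ocr.loc.sample
# -> /u02/oradata/racdb/OCRFile
```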
To view the actual contents of the OCR in a human-readable
format, run the ocrdump command. This command requires the CRS stack
to be running. Running the ocrdump command will dump the contents of
the OCR into an ASCII text file in the current directory named
OCRDUMPFILE:
[root@racnode1 ~]# ocrdump
[root@racnode1 ~]# ls -l OCRDUMPFILE
-rw-r--r-- 1 root root 250304 Oct 2 22:46 OCRDUMPFILE
The ocrdump utility also allows for different output
options:
#
# Write OCR contents to specified file name.
#
[root@racnode1 ~]# ocrdump /tmp/$(hostname)_ocrdump_$(date +%m%d%y:%H%M)
#
# Print OCR contents to the screen.
#
[root@racnode1 ~]# ocrdump -stdout -keyname SYSTEM.css
#
# Write OCR contents out to XML format.
#
[root@racnode1 ~]# ocrdump -stdout -keyname SYSTEM.css -xml > ocrdump.xml
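The timestamped-dump form lends itself to a small wrapper. This is a hypothetical helper (ocr_snapshot is not an Oracle utility), assuming ocrdump is on the PATH and the CRS stack is running:

```shell
# Hypothetical wrapper: write a timestamped OCR dump into a target
# directory (default /tmp), using the same naming scheme as above.
ocr_snapshot() {
    local dir=${1:-/tmp}
    ocrdump "${dir}/$(hostname)_ocrdump_$(date +%m%d%y:%H%M)"
}
# As root, with CRS running (target directory is illustrative):
# ocr_snapshot /tmp
```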
Add an OCR File
Starting with Oracle Clusterware 10g Release 2 (10.2), users now
have the ability to multiplex (mirror) the OCR. Oracle Clusterware
allows for a maximum of two OCR locations; one is the primary and
the second is an OCR mirror. To avoid simultaneous loss of multiple
OCR files, each copy of the OCR should be placed on a shared storage
device that does not share any components (controller, interconnect,
and so on) with the storage devices used for the other OCR file.
Before attempting to add a mirrored OCR, determine how many OCR
files are currently configured for the cluster as well as their
location. If the cluster is up and running, use the ocrcheck utility
as either the oracle or root user account:
[oracle@racnode1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 262120
Used space (kbytes) : 4660
Available space (kbytes) : 257460
ID : 1331197
Device/File Name : /u02/oradata/racdb/OCRFile <-- OCR (primary)
Device/File integrity check succeeded
Device/File not configured <-- OCR Mirror (not configured yet)
Cluster registry integrity check succeeded
If CRS is down, you can still determine the location and number
of OCR files by viewing the file ocr.loc, whose location is somewhat
platform dependent. For example, on the Linux platform it is located
in /etc/oracle/ocr.loc while on Sun Solaris it is located at /var/opt/oracle/ocr.loc:
[root@racnode1 ~]# cat /etc/oracle/ocr.loc
ocrconfig_loc=/u02/oradata/racdb/OCRFile
local_only=FALSE
The results above indicate that I have only one OCR file and that
it is located on an OCFS2 file system. Since a maximum of two OCR
locations is allowed, I intend to create an OCR mirror and place it
on the same OCFS2 file system in the same directory as the primary
OCR. Please note that I am doing this only for the sake of brevity.
The OCR mirror should always be placed on a separate device from the
primary OCR file to guard against a single point of failure.
Note that the Oracle Clusterware stack must be online and
running on all nodes in the cluster while adding, replacing, or
removing an OCR location; these operations therefore require no
system downtime.
Note: The operations performed in this section
affect the OCR for the entire cluster. However, the ocrconfig
command cannot modify OCR configuration information for nodes that
are shut down or for nodes on which Oracle Clusterware is not
running. So, you should avoid shutting down nodes while modifying
the OCR using the ocrconfig command. If any of the nodes in the
cluster are shut down while the OCR is being modified with
ocrconfig, you will need to perform a repair on the stopped node
before it can be brought online to join the cluster. Please see
the section "Repair an OCR File on a Local Node" for instructions on
repairing the OCR file on the affected node.
You can add an OCR mirror after an upgrade or after completing
the Oracle Clusterware installation. The Oracle Universal Installer
(OUI) allows you to configure either one or two OCR locations during
the installation of Oracle Clusterware. If you already mirror the
OCR, then you do not need to add a new OCR location; Oracle
Clusterware automatically manages two OCRs when you configure normal
redundancy for the OCR. As previously mentioned, Oracle RAC
environments do not support more than two OCR locations: a primary
OCR and a secondary (mirrored) OCR.
Run the following command to add or relocate an OCR mirror using
either destination_file or disk to designate the
target location of the additional OCR:
ocrconfig -replace ocrmirror <destination_file>
ocrconfig -replace ocrmirror <disk>
You must be logged in as the root user to run the ocrconfig
command.
Please note that ocrconfig -replace is the only supported way to
add or relocate OCR files and mirrors. Copying the existing OCR
file to a new location and then manually editing the file pointer
in ocr.loc is not supported and will not work.
For example:
#
# Verify CRS is running on node 1.
#
[root@racnode1 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
#
# Verify CRS is running on node 2.
#
[root@racnode2 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
#
# Configure the shared OCR destination_file/disk before
# attempting to create the new ocrmirror on it. This example
# creates a destination_file on an OCFS2 file system.
# Failure to pre-configure the new destination_file/disk
# before attempting to run ocrconfig will result in the
# following error:
#
# PROT-21: Invalid parameter
#
[root@racnode1 ~]# cp /dev/null /u02/oradata/racdb/OCRFile_mirror
[root@racnode1 ~]# chown root /u02/oradata/racdb/OCRFile_mirror
[root@racnode1 ~]# chgrp oinstall /u02/oradata/racdb/OCRFile_mirror
[root@racnode1 ~]# chmod 640 /u02/oradata/racdb/OCRFile_mirror
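The four pre-configuration commands above (create the empty file, then set its owner, group, and mode) can be wrapped in one helper. A sketch only; prep_ocr_file is a hypothetical name:

```shell
# Sketch: pre-create a shared file so ocrconfig can accept it as a new
# OCR location, avoiding the PROT-21 error noted above. Run as root.
prep_ocr_file() {
    local f=$1
    cp /dev/null "$f"         # create empty placeholder file
    chown root:oinstall "$f"  # owner root, group oinstall
    chmod 640 "$f"            # rw for root, read for oinstall
}
# prep_ocr_file /u02/oradata/racdb/OCRFile_mirror
```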
#
# Add new OCR mirror.
#
[root@racnode1 ~]# ocrconfig -replace ocrmirror /u02/oradata/racdb/OCRFile_mirror
After adding the new OCR mirror, check that it can be seen from
all nodes in the cluster:
#
# Verify new OCR mirror from node 1.
#
[root@racnode1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 262120
Used space (kbytes) : 4668
Available space (kbytes) : 257452
ID : 1331197
Device/File Name : /u02/oradata/racdb/OCRFile
Device/File integrity check succeeded
Device/File Name : /u02/oradata/racdb/OCRFile_mirror <-- New OCR Mirror
Device/File integrity check succeeded
Cluster registry integrity check succeeded
[root@racnode1 ~]# cat /etc/oracle/ocr.loc
#Device/file getting replaced by device /u02/oradata/racdb/OCRFile_mirror
ocrconfig_loc=/u02/oradata/racdb/OCRFile
ocrmirrorconfig_loc=/u02/oradata/racdb/OCRFile_mirror
#
# Verify new OCR mirror from node 2.
#
[root@racnode2 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 262120
Used space (kbytes) : 4668
Available space (kbytes) : 257452
ID : 1331197
Device/File Name : /u02/oradata/racdb/OCRFile
Device/File integrity check succeeded
Device/File Name : /u02/oradata/racdb/OCRFile_mirror <-- New OCR Mirror
Device/File integrity check succeeded
Cluster registry integrity check succeeded
[root@racnode2 ~]# cat /etc/oracle/ocr.loc
#Device/file getting replaced by device /u02/oradata/racdb/OCRFile_mirror
ocrconfig_loc=/u02/oradata/racdb/OCRFile
ocrmirrorconfig_loc=/u02/oradata/racdb/OCRFile_mirror
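Checking each node by hand scales poorly beyond two nodes. The loop below runs ocrcheck on every node over ssh and labels the output; the node names and passwordless root ssh are assumptions for illustration:

```shell
# Sketch: run ocrcheck on each cluster node and label the output.
# Assumes passwordless ssh as root and ocrcheck on the remote PATH.
ocrcheck_all() {
    local node
    for node in "$@"; do
        echo "== ${node} =="
        ssh "root@${node}" ocrcheck
    done
}
# ocrcheck_all racnode1 racnode2
```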
As mentioned earlier, you can have at most two OCR files in the
cluster; the primary OCR and a single OCR mirror. Attempting to add
an extra mirror will actually relocate the current OCR mirror
to the new location specified in the command:
[root@racnode1 ~]# cp /dev/null /u02/oradata/racdb/OCRFile_mirror2
[root@racnode1 ~]# chown root /u02/oradata/racdb/OCRFile_mirror2
[root@racnode1 ~]# chgrp oinstall /u02/oradata/racdb/OCRFile_mirror2
[root@racnode1 ~]# chmod 640 /u02/oradata/racdb/OCRFile_mirror2
[root@racnode1 ~]# ocrconfig -replace ocrmirror /u02/oradata/racdb/OCRFile_mirror2
[root@racnode1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 262120
Used space (kbytes) : 4668
Available space (kbytes) : 257452
ID : 1331197
Device/File Name : /u02/oradata/racdb/OCRFile
Device/File integrity check succeeded
Device/File Name : /u02/oradata/racdb/OCRFile_mirror2 <-- Mirror was Relocated!
Device/File integrity check succeeded
Cluster registry integrity check succeeded
Relocate an OCR File
Just as a new ocrmirror can be added while the CRS stack is
online, an OCR file or OCR mirror can also be relocated online;
no system downtime is required.
You can relocate OCR only when the OCR is mirrored. A mirror copy
of the OCR file is required to move the OCR online. If there is no
mirror copy of the OCR, first create the mirror using the
instructions in the previous section.
Attempting to relocate OCR when an OCR mirror does not exist will
produce the following error:
ocrconfig -replace ocr /u02/oradata/racdb/OCRFile
PROT-16: Internal Error
If the OCR mirror is not required in the cluster after relocating
the OCR, it can be safely removed.
Run the following command as the root account to
relocate the current OCR file to a new location using either
destination_file or disk to designate the new target
location for the OCR:
ocrconfig -replace ocr <destination_file>
ocrconfig -replace ocr <disk>
Run the following command as the root account to
relocate the current OCR mirror to a new location using either
destination_file or disk to designate the new target
location for the OCR mirror:
ocrconfig -replace ocrmirror <destination_file>
ocrconfig -replace ocrmirror <disk>
The following example assumes the OCR is mirrored and
demonstrates how to relocate the current OCR file (/u02/oradata/racdb/OCRFile)
from the OCFS2 file system to a new raw device (/dev/raw/raw1):
#
# Verify CRS is running on node 1.
#
[root@racnode1 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
#
# Verify CRS is running on node 2.
#
[root@racnode2 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
#
# Verify current OCR configuration.
#
[root@racnode2 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 262120
Used space (kbytes) : 4668
Available space (kbytes) : 257452
ID : 1331197
Device/File Name : /u02/oradata/racdb/OCRFile <-- Current OCR to Relocate
Device/File integrity check succeeded
Device/File Name : /u02/oradata/racdb/OCRFile_mirror
Device/File integrity check succeeded
Cluster registry integrity check succeeded
#
# Verify new raw storage device exists, is configured with
# the correct permissions, and can be seen from all nodes
# in the cluster.
#
[root@racnode1 ~]# ls -l /dev/raw/raw1
crw-r----- 1 root oinstall 162, 1 Oct 2 19:54 /dev/raw/raw1
[root@racnode2 ~]# ls -l /dev/raw/raw1
crw-r----- 1 root oinstall 162, 1 Oct 2 19:54 /dev/raw/raw1
#
# Clear out the contents from the new raw device.
#
[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw1
#
# Relocate primary OCR file to new raw device. Note that
# there is no deletion of the old OCR file but simply a
# replacement.
#
[root@racnode1 ~]# ocrconfig -replace ocr /dev/raw/raw1
After relocating the OCR file, check that the change can be seen
from all nodes in the cluster:
#
# Verify new OCR file from node 1.
#
[root@racnode1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 262120
Used space (kbytes) : 4668
Available space (kbytes) : 257452
ID : 1331197
Device/File Name : /dev/raw/raw1 <-- Relocated OCR
Device/File integrity check succeeded
Device/File Name : /u02/oradata/racdb/OCRFile_mirror
Device/File integrity check succeeded
Cluster registry integrity check succeeded
[root@racnode1 ~]# cat /etc/oracle/ocr.loc
#Device/file /u02/oradata/racdb/OCRFile getting replaced by device /dev/raw/raw1
ocrconfig_loc=/dev/raw/raw1
ocrmirrorconfig_loc=/u02/oradata/racdb/OCRFile_mirror
#
# Verify new OCR file from node 2.
#
[root@racnode2 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 262120
Used space (kbytes) : 4668
Available space (kbytes) : 257452
ID : 1331197
Device/File Name : /dev/raw/raw1 <-- Relocated OCR
Device/File integrity check succeeded
Device/File Name : /u02/oradata/racdb/OCRFile_mirror
Device/File integrity check succeeded
Cluster registry integrity check succeeded
[root@racnode2 ~]# cat /etc/oracle/ocr.loc
#Device/file /u02/oradata/racdb/OCRFile getting replaced by device /dev/raw/raw1
ocrconfig_loc=/dev/raw/raw1
ocrmirrorconfig_loc=/u02/oradata/racdb/OCRFile_mirror
After verifying the relocation was successful, remove the old OCR
file at the OS level:
[root@racnode1 ~]# rm -v /u02/oradata/racdb/OCRFile
removed '/u02/oradata/racdb/OCRFile'
In part two of this series, we will continue our exploration of
OCR file administration with methods to repair and remove the OCR,
and then move on to backup and recovery of the OCR file.