ASM File Name / Volume Name / Device Name Bytes File Type
--------------------------------------------------------------- ------------------ ------------------
+CRS/racnode-cluster/ASMPARAMETERFILE/REGISTRY.253.734544679 1,536 ASMPARAMETERFILE
+CRS/racnode-cluster/OCRFILE/REGISTRY.255.734544681 272,756,736 OCRFILE
------------------
272,758,272
+DOCSDG1 [DOCSVOL1] /dev/asm/docsvol1-300 34,359,738,368 ASMVOL
+DOCSDG1 [DOCSVOL2] /dev/asm/docsvol2-300 34,359,738,368 ASMVOL
+DOCSDG1 [DOCSVOL3] /dev/asm/docsvol3-300 26,843,545,600 ASMVOL
------------------
95,563,022,336
+FRA/RACDB/ARCHIVELOG/2010_11_08/thread_1_seq_69.264.734565029 42,991,616 ARCHIVELOG
+FRA/RACDB/ARCHIVELOG/2010_11_08/thread_2_seq_2.266.734565685 41,260,544 ARCHIVELOG
<SNIP>
+FRA/RACDB/ONLINELOG/group_3.259.734554873 52,429,312 ONLINELOG
+FRA/RACDB/ONLINELOG/group_4.260.734554877 52,429,312 ONLINELOG
------------------
12,227,537,408
+RACDB_DATA/RACDB/CONTROLFILE/Current.256.734552525 18,890,752 CONTROLFILE
+RACDB_DATA/RACDB/DATAFILE/EXAMPLE.263.734552611 157,294,592 DATAFILE
+RACDB_DATA/RACDB/DATAFILE/SYSAUX.260.734552569 1,121,984,512 DATAFILE
+RACDB_DATA/RACDB/DATAFILE/SYSTEM.259.734552539 744,497,152 DATAFILE
+RACDB_DATA/RACDB/DATAFILE/UNDOTBS1.261.734552595 791,683,072 DATAFILE
+RACDB_DATA/RACDB/DATAFILE/UNDOTBS2.264.734552619 209,723,392 DATAFILE
+RACDB_DATA/RACDB/DATAFILE/USERS.265.734552627 5,251,072 DATAFILE
+RACDB_DATA/RACDB/ONLINELOG/group_1.257.734552529 52,429,312 ONLINELOG
+RACDB_DATA/RACDB/ONLINELOG/group_2.258.734552533 52,429,312 ONLINELOG
+RACDB_DATA/RACDB/ONLINELOG/group_3.266.734554871 52,429,312 ONLINELOG
+RACDB_DATA/RACDB/ONLINELOG/group_4.267.734554875 52,429,312 ONLINELOG
+RACDB_DATA/RACDB/PARAMETERFILE/spfile.268.734554879 4,608 PARAMETERFILE
+RACDB_DATA/RACDB/TEMPFILE/TEMP.262.734552605 93,331,456 TEMPFILE
+RACDB_DATA/RACDB/spfileracdb.ora 4,608 PARAMETERFILE
------------------
3,352,382,464
Oracle ACFS and other supported third-party file systems can use
Oracle ADVM as a volume management platform to create and manage
file systems while leveraging the full power and functionality of
Oracle ASM features. A volume may be created in its own Oracle ASM
disk group or can share space in an already existing disk group. Any
number of volumes can be created in an ASM disk group. Creating a
new volume in an Oracle ASM disk group can be performed using the
ASM Configuration Assistant (ASMCA), Oracle Enterprise Manager
(OEM), SQL, or ASMCMD. For example:
asmcmd volcreate -G docsdg1 -s 20G docsvol3
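Since SQL is listed among the supported methods, the same volume could also be created from SQL*Plus while connected to the Oracle ASM instance. This is a sketch using the 11.2 ALTER DISKGROUP ... ADD VOLUME clause; it requires a live ASM instance and a disk group with 11.2 compatibility attributes:

```sql
SQL> ALTER DISKGROUP docsdg1 ADD VOLUME docsvol3 SIZE 20G;
```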
Once a new volume is created on Linux, the ADVM device driver
automatically creates a volume device on the OS that clients use to
access the volume. These volumes may be used as raw block devices,
may contain a file system such as ext3, ext4, reiserfs, or OCFS2, or
may hold an Oracle ACFS file system (as described in this guide), in
which case the oracleacfs driver is also used for I/O to the file
system. (Note that on the Linux platform, Oracle ADVM volume devices
are created as block devices regardless of the configuration of the
underlying storage in the Oracle ASM disk group. Do not use raw(8)
to map Oracle ADVM volume block devices into raw volume devices.)
Under Linux, all volume devices are externalized to the OS and
appear dynamically as special files in the /dev/asm
directory. In this guide, we will use this OS volume device to
create an Oracle ACFS:
$ ls -l /dev/asm
total 0
brwxrwx--- 1 root asmadmin 252, 153601 Nov 28 13:49 docsvol1-300
brwxrwx--- 1 root asmadmin 252, 153602 Nov 28 13:49 docsvol2-300
brwxrwx--- 1 root asmadmin 252, 153603 Nov 28 13:56 docsvol3-300
$ /sbin/mkfs -t acfs -b 4k /dev/asm/docsvol3-300 -n "DOCSVOL3"
Oracle ADVM implements its own extent and striping algorithm to
ensure the highest performance for general purpose files. By
default, an ADVM volume consists of four columns of 64MB extents
with a 128KB stripe width. ADVM writes data in 128KB stripes in a
round-robin fashion to each column before moving on to the next set
of four columns. ADVM
uses Dirty Region Logging (DRL) for mirror recovery after a node or
instance failure. This DRL scheme requires a DRL file in the ASM
disk group to be associated with each ASM dynamic volume.
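The extent and striping attributes described above can be confirmed for a given volume with asmcmd volinfo. This is a sketch against the docsvol3 volume created earlier; output is abridged and the values shown are illustrative of the defaults:

```shell
[grid@racnode1 ~]$ asmcmd volinfo -G docsdg1 docsvol3
Diskgroup Name: DOCSDG1

         Volume Name: DOCSVOL3
         ...
         Stripe Columns: 4
         Stripe Width (K): 128
```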
ACFS Prerequisites
Install Oracle Grid Infrastructure
Oracle Grid Infrastructure 11g Release 2 (11.2) or higher is
required for Oracle ACFS. Oracle grid infrastructure includes Oracle
Clusterware, Oracle ASM, Oracle ACFS, Oracle ADVM, and driver
resources software components, which are installed into the Grid
Infrastructure Home using the Oracle Universal Installer (OUI).
Refer to this guide for instructions on how to configure
Oracle grid infrastructure as part of an Oracle RAC 11g release 2
database install on Linux.
Log In as the Grid Infrastructure User
To perform the examples demonstrated in this guide, it is assumed
that the Oracle grid infrastructure owner is 'grid'. If the Oracle
grid infrastructure owner is 'oracle', then log in as the oracle
account.
Log in as the Oracle grid infrastructure owner and switch to the
Oracle ASM environment on node 1 of the RAC when performing non-root
ACFS tasks:
[grid@racnode1 ~]$ id
uid=1100(grid) gid=1000(oinstall)
groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)
[grid@racnode1 ~]$ . oraenv
ORACLE_SID = [+ASM1] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is
/u01/app/grid
[grid@racnode1 ~]$ dbhome
/u01/app/11.2.0/grid
[grid@racnode1 ~]$ echo $ORACLE_SID
+ASM1
Verify / Create ASM Disk Group
After validating the Oracle grid infrastructure installation and
logging in as the Oracle grid infrastructure owner (grid),
the next step is to decide which Oracle ASM disk group should be
used to create the Oracle ASM dynamic volume(s). The following SQL
demonstrates how to search the available ASM disk groups:
break on inst_id skip 1
column inst_id format 9999999 heading "Instance ID" justify left
column name format a15 heading "Disk Group" justify left
column total_mb format 999,999,999 heading "Total (MB)" justify right
column free_mb format 999,999,999 heading "Free (MB)" justify right
column pct_free format 999.99 heading "% Free" justify right
======================================================================
SQL> select inst_id, name, total_mb, free_mb, round((free_mb/total_mb)*100,2) pct_free
2 from gv$asm_diskgroup
3 where total_mb != 0
4 order by inst_id, name;
Instance ID Disk Group Total (MB) Free (MB) % Free
----------- --------------- ------------ ------------ -------
1 CRS 2,205 1,809 82.04
FRA 33,887 24,802 73.19
RACDB_DATA 33,887 30,623 90.37
2 CRS 2,205 1,809 82.04
FRA 33,887 24,802 73.19
RACDB_DATA 33,887 30,623 90.37
The same task can be accomplished using the ASMCMD
command-line utility:
[grid@racnode1 ~]$ asmcmd lsdg
If you find an existing Oracle ASM disk group that has adequate
space, the Oracle ASM dynamic volume(s) can be created on that free
space or a new ASM disk group can be created.
For the purpose of this guide, I will be creating a dedicated Oracle
ASM disk group named DOCSDG1 which will be used for all three Oracle
ASM dynamic volumes. I have already set up a shared iSCSI volume and
provisioned it using ASMLib. The ASMLib shared volume that will be
used to create the new disk group is named ORCL:ASMDOCSVOL1.
[grid@racnode1 ~]$ sqlplus / as sysasm
SQL> select path, name, header_status, os_mb from v$asm_disk;
PATH NAME HEADER_STATUS OS_MB
------------------ --------------- ------------- ----------
ORCL:ASMDOCSVOL1 PROVISIONED 98,303
ORCL:CRSVOL1 CRSVOL1 MEMBER 2,205
ORCL:DATAVOL1 DATAVOL1 MEMBER 33,887
ORCL:FRAVOL1 FRAVOL1 MEMBER 33,887
After identifying the ASMLib volume and verifying it is
accessible from all Oracle RAC nodes, log in to the Oracle ASM
instance and create the new disk group from one of the Oracle RAC
nodes. After verifying the disk group was created, log in to the
Oracle ASM instance on all other RAC nodes and mount the new disk
group:
[grid@racnode1 ~]$ sqlplus / as sysasm
SQL> CREATE DISKGROUP docsdg1 EXTERNAL REDUNDANCY DISK 'ORCL:ASMDOCSVOL1' SIZE 98303 M;
Diskgroup created.
SQL> @asm_diskgroups
Disk Group Sector Block Allocation
Name Size Size Unit Size State Type Total Size (MB) Used Size (MB) Pct. Used
---------- ------- ------ ----------- -------- ------ --------------- -------------- ---------
CRS 512 4,096 1,048,576 MOUNTED EXTERN 2,205 396 17.96
DOCSDG1 512 4,096 1,048,576 MOUNTED EXTERN 98,303 50 .05
FRA 512 4,096 1,048,576 MOUNTED EXTERN 33,887 9,085 26.81
RACDB_DATA 512 4,096 1,048,576 MOUNTED EXTERN 33,887 3,264 9.63
--------------- --------------
Grand Total: 168,282 12,795
===============================================================================================
[grid@racnode2 ~]$ sqlplus / as sysasm
SQL> ALTER DISKGROUP docsdg1 MOUNT;
Diskgroup altered.
SQL> @asm_diskgroups
Disk Group Sector Block Allocation
Name Size Size Unit Size State Type Total Size (MB) Used Size (MB) Pct. Used
---------- ------- ------ ----------- -------- ------ --------------- -------------- ---------
CRS 512 4,096 1,048,576 MOUNTED EXTERN 2,205 396 17.96
DOCSDG1 512 4,096 1,048,576 MOUNTED EXTERN 98,303 50 .05
FRA 512 4,096 1,048,576 MOUNTED EXTERN 33,887 9,085 26.81
RACDB_DATA 512 4,096 1,048,576 MOUNTED EXTERN 33,887 3,264 9.63
--------------- --------------
Grand Total: 168,282 12,795
Verify Oracle ASM Volume Driver
The operating environment used in this guide is CentOS 5.5 x86_64:
[root@racnode1 ~]# uname -a
Linux racnode1 2.6.18-194.el5 #1 SMP Fri Apr 2 14:58:14 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
On supported operating systems, the Oracle ACFS modules will be
configured and the Oracle ASM volume driver started by default after
installing Oracle grid infrastructure. With CentOS and other
unsupported operating systems, a workaround is required to enable
Oracle ACFS. One of the first tasks is to manually start the Oracle
ASM volume driver:
[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/acfsload start -s
ADVM/ACFS is not supported on centos-release-5-5.el5.centos
The failed output from the above command should come as no
surprise, given that Oracle ACFS is not supported on CentOS.
By default, the Oracle ACFS modules do not get installed on CentOS
because it is not a supported operating environment. This section
provides a simple, but unsupported, workaround to get Oracle ACFS
working on CentOS. This workaround includes some of the manual steps
that are required to launch the Oracle ASM volume driver when
installing Oracle ACFS on a non-clustered system. (Note that the
steps documented in this section serve as a workaround to set up
Oracle ACFS on CentOS and are by no means supported by Oracle
Corporation. Do not attempt these steps in a critical production
environment. You have been warned!)
The following steps will need to be run from all nodes in an Oracle
RAC database cluster as root.
First, make a copy of the following Perl module:
[root@racnode1 ~]# cd /u01/app/11.2.0/grid/lib
[root@racnode1 lib]# cp -p osds_acfslib.pm osds_acfslib.pm.orig
[root@racnode2 ~]# cd /u01/app/11.2.0/grid/lib
[root@racnode2 lib]# cp -p osds_acfslib.pm osds_acfslib.pm.orig
Next, edit the osds_acfslib.pm Perl module. Search for the string
'support this release' (which was line 278 in my case).
Replace
if (($release =~ /enterprise-release-5/) ||
($release =~ /redhat-release-5/))
with
if (($release =~ /enterprise-release-5/) ||
($release =~ /redhat-release-5/) ||
($release =~ /centos-release-5/))
This will get you past the supported-version check; however, if
you attempt to load the Oracle ASM volume driver from either Oracle
RAC node, you will get the following error:
[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/acfsload start -s
acfsload: ACFS-9129: ADVM/ACFS not installed
To install ADVM/ACFS, copy the following kernel modules from the
Oracle grid infrastructure home to the expected location:
[root@racnode1 ~]# mkdir /lib/modules/2.6.18-194.el5/extra/usm
[root@racnode1 ~]# cd /u01/app/11.2.0/grid/install/usm/EL5/x86_64/2.6.18-8/2.6.18-8.el5-x86_64/bin
[root@racnode1 bin]# cp *ko /lib/modules/2.6.18-194.el5/extra/usm/
[root@racnode2 ~]# mkdir /lib/modules/2.6.18-194.el5/extra/usm
[root@racnode2 ~]# cd /u01/app/11.2.0/grid/install/usm/EL5/x86_64/2.6.18-8/2.6.18-8.el5-x86_64/bin
[root@racnode2 bin]# cp *ko /lib/modules/2.6.18-194.el5/extra/usm/
Once the kernel modules have been copied, we can verify the ADVM/ACFS
installation by running the following from all Oracle RAC nodes:
[root@racnode1 ~]# cd /u01/app/11.2.0/grid/bin
[root@racnode1 bin]# ./acfsdriverstate -orahome /u01/app/11.2.0/grid version
ACFS-9205: OS/ADVM,ACFS installed version = 2.6.18-8.el5(x86_64)/090715.1
[root@racnode2 ~]# cd /u01/app/11.2.0/grid/bin
[root@racnode2 bin]# ./acfsdriverstate -orahome /u01/app/11.2.0/grid version
ACFS-9205: OS/ADVM,ACFS installed version = 2.6.18-8.el5(x86_64)/090715.1
The next step is to rebuild the module dependency list so the new
kernel modules can be located:
[root@racnode1 ~]# depmod
[root@racnode2 ~]# depmod
Now, running acfsload start -s will complete without any further
messages:
[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/acfsload start -s
[root@racnode2 ~]# /u01/app/11.2.0/grid/bin/acfsload start -s
Check that the modules were successfully loaded on all Oracle RAC
nodes:
[root@racnode1 ~]# lsmod | grep oracle
oracleacfs 877320 4
oracleadvm 221760 8
oracleoks 276880 2 oracleacfs,oracleadvm
oracleasm 84136 1
[root@racnode2 ~]# lsmod | grep oracle
oracleacfs 877320 4
oracleadvm 221760 8
oracleoks 276880 2 oracleacfs,oracleadvm
oracleasm 84136 1
Configure the Oracle ASM volume driver to load automatically on
system startup on all Oracle RAC nodes. You will need to create an
initialization script (/etc/init.d/acfsload) that contains the
runlevel configuration and the acfsload command. Change the
permissions on the /etc/init.d/acfsload script to allow it to be
executed by root and then create links in the rc2.d, rc3.d, rc4.d,
and rc5.d runlevel directories using 'chkconfig --add':
[root@racnode1 ~]# chkconfig --list | grep acfsload
[root@racnode2 ~]# chkconfig --list | grep acfsload
=======================================================
[root@racnode1 ~]# cat > /etc/init.d/acfsload <<EOF
#!/bin/sh
# chkconfig: 2345 30 21
# description: Load Oracle ASM volume driver on system startup
ORACLE_HOME=/u01/app/11.2.0/grid
export ORACLE_HOME
\$ORACLE_HOME/bin/acfsload start -s
EOF
[root@racnode2 ~]# cat > /etc/init.d/acfsload <<EOF
#!/bin/sh
# chkconfig: 2345 30 21
# description: Load Oracle ASM volume driver on system startup
ORACLE_HOME=/u01/app/11.2.0/grid
export ORACLE_HOME
\$ORACLE_HOME/bin/acfsload start -s
EOF
=======================================================
[root@racnode1 ~]# chmod 755 /etc/init.d/acfsload
[root@racnode2 ~]# chmod 755 /etc/init.d/acfsload
=======================================================
[root@racnode1 ~]# chkconfig --add acfsload
[root@racnode2 ~]# chkconfig --add acfsload
=======================================================
[root@racnode1 ~]# chkconfig --list | grep acfsload
acfsload 0:off 1:off 2:on 3:on 4:on 5:on 6:off
[root@racnode2 ~]# chkconfig --list | grep acfsload
acfsload 0:off 1:off 2:on 3:on 4:on 5:on 6:off
If the Oracle grid infrastructure 'ora.registry.acfs' resource does not exist, create it. This only needs to be performed
from one of the Oracle RAC nodes:
[root@racnode1 ~]# su - grid -c crs_stat | grep acfs
[root@racnode2 ~]# su - grid -c crs_stat | grep acfs
=======================================================
[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl add type ora.registry.acfs.type \
-basetype ora.local_resource.type \
-file /u01/app/11.2.0/grid/crs/template/registry.acfs.type
[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl add resource ora.registry.acfs \
-attr ACL=\'owner:root:rwx,pgrp:oinstall:r-x,other::r--\' \
-type ora.registry.acfs.type -f
=======================================================
[root@racnode1 ~]# su - grid -c crs_stat | grep acfs
NAME=ora.registry.acfs
TYPE=ora.registry.acfs.type
[root@racnode2 ~]# su - grid -c crs_stat | grep acfs
NAME=ora.registry.acfs
TYPE=ora.registry.acfs.type
Next, copy the Oracle ACFS executables to /sbin and set the
appropriate permissions. The Oracle ACFS executables are located in
the GRID_HOME/install/usm/EL5/<ARCHITECTURE>/<KERNEL_VERSION>/<FULL_KERNEL_VERSION>/bin
directory (12 files) and include any file without the
*.ko extension:
[root@racnode1 ~]# cd /u01/app/11.2.0/grid/install/usm/EL5/x86_64/2.6.18-8/2.6.18-8.el5-x86_64/bin
[root@racnode1 bin]# cp acfs* /sbin; chmod 755 /sbin/acfs*
[root@racnode1 bin]# cp advmutil* /sbin; chmod 755 /sbin/advmutil*
[root@racnode1 bin]# cp fsck.acfs* /sbin; chmod 755 /sbin/fsck.acfs*
[root@racnode1 bin]# cp mkfs.acfs* /sbin; chmod 755 /sbin/mkfs.acfs*
[root@racnode1 bin]# cp mount.acfs* /sbin; chmod 755 /sbin/mount.acfs*
[root@racnode2 ~]# cd /u01/app/11.2.0/grid/install/usm/EL5/x86_64/2.6.18-8/2.6.18-8.el5-x86_64/bin
[root@racnode2 bin]# cp acfs* /sbin; chmod 755 /sbin/acfs*
[root@racnode2 bin]# cp advmutil* /sbin; chmod 755 /sbin/advmutil*
[root@racnode2 bin]# cp fsck.acfs* /sbin; chmod 755 /sbin/fsck.acfs*
[root@racnode2 bin]# cp mkfs.acfs* /sbin; chmod 755 /sbin/mkfs.acfs*
[root@racnode2 bin]# cp mount.acfs* /sbin; chmod 755 /sbin/mount.acfs*
As a final step, modify any of the Oracle ACFS shell scripts
copied to the /sbin directory (above) to include the ORACLE_HOME for
grid infrastructure. The successful execution of these scripts
requires access to certain Oracle shared libraries that are found in
the grid infrastructure Oracle home. Since many of the Oracle ACFS
shell scripts will be executed as the root user account, the
ORACLE_HOME environment variable will typically not be set in the
shell and will cause the executables to fail. An easy workaround
to get past this error is to set the ORACLE_HOME environment
variable for the Oracle grid infrastructure home in the Oracle ACFS
shell scripts on all Oracle RAC nodes. The ORACLE_HOME should be set
at the beginning of the file after the header comments as shown in
the following example:
#!/bin/sh
#
# Copyright (c) 2001, 2009, Oracle and/or its affiliates. All rights reserved.
#
ORACLE_HOME=/u01/app/11.2.0/grid
ORA_CRS_HOME=%ORA_CRS_HOME%
if [ ! -d $ORA_CRS_HOME ]; then
ORA_CRS_HOME=$ORACLE_HOME
fi
...
Add the ORACLE_HOME environment variable for the Oracle grid
infrastructure home as noted above to the following Oracle ACFS
shell scripts on all Oracle RAC nodes:
- /sbin/acfsdbg
- /sbin/acfsutil
- /sbin/advmutil
- /sbin/fsck.acfs
- /sbin/mkfs.acfs
- /sbin/mount.acfs
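The edit above can also be scripted. This is a sketch only (GNU sed assumed) that inserts the ORACLE_HOME lines immediately after the shebang of each script; it is demonstrated here on a scratch copy, so point SCRIPTS at the real /sbin files on each RAC node before using it:

```shell
# Scratch copy standing in for a real ACFS script such as /sbin/mount.acfs
SCRATCH=$(mktemp -d)
printf '#!/bin/sh\n# header comments\n...\n' > "$SCRATCH/mount.acfs"
SCRIPTS="$SCRATCH/mount.acfs"   # real use: /sbin/acfsdbg /sbin/acfsutil ...

for f in $SCRIPTS; do
  # Skip scripts that already define ORACLE_HOME
  grep -q '^ORACLE_HOME=' "$f" && continue
  # Insert the assignment and export right after the first line (the shebang)
  sed -i '1a\
ORACLE_HOME=/u01/app/11.2.0/grid\
export ORACLE_HOME' "$f"
done

head -3 "$SCRATCH/mount.acfs"
```

Rerunning the loop is harmless because the grep guard skips scripts that already define ORACLE_HOME.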
Verify ASM Disk Group Compatibility Level
The compatibility level for the Oracle ASM disk group must be at
least 11.2 in order to create an Oracle ASM volume. From the Oracle
ASM instance, perform the following checks:
SQL> SELECT compatibility, database_compatibility
2 FROM v$asm_diskgroup
3 WHERE name = 'DOCSDG1';
COMPATIBILITY DATABASE_COMPATIBILITY
---------------- -----------------------
10.1.0.0.0 10.1.0.0.0
If the results show a value lower than 11.2 (as the example above
does), set the compatibility to at least 11.2 by issuing the
following series of SQL statements from the Oracle ASM instance:
[grid@racnode1 ~]$ sqlplus / as sysasm
SQL> ALTER DISKGROUP docsdg1 SET ATTRIBUTE 'compatible.asm' = '11.2';
Diskgroup altered.
SQL> ALTER DISKGROUP docsdg1 SET ATTRIBUTE 'compatible.rdbms' = '11.2';
Diskgroup altered.
SQL> ALTER DISKGROUP docsdg1 SET ATTRIBUTE 'compatible.advm' = '11.2';
Diskgroup altered.
If you receive an error while attempting to set the 'compatible.advm'
attribute, verify that the Oracle ASM volume driver is running:
SQL> ALTER DISKGROUP docsdg1 SET ATTRIBUTE 'compatible.advm' = '11.2';
ALTER DISKGROUP docsdg1 SET ATTRIBUTE 'compatible.advm' = '11.2'
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15242: could not set attribute compatible.advm
ORA-15238: 11.2 is not a valid value for attribute compatible.advm
ORA-15477: cannot communicate with the volume driver
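One quick way to check the driver state before retrying is the acfsdriverstate utility that ships in the grid infrastructure home (a sketch; it reports whether the ADVM/ACFS kernel modules are currently loaded):

```shell
[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/acfsdriverstate loaded
```

If the modules are not loaded, rerun acfsload start -s as described earlier and retry the ALTER DISKGROUP statement.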
Verify the changes to the compatibility level:
SQL> SELECT compatibility, database_compatibility
2 FROM v$asm_diskgroup
3 WHERE name = 'DOCSDG1';
COMPATIBILITY DATABASE_COMPATIBILITY
---------------- -----------------------
11.2.0.0.0 11.2.0.0.0
ASM Configuration Assistant (ASMCA)
This section includes step-by-step instructions on how to create
an Oracle ASM cluster file system using the Oracle ASM Configuration
Assistant (ASMCA). Note that at the time of this writing, ASMCA only
supports the creation of volumes and file systems. Deleting an
Oracle ASM volume or file system requires the command line.
Create Mount Point
From each Oracle RAC node, create a directory that will be used
to mount the new Oracle ACFS:
[root@racnode1 ~]# mkdir /documents1
[root@racnode2 ~]# mkdir /documents1
Create ASM Cluster File System
As the Oracle grid infrastructure owner, run the ASM
Configuration Assistant (asmca) from only one node in the cluster
(racnode1 for example):
[grid@racnode1 ~]$ asmca
The following are the screen names and responses for the ASM
Configuration Assistant:
Disk Groups: When the Oracle ASM Configuration Assistant
starts, you are presented with the 'Disk Groups' tab.
Volumes: Click on the 'Volumes' tab then click the
[Create] button.
Create ASM Volume: Then create a new ASM volume by
supplying a "Volume Name", "Disk Group Name", and "Size". For the
purpose of this example, I will be creating a 32GB volume named
"docsvol1" on the "DOCSDG1" ASM disk group. After verifying all
values in this dialog are correct, click the [OK] button.
Volume Created: After the volume is created,
acknowledge the 'Volume: Creation' dialog. When returned to
the "Volumes" tab, the "State" for the new ASM volume should be
ENABLED for all Oracle RAC nodes (i.e. 'ENABLED(2 of 2)').
ASM Cluster File Systems: Click on the 'ASM Cluster
File Systems' tab then click the [Create] button.
Create ASM Cluster File System: Verify that the newly
created volume (DOCSVOL1) is selected in the 'Volume'
list. Select the 'General Purpose File System' option. Enter
the previously created mount point directory (/documents1) or
leave the suggested mount point. Select the 'Yes' option for
'Register MountPoint'. After verifying all values in this
dialog are correct, click the [OK] button.
ASM Cluster File System Created: After the
ASM Cluster File System is created, acknowledge the 'ASM Cluster
File System: Creation' dialog.
ASM Cluster File Systems: The newly created
Oracle ASM cluster file system is now listed under the 'ASM
Cluster File Systems' tab. Note that the new clustered file
system is not mounted. That will need to be performed manually on
all Oracle RAC nodes as a privileged user (root) after exiting from
the ASMCA. Exit the ASM Configuration Assistant by clicking the
[Exit] button.
In part two of this installment, we will pick up on this process by
mounting the new Oracle ASM cluster file system!