ACFS on Linux Part 1

Using the Oracle ASM Cluster File System (Oracle ACFS) on Linux 
Introduction 
Introduced with Oracle ASM 11g release 2, Oracle ASM Cluster File System (Oracle ACFS) is a general purpose, POSIX-compliant cluster file system implemented as part of Oracle Automatic Storage Management (Oracle ASM). Because it is POSIX compliant, the standard operating system utilities we use with ext3 and other file systems can also be used with Oracle ACFS. Oracle ACFS extends the Oracle ASM architecture and is used to support many types of files that are typically maintained outside of the Oracle database. For example, Oracle ACFS can be used to store BFILEs, database trace files, executables, report files, and even general purpose files like image, text, video, and audio files. In addition, Oracle ACFS can be used as a shared file system for Oracle home binaries.


The features included with Oracle ACFS allow users to create, mount, and manage ACFS using familiar Linux commands. Oracle ACFS provides support for snapshots and the ability to dynamically resize existing file systems online using Oracle ASM Dynamic Volume Manager (ADVM).
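For example, a point-in-time snapshot of a mounted Oracle ACFS file system can be taken with the acfsutil utility. The following is a minimal sketch (the snapshot name snap1 is a hypothetical example, and /documents1 is one of the mount points created later in this guide):

[root@racnode1 ~]# /sbin/acfsutil snap create snap1 /documents1
# Snapshots appear under the hidden .ACFS/snaps directory of the file system
[root@racnode1 ~]# ls /documents1/.ACFS/snaps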


Oracle ACFS leverages Oracle ASM functionality that enables:



  • Oracle ACFS dynamic file system resizing
  • Maximized performance through direct access to Oracle ASM disk group storage
  • Balanced distribution of Oracle ACFS across Oracle ASM disk group storage for increased I/O parallelism
  • Data reliability through Oracle ASM mirroring protection mechanisms
While Oracle ACFS is useful for storing general purpose files, there are certain files it is not meant for. For example, Oracle ASM (traditional disk groups) is still the preferred storage manager for all database files because Oracle ACFS does not support direct I/O for file read and write operations in 11g release 2 (11.2). Oracle ASM was specifically designed and optimized to provide the best performance for database file types. In addition to Oracle database files, Oracle ACFS does not support the Oracle grid infrastructure home. Finally, Oracle ACFS does not support any Oracle files that can be directly stored in Oracle ASM. For example, the SPFILE, flashback log files, control files, archived redo log files, and the grid infrastructure OCR and voting disk should all be stored in Oracle ASM disk groups. The key point to remember is that Oracle ACFS is the preferred file manager for non-database files and is optimized for general purpose / customer files which are maintained outside of the Oracle database.


This article describes three ways to create an Oracle ASM Cluster File System in an Oracle 11g release 2 RAC database on the Linux operating environment:



  • ASM Configuration Assistant (ASMCA)
  • Oracle Enterprise Manager (OEM)
  • Command Line / SQL
There is actually a fourth method that can be used to create an Oracle ASM Cluster File System, which is the ASMCMD command-line interface. Throughout this guide, I'll demonstrate how to use ASMCMD in place of SQL where appropriate.
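For reference, the ASMCMD volume management commands (volcreate, volinfo, volresize, voldelete) can be reviewed with the built-in help from the grid infrastructure environment; for example:

[grid@racnode1 ~]$ asmcmd help volcreate
[grid@racnode1 ~]$ asmcmd help volinfo
[grid@racnode1 ~]$ asmcmd help volresize
[grid@racnode1 ~]$ asmcmd help voldelete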


The Linux distribution used in this guide is CentOS 5.5. CentOS is a free Enterprise-class Linux distribution derived from the Red Hat Enterprise Linux (RHEL) source that aims to be 100% binary compatible. Although CentOS 5 is equivalent to RHEL 5, the CentOS operating system is not supported for Oracle ASM Cluster File System. Refer to the workaround documented in the prerequisites section of this article if you are using CentOS or a similar Red Hat clone.





It is assumed that an Oracle RAC database is already installed, configured, and running. Refer to this guide for instructions on how to build an inexpensive two-node Oracle RAC 11g release 2 database on Linux.

ACFS Components 
Before diving into the details of how to create and manage an Oracle ASM Cluster File System, it may be helpful to first discuss the major components. 
Figure 1 shows the various components that make up Oracle ACFS and provides an illustration of the example configuration that will be created using this guide. 

Everything starts with an Oracle ASM disk group. An Oracle ASM disk group is made up of one or more disks and is shown in Figure 1 as DOCSDG1. The next component is an Oracle ASM volume, which is created within an Oracle ASM disk group. The example configuration illustrated above shows that we will be creating three volumes named docsvol1, docsvol2, and docsvol3 on the new disk group named DOCSDG1. Finally, we will be creating a cluster file system for each volume whose mount points will be /documents1, /documents2, and /documents3 respectively.


With Oracle ACFS, as long as there is free space within the ASM disk group, any of the volumes can be dynamically expanded, which means the file system gets expanded as a result. As I will demonstrate later in this article, expanding a volume / file system is an effortless process and can be performed online without the need to take any type of outage! For example, the following command resizes an ACFS file system to 10 GB:

/sbin/acfsutil size 10g /u02
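To make this concrete, here is a rough sketch of growing one of the file systems created later in this guide by an additional 5 GB while it remains mounted (the +5G syntax requests a relative increase; the sizes are illustrative only):

[root@racnode1 ~]# df -h /documents3
[root@racnode1 ~]# /sbin/acfsutil size +5G /documents3
[root@racnode1 ~]# df -h /documents3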


Oracle ASM Dynamic Volume Manager (ADVM)


Besides an Oracle ASM disk group, another key component of Oracle ACFS is the new Oracle ASM Dynamic Volume Manager (ADVM). ADVM provides volume management services and a standard driver interface to its clients (Oracle ACFS, ext3, ext4, reiserfs, OCFS2, etc.). The ADVM services include functionality to create, resize, delete, enable, and disable dynamic volumes. An ASM dynamic volume is constructed out of an ASM file with an 'ASMVOL' type attribute to distinguish it from other ASM file types (e.g. DATAFILE, TEMPFILE, ONLINELOG, etc.):



ASM File Name / Volume Name / Device Name Bytes File Type
--------------------------------------------------------------- ------------------ ------------------
+CRS/racnode-cluster/ASMPARAMETERFILE/REGISTRY.253.734544679 1,536 ASMPARAMETERFILE
+CRS/racnode-cluster/OCRFILE/REGISTRY.255.734544681 272,756,736 OCRFILE

272,758,272
+DOCSDG1 [DOCSVOL1] /dev/asm/docsvol1-300 34,359,738,368 ASMVOL 
+DOCSDG1 [DOCSVOL2] /dev/asm/docsvol2-300 34,359,738,368 ASMVOL 
+DOCSDG1 [DOCSVOL3] /dev/asm/docsvol3-300 26,843,545,600 ASMVOL

95,563,022,336
+FRA/RACDB/ARCHIVELOG/2010_11_08/thread_1_seq_69.264.734565029 42,991,616 ARCHIVELOG
+FRA/RACDB/ARCHIVELOG/2010_11_08/thread_2_seq_2.266.734565685 41,260,544 ARCHIVELOG
< SNIP >
+FRA/RACDB/ONLINELOG/group_3.259.734554873 52,429,312 ONLINELOG
+FRA/RACDB/ONLINELOG/group_4.260.734554877 52,429,312 ONLINELOG

12,227,537,408
+RACDB_DATA/RACDB/CONTROLFILE/Current.256.734552525 18,890,752 CONTROLFILE
+RACDB_DATA/RACDB/DATAFILE/EXAMPLE.263.734552611 157,294,592 DATAFILE
+RACDB_DATA/RACDB/DATAFILE/SYSAUX.260.734552569 1,121,984,512 DATAFILE
+RACDB_DATA/RACDB/DATAFILE/SYSTEM.259.734552539 744,497,152 DATAFILE
+RACDB_DATA/RACDB/DATAFILE/UNDOTBS1.261.734552595 791,683,072 DATAFILE
+RACDB_DATA/RACDB/DATAFILE/UNDOTBS2.264.734552619 209,723,392 DATAFILE
+RACDB_DATA/RACDB/DATAFILE/USERS.265.734552627 5,251,072 DATAFILE
+RACDB_DATA/RACDB/ONLINELOG/group_1.257.734552529 52,429,312 ONLINELOG
+RACDB_DATA/RACDB/ONLINELOG/group_2.258.734552533 52,429,312 ONLINELOG
+RACDB_DATA/RACDB/ONLINELOG/group_3.266.734554871 52,429,312 ONLINELOG
+RACDB_DATA/RACDB/ONLINELOG/group_4.267.734554875 52,429,312 ONLINELOG
+RACDB_DATA/RACDB/PARAMETERFILE/spfile.268.734554879 4,608 PARAMETERFILE
+RACDB_DATA/RACDB/TEMPFILE/TEMP.262.734552605 93,331,456 TEMPFILE
+RACDB_DATA/RACDB/spfileracdb.ora 4,608 PARAMETERFILE

3,352,382,464
Oracle ACFS and other supported third-party file systems can use Oracle ADVM as a volume management platform to create and manage file systems while leveraging the full power and functionality of Oracle ASM features. A volume may be created in its own Oracle ASM disk group or can share space in an already existing disk group. Any number of volumes can be created in an ASM disk group. Creating a new volume in an Oracle ASM disk group can be performed using the ASM Configuration Assistant (ASMCA), Oracle Enterprise Manager (OEM), SQL, or ASMCMD. For example:



asmcmd volcreate -G docsdg1 -s 20G docsvol3
Once a new volume is created on Linux, the ADVM device driver automatically creates a volume device on the OS that is used by clients to access the volume. These volumes may be used as block devices or may contain a file system such as ext3, ext4, reiserfs, or OCFS2. If Oracle ACFS is used (as described in this guide), the oracleacfs driver is also used for I/O to the file system.





On the Linux platform, Oracle ADVM volume devices are created as block devices regardless of the configuration of the underlying storage in the Oracle ASM disk group. Do not use raw(8) to map Oracle ADVM volume block devices into raw volume devices.


Under Linux, all volume devices are externalized to the OS and appear dynamically as special files in the /dev/asm directory. In this guide, we will use this OS volume device to create an Oracle ACFS: 


$ ls -l /dev/asm
total 0
brwxrwx--- 1 root asmadmin 252, 153601 Nov 28 13:49 docsvol1-300
brwxrwx--- 1 root asmadmin 252, 153602 Nov 28 13:49 docsvol2-300
brwxrwx--- 1 root asmadmin 252, 153603 Nov 28 13:56 docsvol3-300
$ /sbin/mkfs -t acfs -b 4k /dev/asm/docsvol3-300 -n "DOCSVOL3"
Oracle ADVM implements its own extent and striping algorithm to ensure the highest performance for general purpose files. By default, an ADVM volume consists of four columns of 64 MB extents with a 128 KB stripe width. ADVM writes data in 128 KB stripes in a round-robin fashion to each column before starting on the next set of four column extents. ADVM uses Dirty Region Logging (DRL) for mirror recovery after a node or instance failure. This DRL scheme requires a DRL file in the ASM disk group to be associated with each ASM dynamic volume.
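If the defaults are not appropriate for a particular workload, the stripe geometry can be specified when the volume is created. The following ASMCMD sketch uses illustrative values (the volume name testvol is a hypothetical example and is removed again immediately afterwards):

[grid@racnode1 ~]$ asmcmd volcreate -G docsdg1 -s 10G --column 8 --width 1M testvol
[grid@racnode1 ~]$ asmcmd volinfo -G docsdg1 testvol
[grid@racnode1 ~]$ asmcmd voldelete -G docsdg1 testvol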


ACFS Prerequisites 
Install Oracle Grid Infrastructure 
Oracle Grid Infrastructure 11g Release 2 (11.2) or higher is required for Oracle ACFS. Oracle grid infrastructure includes the Oracle Clusterware, Oracle ASM, Oracle ACFS, Oracle ADVM, and driver resources software components, which are installed into the grid infrastructure home using the Oracle Universal Installer (OUI). Refer to this guide for instructions on how to configure Oracle grid infrastructure as part of an Oracle RAC 11g release 2 database install on Linux.


Log In as the Grid Infrastructure User


To perform the examples demonstrated in this guide, it is assumed that the Oracle grid infrastructure owner is 'grid'. If the Oracle grid infrastructure owner is 'oracle', then log in as the oracle account.


Log in as the Oracle grid infrastructure owner and switch to the Oracle ASM environment on node 1 of the RAC when performing non-root ACFS tasks:



[grid@racnode1 ~]$ id
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)
[grid@racnode1 ~]$ . oraenv
ORACLE_SID = [+ASM1] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/grid
[grid@racnode1 ~]$ dbhome
/u01/app/11.2.0/grid
[grid@racnode1 ~]$ echo $ORACLE_SID
+ASM1
Verify / Create ASM Disk Group


After validating the Oracle grid infrastructure installation and logging in as the Oracle grid infrastructure owner (grid), the next step is to decide which Oracle ASM disk group should be used to create the Oracle ASM dynamic volume(s). The following SQL demonstrates how to search the available ASM disk groups:



break on inst_id skip 1
column inst_id format 9999999 heading "Instance ID" justify left
column name format a15 heading "Disk Group" justify left
column total_mb format 999,999,999 heading "Total (MB)" justify right
column free_mb format 999,999,999 heading "Free (MB)" justify right
column pct_free format 999.99 heading "% Free" justify right
==========================================================
SQL> select inst_id, name, total_mb, free_mb, round((free_mb/total_mb)*100,2) pct_free
from gv$asm_diskgroup
where total_mb != 0
order by inst_id, name;
Instance ID Disk Group Total (MB) Free (MB) % Free
----------- --------------- ------------ ------------ -------
1 CRS 2,205 1,809 82.04
FRA 33,887 24,802 73.19
RACDB_DATA 33,887 30,623 90.37
2 CRS 2,205 1,809 82.04
FRA 33,887 24,802 73.19
RACDB_DATA 33,887 30,623 90.37


The same task can be accomplished using the ASMCMD command-line utility: 
[grid@racnode1 ~]$ asmcmd lsdg


If you find an existing Oracle ASM disk group that has adequate space, the Oracle ASM dynamic volume(s) can be created on that free space; otherwise, a new ASM disk group can be created. For this guide, a new disk group named DOCSDG1 will be created. Query V$ASM_DISK from the Oracle ASM instance to identify a candidate disk: 




[grid@racnode1 ~]$ sqlplus / as sysasm
SQL> select path, name, header_status, os_mb from v$asm_disk;
PATH NAME HEADER_STATUS OS_MB
------------------ --------------- ------------- ----------
ORCL:ASMDOCSVOL1 PROVISIONED 98,303
ORCL:CRSVOL1 CRSVOL1 MEMBER 2,205
ORCL:DATAVOL1 DATAVOL1 MEMBER 33,887
ORCL:FRAVOL1 FRAVOL1 MEMBER 33,887
After identifying the ASMLib volume and verifying it is accessible from all Oracle RAC nodes, log in to the Oracle ASM instance and create the new disk group from one of the Oracle RAC nodes. After verifying the disk group was created, log in to the Oracle ASM instance on all other RAC nodes and mount the new disk group:



[grid@racnode1 ~]$ sqlplus / as sysasm
SQL> CREATE DISKGROUP docsdg1 EXTERNAL REDUNDANCY DISK 'ORCL:ASMDOCSVOL1' SIZE 98303 M;
Diskgroup created.
SQL> @asm_diskgroups
Disk Group Sector  Block  Allocation
Name       Size    Size   Unit Size   State    Type   Total Size (MB)  Used Size (MB)  Pct. Used
---------- ------- ------ ----------- -------- ------ --------------- -------------- ---------
CRS 512 4,096 1,048,576 MOUNTED EXTERN 2,205 396 17.96
DOCSDG1 512 4,096 1,048,576 MOUNTED EXTERN 98,303 50 .05
FRA 512 4,096 1,048,576 MOUNTED EXTERN 33,887 9,085 26.81
RACDB_DATA 512 4,096 1,048,576 MOUNTED EXTERN 33,887 3,264 9.63
--------------- --------------
Grand Total: 168,282 12,795
===================================================================================
[grid@racnode2 ~]$ sqlplus / as sysasm
SQL> ALTER DISKGROUP docsdg1 MOUNT;
Diskgroup altered.
SQL> @asm_diskgroups
Disk Group Sector  Block  Allocation
Name       Size    Size   Unit Size   State    Type   Total Size (MB)  Used Size (MB)  Pct. Used
---------- ------- ------ ----------- -------- ------ --------------- -------------- ---------
CRS 512 4,096 1,048,576 MOUNTED EXTERN 2,205 396 17.96
DOCSDG1 512 4,096 1,048,576 MOUNTED EXTERN 98,303 50 .05
FRA 512 4,096 1,048,576 MOUNTED EXTERN 33,887 9,085 26.81
RACDB_DATA 512 4,096 1,048,576 MOUNTED EXTERN 33,887 3,264 9.63
--------------- --------------
Grand Total: 168,282 12,795
Verify Oracle ASM Volume Driver


The operating environment used in this guide is CentOS 5.5 x86_64:



[root@racnode1 ~]# uname -a
Linux racnode1 2.6.18-194.el5 #1 SMP Fri Apr 2 14:58:14 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
On supported operating systems, the Oracle ACFS modules will be configured and the Oracle ASM volume driver started by default after installing Oracle grid infrastructure. With CentOS and other unsupported operating systems, a workaround is required to enable Oracle ACFS. One of the first tasks is to manually start the Oracle ASM volume driver:



[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/acfsload start -s
ADVM/ACFS is not supported on centos-release-5-5.el5.centos
The failed output from the above command should come as no surprise given Oracle ACFS is not supported on CentOS.


By default, the Oracle ACFS modules do not get installed on CentOS because it is not a supported operating environment. This section provides a simple, but unsupported, workaround to get Oracle ACFS working on CentOS. This workaround includes some of the manual steps that are required to launch the Oracle ASM volume driver when installing Oracle ACFS on a non-clustered system.





The steps documented in this section serve as a workaround in order to set up Oracle ACFS on CentOS and are by no means supported by Oracle Corporation. Do not attempt these steps on a critical production environment. You have been warned.
The following steps need to be run as root from all nodes in the Oracle RAC database cluster.


First, make a copy of the following Perl module:



[root@racnode1 ~]# cd /u01/app/11.2.0/grid/lib
[root@racnode1 lib]# cp -p osds_acfslib.pm osds_acfslib.pm.orig
[root@racnode2 ~]# cd /u01/app/11.2.0/grid/lib
[root@racnode2 lib]# cp -p osds_acfslib.pm osds_acfslib.pm.orig
Next, edit the osds_acfslib.pm Perl module. Search for the string 'support this release' (which was line 278 in my case).


Replace



if (($release =~ /enterprise-release-5/) || ($release =~ /redhat-release-5/))
with 
if (($release =~ /enterprise-release-5/) || ($release =~ /redhat-release-5/) || ($release =~ /centos-release-5/))
This will get you past the supported version check; however, if you attempt to load the Oracle ASM volume driver from either Oracle RAC node, you get the following error: 
[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/acfsload start -s
acfsload: ACFS-9129: ADVM/ACFS not installed
To install ADVM/ACFS, copy the following kernel modules from the Oracle grid infrastructure home to the expected location: 
[root@racnode1 ~]# mkdir /lib/modules/2.6.18-194.el5/extra/usm
[root@racnode1 ~]# cd /u01/app/11.2.0/grid/install/usm/EL5/x86_64/2.6.18-8/2.6.18-8.el5-x86_64/bin
[root@racnode1 bin]# cp *ko /lib/modules/2.6.18-194.el5/extra/usm/
[root@racnode2 ~]# mkdir /lib/modules/2.6.18-194.el5/extra/usm
[root@racnode2 ~]# cd /u01/app/11.2.0/grid/install/usm/EL5/x86_64/2.6.18-8/2.6.18-8.el5-x86_64/bin
[root@racnode2 bin]# cp *ko /lib/modules/2.6.18-194.el5/extra/usm/
Once the kernel modules have been copied, we can verify the ADVM/ACFS installation by running the following from all Oracle RAC nodes:



[root@racnode1 ~]# cd /u01/app/11.2.0/grid/bin
[root@racnode1 bin]# ./acfsdriverstate -orahome /u01/app/11.2.0/grid version
ACFS-9205: OS/ADVM,ACFS installed version = 2.6.18-8.el5(x86_64)/090715.1
[root@racnode2 ~]# cd /u01/app/11.2.0/grid/bin
[root@racnode2 bin]# ./acfsdriverstate -orahome /u01/app/11.2.0/grid version
ACFS-9205: OS/ADVM,ACFS installed version = 2.6.18-8.el5(x86_64)/090715.1
The next step is to record dependencies for the new kernel modules:



[root@racnode1 ~]# depmod
[root@racnode2 ~]# depmod
Now, running acfsload start -s will complete without any further messages:



[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/acfsload start -s
[root@racnode2 ~]# /u01/app/11.2.0/grid/bin/acfsload start -s
Check that the modules were successfully loaded on all Oracle RAC nodes: 
[root@racnode1 ~]# lsmod | grep oracle
oracleacfs 877320 4 
oracleadvm 221760 8 
oracleoks 276880 2 oracleacfs,oracleadvm
oracleasm 84136 1
[root@racnode2 ~]# lsmod | grep oracle
oracleacfs 877320 4 
oracleadvm 221760 8 
oracleoks 276880 2 oracleacfs,oracleadvm
oracleasm 84136 1
Configure the Oracle ASM volume driver to load automatically on system startup on all Oracle RAC nodes. You will need to create an initialization script (/etc/init.d/acfsload) that contains the runlevel configuration and the acfsload command. Change the permissions on the /etc/init.d/acfsload script to allow it to be executed by root and then create links in the rc2.d, rc3.d, rc4.d, and rc5.d runlevel directories using 'chkconfig --add':



[root@racnode1 ~]# chkconfig --list | grep acfsload
[root@racnode2 ~]# chkconfig --list | grep acfsload
===========================================
[root@racnode1 ~]# cat > /etc/init.d/acfsload <<EOF
#!/bin/sh
# chkconfig: 2345 30 21
# description: Load Oracle ASM volume driver on system startup
ORACLE_HOME=/u01/app/11.2.0/grid
export ORACLE_HOME
\$ORACLE_HOME/bin/acfsload start -s
EOF
[root@racnode2 ~]# cat > /etc/init.d/acfsload <<EOF
#!/bin/sh
# chkconfig: 2345 30 21
# description: Load Oracle ASM volume driver on system startup
ORACLE_HOME=/u01/app/11.2.0/grid
export ORACLE_HOME
\$ORACLE_HOME/bin/acfsload start -s
EOF
===========================================
[root@racnode1 ~]# chmod 755 /etc/init.d/acfsload
[root@racnode2 ~]# chmod 755 /etc/init.d/acfsload
===========================================
[root@racnode1 ~]# chkconfig --add acfsload
[root@racnode2 ~]# chkconfig --add acfsload
===========================================
[root@racnode1 ~]# chkconfig --list | grep acfsload
acfsload 0:off 1:off 2:on 3:on 4:on 5:on 6:off
[root@racnode2 ~]# chkconfig --list | grep acfsload
acfsload 0:off 1:off 2:on 3:on 4:on 5:on 6:off
If the Oracle grid infrastructure 'ora.registry.acfs' resource does not exist, create it. This only needs to be performed from one of the Oracle RAC nodes:



[root@racnode1 ~]# su - grid -c crs_stat | grep acfs
[root@racnode2 ~]# su - grid -c crs_stat | grep acfs
===========================================
[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl add type ora.registry.acfs.type \
-basetype ora.local_resource.type \
-file /u01/app/11.2.0/grid/crs/template/registry.acfs.type
[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl add resource ora.registry.acfs \
-attr ACL=\'owner:root:rwx,pgrp:oinstall:r-x,other::r--\' \
-type ora.registry.acfs.type -f
===========================================
[root@racnode1 ~]# su - grid -c crs_stat | grep acfs
NAME=ora.registry.acfs
TYPE=ora.registry.acfs.type
[root@racnode2 ~]# su - grid -c crs_stat | grep acfs
NAME=ora.registry.acfs
TYPE=ora.registry.acfs.type
Next, copy the Oracle ACFS executables to /sbin and set the appropriate permissions. The Oracle ACFS executables are located in the GRID_HOME/install/usm/EL5/<ARCHITECTURE>/<KERNEL_VERSION>/<FULL_KERNEL_VERSION>/bin directory (12 files) and include every file without the *.ko extension: 
[root@racnode1 ~]# cd /u01/app/11.2.0/grid/install/usm/EL5/x86_64/2.6.18-8/2.6.18-8.el5-x86_64/bin
[root@racnode1 bin]# cp acfs* /sbin; chmod 755 /sbin/acfs*
[root@racnode1 bin]# cp advmutil* /sbin; chmod 755 /sbin/advmutil*
[root@racnode1 bin]# cp fsck.acfs* /sbin; chmod 755 /sbin/fsck.acfs*
[root@racnode1 bin]# cp mkfs.acfs* /sbin; chmod 755 /sbin/mkfs.acfs*
[root@racnode1 bin]# cp mount.acfs* /sbin; chmod 755 /sbin/mount.acfs*
[root@racnode2 ~]# cd /u01/app/11.2.0/grid/install/usm/EL5/x86_64/2.6.18-8/2.6.18-8.el5-x86_64/bin
[root@racnode2 bin]# cp acfs* /sbin; chmod 755 /sbin/acfs*
[root@racnode2 bin]# cp advmutil* /sbin; chmod 755 /sbin/advmutil*
[root@racnode2 bin]# cp fsck.acfs* /sbin; chmod 755 /sbin/fsck.acfs*
[root@racnode2 bin]# cp mkfs.acfs* /sbin; chmod 755 /sbin/mkfs.acfs*
[root@racnode2 bin]# cp mount.acfs* /sbin; chmod 755 /sbin/mount.acfs*
As a final step, modify each of the Oracle ACFS shell scripts copied to the /sbin directory (above) to include the ORACLE_HOME for grid infrastructure. The successful execution of these scripts requires access to certain Oracle shared libraries that are found in the grid infrastructure Oracle home. Since many of the Oracle ACFS shell scripts will be executed as the root user account, the ORACLE_HOME environment variable will typically not be set in the shell, which will cause the executables to fail. An easy workaround to get past this error is to set the ORACLE_HOME environment variable for the Oracle grid infrastructure home in the Oracle ACFS shell scripts on all Oracle RAC nodes. The ORACLE_HOME should be set at the beginning of the file after the header comments as shown in the following example:



#!/bin/sh
#
# Copyright (c) 2001, 2009, Oracle and/or its affiliates. All rights reserved.
#
ORACLE_HOME=/u01/app/11.2.0/grid
ORA_CRS_HOME=%ORA_CRS_HOME%
if [ ! -d $ORA_CRS_HOME ]; then
ORA_CRS_HOME=$ORACLE_HOME
fi
...
Add the ORACLE_HOME environment variable for the Oracle grid infrastructure home as noted above to the following Oracle ACFS shell scripts on all Oracle RAC nodes (a scripted way to make this change is sketched after the list):



  • /sbin/acfsdbg
  • /sbin/acfsutil
  • /sbin/advmutil
  • /sbin/fsck.acfs
  • /sbin/mkfs.acfs
  • /sbin/mount.acfs
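Rather than editing each script by hand, the same change can be scripted. The following is a rough sketch (run as root on every node) that assumes each script begins with a #!/bin/sh line and inserts the ORACLE_HOME assignment immediately after it; review the files afterwards to confirm the result:

# Insert the grid infrastructure ORACLE_HOME after the first line of each ACFS script
for f in /sbin/acfsdbg /sbin/acfsutil /sbin/advmutil /sbin/fsck.acfs /sbin/mkfs.acfs /sbin/mount.acfs; do
    sed -i '1a ORACLE_HOME=/u01/app/11.2.0/grid' "$f"
done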
Verify ASM Disk Group Compatibility Level


The compatibility level for the Oracle ASM disk group must be at least 11.2 in order to create an Oracle ASM volume. From the Oracle ASM instance, perform the following checks:



SQL> SELECT compatibility, database_compatibility
FROM v$asm_diskgroup
WHERE name = 'DOCSDG1';
COMPATIBILITY DATABASE_COMPATIBILITY
---------------- -----------------------
10.1.0.0.0 10.1.0.0.0
If the results show a value lower than 11.2 (as the above example shows), set the compatibility to at least 11.2 by issuing the following series of SQL statements from the Oracle ASM instance:



[grid@racnode1 ~]$ sqlplus / as sysasm
SQL> ALTER DISKGROUP docsdg1 SET ATTRIBUTE 'compatible.asm' = '11.2';
Diskgroup altered.
SQL> ALTER DISKGROUP docsdg1 SET ATTRIBUTE 'compatible.rdbms' = '11.2';
Diskgroup altered.
SQL> ALTER DISKGROUP docsdg1 SET ATTRIBUTE 'compatible.advm' = '11.2';
Diskgroup altered.



If you receive an error while attempting to set the 'compatible.advm' attribute, verify that the Oracle ASM volume driver is running: 
SQL> ALTER DISKGROUP docsdg1 SET ATTRIBUTE 'compatible.advm' = '11.2';
ALTER DISKGROUP docsdg1 SET ATTRIBUTE 'compatible.advm' = '11.2' 

ERROR at line 1: 
ORA-15032: not all alterations performed 
ORA-15242: could not set attribute compatible.advm 
ORA-15238: 11.2 is not a valid value for attribute compatible.advm 
ORA-15477: cannot communicate with the volume driver
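Before retrying the ALTER DISKGROUP statement, a quick way to confirm whether the volume driver is actually loaded is to check the kernel modules and the driver state from the grid infrastructure home (run as root; if the modules are missing, start the driver with acfsload and retry):

[root@racnode1 ~]# lsmod | grep oracle
[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/acfsdriverstate loaded
[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/acfsload start -s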


Verify the changes to the compatibility level: 


SQL> SELECT compatibility, database_compatibility
FROM v$asm_diskgroup
WHERE name = 'DOCSDG1';
COMPATIBILITY DATABASE_COMPATIBILITY
---------------- -----------------------
11.2.0.0.0 11.2.0.0.0
ASM Configuration Assistant (ASMCA)


This section includes step-by-step instructions on how to create an Oracle ASM cluster file system using the Oracle ASM Configuration Assistant (ASMCA). Note that at the time of this writing, ASMCA only supports the creation of volumes and file systems; deleting an Oracle ASM volume or file system requires the command line.
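For completeness, here is a rough sketch of how a volume and its file system could later be removed from the command line when no longer needed, illustrated with the first volume from this guide (unmount on every node first and remove the registry entry before deleting the volume):

[root@racnode1 ~]# /bin/umount /documents1          # repeat on every Oracle RAC node
[root@racnode1 ~]# /sbin/acfsutil registry -d /documents1
[grid@racnode1 ~]$ asmcmd voldelete -G docsdg1 docsvol1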


Create Mount Point


From each Oracle RAC node, create a directory that will be used to mount the new Oracle ACFS:



[root@racnode1 ~]# mkdir /documents1
[root@racnode2 ~]# mkdir /documents1
Create ASM Cluster File System


As the Oracle grid infrastructure owner, run the ASM Configuration Assistant (asmca) from only one node in the cluster (racnode1 for example):



[grid@racnode1 ~]$ asmca


Mount the New Cluster File System


Now that the new Oracle ASM cluster file system has been created and registered in the Oracle ACFS mount registry, log in to all Oracle RAC nodes as root and run the following mount command:



[root@racnode1 ~]# /bin/mount -t acfs /dev/asm/docsvol1-300 /documents1
/sbin/mount.acfs.bin: error while loading shared libraries: libhasgen11.so: 
cannot open shared object file: No such file or directory
[root@racnode2 ~]# /bin/mount -t acfs /dev/asm/docsvol1-300 /documents1
/sbin/mount.acfs.bin: error while loading shared libraries: libhasgen11.so: 
cannot open shared object file: No such file or directory
If you don't have the ORACLE_HOME environment variable set to the Oracle grid infrastructure home as explained in the prerequisites section of this guide, the mount command will fail as shown above. In order to mount the new cluster file system, the Oracle ACFS binaries need access to certain shared libraries in the ORACLE_HOME for grid infrastructure. An easy workaround to get past this error is to set the ORACLE_HOME environment variable for grid infrastructure in the file /sbin/mount.acfs on all Oracle RAC nodes. The ORACLE_HOME should be set at the beginning of the file after the header comments as follows:



#!/bin/sh
#
# Copyright (c) 2001, 2009, Oracle and/or its affiliates. All rights reserved.
#
ORACLE_HOME=/u01/app/11.2.0/grid
ORA_CRS_HOME=%ORA_CRS_HOME%
if [ ! -d $ORA_CRS_HOME ]; then
ORA_CRS_HOME=$ORACLE_HOME
fi
...
You should now be able to successfully mount the volume:



[root@racnode1 ~]# /bin/mount -t acfs /dev/asm/docsvol1-300 /documents1
[root@racnode2 ~]# /bin/mount -t acfs /dev/asm/docsvol1-300 /documents1
Verify Mounted Cluster File System


To verify that the new cluster file system mounted properly, run the following mount command from all Oracle RAC nodes:



[root@racnode1 ~]# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sdb1 on /local type ext3 (rw)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
domo:PUBLIC on /domo type nfs (rw,addr=192.168.1.121)
oracleasmfs on /dev/oracleasm type oracleasmfs (rw)
/dev/asm/docsvol1-300 on /documents1 type acfs (rw)
[root@racnode2 ~]# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sdb1 on /local type ext3 (rw)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
domo:Public on /domo type nfs (rw,addr=192.168.1.121)
oracleasmfs on /dev/oracleasm type oracleasmfs (rw)
/dev/asm/docsvol1-300 on /documents1 type acfs (rw)
Set Permissions


With the new cluster file system now mounted on all Oracle RAC nodes, change the permissions to allow user access. For the purpose of this example, I want to grant the oracle user account and the dba group read/write permissions. Run the following as root from only one node in the Oracle RAC:



[root@racnode1 ~]# chown oracle.dba /documents1
[root@racnode1 ~]# chmod 775 /documents1

Test 
Now let's perform a test to see if all of our hard work paid off. 
Node 1 
Log in to the first Oracle RAC node as the oracle user account and create a test file on the new cluster file system: 

[oracle@racnode1 ~]$ id
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper)
[oracle@racnode1 ~]$ echo "The Wiki of Deepak Kachole" > /documents1/test.txt
[oracle@racnode1 ~]$ ls -l /documents1
total 72
drwxr-xr-x 5 root root 4096 Nov 23 21:17 .ACFS/
drwx------ 2 root root 65536 Nov 23 21:17 lost+found/
-rw-r--r-- 1 oracle oinstall 42 Nov 23 21:25 test.txt
Node 2


Log in to the second Oracle RAC node as the oracle user account and verify the presence and content of the test file:



[oracle@racnode2 ~]$ id
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper)
[oracle@racnode2 ~]$ ls -l /documents1
total 72
drwxr-xr-x 5 root root 4096 Nov 23 21:17 .ACFS/
drwx------ 2 root root 65536 Nov 23 21:17 lost+found/
-rw-r--r-- 1 oracle oinstall 42 Nov 23 21:25 test.txt
[oracle@racnode2 ~]$ cat /documents1/test.txt
The Wiki of Deepak Kachole

Oracle Enterprise Manager (OEM) 
This section presents a second method that can be used to create an Oracle ASM cluster file system; namely, Oracle Enterprise Manager (OEM). Similar to the ASM Configuration Assistant (ASMCA), OEM provides a convenient graphical user interface for creating and maintaining ASM cluster file systems. 
Create Mount Point 
From each Oracle RAC node, create a directory that will be used to mount the new Oracle ACFS: 

[root@racnode1 ~]# mkdir /documents2
[root@racnode2 ~]# mkdir /documents2

Create ASM Cluster File System 
Verify that Database Control is running on the cluster, then log in to Oracle Enterprise Manager (OEM) as a privileged database user: 

[oracle@racnode1 ~]$ emctl status dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.1.0
Copyright (c) 1996, 2009 Oracle Corporation. All rights reserved.
https://racnode1.abc.info:1158/em/console/aboutApplication
Oracle Enterprise Manager 11g is running.

Logs are generated in directory /u01/app/oracle/product/11.2.0/dbhome_1/racnode1_racdb/sysman/log
[oracle@racnode2 ~]$ emctl status dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.1.0
Copyright (c) 1996, 2009 Oracle Corporation. All rights reserved.
https://racnode1.abc.info:1158/em/console/aboutApplication
EM Daemon is running.

Logs are generated in directory /u01/app/oracle/product/11.2.0/dbhome_1/racnode2_racdb/sysman/log


Verify Mounted Cluster File System 
To verify that the new cluster file system mounted properly, run the following mount command from all Oracle RAC nodes: 

[root@racnode1 ~]# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sdb1 on /local type ext3 (rw)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
domo:PUBLIC on /domo type nfs (rw,addr=192.168.1.121)
oracleasmfs on /dev/oracleasm type oracleasmfs (rw)
/dev/asm/docsvol1-300 on /documents1 type acfs (rw)
/dev/asm/docsvol2-300 on /documents2 type acfs (rw)
[root@racnode2 ~]# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sdb1 on /local type ext3 (rw)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
domo:Public on /domo type nfs (rw,addr=192.168.1.121)
oracleasmfs on /dev/oracleasm type oracleasmfs (rw)
/dev/asm/docsvol1-300 on /documents1 type acfs (rw)
/dev/asm/docsvol2-300 on /documents2 type acfs (rw)
Set Permissions


With the new cluster file system now mounted on all Oracle RAC nodes, change the permissions to allow user access. For the purpose of this example, I want to grant the oracle user account and the dba group read/write permissions. Run the following as root from only one node in the Oracle RAC:



[root@racnode1 ~]# chown oracle.dba /documents2
[root@racnode1 ~]# chmod 775 /documents2

Test 
Now let's perform a test to see if all of our hard work paid off. 
Node 1 
Log in to the first Oracle RAC node as the oracle user account and create a test file on the new cluster file system: 

[oracle@racnode1 ~]$ id
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper)
[oracle@racnode1 ~]$ echo " The Wiki of Deepak Kachole " > /documents2/test.txt
[oracle@racnode1 ~]$ ls -l /documents2
total 72
drwxr-xr-x 5 root root 4096 Nov 24 13:32 .ACFS/
drwx------ 2 root root 65536 Nov 24 13:32 lost+found/
-rw-r--r-- 1 oracle oinstall 42 Nov 24 14:10 test.txt
Node 2


Log in to the second Oracle RAC node as the oracle user account and verify the presence and content of the test file:



[oracle@racnode2 ~]$ id
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper)
[oracle@racnode2 ~]$ ls -l /documents2
total 72
drwxr-xr-x 5 root root 4096 Nov 24 13:32 .ACFS/
drwx------ 2 root root 65536 Nov 24 13:32 lost+found/
-rw-r--r-- 1 oracle oinstall 42 Nov 24 14:10 test.txt
[oracle@racnode2 ~]$ cat /documents2/test.txt
The Wiki of Deepak Kachole

Command Line / SQL 
This section presents the third and final method described in this guide that can be used to create an Oracle ASM cluster file system; namely, the command line and SQL. Unlike the ASM Configuration Assistant (ASMCA) or Oracle Enterprise Manager (OEM), the command line tools do not require a graphical user interface and are the preferred method when working remotely over a slow network. 
Create Oracle ASM Dynamic Volume 
Log in as the Oracle grid infrastructure owner and switch to the Oracle ASM environment on node 1 of the RAC: 

[grid@racnode1 ~]$ id
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)
[grid@racnode1 ~]$ . oraenv
ORACLE_SID = [+ASM1] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/grid
[grid@racnode1 ~]$ dbhome
/u01/app/11.2.0/grid
[grid@racnode1 ~]$ echo $ORACLE_SID
+ASM1
As the Oracle grid infrastructure owner, log in to the Oracle ASM instance using SQL*Plus and issue the following SQL: 
[grid@racnode1 ~]$ sqlplus / as sysasm
SQL> ALTER DISKGROUP docsdg1 ADD VOLUME docsvol3 SIZE 20G;
Diskgroup altered.



The same task can be accomplished using the ASMCMD command-line utility: 
[grid@racnode1 ~]$ asmcmd volcreate -G docsdg1 -s 20G --redundancy unprotected docsvol3


To verify that the new Oracle ASM dynamic volume was successfully created, query the view V$ASM_VOLUME (or, since we are using Oracle RAC, GV$ASM_VOLUME). Make certain that the STATE column for each Oracle RAC instance is ENABLED:


break on inst_id skip 1
column inst_id format 9999999 heading "Instance ID" justify left
column volume_name format a13 heading "Volume Name" justify left
column volume_device format a23 heading "Volume Device" justify left
column size_mb format 999,999,999 heading "Size (MB)" justify right
column usage format a5 heading "Usage" justify right
column state format a7 heading "State" justify right
================================================================
SQL> select inst_id, volume_name, volume_device, size_mb, usage, state
from gv$asm_volume
order by inst_id, volume_name;
Instance ID Volume Name   Volume Device              Size (MB) Usage   State
----------- ------------- ----------------------- ------------ ----- -------
1 DOCSVOL1 /dev/asm/docsvol1-300 32,768 ACFS ENABLED
DOCSVOL2 /dev/asm/docsvol2-300 32,768 ACFS ENABLED
DOCSVOL3 /dev/asm/docsvol3-300 20,480 ENABLED 
2 DOCSVOL1 /dev/asm/docsvol1-300 32,768 ACFS ENABLED
DOCSVOL2 /dev/asm/docsvol2-300 32,768 ACFS ENABLED
DOCSVOL3 /dev/asm/docsvol3-300 20,480 ENABLED 


The same task can be accomplished using the ASMCMD command-line utility: 
[grid@racnode1 ~]$ asmcmd volinfo -G docsdg1 -a
Diskgroup Name: DOCSDG1
Volume Name: DOCSVOL1
Volume Device: /dev/asm/docsvol1-300
State: ENABLED
Size (MB): 32768
Resize Unit (MB): 256
Redundancy: UNPROT
Stripe Columns: 4
Stripe Width (K): 128
Usage: ACFS
Mountpath: /documents1
Volume Name: DOCSVOL2
Volume Device: /dev/asm/docsvol2-300
State: ENABLED
Size (MB): 32768
Resize Unit (MB): 256
Redundancy: UNPROT
Stripe Columns: 4
Stripe Width (K): 128
Usage: ACFS
Mountpath: /documents2
Volume Name: DOCSVOL3 
Volume Device: /dev/asm/docsvol3-300 
State: ENABLED 
Size (MB): 20480 
Resize Unit (MB): 256 
Redundancy: UNPROT 
Stripe Columns: 4 
Stripe Width (K): 128 
Usage: 
Mountpath:


Additional details about the new volume device can also be displayed as the root user using the advmutil utility: 
[root@racnode1 ~]# /sbin/advmutil volinfo /dev/asm/docsvol3-300
Interface Version: 1
Size (MB): 20480
Resize Increment (MB): 256
Redundancy: unprotected
Stripe Columns: 4
Stripe Width (KB): 128
Disk Group: DOCSDG1
Volume: DOCSVOL3
As a final check, list the newly created device file on the file system: 
[root@racnode1 ~]# ls -l /dev/asm/*
brwxrwx--- 1 root asmadmin 252, 153601 Nov 26 16:55 /dev/asm/docsvol1-300
brwxrwx--- 1 root asmadmin 252, 153602 Nov 26 16:55 /dev/asm/docsvol2-300
brwxrwx--- 1 root asmadmin 252, 153603 Nov 26 17:20 /dev/asm/docsvol3-300
Create Oracle ASM Cluster File System


The next step is to create the Oracle ASM cluster file system on the new Oracle ASM volume created in the previous section. This is performed using the mkfs OS command from only one of the Oracle RAC nodes:



[grid@racnode1 ~]$ /sbin/mkfs -t acfs -b 4k /dev/asm/docsvol3-300 -n "DOCSVOL3"
mkfs.acfs: version = 11.2.0.1.0.0
mkfs.acfs: on-disk version = 39.0
mkfs.acfs: volume = /dev/asm/docsvol3-300
mkfs.acfs: volume size = 21474836480
mkfs.acfs: Format complete.
In the above mkfs command, the -t flag indicates that the new file system should be of type ACFS and the block size is set to 4K with the -b flag. Finally, we specify the Linux dynamic volume device (/dev/asm/docsvol3-300) and, with the -n flag, the volume label (DOCSVOL3).
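Although not required, the new file system can be checked before mounting it using the ACFS-aware fsck that was copied to /sbin in the prerequisites section. A read-only sketch (run as root against the still unmounted volume; the -n flag answers no to any repair prompts):

[root@racnode1 ~]# /sbin/fsck.acfs -n /dev/asm/docsvol3-300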


Mount the New Cluster File System


The mkfs command in the previous section only prepares the volume to be mounted as a file system; it does not actually mount it. To mount the new cluster file system, first create a directory on each Oracle RAC node as the root user account that will be used to mount the new Oracle ACFS:



[root@racnode1 ~]# mkdir /documents3
[root@racnode2 ~]# mkdir /documents3
Mount the cluster file system on each Oracle RAC node using the regular Linux mount command as follows:



[root@racnode1 ~]# /bin/mount -t acfs /dev/asm/docsvol3-300 /documents3
/sbin/mount.acfs.bin: error while loading shared libraries: libhasgen11.so: 
cannot open shared object file: No such file or directory
[root@racnode2 ~]# /bin/mount -t acfs /dev/asm/docsvol3-300 /documents3
/sbin/mount.acfs.bin: error while loading shared libraries: libhasgen11.so: 
cannot open shared object file: No such file or directory
If you don't have the ORACLE_HOME environment variable set to the Oracle grid infrastructure home as explained in the prerequisites section of this guide, the mount command will fail as shown above. In order to mount the new cluster file system, the Oracle ACFS binaries need access to certain shared libraries in the ORACLE_HOME for grid infrastructure. An easy workaround to get past this error is to set the ORACLE_HOME environment variable for grid infrastructure in the file /sbin/mount.acfs on all Oracle RAC nodes. The ORACLE_HOME should be set at the beginning of the file after the header comments as follows:



#!/bin/sh
#
# Copyright (c) 2001, 2009, Oracle and/or its affiliates. All rights reserved.
#
ORACLE_HOME=/u01/app/11.2.0/grid
ORA_CRS_HOME=%ORA_CRS_HOME%
if [ ! -d $ORA_CRS_HOME ]; then
ORA_CRS_HOME=$ORACLE_HOME
fi
...
You should now be able to successfully mount the volume:



[root@racnode1 ~]# /bin/mount -t acfs /dev/asm/docsvol3-300 /documents3
[root@racnode2 ~]# /bin/mount -t acfs /dev/asm/docsvol3-300 /documents3

Verify Mounted Cluster File System 
To verify that the new cluster file system mounted properly, run the following mount command from all Oracle RAC nodes: 

[root@racnode1 ~]# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sdb1 on /local type ext3 (rw)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
domo:PUBLIC on /domo type nfs (rw,addr=192.168.1.121)
oracleasmfs on /dev/oracleasm type oracleasmfs (rw)
/dev/asm/docsvol1-300 on /documents1 type acfs (rw)
/dev/asm/docsvol2-300 on /documents2 type acfs (rw)
/dev/asm/docsvol3-300 on /documents3 type acfs (rw)
[root@racnode2 ~]# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sdb1 on /local type ext3 (rw)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
domo:Public on /domo type nfs (rw,addr=192.168.1.121)
oracleasmfs on /dev/oracleasm type oracleasmfs (rw)
/dev/asm/docsvol1-300 on /documents1 type acfs (rw)
/dev/asm/docsvol2-300 on /documents2 type acfs (rw)
/dev/asm/docsvol3-300 on /documents3 type acfs (rw)
Set Permissions


With the new cluster file system now mounted on all Oracle RAC nodes, change the permissions to allow user access. For the purpose of this example, I want to grant the oracle user account and the dba group read/write permissions. Run the following as root from only one node in the Oracle RAC:



[root@racnode1 ~]# chown oracle.dba /documents3
[root@racnode1 ~]# chmod 775 /documents3
Register New Volume


When creating the Oracle ACFS using the ASM Configuration Assistant (ASMCA) and Oracle Enterprise Manager (OEM), I glossed over this notion of registering the new volume in the Oracle ACFS mount registry. When a node configured with Oracle ACFS reboots, the newly created file systems do not remount by default. The Oracle ACFS mount registry acts as a global file system reference, much like the /etc/fstab file does in a UNIX/Linux environment. When mount points are registered in the Oracle ACFS mount registry, Oracle grid infrastructure will mount and unmount volumes on startup and shutdown respectively.


Use the /sbin/acfsutil utility on only one of Oracle RAC nodes to register the new mount point in the Oracle ACFS mount registry:



[root@racnode1 ~]# /sbin/acfsutil registry -f -a /dev/asm/docsvol3-300 /documents3
acfsutil registry: mount point /documents3 successfully added to Oracle Registry
Query the Oracle ACFS mount registry from all Oracle RAC nodes to verify the volume and mount point was successfully registered:



[root@racnode1 ~]# /sbin/acfsutil registry
MountObject:
Device: /dev/asm/docsvol1-300
Mount Point: /documents1
Disk Group: DOCSDG1
Volume: DOCSVOL1
Options: none
Nodes: all
MountObject:
Device: /dev/asm/docsvol2-300
Mount Point: /documents2
Disk Group: DOCSDG1
Volume: DOCSVOL2
Options: none
Nodes: all
MountObject:
Device: /dev/asm/docsvol3-300 
Mount Point: /documents3 
Disk Group: DOCSDG1 
Volume: DOCSVOL3 
Options: none 
Nodes: all
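Once a file system is mounted and registered, per-file-system details such as its size, free space, and backing volume can be reviewed at any time with acfsutil. For example:

[root@racnode1 ~]# /sbin/acfsutil info fs /documents3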
Test


Now let's perform a test to see if all of our hard work paid off.


Node 1


Log in to the first Oracle RAC node as the oracle user account and create a test file on the new cluster file system:



[oracle@racnode1 ~]$ id
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper)
[oracle@racnode1 ~]$ echo " The Wiki of Deepak Kachole " > /documents3/test.txt
[oracle@racnode1 ~]$ ls -l /documents3
total 72
drwxr-xr-x 5 root root 4096 Nov 24 18:44 .ACFS/
drwx------ 2 root root 65536 Nov 24 18:44 lost+found/
-rw-r--r-- 1 oracle oinstall 42 Nov 24 18:56 test.txt
Node 2


Log in to the second Oracle RAC node as the oracle user account and verify the presence and content of the test file:



[oracle@racnode2 ~]$ id
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper)
[oracle@racnode2 ~]$ ls -l /documents3
total 72
drwxr-xr-x 5 root root 4096 Nov 24 18:44 .ACFS/
drwx------ 2 root root 65536 Nov 24 18:44 lost+found/
-rw-r--r-- 1 oracle oinstall 42 Nov 24 18:56 test.txt
[oracle@racnode2 ~]$ cat /documents3/test.txt
The Wiki of Deepak Kachole
