ACFS on Linux Part 2
When a file is modified, only the changed blocks are copied to the snapshot location, which helps conserve disk space.

Oracle ACFS snapshots can be created and deleted on demand without taking the file system offline. ACFS snapshots provide a point-in-time consistent view of the entire file system, which can be used to restore deleted or modified files and to perform backups.

All storage for Oracle ACFS snapshots is maintained within the file system, which eliminates the need for separate storage pools for file systems and snapshots. As shown in the next section, Oracle ACFS file systems can be dynamically resized to accommodate additional file and snapshot storage requirements.
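To see the copy-on-write behavior in practice, one could take a snapshot and watch the snapshot space usage reported by acfsutil info fs grow only after existing blocks are overwritten. The following is a minimal sketch, assuming the /documents3 file system used throughout this article; the file name cow_test.dat and the snapshot name snap_test are purely illustrative:

[oracle@racnode1 ~]$ dd if=/dev/zero of=/documents3/cow_test.dat bs=1M count=100
[root@racnode1 ~]# /sbin/acfsutil snap create snap_test /documents3
[root@racnode1 ~]# /sbin/acfsutil info fs /documents3 | grep -i snapshot    # snapshot space usage starts out small
[oracle@racnode1 ~]$ dd if=/dev/zero of=/documents3/cow_test.dat bs=1M count=100 conv=notrunc   # overwrite the same blocks
[root@racnode1 ~]# /sbin/acfsutil info fs /documents3 | grep -i snapshot    # usage grows as the original blocks are preserved
[root@racnode1 ~]# /sbin/acfsutil snap delete snap_test /documents3
[oracle@racnode1 ~]$ rm /documents3/cow_test.dat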
Oracle ACFS snapshots are administered with the acfsutil snap command. This section provides an overview of how to create Oracle ACFS snapshots and how to restore files from them.
Oracle ACFS Snapshot Location
Whenever you create an Oracle ACFS file system, a hidden sub-directory named .ACFS is created at the root of the file system. (Note that hidden files and directories in Linux start with a leading period.)
[oracle@racnode1 ~]$ ls -lFA /documents3
total 2851148
drwxr-xr-x 5 root   root           4096 Nov 26 17:57 .ACFS/
-rw-r--r-- 1 oracle oinstall 1239269270 Nov 27 16:02 linux.x64_11gR2_database_1of2.zip
-rw-r--r-- 1 oracle oinstall 1111416131 Nov 27 16:03 linux.x64_11gR2_database_2of2.zip
-rw-r--r-- 1 oracle oinstall  555366950 Nov 27 16:03 linux.x64_11gR2_examples.zip
drwx------ 2 root   root          65536 Nov 26 17:57 lost+found/
Inside .ACFS are the directories repl and snaps (along with a hidden .fileid directory). All Oracle ACFS snapshots are stored in the snaps directory.
[oracle@racnode1 ~]$ ls -lFA /documents3/.ACFS
total 12
drwx------ 2 root root 4096 Nov 26 17:57 .fileid/
drwx------ 6 root root 4096 Nov 26 17:57 repl/
drwxr-xr-x 2 root root 4096 Nov 27 15:53 snaps/
Since no Oracle ACFS snapshots exist, the snaps directory is empty.
[oracle@racnode1 ~]$ ls -lFA /documents3/.ACFS/snaps
total 0
Create Oracle ACFS Snapshot
Let's start by creating an Oracle ACFS snapshot named snap1 for the Oracle ACFS file system mounted on /documents3. This operation should be performed as root or the Oracle Grid Infrastructure owner:
[root@racnode1 ~]# /sbin/acfsutil snap create snap1 /documents3
acfsutil snap create: Snapshot operation is complete.
The new snapshot appears under /documents3/.ACFS/snaps/snap1 and presents a point-in-time copy of the file system:

[oracle@racnode1 ~]$ ls -lFA /documents3/.ACFS/snaps/snap1
total 2851084
drwxr-xr-x 5 root   root           4096 Nov 26 17:57 .ACFS/
-rw-r--r-- 1 oracle oinstall 1239269270 Nov 27 16:02 linux.x64_11gR2_database_1of2.zip
-rw-r--r-- 1 oracle oinstall 1111416131 Nov 27 16:03 linux.x64_11gR2_database_2of2.zip
-rw-r--r-- 1 oracle oinstall  555366950 Nov 27 16:03 linux.x64_11gR2_examples.zip
?--------- ? ?      ?                 ?            ?     lost+found

To see how a snapshot can be used to recover a file, remove one of the files from the live file system:

[oracle@racnode1 ~]$ rm /documents3/linux.x64_11gR2_examples.zip

The deleted file can then be restored by copying it back from the snapshot:

[oracle@racnode1 ~]$ cp /documents3/.ACFS/snaps/snap1/linux.x64_11gR2_examples.zip /documents3
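To confirm that the restored copy matches the version preserved in the snapshot, the two files could be compared with a checksum. A minimal sketch (md5sum is a standard Linux utility; the paths follow the example above):

[oracle@racnode1 ~]$ md5sum /documents3/linux.x64_11gR2_examples.zip \
                            /documents3/.ACFS/snaps/snap1/linux.x64_11gR2_examples.zip
# the two checksums should be identical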
The acfsutil info fs command reports the number of snapshots in the file system and the space they consume:

[oracle@racnode1 ~]$ /sbin/acfsutil info fs /documents3
/documents3
    ACFS Version: 11.2.0.1.0.0
    flags:        MountPoint,Available
    mount time:   Sat Nov 27 03:07:50 2010
    volumes:      1
    total size:   26843545600
    total free:   23191826432
    primary volume: /dev/asm/docsvol3-300
        label:           DOCSVOL3
        flags:           Primary,Available
        on-disk version: 39.0
        allocation unit: 4096
        major, minor:    252, 153603
        size:            26843545600
        free:            23191826432
    number of snapshots:  1
    snapshot space usage: 560463872
Delete Oracle ACFS Snapshot
[root@racnode1 ~]# /sbin/acfsutil snap delete snap1 /documents3
acfsutil snap delete: Snapshot operation is complete.
This should go without saying, but I'll say it anyway: DO NOT attempt the following on a production environment. The next example demonstrates how to recover when the Oracle ACFS file systems on a node are taken offline, for example because the Oracle ASM instance or the Oracle Clusterware stack went down while the file systems were still mounted.

First, flush any cached writes to disk:

[root@racnode1 ~]# sync
[root@racnode1 ~]# sync

While the file systems are offline, any subsequent attempt to access them on that node will result in an I/O error:
[oracle@racnode1 ~]$ ls -l /documents3
ls: /documents3: Input/output error

[oracle@racnode1 ~]$ df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                     145344992  22459396 115383364  17% /
/dev/sdb1            151351424    192072 143346948   1% /local
/dev/sda1               101086     12632     83235  14% /boot
tmpfs                  2019256         0   2019256   0% /dev/shm
df: `/documents1': Input/output error
df: `/documents2': Input/output error
df: `/documents3': Input/output error
domo:PUBLIC          4799457152 1901758592 2897698560  40% /domo
Unmount the offline file systems on the node. In this example, /documents3 cannot be unmounted because a process is still using the mount point:

[root@racnode1 ~]# umount /documents1
[root@racnode1 ~]# umount /documents2
[root@racnode1 ~]# umount /documents3
umount: /documents3: device is busy
umount: /documents3: device is busy
Use fuser to identify the process holding the mount point, kill it, and retry the unmount:

[root@racnode1 ~]# fuser /documents3
/documents3:         16263c
[root@racnode1 ~]# kill -9 16263
[root@racnode1 ~]# umount /documents3
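As an alternative to looking up and killing the PID by hand, fuser can kill every process accessing the mount point in a single step. A hedged shortcut; use with care, since -k sends SIGKILL to everything using the file system:

[root@racnode1 ~]# fuser -km /documents3
[root@racnode1 ~]# umount /documents3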
Stop and then restart the Oracle Clusterware stack on the node to bring Oracle ASM and the ADVM volumes back online:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster
[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster
Once the cluster stack is back up, the Oracle ACFS file systems registered in the Oracle ACFS mount registry are mounted automatically:

[root@racnode1 ~]# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sdb1 on /local type ext3 (rw)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
oracleasmfs on /dev/oracleasm type oracleasmfs (rw)
domo:PUBLIC on /domo type nfs (rw,addr=192.168.1.121)
/dev/asm/docsvol1-300 on /documents1 type acfs (rw)
/dev/asm/docsvol2-300 on /documents2 type acfs (rw)
/dev/asm/docsvol3-300 on /documents3 type acfs (rw)
To resize an Oracle ACFS file system, first verify that the underlying Oracle ASM disk group has enough free space. Query the ASM instance:

SQL> select name, total_mb, free_mb, round((free_mb/total_mb)*100,2) pct_free
  2  from v$asm_diskgroup
  3  where total_mb != 0
  4  order by name;

Disk Group        Total (MB)    Free (MB)  % Free
--------------- ------------ ------------ -------
CRS                    2,205        1,809   82.04
DOCSDG1               98,303       12,187   12.40
FRA                   33,887       22,795   67.27
RACDB_DATA            33,887       30,584   90.25
The same task can be accomplished using the ASMCMD command-line utility:

[grid@racnode1 ~]$ asmcmd lsdg
Use acfsutil size to grow the /documents3 file system (and its underlying ADVM volume) by 5 GB. The resize is performed online, while the file system remains mounted:

[root@racnode1 ~]# /sbin/acfsutil size +5G /documents3
acfsutil size: new file system size: 26843545600 (25600MB)
Verify the new size of the file system from all Oracle RAC nodes:
[root@racnode1 ~]# df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                     145344992  21952712 115890048  16% /
/dev/sdb1            151351424    192072 143346948   1% /local
/dev/sda1               101086     12632     83235  14% /boot
tmpfs                  2019256   1135852    883404  57% /dev/shm
domo:PUBLIC          4799457152 1901103872 2898353280  40% /domo
/dev/asm/docsvol1-300
                      33554432    197668  33356764   1% /documents1
/dev/asm/docsvol2-300
                      33554432    197668  33356764   1% /documents2
/dev/asm/docsvol3-300
                      26214400    183108  26031292   1% /documents3

[root@racnode2 ~]# df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                     145344992  13803084 124039676  11% /
/dev/sdb1            151351424    192072 143346948   1% /local
/dev/sda1               101086     12632     83235  14% /boot
tmpfs                  2019256   1135852    883404  57% /dev/shm
domo:Public          4799457152 1901103872 2898353280  40% /domo
/dev/asm/docsvol1-300
                      33554432    197668  33356764   1% /documents1
/dev/asm/docsvol2-300
                      33554432    197668  33356764   1% /documents2
/dev/asm/docsvol3-300
                      26214400    183108  26031292   1% /documents3
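The size argument to acfsutil size can also be an absolute value rather than a delta, and a negative delta can shrink the file system where the release supports it. A hedged sketch; the 30G target below is illustrative and assumes the disk group has sufficient free space:

[root@racnode1 ~]# /sbin/acfsutil size 30G /documents3    # set the file system to an absolute size of 30 GB
[root@racnode1 ~]# /sbin/acfsutil size -5G /documents3    # attempt to shrink by 5 GB (support varies by release)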
Load the Oracle ASM volume driver:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/acfsload start -s
Unload the Oracle ASM volume driver:
[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/acfsload stop
Check if Oracle ASM volume driver is loaded:
[root@racnode1 ~]# lsmod | grep oracle
oracleacfs            877320  4
oracleadvm            221760  8
oracleoks             276880  2 oracleacfs,oracleadvm
oracleasm              84136  1
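The Grid Infrastructure home also provides the acfsdriverstate utility for checking the ACFS/ADVM drivers; a hedged example (availability and output format vary by Grid Infrastructure version):

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/acfsdriverstate loaded
[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/acfsdriverstate version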
Create a new Oracle ASM dynamic volume using ASMCMD:

[grid@racnode1 ~]$ asmcmd volcreate -G docsdg1 -s 20G --redundancy unprotected docsvol3
Resize an existing Oracle ACFS file system (and its underlying ADVM volume):

[root@racnode1 ~]# /sbin/acfsutil size +5G /documents3
acfsutil size: new file system size: 26843545600 (25600MB)
Delete an Oracle ASM dynamic volume:

[grid@racnode1 ~]$ asmcmd voldelete -G docsdg1 docsvol3
List the ASM disk groups and their free space:

[grid@racnode1 ~]$ asmcmd lsdg
Create an Oracle ACFS file system on an ASM dynamic volume:

[grid@racnode1 ~]$ /sbin/mkfs -t acfs -b 4k /dev/asm/docsvol3-300 -n "DOCSVOL3"
mkfs.acfs: version         = 11.2.0.1.0.0
mkfs.acfs: on-disk version = 39.0
mkfs.acfs: volume          = /dev/asm/docsvol3-300
mkfs.acfs: volume size     = 21474836480
mkfs.acfs: Format complete.
Get detailed file system information:
[root@racnode1 ~]# /sbin/acfsutil info fs
/documents1
    ACFS Version: 11.2.0.1.0.0
    flags:        MountPoint,Available
    mount time:   Fri Nov 26 18:38:48 2010
    volumes:      1
    total size:   34359738368
    total free:   34157326336
    primary volume: /dev/asm/docsvol1-300
        label:
        flags:                 Primary,Available,ADVM
        on-disk version:       39.0
        allocation unit:       4096
        major, minor:          252, 153601
        size:                  34359738368
        free:                  34157326336
        ADVM diskgroup         DOCSDG1
        ADVM resize increment: 268435456
        ADVM redundancy:       unprotected
        ADVM stripe columns:   4
        ADVM stripe width:     131072
    number of snapshots:  0
    snapshot space usage: 0
/documents2
    ACFS Version: 11.2.0.1.0.0
    flags:        MountPoint,Available
    mount time:   Fri Nov 26 18:38:48 2010
    volumes:      1
    total size:   34359738368
    total free:   34157326336
    primary volume: /dev/asm/docsvol2-300
        label:
        flags:                 Primary,Available,ADVM
        on-disk version:       39.0
        allocation unit:       4096
        major, minor:          252, 153602
        size:                  34359738368
        free:                  34157326336
        ADVM diskgroup         DOCSDG1
        ADVM resize increment: 268435456
        ADVM redundancy:       unprotected
        ADVM stripe columns:   4
        ADVM stripe width:     131072
    number of snapshots:  0
    snapshot space usage: 0
/documents3
    ACFS Version: 11.2.0.1.0.0
    flags:        MountPoint,Available
    mount time:   Fri Nov 26 18:38:48 2010
    volumes:      1
    total size:   26843545600
    total free:   26656043008
    primary volume: /dev/asm/docsvol3-300
        label:                 DOCSVOL3
        flags:                 Primary,Available,ADVM
        on-disk version:       39.0
        allocation unit:       4096
        major, minor:          252, 153603
        size:                  26843545600
        free:                  26656043008
        ADVM diskgroup         DOCSDG1
        ADVM resize increment: 268435456
        ADVM redundancy:       unprotected
        ADVM stripe columns:   4
        ADVM stripe width:     131072
    number of snapshots:  0
    snapshot space usage: 0
Display information about all ASM dynamic volumes:

[grid@racnode1 ~]$ asmcmd volinfo -a
Diskgroup Name: DOCSDG1

         Volume Name: DOCSVOL1
         Volume Device: /dev/asm/docsvol1-300
         State: ENABLED
         Size (MB): 32768
         Resize Unit (MB): 256
         Redundancy: UNPROT
         Stripe Columns: 4
         Stripe Width (K): 128
         Usage: ACFS
         Mountpath: /documents1

         Volume Name: DOCSVOL2
         Volume Device: /dev/asm/docsvol2-300
         State: ENABLED
         Size (MB): 32768
         Resize Unit (MB): 256
         Redundancy: UNPROT
         Stripe Columns: 4
         Stripe Width (K): 128
         Usage: ACFS
         Mountpath: /documents2

         Volume Name: DOCSVOL3
         Volume Device: /dev/asm/docsvol3-300
         State: ENABLED
         Size (MB): 25600
         Resize Unit (MB): 256
         Redundancy: UNPROT
         Stripe Columns: 4
         Stripe Width (K): 128
         Usage: ACFS
         Mountpath: /documents3
Display I/O statistics for the ASM dynamic volumes:

[grid@racnode1 ~]$ asmcmd volstat

DISKGROUP NUMBER / NAME: 2 / DOCSDG1
VOLUME_NAME    READS   BYTES_READ   READ_TIME   READ_ERRS   WRITES   BYTES_WRITTEN   WRITE_TIME   WRITE_ERRS
DOCSVOL1         517       408576        1618           0    17007        69280768        63456            0
DOCSVOL2         512       406016        2547           0    17007        69280768        66147            0
DOCSVOL3       13961     54525952      172007           0    10956        54410240        41749            0
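The statistics can also be limited to a single disk group; for example (an illustrative sketch using the disk group from this article):

[grid@racnode1 ~]$ asmcmd volstat -G docsdg1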
Enable an Oracle ASM dynamic volume:

[grid@racnode1 ~]$ asmcmd volenable -G docsdg1 docsvol3
Disable an Oracle ASM dynamic volume. The file system must first be unmounted on all Oracle RAC nodes:

[root@racnode1 ~]# umount /documents3
[root@racnode2 ~]# umount /documents3

[grid@racnode1 ~]$ asmcmd voldisable -G docsdg1 docsvol3
Mount Commands
Mount single Oracle ACFS volume on the local node:
[root@racnode1 ~]# /bin/mount -t acfs /dev/asm/docsvol3-300 /documents3
Unmount single Oracle ACFS volume on the local node:
[root@racnode1 ~]# umount /documents3
Mount all Oracle ACFS volumes on the local node using the metadata found in the Oracle ACFS mount registry:
[root@racnode1 ~]# /sbin/mount.acfs -o all
Unmount all Oracle ACFS volumes on the local node using the metadata found in the Oracle ACFS mount registry:
[root@racnode1 ~]# /bin/umount -t acfs -a
Oracle ACFS Mount Registry
Register new mount point in the Oracle ACFS mount registry:
[root@racnode1 ~]# /sbin/acfsutil registry -f -a /dev/asm/docsvol3-300 /documents3
acfsutil registry: mount point /documents3 successfully added to Oracle Registry
Query the Oracle ACFS mount registry:
[root@racnode1 ~]# /sbin/acfsutil registry
Mount Object:
  Device: /dev/asm/docsvol1-300
  Mount Point: /documents1
  Disk Group: DOCSDG1
  Volume: DOCSVOL1
  Options: none
  Nodes: all
Mount Object:
  Device: /dev/asm/docsvol2-300
  Mount Point: /documents2
  Disk Group: DOCSDG1
  Volume: DOCSVOL2
  Options: none
  Nodes: all
Mount Object:
  Device: /dev/asm/docsvol3-300
  Mount Point: /documents3
  Disk Group: DOCSDG1
  Volume: DOCSVOL3
  Options: none
  Nodes: all
Unregister volume and mount point from the Oracle ACFS mount registry:
[root@racnode1 ~]# acfsutil registry -d /documents3
acfsutil registry: successfully removed ACFS mount point /documents3 from Oracle Registry
Oracle ACFS Snapshots
Use the 'acfsutil snap create' command to create an Oracle ACFS snapshot named snap1 for an Oracle ACFS file system mounted on /documents3:
[root@racnode1 ~]# /sbin/acfsutil snap create snap1 /documents3
acfsutil snap create: Snapshot operation is complete.
Use the 'acfsutil snap delete' command to delete an existing Oracle ACFS snapshot:
[root@racnode1 ~]# /sbin/acfsutil snap delete snap1 /documents3
acfsutil snap delete: Snapshot operation is complete.
Oracle ASM / ACFS Dynamic Views
This section describes the dynamic performance views that display information about Oracle Automatic Storage Management (Oracle ASM), Oracle Automatic Storage Management Cluster File System (Oracle ACFS), and Oracle ASM Dynamic Volume Manager (Oracle ADVM). These views are accessible from the Oracle ASM instance.
Oracle Automatic Storage Management (Oracle ASM) View | Description
V$ASM_ALIAS | Contains one row for every alias present in every disk group mounted by the Oracle ASM instance.
V$ASM_ATTRIBUTE | Displays one row for each attribute defined. In addition to attributes specified by CREATE DISKGROUP and ALTER DISKGROUP statements, the view may show other attributes that are created automatically. Attributes are only displayed for disk groups where COMPATIBLE.ASM is set to 11.1 or higher.
V$ASM_CLIENT | In an Oracle ASM instance, identifies databases using disk groups managed by the Oracle ASM instance. In a DB instance, contains information about the Oracle ASM instance if the database has any open Oracle ASM files.
V$ASM_DISK | Contains one row for every disk discovered by the Oracle ASM instance, including disks that are not part of any disk group. This view performs disk discovery every time it is queried.
V$ASM_DISK_IOSTAT | Displays information about disk I/O statistics for each Oracle ASM client. In a DB instance, only the rows for that instance are shown.
V$ASM_DISK_STAT | Contains the same columns as V$ASM_DISK, but to reduce overhead, does not perform a discovery when it is queried. It only returns information about any disks that are part of mounted disk groups in the storage system. To see all disks, use V$ASM_DISK instead.
V$ASM_DISKGROUP | Describes a disk group (number, name, size-related information, state, and redundancy type). This view performs disk discovery every time it is queried.
V$ASM_DISKGROUP_STAT | Contains the same columns as V$ASM_DISKGROUP, but to reduce overhead, does not perform a discovery when it is queried. It does not return information about any disk groups that are not mounted. To see all disk groups, use V$ASM_DISKGROUP instead.
V$ASM_FILE | Contains one row for every Oracle ASM file in every disk group mounted by the Oracle ASM instance.
V$ASM_OPERATION | In an Oracle ASM instance, contains one row for every active Oracle ASM long-running operation executing in the Oracle ASM instance. In a DB instance, contains no rows.
V$ASM_TEMPLATE | Contains one row for every template present in every disk group mounted by the Oracle ASM instance.
V$ASM_USER | Contains the effective operating system user names of connected database instances and names of file owners.
V$ASM_USERGROUP | Contains the creator for each Oracle ASM File Access Control group.
V$ASM_USERGROUP_MEMBER | Contains the members for each Oracle ASM File Access Control group.
Oracle ACFS and Oracle ADVM View | Description
V$ASM_ACFSSNAPSHOTS | Contains snapshot information for every mounted Oracle ACFS file system.
V$ASM_ACFSVOLUMES | Contains information about mounted Oracle ACFS volumes, correlated with V$ASM_FILESYSTEM.
V$ASM_FILESYSTEM | Contains columns that display information for every mounted Oracle ACFS file system.
V$ASM_VOLUME | Contains information about each Oracle ADVM volume that is a member of an Oracle ASM instance.
V$ASM_VOLUME_STAT | Contains information about statistics for each Oracle ADVM volume.
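For example, the ADVM volume details reported earlier by asmcmd volinfo can also be queried directly from the Oracle ASM instance. A minimal sketch against V$ASM_VOLUME (column names taken from the 11.2 documentation; they may vary by release):

SQL> select volume_name, volume_device, mountpath, usage, size_mb, state
  2  from v$asm_volume
  3  order by volume_name;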
Use fsck to Check and Repair the Cluster File System
Use the regular Linux fsck command to check and repair the Oracle ACFS. This only needs to be performed from one of the Oracle RAC nodes:
[root@racnode1 ~]# /sbin/fsck -t acfs /dev/asm/docsvol3-300
fsck 1.39 (29-May-2006)
fsck.acfs: version = 11.2.0.1.0.0
fsck.acfs: ACFS-00511: /dev/asm/docsvol3-300 is mounted on at least one node of the cluster.
fsck.acfs: ACFS-07656: Unable to continue
The fsck operation cannot be performed while the file system is online. Unmount the cluster file system from all Oracle RAC nodes:
[root@racnode1 ~]# umount /documents3
[root@racnode2 ~]# umount /documents3
Now check the cluster file system with the file system unmounted:
[root@racnode1 ~]# /sbin/fsck -t acfs /dev/asm/docsvol3-300
fsck 1.39 (29-May-2006)
fsck.acfs: version = 11.2.0.1.0.0
Oracle ASM Cluster File System (ACFS) On-Disk Structure Version: 39.0
* Pass 1: *
The ACFS volume was created at Fri Nov 26 17:20:27 2010
Checking primary file system...
Files checked in primary file system: 100%
Checking if any files are orphaned...
0 orphans found
fsck.acfs: Checker completed with no errors.
Remount the cluster file system on all Oracle RAC nodes:
[root@racnode1 ~]# /bin/mount -t acfs /dev/asm/docsvol3-300 /documents3
[root@racnode2 ~]# /bin/mount -t acfs /dev/asm/docsvol3-300 /documents3
Drop ACFS / ASM Volume
Unmount the cluster file system from all Oracle RAC nodes:
[root@racnode1 ~]# umount /documents3
[root@racnode2 ~]# umount /documents3
Log in to the ASM instance and drop the ASM dynamic volume from one of the Oracle RAC nodes:
[grid@racnode1 ~]$ sqlplus / as sysasm

SQL> ALTER DISKGROUP docsdg1 DROP VOLUME docsvol3;

Diskgroup altered.
The same task can be accomplished using the ASMCMD command-line utility:

[grid@racnode1 ~]$ asmcmd voldelete -G docsdg1 docsvol3
Remove the volume and mount point from the Oracle ACFS mount registry:

[root@racnode1 ~]# acfsutil registry -d /documents3
acfsutil registry: successfully removed ACFS mount point /documents3 from Oracle Registry
Finally, remove the mount point directory from all Oracle RAC nodes (if necessary):
[root@racnode1 ~]# rmdir /documents3
[root@racnode2 ~]# rmdir /documents3
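To confirm the volume has been removed, re-run the volume listing shown earlier as the Grid Infrastructure owner; DOCSVOL3 should no longer appear:

[grid@racnode1 ~]$ asmcmd volinfo -a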