gfs_mount

NAME

gfs_mount - GFS mount options

SYNOPSIS

mount [StandardMountOptions] -t gfs DEVICE MOUNTPOINT -o [GFSOption1,GFSOption2,GFSOptionX...]

DESCRIPTION

GFS may be used as a local (single computer) filesystem, but its real purpose is in clusters, where multiple computers (nodes) share a common storage device.

Above is the format typically used to mount a GFS filesystem, using the mount(8) command. The device may be any block device on which you have created a GFS filesystem. Examples include a single disk partition (e.g. /dev/sdb3), a loopback device, a device exported from another node (e.g. an iSCSI device or a gnbd(8) device), or a logical volume (typically composed of a number of individual disks).

device does not necessarily need to match the device name as seen on another node in the cluster, nor does it need to be a logical volume. However, the use of a cluster-aware volume manager such as CLVM2 (see lvm(8)) will guarantee that the managed devices are named identically on each node in a cluster (for much easier management), and will allow you to configure a very large volume from multiple storage units (e.g. disk drives).

device must make the entire filesystem storage area visible to the computer. That is, you cannot mount different parts of a single filesystem on different computers. Each computer must see an entire filesystem. You may, however, mount several GFS filesystems if you want to distribute your data storage in a controllable way.

mountpoint is the same as dir in the mount(8) man page.

This man page describes GFS-specific options that can be passed to the GFS file system at mount time, using the -o flag. There are many other -o options handled by the generic mount command mount(8). However, the options described below are specifically for GFS, and are not interpreted by the mount command nor by the kernel’s Virtual File System. GFS and non-GFS options may be intermingled after the -o, separated by commas (but no spaces).
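
For example, the following command intermingles the generic noatime option with the GFS-specific acl option (described below); the device path and mount point are illustrative:

# mount -t gfs /dev/vg01/lvol0 /mnt/gfs1 -o noatime,acl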

As an alternative to mount command line options, you may send mount options to gfs using "gfs_tool margs" (after loading the gfs kernel module, but before mounting GFS). For example, you may need to do this when working from an initial ramdisk (see initrd(4)). The options are restricted to the ones described on this man page (no general mount(8) options will be recognized), must not be preceded by -o, and must be separated by commas (no spaces). Example:

# gfs_tool margs "lockproto=lock_nolock,ignore_local_fs"

Options loaded via "gfs_tool margs" have a lifetime of only one GFS mount. If you wish to mount another GFS filesystem, you must set another group of options with "gfs_tool margs".

If you have trouble mounting GFS, check the syslog (e.g. /var/log/messages) for specific error messages.

OPTIONS

lockproto=LockModuleName

This specifies which inter-node lock protocol is used by the GFS filesystem for this mount, overriding the default lock protocol name stored in the filesystem’s on-disk superblock.

The LockModuleName must be an exact match of the protocol name presented by the lock module when it registers with the lock harness. Traditionally, this matches the .o filename of the lock module, e.g. lock_dlm or lock_nolock.

The default lock protocol name is written to disk when the filesystem is created with the -p option of gfs_mkfs(8). It can be changed on-disk using the gfs_tool(8) utility’s sb proto command.
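
For example, the on-disk default could be changed with a command along these lines (the device path is illustrative; see gfs_tool(8) for exact usage):

# gfs_tool sb /dev/vg01/lvol0 proto lock_dlm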

The lockproto mount option should be used only under special circumstances in which you want to temporarily use a different lock protocol without changing the on-disk default.
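
For example, to temporarily use the lock_nolock protocol regardless of the on-disk default (the device path and mount point are illustrative):

# mount -t gfs /dev/vg01/lvol0 /mnt/gfs1 -o lockproto=lock_nolock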

locktable=LockTableName

This specifies the identity of the cluster and of the filesystem for this mount, overriding the default cluster/filesystem identity stored in the filesystem’s on-disk superblock. The cluster/filesystem name is recognized globally throughout the cluster, and establishes a unique namespace for the inter-node locking system, enabling the mounting of multiple GFS filesystems.

The format of LockTableName is lock-module-specific. For lock_dlm, the format is clustername:fsname. For lock_nolock, the field is ignored.

The default cluster/filesystem name is written to disk when the filesystem is created with the -t option of gfs_mkfs(8). It can be changed on-disk using the gfs_tool(8) utility’s sb table command.
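
Similarly, the on-disk cluster/filesystem name could be changed with something like the following (the device path and names are illustrative; see gfs_tool(8) for exact usage):

# gfs_tool sb /dev/vg01/lvol0 table mycluster:gfs1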

The locktable mount option should be used only under special circumstances in which you want to mount the filesystem in a different cluster, or mount it as a different filesystem name, without changing the on-disk default.
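
For example, with lock_dlm the filesystem could be mounted under a different cluster name and filesystem name without rewriting the superblock (the names, device path, and mount point are illustrative):

# mount -t gfs /dev/vg01/lvol0 /mnt/gfs1 -o locktable=mycluster:gfs1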

localcaching

This flag tells GFS that it is running as a local (not clustered) filesystem, so it can turn on some block caching optimizations that can’t be used when running in cluster mode.

This is turned on automatically by the lock_nolock module, but can be overridden by using the ignore_local_fs option.

localflocks

This flag tells GFS that it is running as a local (not clustered) filesystem, so it can allow the kernel VFS layer to do all flock and fcntl file locking. When running in cluster mode, these file locks require inter-node locks, and require the support of GFS. When running locally, better performance is achieved by letting VFS handle the whole job.

This is turned on automatically by the lock_nolock module, but can be overridden by using the ignore_local_fs option.

oopses_ok

Normally, GFS automatically turns on the "kernel.panic_on_oops" sysctl to cause the machine to panic if an oops (an in-kernel segfault or GFS assertion failure) happens. An oops on one machine of a cluster filesystem can cause the filesystem to stall on all machines in the cluster. (Panics don’t have this "feature".) By turning on "panic_on_oops", GFS tries to make sure the cluster remains in operation even if one machine has a problem. There are cases, however, where this behavior is not desirable -- debugging being the main one. The oopses_ok option causes GFS to leave the "panic_on_oops" variable alone so oopses can happen. Use this option with care.

This is turned on automatically by the lock_nolock module, but can be overridden by using the ignore_local_fs option.
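
For example, when debugging GFS on a test node you might mount with oopses_ok and then check that the sysctl has been left alone (the device path and mount point are illustrative):

# mount -t gfs /dev/vg01/lvol0 /mnt/gfs1 -o oopses_ok
# sysctl kernel.panic_on_oops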

ignore_local_fs

By default, using the nolock lock module automatically turns on the localcaching and localflocks optimizations. ignore_local_fs forces GFS to treat the filesystem as if it were a multihost (clustered) filesystem, with localcaching and localflocks optimizations turned off.
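
For example, a lock_nolock mount can be forced to behave as if it were clustered, matching the gfs_tool margs example above (the device path and mount point are illustrative):

# mount -t gfs /dev/sdb3 /mnt/gfs1 -o lockproto=lock_nolock,ignore_local_fs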

upgrade

This flag tells GFS to upgrade the filesystem’s on-disk format to the version supported by the current GFS software installation on this computer. If you try to mount an old-version disk image, GFS will notify you via a syslog message that you need to upgrade. Try mounting again, using the -o upgrade option. When upgrading, only one node may mount the GFS filesystem.
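
For example, after a failed mount of an old-format filesystem reports the need to upgrade in the syslog, retry from a single node (the device path and mount point are illustrative):

# mount -t gfs /dev/vg01/lvol0 /mnt/gfs1 -o upgrade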

num_glockd=Number

Tunes GFS to alleviate memory pressure when rapidly acquiring many locks (e.g. several processes scanning through huge directory trees). GFS’ glockd kernel daemon cleans up memory for no-longer-needed glocks. Multiple instances of the daemon clean up faster than a single instance. The default value is one daemon, with a maximum of 32. Since this option was introduced, other methods of rapid cleanup have been developed within GFS, so this option may go away in the future.
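
For example, to run 16 glockd daemons for a workload that rapidly acquires many locks (the value, device path, and mount point are illustrative):

# mount -t gfs /dev/vg01/lvol0 /mnt/gfs1 -o num_glockd=16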

acl

Enables POSIX Access Control List acl(5) support within GFS.
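
For example, mounting with acl allows tools such as setfacl(1) and getfacl(1) to operate on the filesystem (the paths and user name are illustrative):

# mount -t gfs /dev/vg01/lvol0 /mnt/gfs1 -o acl
# setfacl -m u:alice:rw /mnt/gfs1/shared/file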

spectator

Mount this filesystem using a special form of read-only mount. The mount does not use one of the filesystem’s journals.
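
For example, to mount read-only without using one of the filesystem’s journals (the device path and mount point are illustrative):

# mount -t gfs /dev/vg01/lvol0 /mnt/gfs1 -o spectator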

suiddir

Sets the owner of any newly created file or directory to that of the parent directory, if the parent directory has the S_ISUID permission bit set. Sets S_ISUID on any new directory if its parent directory’s S_ISUID is set. Strips all execution bits from a new file if the owner of the parent directory differs from the owner of the process creating the file. Set this option only if you know why you are setting it.
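
For example (the device path and mount point are illustrative):

# mount -t gfs /dev/vg01/lvol0 /mnt/gfs1 -o suiddir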

LINKS

http://sources.redhat.com/cluster

-- home site of GFS

http://www.suse.de/~agruen/acl/linux-acls/

-- good writeup on ACL support in Linux

SEE ALSO

gfs(8), mount(8) for general mount options, chmod(1) and chmod(2) for access permission flags, acl(5) for access control lists, lvm(8) for volume management, ccs(7) for cluster management, umount(8), initrd(4).
