Storm GPFS Install
This page documents the experience of installing GPFS (v3.3.0-1) with StoRM at Edinburgh. You should also consult the IBM GPFS documentation. Instructions for installing StoRM can be found on the Storm_Install page. The exact parameters used have not been tested under high load, so they should not necessarily be considered optimal; further testing will be done.
Installing GPFS
Install RPMS
Start by installing the base release and then any updates on top of that.
Run the install script and accept the licence terms. This requires an X connection unless --text-only is used:
./gpfs_install-3.3.0-0_x86_64 --text-only
Install the extracted RPMs:
cd /usr/lpp/mmfs/3.3/
rpm -Uvh gpfs.docs-3.3.0-0.noarch.rpm
rpm -Uvh gpfs.gpl-3.3.0-0.noarch.rpm
rpm -Uvh gpfs.gui-3.3.0-0.x86_64.rpm
rpm -Uvh gpfs.msg.en_US-3.3.0-0.noarch.rpm
If there are update RPMs, install those on top:
cd /root/3.3.0-1
rpm -Uvh gpfs*rpm
cd /usr/lpp/mmfs/
If necessary, change /etc/redhat-release to make sure it claims to be Red Hat Enterprise Linux rather than Scientific Linux, since the GPFS portability-layer build checks this file.
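For example (the release string below is illustrative; use the one matching the RHEL release your SL version tracks):

cp /etc/redhat-release /etc/redhat-release.orig   # keep a copy of the original
echo "Red Hat Enterprise Linux Server release 5.4 (Tikanga)" > /etc/redhat-release   # example string only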
Build source
cd /usr/lpp/mmfs/src
make Autoconfig
make World
make InstallImages
Create GPFS Cluster
Ensure the disk servers can access each other as root without a password (via ssh or whichever remote shell is selected) and that they can reach each other on all relevant ports.
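As a minimal sketch of setting up passwordless root ssh between the pool nodes (assuming ssh is the chosen remote shell, as below):

# generate a key with no passphrase if root does not already have one
ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
# copy the public key to the other node; repeat in each direction for every node
ssh-copy-id root@pool2.glite.ecdf.ed.ac.uk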
Add the mmfs command directory to the PATH
export PATH=$PATH:/usr/lpp/mmfs/bin
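To make this persist across logins, one option is a profile.d snippet (the file name gpfs.sh is arbitrary):

echo 'export PATH=$PATH:/usr/lpp/mmfs/bin' > /etc/profile.d/gpfs.sh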
Create a GPFS cluster
mmcrcluster -N pool1.glite.ecdf.ed.ac.uk:manager-quorum,pool2.glite.ecdf.ed.ac.uk:manager-quorum \
 -p pool1.glite.ecdf.ed.ac.uk -s pool2.glite.ecdf.ed.ac.uk -r /usr/bin/ssh -R /usr/bin/scp -C gridpp.ecdf.ed.ac.uk
-p and -s indicate the primary and secondary configuration servers
-r and -R give the remote shell and remote copy commands
-C gives the cluster name (it does not need to resolve in DNS)
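To confirm the cluster definition (member nodes, primary and secondary servers), run:

mmlscluster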
Declare that you have a server licence for the nodes you will be using
mmchlicense server -N pool1,pool2
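The licence designations can be checked with mmlslicense:

mmlslicense -L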
Start up the cluster
mmstartup -a
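Check that the quorum nodes come up in the active state:

mmgetstate -a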
For each disk, make a descriptor file (here sdk.dsc) with a line of the form
DiskName:ServerList::DiskUsage:FailureGroup:DesiredName:StoragePool
e.g.
sdk:pool1.glite.ecdf.ed.ac.uk::dataAndMetadata:7:cab7vd03vol2
The failure group (a number from -1 to 4000) exists so that when GPFS creates replicas it can avoid putting them on devices in the same failure group.
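For illustration, a descriptor file for two disks served by different nodes and placed in separate failure groups might look like this (the second device, its failure group and its desired name are hypothetical):

sdk:pool1.glite.ecdf.ed.ac.uk::dataAndMetadata:7:cab7vd03vol2
sdl:pool2.glite.ecdf.ed.ac.uk::dataAndMetadata:8:cab7vd03vol3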
Then create the NSDs from the descriptor file
mmcrnsd -F sdk.dsc
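mmlsnsd lists the NSDs just created, with their server lists:

mmlsnsd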
Add other servers as needed
mmchnsd cab7vd03vol2:pool1.glite.ecdf.ed.ac.uk,pool2.glite.ecdf.ed.ac.uk
Create GPFS filesystem
mmcrfs fs_test cab7vd03vol2:::dataAndMetadata:7:::: -A no -B 512K -D posix -E no -j cluster -k all -K whenpossible -m 1 -M 2 -n 256 -T /exports/fs_test
fs_test is the name of the filesystem. cab7vd03vol2:::dataAndMetadata:7:::: could be replaced with -F and the descriptor file above, which mmcrnsd will have rewritten to contain this information. The remaining options:
-A no : don't mount automatically
-B 512K : block size. Bob Cregan at Bristol recommends making this much larger, say 2M; this should be further tested.
-D posix : unless you want to support NFSv4
-E no : don't always update mtime values
-j cluster : after round-robining across the disks in a filesystem, GPFS can "cluster" (try to keep adjacent blocks together) or scatter blocks randomly; cluster was chosen for now
-k all : support a range of ACLs
-K whenpossible : whether replication is enforced
-m and -M : minimum and maximum numbers of metadata replicas (ideally set to more than one to assist in recoveries)
-n 256 : the number of nodes accessing the filesystem
-T /exports/fs_test : the mountpoint
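To review the attributes that were actually applied, list them with mmlsfs:

mmlsfs fs_test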
Phew. Now mount the filesystems
mmmount all
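To verify, mmlsmount shows which nodes have the filesystem mounted, and df confirms the local mount:

mmlsmount all -L
df -h /exports/fs_test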
Adding GPFS to StoRM
Mount GPFS space onto StoRM node
Follow the steps above to install GPFS on the StoRM node.
On one of the existing nodes in the GPFS cluster do
mmaddnode se2.glite.ecdf.ed.ac.uk
On the StoRM node do
mmchlicense server -N se2
Check the state
mmgetstate -a
Start up GPFS on the new node
mmstartup -N se2
Mount the filesystem.
mmmount fs_test
Configure StoRM to use GPFS drivers
Add the settings either through YAIM or directly in the namespace.xml file.
For YAIM set
STORM_ATLAS_FSTAB=gpfs
STORM_ATLAS_FSDRIVER=gpfs
STORM_ATLAS_SPCDRIVER=gpfs
for each VO, then rerun YAIM.
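The same three variables are set for each supported VO, following the STORM_<VO>_ naming pattern; for example, for a hypothetical second VO:

STORM_LHCB_FSTAB=gpfs
STORM_LHCB_FSDRIVER=gpfs
STORM_LHCB_SPCDRIVER=gpfs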
Testing
Copy a file in with lcg-cp
lcg-cp -D srmv2 -b -v --vo atlas file:/phys/linux/wbhimji/bob srm://se2.glite.ecdf.ed.ac.uk:8444/srm/managerv2?SFN=/atlas/brian2
Watch it magically appear in your GPFS space.
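To confirm, list the file on the GPFS mount. The exact path under the mountpoint depends on how the namespace maps the SFN for the VO; the path below is a guess based on the SFN used above:

ls -l /exports/fs_test/atlas/brian2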