Glasgow SRM Setup
This page is intended as a template for sites to describe their SRM setup in detail. Sites can of course adapt the format to suit their specific needs, but please do not edit this template page itself! The questions posed below are intended to give sites an indication of the sort of information that should go here. Suggestions for additions are welcome and should be sent to Greig.
SRM Endpoint
SRM v1.1.0: srm://svr018.gla.scotgrid.ac.uk:8443/
SRM v2.2.0: srm://svr018.gla.scotgrid.ac.uk:8446/
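The two endpoints share a host and differ only in port (8443 for SRM v1, 8446 for SRM v2.2). A minimal sketch of splitting an SRM endpoint URL into host and port, using only Python's standard library (the function name is illustrative, not part of any SRM tooling):

```python
from urllib.parse import urlparse

def srm_host_port(url):
    """Split an srm:// endpoint URL into (host, port)."""
    parsed = urlparse(url)
    return parsed.hostname, parsed.port

# The two Glasgow endpoints from this page:
v1 = srm_host_port("srm://svr018.gla.scotgrid.ac.uk:8443/")
v2 = srm_host_port("srm://svr018.gla.scotgrid.ac.uk:8446/")
```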
Which VOs are supported?
alice atlas babar biomed camont cdf cms dteam dzero ilc lhcb ngs ops pheno sixt supernemo.vo.eu-egee.org totalep zeus
Admin node(s)
The DPM headnode is svr018.gla.scotgrid.ac.uk. It runs the DPM, DPNS and SRM daemons (v1 and v2). There is only a small filesystem on this host, for testing purposes; it is usually set to read-only.
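For reference, the daemons above each listen on their own port. Only the two SRM ports are stated on this page; the DPM and DPNS ports below are the stock DPM defaults and are assumptions that should be checked against the local configuration. A small sketch, in Python for illustration:

```python
# Services on the DPM headnode and the ports they listen on.
# SRM ports are from this page; the DPM and DPNS ports are the
# stock DPM defaults (verify against your local setup).
HEADNODE = "svr018.gla.scotgrid.ac.uk"

SERVICES = {
    "srmv1": 8443,  # SRM v1.1 endpoint (stated above)
    "srmv2": 8446,  # SRM v2.2 endpoint (stated above)
    "dpm": 5015,    # DPM daemon (assumed default port)
    "dpns": 5010,   # DPNS name server (assumed default port)
}

def endpoint(service):
    """Return host:port for a headnode service."""
    return f"{HEADNODE}:{SERVICES[service]}"
```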
Disk Server(s)
There are nine grid disk servers, disk032-disk041 (excluding disk037, which is used for "local" cluster disk).
Hardware: 22 x 500 GB Hitachi SATA drives connected to an Areca PCI-X card.
The disk servers are set up as RAID 6 with one hot spare, giving ~9.5 TB of usable space each.
To support the Areca card, Scientific Linux CERN (SLC) is used (currently SLC4X, i386).
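The usable-capacity figure above is straightforward arithmetic: 22 drives, less two drives' worth of RAID 6 parity and one hot spare, leaves 19 data drives of 500 GB each. A quick sketch (the helper function is illustrative only):

```python
def raid6_usable_tb(n_drives, drive_gb, hot_spares=1):
    """Usable capacity of a RAID 6 array in TB: two drives' worth
    of parity plus any hot spares are subtracted from the total."""
    data_drives = n_drives - 2 - hot_spares
    return data_drives * drive_gb / 1000.0

# 22 x 500 GB drives per disk server, RAID 6 with 1 hot spare:
usable = raid6_usable_tb(22, 500)  # ~9.5 TB, as stated above
```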
Optimisation
The filesystems are formatted ext2. We will move to xfs once x86_64 is supported by DPM (see Performance and Tuning). Having no data filesystems on the headnode improves transfer reliability.
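For illustration, a hypothetical /etc/fstab entry for a data filesystem on a disk server; the device name and mount point are assumptions, not taken from the Glasgow setup:

```
# Current layout (ext2); device and mount point are examples only.
/dev/sdb1  /gridstore01  ext2  defaults,noatime  0 2
# After the planned move to xfs, the same entry would become:
# /dev/sdb1  /gridstore01  xfs  defaults,noatime  0 2
```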
Additional Information
DPM has been a fairly trouble-free experience.