Site Notes
Contents
- 1 RAL-LCG2
- 2 UKI-LT2-Brunel
- 3 UKI-LT2-IC-HEP
- 4 UKI-LT2-QMUL
- 5 UKI-LT2-RHUL
- 6 UKI-LT2-UCL-HEP
- 7 UKI-NORTHGRID-LANCS-HEP
- 8 UKI-NORTHGRID-LIV-HEP
- 9 UKI-NORTHGRID-MAN-HEP
- 10 UKI-NORTHGRID-SHEF-HEP
- 11 UKI-SCOTGRID-DURHAM
- 12 UKI-SCOTGRID-ECDF
- 13 UKI-SCOTGRID-GLASGOW
- 14 UKI-SOUTHGRID-BHAM-HEP
- 15 UKI-SOUTHGRID-BRIS-HEP
- 16 UKI-SOUTHGRID-CAM-HEP
- 17 UKI-SOUTHGRID-OX-HEP
- 18 UKI-SOUTHGRID-RALPP
RAL-LCG2
Current status of CVMFS: Production
VOs currently using CVMFS at site: ATLAS, LHCb, ALICE
Hardware setup: Two caching squids, lcg0617 and lcg0679.
Blocking issues: Waiting for a new version of CVMFS before switching the namespace. Conditions data cannot be used until this change has been made.
Internal documentation of setup can be found here.
UKI-LT2-Brunel
UKI-LT2-IC-HEP
UKI-LT2-QMUL
CVMFS atlas.cern.ch and atlas-condb are installed on all worker nodes. An upgrade to CVMFS 2 is planned in the near future, after which the environment variables will be changed.
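As a rough illustration of what that environment-variable change usually amounts to (not QMUL's actual configuration), the YAIM/site-info.def setting below points the ATLAS software area at the conventional CVMFS mount for atlas.cern.ch; the conditions repository is normally mounted alongside it at /cvmfs/atlas-condb.cern.ch.
  # Illustrative only: repoint the ATLAS software area at the CVMFS mount
  # (conventional path for the atlas.cern.ch repository)
  VO_ATLAS_SW_DIR="/cvmfs/atlas.cern.ch/repo/sw"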
UKI-LT2-RHUL
So far we have only experimented with CVMFS on Tier 3 resources. We would like to install it on the WNs, but are busy with higher-priority issues at the moment. The main issue so far has been re-partitioning the local disk to provide some protected space for the cache.
(Simon)
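For reference, the cache location and size are set in /etc/cvmfs/default.local, so the protected space carved out by the re-partitioning can be pointed at directly. The mount point and quota below are placeholders, not RHUL's values.
  # /etc/cvmfs/default.local -- illustrative values only
  # Put the cache on the dedicated partition created for it
  CVMFS_CACHE_BASE=/cvmfs-cache        # placeholder mount point
  CVMFS_QUOTA_LIMIT=10000              # cache soft limit in MB (placeholder)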
UKI-LT2-UCL-HEP
This site does not exist any longer.
UKI-NORTHGRID-LANCS-HEP
Installed a "Tier 3" cvmfs. Looking at rolling it out into production for one of our clusters by the end of August, and move fully to it in September.
UKI-NORTHGRID-LIV-HEP
Installed for local desktops and Tier 3 resources (using the non-/opt setup). Not yet rolled out to grid WNs, but the squid, repositories, configuration, etc. are all ready to go once we're sure it won't break everything.
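For context, the client-side pieces referred to above typically come together in /etc/cvmfs/default.local: the repository list and a proxy line pointing at the local squid. The hostname and repository set below are placeholders rather than Liverpool's actual settings.
  # /etc/cvmfs/default.local -- illustrative sketch, placeholder hostname
  CVMFS_REPOSITORIES=atlas.cern.ch,atlas-condb.cern.ch,lhcb.cern.ch
  CVMFS_HTTP_PROXY="http://squid.example.ac.uk:3128"   # local squid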
UKI-NORTHGRID-MAN-HEP
Installed and working in production for both the software area and condb since 7/7/2011, with no links from /opt and using a single local squid. Installation notes with setup are here: http://northgrid-tech.blogspot.com/2011/07/cvmfs-installation.html.
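A single caching squid for CVMFS needs little beyond access control and generous object sizes; the snippet below is a generic sketch with placeholder subnet, sizes, and cache directory, not Manchester's configuration (see the blog post above for that).
  # squid.conf -- generic sketch, placeholder values
  http_port 3128
  acl wn_subnet src 10.0.0.0/16          # placeholder: the worker-node network
  http_access allow wn_subnet
  http_access deny all
  cache_mem 128 MB
  maximum_object_size 1024 MB            # CVMFS files and catalogues can be large
  cache_dir ufs /var/spool/squid 50000 16 256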
UKI-NORTHGRID-SHEF-HEP
Nothing currently set up. Studying the documentation.
UKI-SCOTGRID-DURHAM
UKI-SCOTGRID-ECDF
UKI-SCOTGRID-GLASGOW
The Glasgow CVMFS install is currently limited to five worker nodes, node001 to node005. These nodes are excluded from the ATLAS SGM software updates. Configuration setup:
The simplest possible approach (and what we (Stuart) deployed on Friday) is the following in maui.cfg:
- Keep ATLAS SGM jobs off the CVMFS nodes.
- Ideally this would work via node-specific tags, so it could all be controlled via cfengine... but this is a quick hack.
  SRCFG[nosgmoncvmfs] PERIOD=INFINITY
  SRCFG[nosgmoncvmfs] HOSTLIST=node00[1-5]
  SRCFG[nosgmoncvmfs] ACCESS=DEDICATED
  SRCFG[nosgmoncvmfs] GROUPLIST=!atlassgm
Notes:
1. The host list is hard-wired. At some point we could use cfengine to put tags into the mom config on the node, to be picked up by the scheduler; that way we define once in cfengine which nodes are to use CVMFS, and everything else picks up that change (see the sketch at the end of this section). Consider that an 'advanced manoeuvre', dependent on the configuration integration layer.
2. There is an exclamation mark on the GROUPLIST line. We have the SGM users coming through as a separate Unix group, which makes this easy to specify. If that were not true, we would have to specify !user[000-100] or similar.
3. This was put into maui's config file as a standing reservation, because its duration of need is indefinite and it should therefore persist.
Glasgow will deploy more CVMFS nodes over the coming weeks with this setup.
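One possible shape for the node-tag idea in note 1, sketched here as an assumption rather than Glasgow's actual plan: have cfengine add a property to the CVMFS nodes in the Torque nodes file, then let the reservation select on that feature instead of a hard-wired host list (this relies on the Maui build supporting NODEFEATURES in standing reservations).
  # /var/spool/torque/server_priv/nodes -- cfengine adds the "cvmfs" property
  node001 np=8 cvmfs
  node002 np=8 cvmfs
  # ...and so on for each node that has CVMFS deployed

  # maui.cfg -- select by feature rather than host list
  # (assumes standing reservations support NODEFEATURES in this Maui build)
  SRCFG[nosgmoncvmfs] PERIOD=INFINITY
  SRCFG[nosgmoncvmfs] NODEFEATURES=cvmfs
  SRCFG[nosgmoncvmfs] ACCESS=DEDICATED
  SRCFG[nosgmoncvmfs] GROUPLIST=!atlassgm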
UKI-SOUTHGRID-BHAM-HEP
UKI-SOUTHGRID-BRIS-HEP
Current status of CVMFS: Production
VOs currently using CVMFS at site: ATLAS, CMS, LHCb, ILC
Hardware setup: Two caching squids, lcgsq1.phy.bris.ac.uk and lcgsq2.phy.bris.ac.uk.
UKI-SOUTHGRID-CAM-HEP
UKI-SOUTHGRID-OX-HEP
CVMFS is installed on all our worker nodes, and we'll switch VO_ATLAS_SW_DIR to point to it as soon as you want us to. We're not enthusiastic about doing any weird local hackery (things in /opt and whatnot) to make it work though.