Site Notes


RAL-LCG2

Current status of CVMFS: Production

VOs currently using CVMFS at site: ATLAS, LHCb, ALICE

Hardware setup: Two caching squids, lcg0617 and lcg0679.

Blocking issues: Waiting for a new version of CVMFS before switching the namespace. Conditions data can't be used until this change has been made.

Internal documentation of setup can be found here.

UKI-LT2-Brunel

UKI-LT2-IC-HEP

UKI-LT2-QMUL

CVMFS atlas.cern.ch and atlas-condb are installed on all worker nodes. An upgrade to CVMFS 2 is planned in the near future, after which the environment variables will be changed.
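A quick way to sanity-check the repositories after such an upgrade is sketched below; this assumes the CVMFS 2.x client (which provides cvmfs_config) is already in place, and that the full name of the conditions repository is atlas-condb.cern.ch:

  # verify that the ATLAS software and conditions repositories mount and respond
  cvmfs_config probe atlas.cern.ch atlas-condb.cern.ch
  # show cache usage for the software repository
  cvmfs_config stat atlas.cern.ch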

UKI-LT2-RHUL

So far have only experimented with CVMFS on Tier 3 resources. Would like to install on WNs but busy with higher-priority issues at the moment. Main issue so far has been re-partitioning local disk to provide some protected space for the cache.

(Simon)
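For reference, once a protected partition exists the cache location and size limit are set in the CVMFS client configuration. A minimal sketch, assuming the cache lives at /var/cache/cvmfs2 and a 20 GB quota is wanted (both values are placeholders, not RHUL's actual choices):

  # /etc/cvmfs/default.local
  # keep the cache on its own partition so it cannot fill the system disk
  CVMFS_CACHE_BASE=/var/cache/cvmfs2
  # soft quota in MB; the client evicts least-recently-used files above this
  CVMFS_QUOTA_LIMIT=20000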

UKI-LT2-UCL-HEP

Not yet installed (as of 2011-08-25); we will install it when time allows.

UKI-NORTHGRID-LANCS-HEP

Installed a "Tier 3" cvmfs. Looking at rolling it out into production for one of our clusters by the end of August, and move fully to it in September.

UKI-NORTHGRID-LIV-HEP

Installed for local desktops and Tier3 resources (using the non-/opt setup). Not yet rolled out to grid WNs but the SQUID, repos, config etc are all ready to go once we're sure it won't break everything.
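For comparison with other sites' notes, the pieces described above typically come together in /etc/cvmfs/default.local. A minimal sketch, assuming a single local squid called squid.example.ac.uk on port 3128 and an illustrative repository list (placeholders, not the Liverpool values):

  # /etc/cvmfs/default.local
  # repositories to mount under /cvmfs (the non-/opt layout)
  CVMFS_REPOSITORIES=atlas.cern.ch,atlas-condb.cern.ch,lhcb.cern.ch
  # route requests through the local caching squid, falling back to direct access
  CVMFS_HTTP_PROXY="http://squid.example.ac.uk:3128;DIRECT"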

UKI-NORTHGRID-MAN-HEP

Installed and working in production for both the software area and the conditions DB since 7/7/2011, with no links from /opt, using a local single squid. Installation notes with setup here: http://northgrid-tech.blogspot.com/2011/07/cvmfs-installation.html.
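For sites setting up a similar single squid, the CVMFS-relevant part of squid.conf is small. A hedged sketch, assuming worker nodes on a 10.0.0.0/8 network and the default cache directory (placeholders; the blog post above has the Manchester-specific values):

  # /etc/squid/squid.conf (fragment)
  http_port 3128
  # only accept requests from the local worker nodes
  acl wn_net src 10.0.0.0/8
  http_access allow wn_net
  http_access deny all
  # CVMFS files can be large, so raise the cacheable object size
  maximum_object_size 1024 MB
  cache_mem 128 MB
  cache_dir ufs /var/spool/squid 50000 16 256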

UKI-NORTHGRID-SHEF-HEP

Nothing currently set up. Studying documentation.

UKI-SCOTGRID-DURHAM

UKI-SCOTGRID-ECDF

UKI-SCOTGRID-GLASGOW

The Glasgow CVMFS install is currently limited to 5 worker nodes, node001 to node005. These nodes are excluded from the ATLAS SGM software updates. Configuration setup:

The simplest possible approach (and what we (Stuart) deployed on Friday) is the following in maui.cfg:

  1. Keep ATLAS SGM jobs off the CVMFS nodes.
  2. Ideally, this would work via node-specific tags, so it could all be controlled via cfengine... but this is a quick hack.

  SRCFG[nosgmoncvmfs] PERIOD=INFINITY
  SRCFG[nosgmoncvmfs] HOSTLIST=node00[1-5]
  SRCFG[nosgmoncvmfs] ACCESS=DEDICATED
  SRCFG[nosgmoncvmfs] GROUPLIST=!atlassgm

Notes:

  1. The host list is hard-wired. At some point we could use cfengine to put tags into the mom config on the node, to be picked up by the scheduler. That way we define once in cfengine which nodes are to use CVMFS, and everything else picks up that change. Consider that an 'advanced manoeuvre', dependent on the configuration integration layer (a sketch of one possible shape follows at the end of this section).
  2. Note the exclamation mark on the GROUPLIST line. We have the SGM users coming through as a separate Unix group, which makes this easy to specify. If that were not true, we would have to specify !user[000-100] or similar.
  3. This was put into Maui's config file as a standing reservation, because its duration of need is indefinite and it should therefore persist.

Glasgow will deploy more CVMFS nodes over the coming weeks with this setup.
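One possible shape for the 'advanced manoeuvre' in note 1, sketched under assumptions rather than describing the Glasgow setup: cfengine maintains a node property (here called cvmfs) in the TORQUE server's nodes file, and the standing reservation selects on that feature instead of a hard-wired host list. Whether the local Maui version honours NODEFEATURES in SRCFG should be checked before relying on this.

  # /var/spool/torque/server_priv/nodes (maintained by cfengine)
  node001 np=8 cvmfs
  node002 np=8 cvmfs

  # maui.cfg: reserve by feature rather than by host name
  SRCFG[nosgmoncvmfs] PERIOD=INFINITY
  SRCFG[nosgmoncvmfs] NODEFEATURES=cvmfs
  SRCFG[nosgmoncvmfs] ACCESS=DEDICATED
  SRCFG[nosgmoncvmfs] GROUPLIST=!atlassgm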

UKI-SOUTHGRID-BHAM-HEP

UKI-SOUTHGRID-BRIS-HEP

Current status of CVMFS: Production

VOs currently using CVMFS at site: ATLAS, CMS, LHCb, ILC

Hardware setup: Two caching squids lcgsq1.phy.bris.ac.uk & lcgsq2.phy.bris.ac.uk
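With two squids, the client-side proxy string decides how they are used. A minimal sketch using the hostnames above, assuming the standard port 3128 and that the pair should be load-balanced (rather than used as strict failover):

  # /etc/cvmfs/default.local (fragment)
  # "|" load-balances between the two squids; ";DIRECT" is the last-resort fallback
  CVMFS_HTTP_PROXY="http://lcgsq1.phy.bris.ac.uk:3128|http://lcgsq2.phy.bris.ac.uk:3128;DIRECT"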

UKI-SOUTHGRID-CAM-HEP

UKI-SOUTHGRID-OX-HEP

CVMFS is installed on all our worker nodes, and we'll switch VO_ATLAS_SW_DIR to point to it as soon as you want us to. We're not enthusiastic about doing any weird local hackery (things in /opt and whatnot) to make it work, though.
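The switch described above is typically a one-line change in the grid environment sourced by jobs on the worker nodes. A hedged sketch, assuming the conventional path of the ATLAS software repository on CVMFS (the exact value to use should be confirmed with ATLAS):

  # e.g. in the grid environment sourced by jobs on the WNs
  export VO_ATLAS_SW_DIR=/cvmfs/atlas.cern.ch/repo/sw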

UKI-SOUTHGRID-RALPP