The following examples are updated from originals given by Pete Gronbech.
Not all of the VOs below are approved by GridPP, but many are enabled at GridPP sites. More details can often be found on the CIC portal: http://cic.gridops.org/index.php?section=home&page=volist
General notes and settings
The records in the sections below were prepared by the VomsSnooper tools, which are located at http://www.sysadmin.hep.ac.uk/rpms/fabric-management/RPMS.vomstools/ .
First, it is important to realise that many of the Yaim variables described below can be given to Yaim in one of two formats. The original format, for inclusion within the site-info.def file itself, is known as SID format. Because of restrictions with DNS-style VO names (those with dots in them), it was later necessary to introduce a second format, in which the records are stored in their own separate files under a vo.d directory. This format is known as VOD format. To see an example, scroll down to the ALICE VOMS records below. Any VO can be defined in VOD format, but VOs with dots in their names are cumbersome to define in SID format.
In the boxes below, each VO has a set of records that are used by Yaim to configure a site. Some other required records follow a generic pattern; these are VO_*_SW_DIR and VO_*_DEFAULT_SE. The values for these records are described generically below, using the Liverpool site-info.def as the model; they will vary according to the specifics of your site. By way of example, at Liverpool the following is defined first.
MY_DOMAIN=ph.liv.ac.uk
DPM_HOST=hepgrid11.ph.liv.ac.uk
VO_SW_DIR=/opt/exp_soft_sl5/
STORAGE_PATH=/dpm/$MY_DOMAIN/home
Then the generic records are defined as follows.
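As an illustration of the generic pattern, and assuming the Liverpool-style variables defined above, the records for one VO might look like the following. The VO name pheno is used purely for illustration; the actual values must match your site.

```shell
# Illustrative generic per-VO records (pheno is a hypothetical choice of VO here).
# VO_<UCVONAME>_SW_DIR points at the VO's software area under VO_SW_DIR;
# VO_<UCVONAME>_DEFAULT_SE names the site's default storage element.
VO_PHENO_SW_DIR=$VO_SW_DIR/pheno
VO_PHENO_DEFAULT_SE=$DPM_HOST
```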
Special note on CVMFS
Following the introduction of CVMFS for distributing experiment software, some VOs, including LHCb and ATLAS, differ from the generic pattern with regard to their SW_DIR directories. For these VOs, the settings should be similar to these examples taken from Liverpool.
VO_ATLAS_SW_DIR=/cvmfs/atlas.cern.ch/repo/sw
VO_HONE_SW_DIR=/cvmfs/hone.gridpp.ac.uk
VO_LHCB_SW_DIR=/cvmfs/lhcb.cern.ch
VO_MICE_SW_DIR=/cvmfs/mice.gridpp.ac.uk
VO_NA62_VO_GRIDPP_AC_UK_SW_DIR=/cvmfs/na62.gridpp.ac.uk
or, equivalently in vo.d format:
vo.d/atlas:SW_DIR=/cvmfs/atlas.cern.ch/repo/sw
vo.d/hone:SW_DIR=/cvmfs/hone.gridpp.ac.uk
vo.d/lhcb:SW_DIR=/cvmfs/lhcb.cern.ch
vo.d/mice:SW_DIR=/cvmfs/mice.gridpp.ac.uk
vo.d/na62.vo.gridpp.ac.uk:SW_DIR=/cvmfs/na62.gridpp.ac.uk
Entries for these records must be provided for each supported VO, in addition to the records in the tables below. To prepare the entries, replace <UCVONAME> with the short VO name in upper case, and <LCVONAME> with the short VO name in lower case.
In addition to those, there is an optional Yaim variable of the form VO_<UCVONAME>_QUEUES, which contains a list of the queues that can handle jobs from the VO concerned. These are sometimes referred to as per-VO QUEUE records. (Technically, this variable is processed by a script, libexec/YAIM2gLiteConvertor.py, which builds up a map of which VO can run on which queue; this is used by, for example, a Torque server.) An example might be:
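By way of illustration only, a per-VO QUEUE record might look like this; the VO and queue names below are hypothetical and must be replaced with your site's own.

```shell
# Hypothetical example: jobs from the ATLAS VO may run on the "long" and "short" queues.
VO_ATLAS_QUEUES="long short"
```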
Some sites don't use VO_<UCVONAME>_QUEUES records at all, instead defining shared queues for VOs (see http://northgrid-tech.blogspot.co.uk/2008/12/phasing-out-vo-queues.html). The point is that the use of QUEUES records varies considerably depending on site circumstances, and all the possible settings are beyond the scope of this document at present. By way of a guide, the verbatim settings of the shared-queues configuration at Liverpool (which uses no per-VO queues) are presented below, showing the use of the VOS, QUEUES and _GROUP_ENABLE variables.
VOS="alice atlas biomed calice camont cdf cms dteam dzero esr fusion geant4 hone gridpp ilc \
lhcb magic mice ops pheno planck vo.sixt.cern.ch snoplus.snolab.ca \
t2k.org vo.northgrid.ac.uk zeus"
QUEUES="long"
LONG_GROUP_ENABLE="alice /alice/ROLE=lcgadmin /alice/ROLE=production atlas /atlas/ROLE=lcgadmin /atlas/ROLE=production /atlas/ROLE=pilot \
/biomed/ROLE=lcgadmin calice /calice/ROLE=lcgadmin /calice/ROLE=production \
camont /camont/ROLE=lcgadmin cdf /cdf/ROLE=lcgadmin cms /cms/ROLE=lcgadmin /cms/ROLE=production \
dteam /dteam/ROLE=lcgadmin /dteam/ROLE=production dzero /dzero/ROLE=lcgadmin esr /esr/ROLE=lcgadmin \
fusion /fusion/ROLE=production geant4 /geant4/ROLE=lcgadmin /geant4/ROLE=production gridpp /gridpp/ROLE=lcgadmin \
hone /hone/ROLE=lcgadmin /hone/ROLE=production ilc /ilc/ROLE=lcgadmin /ilc/ROLE=production \
lhcb /lhcb/ROLE=lcgadmin /lhcb/ROLE=production /lhcb/ROLE=pilot \
magic /magic/ROLE=lcgadmin mice /mice/ROLE=lcgadmin /mice/ROLE=production \
vo.northgrid.ac.uk /vo.northgrid.ac.uk/ROLE=lcgadmin \
ops /ops/ROLE=lcgadmin /ops/ROLE=pilot pheno /pheno/ROLE=lcgadmin planck /planck/ROLE=lcgadmin /planck/ROLE=production \
vo.sixt.cern.ch /vo.sixt.cern.ch/ROLE=lcgadmin snoplus.snolab.ca /snoplus.snolab.ca/ROLE=lcgadmin /snoplus.snolab.ca/ROLE=production \
t2k.org /t2k.org/ROLE=lcgadmin /t2k.org/ROLE=production \
zeus /zeus/ROLE=lcgadmin /zeus/ROLE=production"
VO Yaim Records
This section presents the VO records for each approved VO, extracted from the Operations Portal.
Note about SIDs versus VODs:
YAIM processes VO records that are grouped together in the site-info.def file. Such records have the following names.
VO_<UCVONAME>_VOMS_SERVERS, VO_<UCVONAME>_VOMSES and VO_<UCVONAME>_VOMS_CA_DN.
Here, UCVONAME is the VO name in upper case, with underscores instead of dots. I call this style of record "SID format".
Alternatively, YAIM can process records in files in the /root/glitecfg/vo.d directory. The name of each file is the VO name. Hence, no UCVONAME is needed. I call this style "VOD format". VOD format is recommended.
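As a sketch of what a VOD-format file might contain, here is a hypothetical /root/glitecfg/vo.d/dteam file. The server host, port and DNs below are placeholders, not real values; the real ones should be taken from the Operations Portal.

```shell
# Hypothetical contents of /root/glitecfg/vo.d/dteam.
# Note: no VO_<UCVONAME>_ prefix is needed, because the filename supplies the VO name.
VOMS_SERVERS="'vomss://voms.example.org:8443/voms/dteam?/dteam/'"
VOMSES="'dteam voms.example.org 15004 /C=XX/O=Example/CN=voms.example.org dteam'"
VOMS_CA_DN="'/C=XX/O=Example/CN=Example Certification Authority'"
```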
Tip about DNS style VO names:
One can represent any VO using VOD records, but VOs with DNS-style names (i.e. those with dots in the name) are awkward to express in SID format. In the tables below, SID records for such VOs are given, but they are commented out with hash signs. In these cases, it is best to use VOD format records.
Note about record multiplicity:
Data in the CIC portal for the VO_<UCVONAME>_VOMS_SERVERS, VO_<UCVONAME>_VOMSES and VO_<UCVONAME>_VOMS_CA_DN records is related by order and multiplicity - they match up.
When generating the Yaim versions of this data, records for VO_<UCVONAME>_VOMSES and VO_<UCVONAME>_VOMS_CA_DN must match in order and multiplicity.
Matching the VO_<UCVONAME>_VOMS_SERVERS record is optional, however. According to Maarten Litmaath, "for the CERN servers it actually was deemed desirable that grid-mapfiles be generated using voms.cern.ch only, because lcg-voms.cern.ch is already running the VOMRS (sic) service as an extra load".
Thus, in the sections below, VO_<UCVONAME>_VOMS_SERVERS for CERN based VOs are restricted to a single record related to voms.cern.ch.