Currently the LSST UK setup uses the VOMS servers at FNAL and two CVMFS areas, one at FNAL and one at IN2P3. In the UK, jobs can be submitted either through the UK DIRAC instance at Imperial or via PanDA hosted at BNL. The DC under discussion will be done via the latter.
Here is the Operations Portal LSST page.
VOMS_SERVERS="'vomss://voms2.fnal.gov:8443/voms/lsst?/lsst'"
VOMSES="'lsst voms1.fnal.gov 15003 /DC=org/DC=opensciencegrid/O=Open Science Grid/OU=Services/CN=voms1.fnal.gov lsst' 'lsst voms2.fnal.gov 15003 /DC=org/DC=opensciencegrid/O=Open Science Grid/OU=Services/CN=voms2.fnal.gov lsst'"
VOMS_CA_DN="'/DC=org/DC=cilogon/C=US/O=CILogon/CN=CILogon OSG CA 1' '/DC=org/DC=cilogon/C=US/O=CILogon/CN=CILogon OSG CA 1'"
"/lsst/Role=pilot/Capability=NULL" "/lsst/Role=pilot" "/lsst/Role=NULL/Capability=NULL" "/lsst"
- 10 normal accounts mapped to the generic role
- 10 pilot accounts mapped to the pilot role
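A mapping of the FQANs above to pool accounts would typically look like the following sketch of a voms-grid-mapfile; the pool account names (.lsst, .lsstpilot) are assumptions and will differ per site:

```
# /etc/grid-security/voms-grid-mapfile (illustrative; account names are site-specific)
"/lsst/Role=pilot/Capability=NULL" .lsstpilot
"/lsst/Role=pilot" .lsstpilot
"/lsst/Role=NULL/Capability=NULL" .lsst
"/lsst" .lsst
```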
There isn't yet a precise requirement on the size, but LSST users will use a few TB across different sites; they are currently using 8 TB across 4 sites.
If you want to set a software area environment variable, as YAIM used to do:
VO_LSST_SW_DIR=/cvmfs/lsst.opensciencegrid.org/panda
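A minimal sketch of how a site might export this variable so jobs on a worker node can locate the software area (the exact mechanism, e.g. a profile.d script or vo.d file, is site-specific):

```shell
# Export the LSST software area variable, mirroring the old YAIM behaviour
export VO_LSST_SW_DIR=/cvmfs/lsst.opensciencegrid.org/panda

# A job can then refer to the software area via the variable
echo "$VO_LSST_SW_DIR"
```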
- Packages: LSST will use xrootd and GridFTP as transfer protocols and needs both the xrootd-client and gfal2-util packages installed. xrootd-client is not installed on SL6 WNs by default and has to be installed explicitly; it has been added to the CentOS7 WN meta-rpm, so there it will be pulled in automatically. To avoid any problems it is recommended to install the latest versions of both.
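The two protocols correspond to the two client tools; transfers would look roughly like the following sketch (the endpoints and file paths are hypothetical placeholders, not real LSST endpoints):

```
# xrootd protocol, using xrootd-client (hypothetical endpoint)
xrdcp root://se.example.ac.uk//lsst/data/file.fits /tmp/file.fits

# GridFTP protocol, using gfal2-util (hypothetical endpoint)
gfal-copy gsiftp://se.example.ac.uk/lsst/data/file.fits file:///tmp/file.fits
```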
- CVMFS repositories: LSST will use two repositories, one in the US at FNAL and one in France at IN2P3. Both repositories will be replicated at RAL and will be added to the EGI configuration, which is the recommended setup.
- /cvmfs/lsst.opensciencegrid.org: this is automatically mounted at EGI sites and is replicated on some of the EGI stratum-1 servers, so sites don't have to configure anything.
- /cvmfs/lsst.in2p3.fr: this is not yet supported in EGI and is behind a firewall. From August 2017 it will be replicated to one of the stratum-1 servers at RAL, so sites won't have to ask to be added to the firewall, but they will have to install the cvmfs-config-egi package to get the centralized configuration. The package will replace cvmfs-config-default, but should be the only configuration needed. There is no YUM repository for this rpm for now (July 2017), so install it directly:
yum localinstall https://ecsft.cern.ch/dist/cvmfs/cvmfs-config-egi/cvmfs-config-egi-2.0-1.el6.noarch.rpm
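After installing the package, a site can check that both repositories mount correctly; a minimal sketch using the standard CVMFS client tools:

```
# Probe both LSST repositories (requires a working CVMFS client)
cvmfs_config probe lsst.opensciencegrid.org lsst.in2p3.fr

# Confirm the software area is visible
ls /cvmfs/lsst.opensciencegrid.org
```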
DC1 at NERSC
- 500 nodes (Haswell x 32 cores each) at NERSC
- DC1 utilization at NERSC: http://portal.nersc.gov/project/lsst/glanzman/graph6.html
DC2 on the grid
- Under discussion
Presentations and progress reports
- GridPP LSST progress
- LSST case study
- "Shear brilliance: computing tackles the mystery of the dark universe", UoM News, 24 November 2016
- A. Forti GridPP resources report, LSST-DESC Collaboration meeting 13/7/2017
- P. Svirin Panda for LSST update, LSST-DESC Collaboration meeting 13/7/2017