LSST UK
From GridPP Wiki
Revision as of 11:58, 19 July 2017
General
The LSST UK setup currently uses the VOMS servers at FNAL; the CVMFS software area is also hosted at FNAL.
User setup
Sites setup
The VO configuration is listed on the Operations Portal LSST page.
VOMS config
VOMS_SERVERS="'vomss://voms2.fnal.gov:8443/voms/lsst?/lsst'"
VOMSES="'lsst voms1.fnal.gov 15003 /DC=org/DC=opensciencegrid/O=Open Science Grid/OU=Services/CN=voms1.fnal.gov lsst' 'lsst voms2.fnal.gov 15003 /DC=org/DC=opensciencegrid/O=Open Science Grid/OU=Services/CN=voms2.fnal.gov lsst'"
VOMS_CA_DN="'/DC=org/DC=cilogon/C=US/O=CILogon/CN=CILogon OSG CA 1' '/DC=org/DC=cilogon/C=US/O=CILogon/CN=CILogon OSG CA 1'"
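If a site configures VOMS trust by hand rather than through YAIM, the variables above correspond to entries in the vomses file. A minimal sketch, using a local directory as a stand-in for /etc/vomses:

```shell
# Write vomses entries for the lsst VO.
# VOMSES_DIR is a stand-in for /etc/vomses on a real node.
VOMSES_DIR="${VOMSES_DIR:-./vomses}"
mkdir -p "$VOMSES_DIR"

# Format per line: "alias" "host" "port" "server DN" "VO name"
cat > "$VOMSES_DIR/lsst-voms1.fnal.gov" <<'EOF'
"lsst" "voms1.fnal.gov" "15003" "/DC=org/DC=opensciencegrid/O=Open Science Grid/OU=Services/CN=voms1.fnal.gov" "lsst"
EOF
cat > "$VOMSES_DIR/lsst-voms2.fnal.gov" <<'EOF'
"lsst" "voms2.fnal.gov" "15003" "/DC=org/DC=opensciencegrid/O=Open Science Grid/OU=Services/CN=voms2.fnal.gov" "lsst"
EOF
```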
Roles
"/lsst/Role=pilot/Capability=NULL" "/lsst/Role=pilot" "/lsst/Role=NULL/Capability=NULL" "/lsst"
Accounts
- 10 normal accounts mapped to the generic role
- 10 pilot accounts mapped to the pilot role
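On an LCMAPS-based site the two roles above map onto the two sets of pool accounts via the groupmapfile. A hedged sketch of the entries (the group names lsst and lsstpil are illustrative, not prescribed by the VO; the directory stands in for /etc/grid-security):

```shell
# Sketch of LCMAPS groupmapfile entries for the lsst VO.
# MAPDIR stands in for /etc/grid-security; group names are illustrative.
MAPDIR="${MAPDIR:-./grid-security}"
mkdir -p "$MAPDIR"
cat > "$MAPDIR/groupmapfile" <<'EOF'
"/lsst/Role=pilot/Capability=NULL" lsstpil
"/lsst/Role=pilot" lsstpil
"/lsst/Role=NULL/Capability=NULL" lsst
"/lsst" lsst
EOF
```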
Storage
There isn't yet a precise requirement on the size, but LSST users will use a few TB across different sites; they currently use 8 TB across 4 sites.
Software area
If you want to set a software-area environment variable, as YAIM used to do:
VO_LSST_SW_DIR=/cvmfs/lsst.opensciencegrid.org/panda
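Without YAIM, one way to export the variable on the WNs is a profile.d-style snippet. A minimal sketch, writing to a local directory here; on a real node the file would live under /etc/profile.d:

```shell
# Write a profile.d-style snippet exporting the LSST software-area variable.
# ENVDIR is a stand-in for /etc/profile.d on a real worker node.
ENVDIR="${ENVDIR:-./profile.d}"
mkdir -p "$ENVDIR"
cat > "$ENVDIR/lsst.sh" <<'EOF'
export VO_LSST_SW_DIR=/cvmfs/lsst.opensciencegrid.org/panda
EOF

# Source it and confirm the variable is set.
. "$ENVDIR/lsst.sh"
echo "$VO_LSST_SW_DIR"
```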
Requested software
- Packages: LSST will use xrootd and GridFTP as transfer protocols and needs both the xrootd-client and gfal2-util packages installed. xrootd-client is not installed on SL6 WNs by default and has to be installed explicitly; on CentOS 7 it has been added to the WN meta-rpm and will be pulled in automatically. To avoid problems it is recommended to install the latest versions of both.
xrootd-client gfal2-util
- CVMFS repositories: LSST will use two repositories, one in the US at FNAL and one in France at IN2P3. Both repositories will be replicated at RAL and will be added to the EGI configuration, which is the recommended setup.
- /cvmfs/lsst.opensciencegrid.org: this is automatically mounted at EGI sites and is replicated on some of the EGI stratum-1 servers; sites don't have to configure anything.
- /cvmfs/lsst.in2p3.fr: this is not yet supported in EGI and is behind a firewall. From August 2017 it will be replicated to one of the stratum-1 servers at RAL, so sites won't have to ask to be added to the firewall, but they will have to install the cvmfs-config-egi package to get a centralised configuration. The package replaces cvmfs-config-default and should be the only configuration needed. There is currently (July 2017) no YUM repository for this rpm, so to install it:
yum localinstall https://ecsft.cern.ch/dist/cvmfs/cvmfs-config-egi/cvmfs-config-egi-2.0-1.el6.noarch.rpm
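Sites that pin their mounted repositories in default.local would then list both LSST repositories there. A sketch under that assumption, writing to a local path instead of /etc/cvmfs; the proxy URL is a placeholder for the site squid:

```shell
# Sketch of /etc/cvmfs/default.local listing both LSST repositories.
# CVMFS_DIR stands in for /etc/cvmfs; the proxy URL is a site-specific placeholder.
CVMFS_DIR="${CVMFS_DIR:-./cvmfs-etc}"
mkdir -p "$CVMFS_DIR"
cat > "$CVMFS_DIR/default.local" <<'EOF'
CVMFS_REPOSITORIES=lsst.opensciencegrid.org,lsst.in2p3.fr
CVMFS_HTTP_PROXY="http://your-site-squid.example:3128"
EOF
```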
Data Challenge
DC1 at NERSC
Resources
- 500 nodes (Haswell x 32 cores each) at NERSC
- DC1 utilization at NERSC: http://portal.nersc.gov/project/lsst/glanzman/graph6.html
DC2 on the grid
Resources
- 2x to 3x the DC1 resources
Presentations and progress reports
- GridPP LSST progress
- A. Forti GridPP resources report, LSST-DESC Collaboration meeting 13/7/2017
- P. Svirin Panda for LSST update, LSST-DESC Collaboration meeting 13/7/2017