LSST UK
From GridPP Wiki
Latest revision as of 13:09, 19 July 2017
General
The current LSST UK setup uses the VOMS servers at FNAL and two CVMFS areas, one at FNAL and one at IN2P3. In the UK, jobs can be submitted either through the UK DIRAC instance at Imperial or via PanDA hosted at BNL. The data challenge under discussion will be run via the latter.
Sites setup
See the Operations Portal LSST page for the VO details.
VOMS config
VOMS_SERVERS="'vomss://voms2.fnal.gov:8443/voms/lsst?/lsst'"
VOMSES="'lsst voms1.fnal.gov 15003 /DC=org/DC=opensciencegrid/O=Open Science Grid/OU=Services/CN=voms1.fnal.gov lsst' 'lsst voms2.fnal.gov 15003 /DC=org/DC=opensciencegrid/O=Open Science Grid/OU=Services/CN=voms2.fnal.gov lsst' "
VOMS_CA_DN="'/DC=org/DC=cilogon/C=US/O=CILogon/CN=CILogon OSG CA 1' '/DC=org/DC=cilogon/C=US/O=CILogon/CN=CILogon OSG CA 1' "
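For sites that maintain vomses files by hand, the VOMSES string above unpacks into one quoted five-field entry per server. A minimal sketch, assuming the usual /etc/vomses layout (written to a local ./vomses directory here so it is safe to try):

```shell
# Generate one vomses file per FNAL VOMS server, using the entries from the
# VOMSES string above. Real sites would write these under /etc/vomses instead
# of the local ./vomses directory used in this sketch.
mkdir -p ./vomses
cat > ./vomses/lsst-voms1.fnal.gov <<'EOF'
"lsst" "voms1.fnal.gov" "15003" "/DC=org/DC=opensciencegrid/O=Open Science Grid/OU=Services/CN=voms1.fnal.gov" "lsst"
EOF
cat > ./vomses/lsst-voms2.fnal.gov <<'EOF'
"lsst" "voms2.fnal.gov" "15003" "/DC=org/DC=opensciencegrid/O=Open Science Grid/OU=Services/CN=voms2.fnal.gov" "lsst"
EOF
ls ./vomses
```

With these in place a user can check the setup with `voms-proxy-init --voms lsst` (or `--voms lsst:/lsst/Role=pilot` for the pilot role).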
Roles
"/lsst/Role=pilot/Capability=NULL"
"/lsst/Role=pilot"
"/lsst/Role=NULL/Capability=NULL"
"/lsst"
Accounts
- 10 normal accounts mapped to the generic role
- 10 pilot accounts mapped to the pilot role
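How the two roles map onto those accounts can be sketched as a voms-grid-mapfile fragment. The pool-account group names (.lsst, .lsstpilot) are illustrative assumptions, not the actual site mappings, and the file is written locally rather than to /etc/grid-security:

```shell
# Sketch of a voms-grid-mapfile: pilot-role proxies map to the pilot pool
# accounts, everything else under /lsst maps to the generic pool accounts.
# Account group names are hypothetical; adapt to your site's pool accounts.
cat > ./voms-grid-mapfile <<'EOF'
"/lsst/Role=pilot" .lsstpilot
"/lsst" .lsst
EOF
cat ./voms-grid-mapfile
```

Order matters: the more specific pilot-role line must come before the generic "/lsst" line.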
Storage
There isn't yet a precise requirement on the size, but LSST users will use a few TB across different sites; they currently use 8 TB across 4 sites.
Software area
If you want to set a software-area environment variable, as YAIM used to do:
VO_LSST_SW_DIR = /cvmfs/lsst.opensciencegrid.org/panda
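One way to provide that variable to jobs, in the spirit of the old YAIM grid-env snippets, is a small profile script. The file name lsst.sh is an assumption; a real site would drop it in its grid-env or /etc/profile.d directory:

```shell
# Write a profile snippet exporting the LSST software area, then source it.
# Written locally in this sketch; the real location is site-dependent.
cat > ./lsst.sh <<'EOF'
export VO_LSST_SW_DIR=/cvmfs/lsst.opensciencegrid.org/panda
EOF
. ./lsst.sh
echo "$VO_LSST_SW_DIR"
```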
Requested software
- Packages: LSST will use xrootd and GridFTP as transfer protocols and needs both the xrootd-client and gfal2-util packages installed. xrootd-client is not installed on SL6 WNs by default and has to be installed explicitly; it has been added to the CentOS7 WN meta-rpm and will be pulled in automatically there. To avoid problems it is recommended to install the latest versions of both.
xrootd-client
gfal2-util
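A quick way to verify a worker node is a small probe for the client tools the two packages ship (xrdcp from xrootd-client, gfal-copy from gfal2-util). This is only a presence check, not an install:

```shell
# Check whether the transfer clients are on PATH; on a yum-based WN the fix
# would be: yum install xrootd-client gfal2-util
missing=0
for tool in xrdcp gfal-copy; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: present"
  else
    echo "$tool: missing"
    missing=1
  fi
done
echo "missing=$missing"
```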
- CVMFS repositories: LSST will use two repositories, one in the US at FNAL and one in France at IN2P3. Both repositories will be replicated at RAL and will be added to the EGI configuration, which is the recommended setup.
- /cvmfs/lsst.opensciencegrid.org: this is automatically mounted at EGI sites and is replicated on some of the EGI stratum-1 servers, so sites don't have to configure anything.
- /cvmfs/lsst.in2p3.fr: this is not supported in EGI yet and is behind a firewall. From August 2017 it will be replicated to one of the stratum-1 servers at RAL, so sites won't have to ask to be added to the firewall, but they will have to install the cvmfs-config-egi package to get a centralised configuration. The package replaces cvmfs-config-default and should be the only configuration needed. There is no YUM repository for this rpm yet (July 2017), so install it with:
yum localinstall https://ecsft.cern.ch/dist/cvmfs/cvmfs-config-egi/cvmfs-config-egi-2.0-1.el6.noarch.rpm
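Once the rpm is installed, the remaining site-local piece is /etc/cvmfs/default.local listing the repositories to mount. A minimal sketch with both LSST repositories; the quota and squid proxy values are illustrative assumptions, and the file is written locally here rather than to /etc/cvmfs:

```shell
# Minimal default.local enabling both LSST repositories. Quota (MB) and the
# squid proxy URL are placeholders - set them to your site's values. After
# deploying for real, verify with: cvmfs_config probe lsst.in2p3.fr
cat > ./default.local <<'EOF'
CVMFS_REPOSITORIES=lsst.opensciencegrid.org,lsst.in2p3.fr
CVMFS_QUOTA_LIMIT=20000
CVMFS_HTTP_PROXY="http://your-squid.example.ac.uk:3128"
EOF
grep CVMFS_REPOSITORIES ./default.local
```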
Data Challenge
DC1 at NERSC
Resources
- 500 nodes (Haswell x 32 cores each) at NERSC
- DC1 utilization at NERSC: http://portal.nersc.gov/project/lsst/glanzman/graph6.html
DC2 on the grid
Resources
- Under discussion
Presentations and progress reports
- GridPP LSST progress
- LSST case study
- "Shear brilliance: computing tackles the mystery of the dark universe", UoM News, 24 November 2016
- A. Forti GridPP resources report, LSST-DESC Collaboration meeting 13/7/2017
- P. Svirin Panda for LSST update, LSST-DESC Collaboration meeting 13/7/2017