GridPP5 Tier2 plans
From GridPP Wiki
Latest revision as of 15:50, 4 April 2017
Sites batch system status
This page collects information from GridPP sites regarding their batch, middleware and storage system plans. The information will help with wider considerations and strategy. The table asks for the following:
- Site name
- Batch/CE system (the main batch system and CE you are intending to use in GridPP5. This might be one that you are testing as a replacement for, say, Torque/CREAM)
- Shared, non-CE? Yes/No (Is the batch system shared with users who don’t access it through the grid CE?)
- Shared filesystem? No/Name (Do users rely on a shared filesystem, e.g. Lustre, that couldn't be replaced with local filesystems on the worker nodes? If so, which one?)
- Non-LHC, non-GridPP DIRAC VOs? No/Top3 (Do you support VOs, e.g. from EGI, that aren't LHC experiments and don't use the GridPP DIRAC service? Please list the top 3. Note that Pheno and SNO+ use the GridPP DIRAC instance.)
- Storage system? No/Name (dCache, DPM, StoRM)
- Non-LHC storage? No/Top3 (Do you provide storage to non-LHC projects? Please list the top 3.)
- Local storage? Yes/No (Does your grid storage also provide space for local users, accessed interactively or from non-grid batch jobs?)
Site | Batch/CE system | Shared, non-CE? | Shared filesystem? | Non-LHC, non-GridPP DIRAC VOs? | Storage system | Non-LHC storage? | Local storage? | Notes
UKI-LT2-Brunel | ARC/HTCondor | No | No | ILC, Pheno, Biomed | DPM | Yes | |
UKI-LT2-IC-HEP | CREAM/SGE | No | No | ilc, biomed, mice | dCache | LZ (UK Data Centre), T2K, comet | Yes (CMS) |
UKI-LT2-QMUL | SLURM/CREAM | No (1) | Yes (Lustre) | Biomed, ILC, IceCube, CEPC, Pheno, enmr.eu | StoRM | Yes (SNO+, T2K) | Yes | (1) Very limited local usage of the batch system for special workloads.
UKI-LT2-RHUL | Torque/CREAM CE, HTCondor/ARC under test | No | No | ILC, Pheno, Biomed, dune | DPM | Yes (biomed, pheno) | No |
UKI-NORTHGRID-LANCS-HEP | SonOfGridEngine/CREAM, ARC eventually | Yes | Home/sandbox areas on NFS, but jobs don't work in them; local users use Panasas | uboone | DPM | Yes (SNO+, T2K) | No | We actively try to support all UK DIRAC VOs. The site is treated as part of the University's "High End Computing" facility, for which we have admin rights and duties.
UKI-NORTHGRID-LIV-HEP | HTCondor/ARC, VAC | No | No | ilc, dune, biomed, t2k, na62, sno+ | DPM | t2k, sno+, biomed | No | We support 25 small VOs in total, using a Python tool (voconfig.py) to generate configuration (ARC/Condor/CREAM/Torque/Maui/users/groups/VAC/ARGUS etc.) from a central data file; a sketch of this approach is given below the table.
UKI-NORTHGRID-MAN-HEP | HTCondor/ARC | No | No | Biomed, ILC, IceCube | DPM | LSST, biomed, pheno | Yes |
UKI-NORTHGRID-SHEF-HEP | Torque/CREAM CE, HTCondor/ARC under test | No | No | LZ, dune, t2k, biomed, pheno, sno+ | DPM | dune? | Yes |
UKI-SCOTGRID-DURHAM | SLURM/ARC | Yes | Yes | Pheno, ILC | DPM | Yes | Yes | A local group has direct submission to SLURM; local Pheno users have NFS available as home space.
UKI-SCOTGRID-ECDF | ARC/SGE | Yes | Yes (NFS) | ilc | DPM | ilc, hyper-k | No | We have shared use of resources on a cluster managed by the university. NFS is only used for transferring data to the worker nodes.
UKI-SCOTGRID-GLASGOW | HTCondor/ARC | No/Maybe | Yes (NFS) | Pheno, ILC, NA62 | DPM | Yes | Yes | Local university users use direct ARC submission but have local storage provided via NFS. Usage is low (not in the top 3) but does happen. Investigating allowing local users to submit directly to the HTCondor pool.
UKI-SOUTHGRID-BHAM-HEP | Torque/CREAM | No | No | ILC, Biomed, Pheno | DPM | NA62, ILC, Biomed | No | In the process of moving the vast majority, or all, of the resources to VAC.
UKI-SOUTHGRID-BRIS | HTCondor/ARC | Yes | No, but partly Yes (1) | ILC, and soon LZ | DmLite+HDFS | Think so; DrK will confirm/deny | Yes | (1) They prefer to have NFS-mounted /users/$user and /software but can live without it (I think).
UKI-SOUTHGRID-CAM-HEP | PBS/Torque, later HTCondor/ARC? | No | No | ILC | DPM | No | No | Batch/CE decision could change depending on what is the least effort to maintain.
UKI-SOUTHGRID-OX-HEP | HTCondor/ARC (5% VIAB) | No | No | ILC, SNO+, Pheno | DPM | t2k, SNO+ | No |
UKI-SOUTHGRID-RALPP | HTCondor/ARC | Yes | Yes (NFS and dCache) | ILC, Biomed, T2K | dCache | Yes | Yes |
UKI-SOUTHGRID-SUSX | Univa Grid Engine/CREAM | Yes | Yes (Lustre over IB) | No | StoRM | Yes (SNO+) | Yes |
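As an illustration of the central-data-file approach mentioned in the Liverpool notes, the sketch below shows one way a tool in the spirit of voconfig.py could render per-service configuration fragments from a single list of supported VOs. The real voconfig.py, its data format and its output templates are not documented on this page, so the field names, file paths and template contents here are purely illustrative assumptions.

#!/usr/bin/env python
"""Minimal sketch: generate per-service config fragments from one central
VO data file, loosely in the spirit of the voconfig.py approach described
in the Liverpool notes above. All file names, fields and templates are
illustrative assumptions, not the real tool."""

import json
from pathlib import Path

# Hypothetical central data file: one record per supported VO.
CENTRAL_DATA = """
[
  {"name": "t2k.org",           "prefix": "t2k",    "pool_size": 20},
  {"name": "snoplus.snolab.ca", "prefix": "snop",   "pool_size": 10},
  {"name": "biomed",            "prefix": "biomed", "pool_size": 30}
]
"""


def render_condor_quota(vo: dict) -> str:
    """One HTCondor accounting-group quota line per VO (illustrative values)."""
    return f'GROUP_QUOTA_DYNAMIC_group_{vo["prefix"]} = 0.02\n'


def render_pool_accounts(vo: dict) -> str:
    """Pool account and group definitions for one VO (illustrative format)."""
    prefix = vo["prefix"]
    lines = [f"group: {prefix}\n"]
    lines += [f"user: {prefix}{i:03d}:{prefix}\n" for i in range(1, vo["pool_size"] + 1)]
    return "".join(lines)


def render_access_policy(vo: dict) -> str:
    """A simple per-VO access-policy stanza (illustrative, not real ARGUS syntax)."""
    return (
        'resource "ce" {\n'
        '    action "submit" {\n'
        f'        rule permit {{ vo = "{vo["name"]}" }}\n'
        '    }\n'
        '}\n'
    )


# Map of output files (relative to the output directory) to their renderers.
RENDERERS = {
    "condor/group_quotas.conf": render_condor_quota,
    "accounts/pool_accounts.conf": render_pool_accounts,
    "argus/policies.conf": render_access_policy,
}


def main(outdir: str = "generated-config") -> None:
    vos = json.loads(CENTRAL_DATA)
    for relpath, renderer in RENDERERS.items():
        target = Path(outdir) / relpath
        target.parent.mkdir(parents=True, exist_ok=True)
        # Concatenate the per-VO fragments into one file per service.
        target.write_text("".join(renderer(vo) for vo in vos))
        print(f"wrote {target} ({len(vos)} VO entries)")


if __name__ == "__main__":
    main()

The attraction of this layout is that supporting an additional small VO means adding one record to the central data file and regenerating, rather than hand-editing each of the ARC/Condor/CREAM/Torque/Maui/VAC/ARGUS configurations separately.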