GridPP VO Incubator
Latest revision as of 12:14, 27 April 2020
This page is for monitoring the progress of new(ish) GridPP VOs.
- PoC - Point of Contact
Please consider archiving Closed/Completed issues, so that we can see what we need to work on. It is straightforward to create a page for your VO.
All new VOs
These tasks will need to be completed for all
Action ID | Action description | Owner | Target date | Status | Date closed | Notes |
---|---|---|---|---|---|---|
VOI-GEN-001 | Deploy test software to RVO CernVM-FS repositories. | Duncan, Daniela, Gareth, Alessandra, Ewan | 2015-05-26 | Closed | 2016-08-23 | New users in the Regional VOs will need to run test jobs using software in the RVO CernVM-FS repositories. This test software will need to be uploaded by the RVO admins. Instructions for doing this can be found here. Tested for vo.londongrid.ac.uk (--Daniela) |
VOI-GEN-002 | Write up the VO registration procedure | Tom | 2015-05-31 | Closed | 2016-08-23 | Guide started here - comments and feedback appreciated. Use gridpp guide. |
vo.DiRAC.ac.uk
PoC: Jens Jensen, Brian Davies
- 03/05/16: Durham have now moved 940TB in 6 months. Expect ~2.5PB in total from Durham.
- 26/09/17: The numbers of 8TB tapes being used by Cambridge/Durham/Edinburgh/Leicester are 45/149/41/16
- 26/09/17: Mark Wilkinson gave talk at GRIDPP39 https://indico.cern.ch/event/656544/contributions/2710500/attachments/1523457/2381245/DiRAC_GridPP_Meeting.pdf
All Closed actions can be found on the Vo.DiRAC.ac.uk_archived_actions page.
Action ID | Action description | Owner | Target date | Status | Date closed | Notes |
---|---|---|---|---|---|---|
VOI-DIRAC010 | Track data transfer from further sites | BD | 2016-11-29 | Open | | Following the vo.dirac.ac.uk transfer working group meeting, DiRAC has identified initial data from non-Durham sites which need to be transferred (~700TB per site; this needs verifying). 07/02/17 First test transfers from Cambridge and Leicester now succeeding. 23/05/2017 Edinburgh now transferring ~2TB/day. Durham testing recall from RAL to Durham. 25/09/2017 In the last three months, Cambridge/Edinburgh/Leicester have transferred 49/114/2633 files, equating to 6.5/10.4/41.6 TB of data. An issue was raised at GridPP39: the mechanism for updating data needs should be investigated. 31/01/2018 Current usage: Durham/Cambridge/Edinburgh/Leicester are using 149/45/41/40 8TB tapes respectively, approx 2.2PB of space in total. The last dates data was ingested from each site are 10-07-2017/17-08-2017/17-11-2017/04-12-2017. Erratum: this may be false, as timestamps on tape are "peculiar". Since CERN switched to the Grafana monitoring tool, we have lost the centralised plots for all small VOs. 27/02/2018 Polled DiRAC sites for any outstanding issues; none reported. 20/08/2018 No reported problems. |
VOI-DIRAC011 | Track tape volume usage at the Tier 1. | BD | 2018-04-25 | Open | | Old RT ticket in the Tier 1 helpdesk covers this: https://helpdesk.gridpp.rl.ac.uk/Ticket/Display.html?id=177668 Adding here to track. |
VOI-DIRAC012 | New page to show current FTS data transfers. | BD | 2018-04-25 | Open | | Since CERN deleted the dashboard pages which showed non-WLCG VO usage on the FTS servers, information on how transfers are succeeding is missing. A new method needs to be found. |
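Until a replacement for the deleted dashboards exists, transfer outcomes could be summarised directly from FTS job records. A minimal sketch, assuming records loosely shaped like FTS3 REST job listings (the `vo_name` and `job_state` field names and the sample data below are assumptions for illustration, not the actual RAL FTS output):

```python
from collections import Counter

def summarise_transfers(jobs):
    """Count FTS job final states per VO.

    `jobs` is a list of dicts with 'vo_name' and 'job_state' keys,
    loosely modelled on FTS3 REST job records (field names assumed).
    """
    summary = {}
    for job in jobs:
        summary.setdefault(job["vo_name"], Counter())[job["job_state"]] += 1
    return summary

# Hypothetical sample records for two small VOs.
sample = [
    {"vo_name": "vo.dirac.ac.uk", "job_state": "FINISHED"},
    {"vo_name": "vo.dirac.ac.uk", "job_state": "FAILED"},
    {"vo_name": "snoplus.snolab.ca", "job_state": "FINISHED"},
]
print(summarise_transfers(sample))
```

In practice the records would be fetched from the FTS server's REST interface; the counting logic is independent of how they are obtained, so it could feed a simple status page.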
EUCLID
Wholly dealt with by IRIS.
GalDyn
- PoC: ~~Tom Whyntie (TW)~~ Matt Doidge
- UCLan: ~~Adam Clarke (AC)~~ Victor Debattista
Action ID | Action description | Owner | Target date | Status | Date closed | Notes |
---|---|---|---|---|---|---|
VOI-GAL-001 | Assist GalDyn users with CernVM creation and testing. | TW | 2015-02-18 | Closed | 2015-02-18 | GalDyn users have successfully instantiated CernVMs for accessing the grid. |
VOI-GAL-002 | Assist GalDyn users with running test jobs on the Imperial DIRAC instance. | TW | 2015-02-23 | Closed | 2015-02-23 | GalDyn users have successfully run test jobs on the Imperial DIRAC instance via a GridPP CernVM. |
VOI-GAL-003 | Assist GalDyn users with compiling user software on the CernVM. | TW | 2015-03-08 | Closed | 2016-02-15 | The code compiles and runs, but needs to be put in a grid/CernVM-FS-friendly format. |
VOI-GAL-004 | Create the GalDyn CernVM-FS repository | TW, CC | 2015-05-05 | Closed | 2015-05-05 | New CernVM-FS repository galdyn.egi.eu has been created on the RAL Stratum-1 for the GalDyn VO. |
VOI-GAL-005 | (Re)new grid certificate for AC | AC | 2015-02-22 | Closed | 2015-03-15 | UK CA managed to renew the old certificate. Work on hold - user preparing for PhD viva! |
VOI-GAL-006 | Creating an account for a UCLan student on the Lancaster cluster | TW/Robin Long (Lancaster) | 2016-09-15 | Defunct | | The student (visiting from China) will pick up Adam's work on grid deployment for an upcoming paper. They will have a UCLan computing account but an account on the Lancaster cluster would further speed things up. Under discussion. 2016-11-16: TW emailed Victor D (group head) for an update. |
VOI-GAL-007 | Recontact GalDyn once new student starts | Matt D | 2017-09-27 | Closed | 2018-10-30 | Victor D from GalDyn was contacted and was interested in continuing the GridPP work. Waiting on a new student to start to take up the work. Will contact GalDyn towards the end of October to build some momentum. Feb 2018 update - GalDyn's work has been delayed, but they would like to complement their DIRAC workflow with GridPP resources. Closed after the new postdoc started and GalDyn was recontacted. |
VOI-GAL-008 | Discuss and define GalDyn's needs for using GridPP resources to supplement their DiRAC (HPC) work. | Matt D/Jeremy | 2018-02-26 | Open | | Victor has stated that he would like to use our resources to supplement their HPC (DiRAC) work. The orbit calculations described for this work are single-core jobs. 27/4 - Victor contacted us asking for help getting started on submitting jobs. |
VOI-GAL-009 | Keep communication with GalDyn open | Matt D | 2018-10-30 | Open | | Now that a new postdoc has started (from the 1st), we await him being able to move the work onto Grid resources - keep the pressure on and be ready to answer questions. |
LIGO
PoC: Catalin Condurache (CC)
LIGO: Paul Hopkins (PH)
Action ID | Action description | Owner | Target date | Status | Date closed | Notes |
---|---|---|---|---|---|---|
VOI-LIGO-001 | Create the LIGO CernVM-FS repository | CC | 2014-12-01 | Closed | 2015-02-15 | New CernVM-FS repository ligo.egi.eu has been created on the RAL Stratum-1 for the LIGO VO. |
VOI-LIGO-002 | Assist LIGO users with using Condor + nordugrid to access ARC-CE@RAL. | AL, CC | 2015-12 | Closed | 2016-02-12 | Test jobs submitted from LIGO Condor instance to ARC-CE service at RAL were successful. |
VOI-LIGO-003 | Plan to run proper analysis jobs with scientists' involvement | PH, CC | 2016-02-12 | Open | | [2016-05-24] Still chasing scientists to run analysis jobs. Some promises. |
VOI-LIGO-004 | Get file storage working via the GridPP CernVM. | PH, CC | 2016-02-24 | Closed | 2016-03-08 | PH managed to get file transfers working with the GridPP CernVM using bridged networking and getting the VM registered on the university network. |
VOI-LIGO-005 | Enable the OSG VO on RAL CEs and batch system. | AL | 2017-05-06 | Closed | 2017-05-23 | Job successfully being submitted to RAL, but have requested that single-core pilots are used rather than multi-core pilots, as LIGO are only running single-core jobs. We don't need to do anything else however. |
VOI-LIGO-006 | Get access to LIGO data via secure CVMFS working on RAL worker nodes | CC, AL | 2017-06 | Open | | |
LOFAR
PoC: George Ryall (GR), from April 2016 - Alex Dibbo (AD)
21/03/16: LOFAR should be in a position to perform an analysis run on a limited number of VMs with real data in the next few weeks. (GR)
25/05/16: Note that LOFAR is a VO supported under 'STFC' not GridPP. Communication with SCD cloud users is good. A recent issue with the cloud storage/CEPH has led to less recent activity.
31/01/18: The last update from them was that they thought they had enough to get running in January 2017. The last few times I have tried to get in touch I have received no response. They do not appear to have done anything since January 2017.
LSST
The LSST UK page.
All Closed action items can be found in the LSST_archived_actions page.
PoC: Alessandra Forti (AF)
Other people: Joe Zuntz (JZ), Andy Washbrook (AW), Steve Jones (SJ), Catalin Condurache (CC), Daniela Bauer (DB), Marcus Ebert (ME), Kashif Mohammad (KM), Dan Traynor (DT), Gareth Roy (GR), Matt Doidge (MD), Peter Love (PL), Pavlo Svirin (PS), Rob Currie (RC)
Action ID | Action description | Owner | Target date | Status | Last update | Notes |
---|---|---|---|---|---|---|
VOI-LSST-028 | Help James Perry to run DC2 | AF, DB, SF, UE, RC | | Closed. | 2018-06-01 | James is running DC2 jobs at a handful of UK sites, having installed the software on CVMFS and built his jobs to run multicore. |
VOI-LSST-029 | Help JP to increase the resources to run DC2 | AF, DB, SF, UE, RC | | Closed. | 2018-08-17 | George Beckett asked for more resources on 2018-07-20 because the LSST jobs are running quite well on the grid. Giving more resources to LSST has had some problems due to the way the jobs were brokered. The principal problems: DIRAC uses the closest-storage concept, so jobs were brokered only to sites with the data (Man, QM, IC); the default OS is SL6, so none of the C7 resources were used; and the jobs are multicore, but DIRAC doesn't target queues based on this. To solve the data problem, Ganga had to be modified to accept data LFNs in the input sandbox. Some sites like Manchester had to modify the queue ACLs to fix the flat brokering on all the queues, and some others had to increase the LSST priority. The number of sites that ran at least 2 jobs this week is now 8, among which RAL. |
VOI-LSST-030 | Reconfigure sites to point to the SLAC VOMS rather than FNAL | AF | | Open | 2018-10-22 | FNAL is trying to decommission its VOMS server and LSST, as a long-term solution, has decided to move to SLAC. Sites need to be reconfigured. I've added the information to the approved VO page. https://www.gridpp.ac.uk/wiki/GridPP_approved_VOs#LSST |
LZ
In Production: GridPP LZ page.
DUNE
The GridPP DUNE page.
PoC: Andrew McNab (AM)
Others: Peter Clarke (PC), Stephen Jones (SJ), Raja Nandakumar (RN), Jaroslaw Nowak (JN), Andrew Washbrook (AW)
Action ID | Action description | Owner | Target date | Status | Last update | Notes |
---|---|---|---|---|---|---|
VOI-DUNE-001 | Review job submission to UK from FNAL | AM | 2018-05-31 | Completed | 2018-05-15 | Existing Imperial and Sheffield jobs are centrally managed MC and some MC data processing |
VOI-DUNE-002 | protoDUNE data storage estimating | AM/PC | 2018-05-31 | Completed | 2018-05-15 | DUNE centrally would like ~2PB in UK, which we believe to be feasible using GridPP+IRIS capacity |
VOI-DUNE-003 | Get DUNE jobs running on ARC | SJ/AM | 2018-05-31 | Completed | 2018-06-22 | Production jobs working at LIV, MAN, and RAL via ARC. |
VOI-DUNE-004 | Get DUNE storage access working | AM | 2018-05-31 | Completed | 2018-06-22 | Enabled LIV and MAN storage and tested from FNAL |
VOI-DUNE-005 | Recruit more sites for compute | AM | 2018-12-25 | Open | 2018-06-22 | DUNE needs more sites for MC, and CPU-only is sufficient (Currently, LIV, MAN, RAL, LANC, IC, ED) |
VOI-DUNE-006 | Provide 2PB of storage and 1500 processors for protoDUNE data taking | AM | 2018-12-25 | Completed | 2018-11-06 | Continue testing and recruiting sites willing to provide storage. ~1PB at Manchester. RAL and ED coming online. More needed during November to reach 2PB. |
na62.vo.gridpp.ac.uk
In production:
NA62 Monitoring server.
They welcome more sites supporting them.
July 2018: NA62 wants to use Mainz. Tracked in GGUS 135805. Waiting for Mainz to configure their site.
October 2017: NA62 wants to use resources at CERN, which requires the GridPP DIRAC to submit to HTCondor. This required major changes to the DIRAC server and liaising with CERN to get NA62 enabled correctly. This is now done; as it turned out, the problem wasn't HTCondor on DIRAC, but CERN's difficulties in installing a new VO.
PRaVDA
- PoC: Mark Slater/Matt Williams (MS/MW)
- End User: Tony Price (TP)
- Update requested 3rd Feb., 19th Feb. 2016 by TW. TP replied 2016-03-21 - they have been busy building the actual device!
- TP is in the process of changing roles. Need to finalise the new end user.
- 20th Feb: Asked TP about status - currently processing data so need of simulations reduced. Will pick up again when more simulation required.
Action ID | Action description | Owner | Target date | Status | Date closed | Notes |
---|---|---|---|---|---|---|
VOI-PRA-001 | Get PRaVDA up and running with DIRAC and Ganga. | MS/MW | 2015-10-01 | Closed | 2016-03-21 | TP has successfully got simulations running using DIRAC and Ganga. |
VOI-PRA-002 | Issues with DIRAC, Ganga and LFN names when copying data back. | MS/MW | 2015-03-21 | Closed | 2015-08-01 | MS/MW assisting on the Ganga side. |
VOI-PRA-003 | TP changing roles. Need to make contact with new end user. | MS | 2016-05-23 | Closed | 2017-02-20 | End user did not make contact. TP still working with them so will use him as PoC for the moment |
VOI-PRA-004 | Waiting on data processing to be completed and more simulations to be required. | MS | 2017-02-20 | Closed | 2017-11-06 | |
VOI-PRA-005 | Got an update from TP. PRaVDA are still wanting to do more work on the grid, but at present there is no manpower. Will check again in a few months. | MS | 2017-11-06 | In Progress | | |
SKA Regional Centre
- GridPP SKA regional centre information
- PoC: Alessandra Forti (AF)
- Previous contacts: Rohini Joshi (RJ), Andrew McNab (AM)
- VO: skatelescope.eu
Action ID | Action description | Owner | Target date | Status | Last update | Notes |
---|---|---|---|---|---|---|
VOI-SKA-001 | LOFAR tests for SKA with DIRAC | RJ/AM | 2017-12-31 | Open | 2017-10-10 | LOFAR application now running from cvmfs at LCG.UKI-NORTHGRID-MAN-HEP.uk, with input jobs matched to input data in GridPP DIRAC File Catalog, stored on DPM. Working on mass input of data from Groningen via grid jobs (wget with password then DIRAC data management commands) |
VOI-SKA-002 | LOFAR tests for SKA on DIRAC/OpenStack | AM | 2017-12-31 | Done | 2017-10-18 | GridPP DIRAC SAM tests for skatelescope.eu run at DataCentred. Manchester storage associated with DataCentred in GridPP DIRAC so it matches SKA/LOFAR jobs. |
VOI-SKA-003 | Add more sites with skatelescope.eu VO / GridPP DIRAC | AM | 2017-12-31 | Open | 2017-10-03 | QMUL joins Manchester, Imperial, Cambridge in passing SAM tests and providing storage. More volunteer sites welcome: need 10-20TB on grid storage that is set up in GridPP DIRAC. |
VOI-SKA-004 | File replication across 100Gb/s JBO to London link | AM | 2018-03-31 | Open | 2017-10-31 | Plan: set up endpoint machines as DIRAC SEs; do DIRAC file replications between them, over the 100Gb/s link registered in the DIRAC Replica Catalogue. Use DIRAC DMS for this directly at first, then use DMS to schedule RAL FTS for this. |
VOI-SKA-005 | Run DIRAC jobs in large (whole node?) VMs | AM/RJ | 2018-03-31 | Open | 2018-03-06 | Provide VMs which can run SKA DIRAC jobs with "lots" of memory. Probably >= 48 GB. |
VOI-SKA-006 | Activate/use skatelescope.eu in GGUS | AM/RJ | 2018-03-31 | Open | 2018-03-13 | Test ticket from GGUS team processed. RJ will use GGUS to communicate with sites, as part of AENEAS evaluation of infrastructure tools |
VOI-SKA-007 | Test of the Transformation System | DB/SF/RJ | 2018-12-31 | Open | 2018-07-03 | Test transformation system setup for SKA on dirac test server. Still hunting down bugs wrt v6r20 and waiting for full multi-VO implementation, possibly in v6r21 |
t2k.org
Sophie King at King's and Lukas Koch at RAL. They know their way around the grid. Daniela can take messages.
snoplus.snolab.ca
In production
- PoC: Pete Gronbech (PG)
- SNO+: Karin Gilje (snoplus_vosupport - at - snolab.ca)
Action ID | Action description | Owner | Target date | Status | Last update | Notes |
---|---|---|---|---|---|---|
VOI-SNO+-001 | Check on progress via GridPP-Support list. | PG | 2016-02-17 | Closed | 2016-03-10 | See VOI-SNO+-003 - success, closing this for now. |
VOI-SNO+-002 | MM to join GridPP Storage meeting | PG | 2016-02-24 | Closed | 2016-02-24 | MM joined the meeting to discuss requirements and various options. See minutes. |
VOI-SNO+-003 | MM to transfer files out of the SNO+ cavern via an FTP server. | PG | 2016-02-17 | Closed | 2016-03-10 | Success after fantastic support/discussion on the GRIDPP-SUPPORT mailing list. |
VOI-SNO+-004 | Check on progress via GridPP-Support list (16th May). | PG | 2016-05-31 | Closed. Will be reopened if there is demand. | 2017-06-06 | |
VOI-SNO+-005 | SNO+ is making increasing use of resources at RAL. Tape allocation increased to 100TB; CPU usage ~300 HS06-hours per month. | PG | 2018-05-22 | Informational. | 2018-05-22 | |
VOI-SNO+-006 | SNO+ needs to migrate away from the LFC. They (Karin Gilje) have been put in contact with Alastair at the Tier-1 for help. | PG | 2018-09-01 | Informational. | 2018-10-16 | |
SuperNEMO
Paolo Franchini and Julia Sedgbeer at Imperial. Daniela as the liaison (as I get asked anyway). SuperNEMO has no plans to use grid resources at this time, except for transfers to the Imperial SE, after which they intend to use xrootd.
DEAP3600
PoC: Jeremy Coles (JC)
- Update requested February 2016.
- No response as of 21st March 2016
- 24th May: Awaiting main local user at RHUL to begin activities.
- 23rd Aug: DEAP3600 will generate around 10TB of (calibrated) data per year for 5 years, starting this year I think. The original (much larger) raw data are backed up on tape in Canada, but the calibrated data are not. For reasons of backup and access, we were hoping it would be possible to get these 50TB of calibrated data stored at the Tier0 at RAL.
The model would be only to use RAL for custodial data storage and to copy data as needed to Tier2 sites such as RHUL for analysis. There will also be around 60 TB (possible x 2 generations) of MC which will be kept only at a Tier2 because it can be regenerated in case it is lost.
- 25th Sept 17: No current activity. Query sent.
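The storage figures quoted above can be cross-checked with back-of-envelope arithmetic, using only the quantities stated in the bullets (the "x 2 generations" upper bound is the "possible" figure, not a commitment):

```python
# Back-of-envelope check of the DEAP3600 storage estimates quoted above.
calibrated_tb_per_year = 10                    # calibrated data per year
years = 5
custodial_tb = calibrated_tb_per_year * years  # custodial copy at the Tier-0 (RAL)
mc_tb_per_generation = 60                      # MC kept at a Tier-2 only
mc_upper_tb = mc_tb_per_generation * 2         # "possible x 2 generations"
print(custodial_tb, mc_upper_tb)               # 50 120
```

This reproduces the 50TB custodial figure, with up to 120TB of regenerable MC held at Tier-2 sites.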
UKQCD
PoC: Jeremy Coles (JC)
- Update requested February 2016.
- Update 21st March: "Hoping to do more with the gridpp resources".
- Preliminary result in conference proceedings - "Investigating Some Technical Improvements to Glueball Calculations" e-Print: arXiv:1511.09303.
- 24th May: Will try and leverage some of the international lattice data grid stuff. Nothing immediate planned.
- 22nd August: No recent activity or planned activities.
- 25th Sept 17: No further work planned.