GridPP VO Incubator

From GridPP Wiki

This page is for monitoring the progress of new(ish) GridPP [[Virtual Organisation|VOs]].

* PoC - Point of Contact

Please consider archiving Closed/Completed issues, so that we can see what we need to work on. It is straightforward to create a page for your VO.

==All new VOs==

These tasks will need to be completed for all new VOs.

{|border="1" cellpadding="1"
|-style="background:#7C8AAF;color:white"
!Action ID
!Action description
!Owner
!Target date
!Status
!Date closed
!Notes

|-
|VOI-GEN-001
|Deploy test software to the RVO CernVM-FS repositories.
|Duncan, Daniela, Gareth, Alessandra, Ewan
|2015-05-26
|Closed
|2016-08-23
|New users in the [[Regional VO]]s will need to run test jobs using software in the [[Regional VO|RVO]] CernVM-FS repositories. This test software will need to be uploaded by the [[Regional VO|RVO]] admins. Instructions for doing this can be found [[A_quick_guide_to_CVMFS|here]]. Tested for vo.londongrid.ac.uk (--Daniela)

|-
|VOI-GEN-002
|Write up the VO registration procedure
|Tom
|2015-05-31
|Closed
|2016-08-23
|Guide started [[Start_Here_-_Creating_a_new_VO|here]] - comments and feedback appreciated. Use the GridPP guide.

|}
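The upload step referenced in VOI-GEN-001 can be sketched roughly as follows. This is a minimal, hedged sketch assuming shell access to the Stratum-0 machine hosting the repository; the repository name, the staging directory, and the `DRYRUN` wrapper are all illustrative, not part of the documented procedure — see the linked CVMFS guide for the authoritative steps.

```shell
# Hedged sketch of publishing test software to an RVO CernVM-FS repository.
# The repository name below is a placeholder, not a real RVO repository.
REPO="vo.londongrid.ac.uk.example"   # hypothetical repository name

# With DRYRUN=1 (the default here) commands are printed rather than
# executed, so the sequence can be reviewed without a real Stratum-0.
run() {
    if [ "${DRYRUN:-1}" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run cvmfs_server transaction "$REPO"        # open a writable transaction
run cp -r ./test-software "/cvmfs/$REPO/"   # stage the files into the repo
run cvmfs_server publish "$REPO"            # sign and publish the new revision
```

Run for real (on a Stratum-0 where `cvmfs_server` exists) by setting `DRYRUN=0`.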
  
==vo.DiRAC.ac.uk==

''PoC: Jens Jensen, Brian Davies''

* 03/05/16: Durham have now moved 940TB in 6 months. Expect ~2.5PB in total from Durham.
* 26/09/17: The numbers of 8TB tapes being used by Cambridge/Durham/Edinburgh/Leicester are 45/149/41/16.
* 26/09/17: Mark Wilkinson gave a talk at GridPP39: https://indico.cern.ch/event/656544/contributions/2710500/attachments/1523457/2381245/DiRAC_GridPP_Meeting.pdf

All '''Closed''' actions can be found on the [[Vo.DiRAC.ac.uk_archived_actions]] page.

{|border="1" cellpadding="1"
|-style="background:#7C8AAF;color:white"
!Action ID
!Action description
!Owner
!Target date
!Status
!Date closed
!Notes

|-
|VOI-DIRAC010
|Track data transfers from further sites
|BD
|2016-11-29
|Open
|
|Following the vo.dirac.ac.uk transfer working group meeting, DiRAC have identified initial data from non-Durham sites which need to be transferred (~700TB per site; this needs verifying).

07/02/17: First test transfers from Cambridge and Leicester now succeeding.

23/05/2017: Edinburgh now transferring ~2TB/day. Durham testing recall from RAL to Durham.

25/09/2017: In the last three months, Cambridge/Edinburgh/Leicester have transferred 49/114/2633 files, equating to volumes of 6.5/10.4/41.6 TB of data. Issue raised at GridPP39: the mechanism for updating data needs to be investigated.

31/01/2018: Current usage: Durham/Cambridge/Edinburgh/Leicester are using 149/45/41/40 8TB tapes respectively; approximately 2.2PB of space used. The last dates on which data was ingested from each site are 10-07-2017/17-08-2017/17-11-2017/04-12-2017. Errata: this may be false, as timestamps on tape are "peculiar".

Since CERN switched to the Grafana monitoring tool, we have lost the production of centralised plots for all small VOs.

27/02/2018: Polled DiRAC sites for any outstanding issues. None reported.

20/08/2018: No reported problems.

|-
|VOI-DIRAC011
|Track tape volume usage at the Tier 1
|BD
|2018-04-25
|Open
|
|Old RT ticket in the Tier 1 helpdesk to cover this: https://helpdesk.gridpp.rl.ac.uk/Ticket/Display.html?id=177668 Adding here to track.

|-
|VOI-DIRAC012
|New page to show current FTS data transfers
|BD
|2018-04-25
|Open
|
|Since CERN deleted the dashboard pages which showed non-WLCG VO usage on the FTS servers, information on how transfers are succeeding is missing. Need to find a new method.

|}
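As a quick sanity check, the tape counts quoted in the 31/01/2018 update of VOI-DIRAC010 are consistent with the quoted total:

```shell
# Sanity check of the tape figures in the 31/01/2018 update above:
# Durham/Cambridge/Edinburgh/Leicester hold 149/45/41/40 tapes of 8TB each.
durham=149; cambridge=45; edinburgh=41; leicester=40
tape_size_tb=8
total_tb=$(( (durham + cambridge + edinburgh + leicester) * tape_size_tb ))
echo "${total_tb} TB"   # 2200 TB, i.e. ~2.2PB, matching the quoted "approx 2.2PB"
```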
  
 
==EUCLID==

Wholly dealt with by IRIS.
  
 
==GalDyn==

* ''PoC: <strike>Tom Whyntie (TW)</strike> Matt Doidge''
* ''UCLan: <strike>Adam Clarke (AC)</strike> Victor Debattista''

{|border="1" cellpadding="1"
|-style="background:#7C8AAF;color:white"
!Action ID
!Action description
!Owner
!Target date
!Status
!Date closed
!Notes

|-
|VOI-GAL-001
|Assist GalDyn users with CernVM creation and testing.
|TW
|2015-02-18
|Closed
|2015-02-18
|GalDyn users have successfully instantiated CernVMs for accessing the grid.

|-
|VOI-GAL-002
|Assist GalDyn users with running test jobs on the Imperial DIRAC instance.
|TW
|2015-02-23
|Closed
|2015-02-23
|GalDyn users have successfully run test jobs on the Imperial DIRAC instance via a GridPP CernVM.

|-
|VOI-GAL-003
|Assist GalDyn users with compiling user software on the CernVM.
|TW
|2015-03-08
|Closed
|2016-02-15
|The code compiles and runs, but needs to be put in a grid/CernVM-FS-friendly format.

|-
|VOI-GAL-004
|Create the GalDyn CernVM-FS repository
|TW, CC
|2015-05-05
|Closed
|2015-05-05
|New CernVM-FS repository galdyn.egi.eu has been created on the RAL Stratum-1 for the GalDyn VO.

|-
|VOI-GAL-005
|(Re)new grid certificate for AC
|AC
|2015-02-22
|Closed
|2015-03-15
|UK CA managed to renew the old certificate. ''Work on hold - user preparing for PhD viva!''

|-
|VOI-GAL-006
|Creating an account for a UCLan student on the Lancaster cluster
|TW/Robin Long (Lancaster)
|2016-09-15
|Defunct
|
|The student (visiting from China) will pick up Adam's work on grid deployment for an upcoming paper. They will have a UCLan computing account, but an account on the Lancaster cluster would further speed things up. Under discussion. 2016-11-16: TW emailed Victor D (group head) for an update.

|-
|VOI-GAL-007
|Recontact GalDyn once the new student starts
|Matt D
|2017-09-27
|Closed
|2018-10-30
|Victor D from GalDyn was contacted and was interested in continuing the GridPP work. Waiting on a new student to start to take up the work. Will contact GalDyn towards the end of October to build some momentum. Feb 2018 update: GalDyn's work has been delayed, but they would like to complement their DiRAC workflow with GridPP resources. Closed after the new postdoc started and GalDyn was recontacted.

|-
|VOI-GAL-008
|Discuss and define GalDyn's needs for using GridPP resources to supplement their DiRAC (HPC) work.
|Matt D/Jeremy
|2018-02-26
|Open
|
|Victor has stated that he would like to use our resources to supplement their HPC (DiRAC) work. The orbit calculations described for this work are single-core jobs. 27/4: Victor contacted us asking for help getting started on submitting jobs.

|-
|VOI-GAL-009
|Keep communication with GalDyn open
|Matt D
|2018-10-30
|Open
|
|Now that a new postdoc has started (from the first), we await him being able to move the work onto grid resources - keep pressure on and be ready to answer questions.

|}
  
==LIGO==

''PoC: Catalin Condurache (CC)''<br>
''LIGO: Paul Hopkins (PH)''<br>

{|border="1" cellpadding="1"
|-style="background:#7C8AAF;color:white"
!Action ID
!Action description
!Owner
!Target date
!Status
!Date closed
!Notes

|-
|VOI-LIGO-001
|Create the LIGO CernVM-FS repository
|CC
|2014-12-01
|Closed
|2015-02-15
|New CernVM-FS repository ligo.egi.eu has been created on the RAL Stratum-1 for the LIGO VO.

|-
|VOI-LIGO-002
|Assist LIGO users with using Condor + NorduGrid to access the ARC-CE at RAL.
|AL, CC
|2015-12
|Closed
|2016-02-12
|Test jobs submitted from the LIGO Condor instance to the ARC-CE service at RAL were successful.

|-
|VOI-LIGO-003
|Plan to run proper analysis jobs with the scientists' involvement
|PH, CC
|2016-02-12
|Open
|
|2016-05-24: Still chasing scientists to run analysis jobs. Some promises.

|-
|VOI-LIGO-004
|Get file storage working via the GridPP CernVM.
|PH, CC
|2016-02-24
|Closed
|2016-03-08
|PH managed to get file transfers working with the GridPP CernVM by using bridged networking and getting the VM registered on the university network.

|-
|VOI-LIGO-005
|Enable the OSG VO on the RAL CEs and batch system.
|AL
|2017-05-06
|Closed
|2017-05-23
|Jobs are successfully being submitted to RAL, but we have requested that single-core pilots are used rather than multi-core pilots, as LIGO are only running single-core jobs. We don't need to do anything else, however.

|-
|VOI-LIGO-006
|Get access to LIGO data via secure CVMFS working on the RAL worker nodes
|CC, AL
|2017-06
|Open
|
|

|}

==LOFAR==

''PoC: George Ryall (GR); from April 2016, Alex Dibbo (AD)''<br>
* 21/03/16: LOFAR should be in a position to perform an analysis run on a limited number of VMs with real data in the next few weeks. ''(GR)''
* 25/05/16: Note that LOFAR is a VO supported under 'STFC', not GridPP. Communication with SCD cloud users is good. A recent issue with the cloud storage/Ceph has led to less recent activity.
* 31/01/18: The last update from them was that they thought they had enough to get running in January 2017. The last few times I have tried to get in touch I have received no response. They don't appear to have done anything since January 2017.

==LSST==

The [https://www.gridpp.ac.uk/wiki/LSST_UK LSST UK] page. <br>
All '''Closed''' action items can be found on the [[LSST_archived_actions]] page.

''PoC: Alessandra Forti (AF)''<br>
''Other people: Joe Zuntz (JZ), Andy Washbrook (AW), Steve Jones (SJ), Catalin Condurache (CC), Daniela Bauer (DB), Marcus Ebert (ME), Kashif Mohammad (KM), Dan Traynor (DT), Gareth Roy (GR), Matt Doidge (MD), Peter Love (PL), Pavlo Svirin (PS), Rob Currie (RC)''

{|border="1" cellpadding="1"
|-style="background:#7C8AAF;color:white"
!Action ID
!Action description
!Owner
!Target date
!Status
!Last update
!Notes

|-
|VOI-LSST-028
|Help James Perry to run DC2
|AF, DB, SF, UE, RC
|
|Closed
|2018-06-01
|James is running DC2 jobs at a handful of UK sites, after installing the software on CVMFS and building his jobs to run multicore.

|-
|VOI-LSST-029
|Help JP to increase the resources to run DC2
|AF, DB, SF, UE, RC
|
|Closed
|2018-08-17
|George Beckett asked for more resources on 2018-07-20 because the LSST jobs are running quite well on the grid. Giving more resources to LSST has had some problems due to the way the jobs were brokered. The principal problems were: DIRAC uses the closest-storage concept, so jobs were brokered only to sites with the data (Man, QM, IC); the default OS is SL6, so none of the C7 resources were used; and the jobs are multicore, but DIRAC doesn't target queues based on this. To solve the data problem, Ganga had to be modified to accept data LFNs in the input sandbox. Some sites, like Manchester, had to modify their queue ACLs to fix the flat brokering across all the queues, and some others had to increase the LSST priority. The number of sites that ran at least 2 jobs this week is now 8, among which is RAL.

|-
|VOI-LSST-030
|Reconfigure sites to point to the SLAC VOMS rather than FNAL
|AF
|
|Open
|2018-10-22
|FNAL is trying to decommission its VOMS server, and LSST has decided to move to SLAC as a long-term solution. Sites need to be reconfigured. I've added the information to the approved VO page: https://www.gridpp.ac.uk/wiki/GridPP_approved_VOs#LSST

|}

==<span style="color: green">LZ</span>==
<span style="color: green">In Production:</span> GridPP [https://www.gridpp.ac.uk/wiki/LZ LZ] page.

==DUNE==

The [[DUNE|GridPP DUNE]] page. <br>

PoC: Andrew McNab (AM)<br>
Others: Peter Clarke (PC), Stephen Jones (SJ), Raja Nandakumar (RN), Jaroslaw Nowak (JN), Andrew Washbrook (AW)

{|border="1" cellpadding="1"
|-style="background:#7C8AAF;color:white"
!Action ID
!Action description
!Owner
!Target date
!Status
!Last update
!Notes

|-
|VOI-DUNE-001
|Review job submission to the UK from FNAL
|AM
|2018-05-31
|Completed
|2018-05-15
|Existing Imperial and Sheffield jobs are centrally managed MC and some MC data processing.

|-
|VOI-DUNE-002
|protoDUNE data storage estimation
|AM/PC
|2018-05-31
|Completed
|2018-05-15
|DUNE centrally would like ~2PB in the UK, which we believe to be feasible using GridPP+IRIS capacity.

|-
|VOI-DUNE-003
|Get DUNE jobs running on ARC
|SJ/AM
|2018-05-31
|Completed
|2018-06-22
|Production jobs working at LIV, MAN, and RAL via ARC.

|-
|VOI-DUNE-004
|Get DUNE storage access working
|AM
|2018-05-31
|Completed
|2018-06-22
|Enabled LIV and MAN storage and tested from FNAL.

|-
|VOI-DUNE-005
|Recruit more sites for compute
|AM
|2018-12-25
|Open
|2018-06-22
|DUNE needs more sites for MC; CPU-only is sufficient. (Currently LIV, MAN, RAL, LANC, IC, ED.)

|-
|VOI-DUNE-006
|Provide 2PB of storage and 1500 processors for protoDUNE data taking
|AM
|2018-12-25
|Completed
|2018-11-06
|Continue testing and recruiting sites willing to provide storage. ~1PB at Manchester; RAL and ED coming online. More needed during November to reach 2PB.

|}

==<span style="color:green">na62.vo.gridpp.ac.uk</span>==
<span style="color:green">In production:</span>
[https://na62.gla.ac.uk/index.php?task=stats&view=sites NA62 Monitoring server.] <br>
They welcome more sites supporting them. <br>
July 2018: NA62 wants to use Mainz. Tracked in GGUS 135805. Waiting for Mainz to configure their site. <br>
October 2017: NA62 wanted to use resources at CERN, which required the GridPP DIRAC to submit to HTCondor. This required major changes to the DIRAC server and liaising with CERN to get NA62 enabled correctly. This is now done; as it turned out, the problem wasn't HTCondor on DIRAC, but CERN's difficulties in installing a new VO.

==PRaVDA==

* ''PoC: Mark Slater/Matt Williams (MS/MW)''
* ''End User: Tony Price (TP)''
* Update requested 3rd Feb. and 19th Feb. 2016 by TW. TP replied 2016-03-21 - they have been busy building the actual device!
* TP in the process of changing roles. Need to finalise the new end user.
* 20th Feb: Asked TP about status - currently processing data, so the need for simulations is reduced. Will pick up again when more simulation is required.

{|border="1" cellpadding="1"
|-style="background:#7C8AAF;color:white"
!Action ID
!Action description
!Owner
!Target date
!Status
!Date closed
!Notes

|-
|VOI-PRA-001
|Get PRaVDA up and running with DIRAC and Ganga.
|MS/MW
|2015-10-01
|Closed
|2016-03-21
|TP has successfully got simulations running using DIRAC and Ganga.

|-
|VOI-PRA-002
|Issues with DIRAC, Ganga and LFN names when copying data back.
|MS/MW
|2015-03-21
|Closed
|2015-08-01
|MS/MW assisting on the Ganga side.

|-
|VOI-PRA-003
|TP changing roles. Need to make contact with the new end user.
|MS
|2016-05-23
|Closed
|2017-02-20
|The end user did not make contact. TP is still working with them, so we will use him as PoC for the moment.

|-
|VOI-PRA-004
|Waiting on data processing to be completed and more simulations to be required.
|MS
|2017-02-20
|Closed
|2017-11-06
|

|-
|VOI-PRA-005
|Got an update from TP. PRaVDA are still wanting to do more work on the grid, but at present there is no manpower. Will check again in a few months.
|MS
|2017-11-06
|In Progress
|
|

|}

==SKA Regional Centre==
* [[SKA Regional Centre|GridPP SKA regional centre information]]
* PoC: Alessandra Forti (AF)
* Previous contacts: Rohini Joshi (RJ), Andrew McNab (AM)
* VO: skatelescope.eu

{|border="1" cellpadding="1"
|-style="background:#7C8AAF;color:white"
!Action ID
!Action description
!Owner
!Target date
!Status
!Last update
!Notes

|-
|VOI-SKA-001
|LOFAR tests for SKA with DIRAC
|RJ/AM
|2017-12-31
|Open
|2017-10-10
|The LOFAR application is now running from CVMFS at LCG.UKI-NORTHGRID-MAN-HEP.uk, with input jobs matched to input data in the GridPP DIRAC File Catalog, stored on DPM. Working on mass input of data from Groningen via grid jobs (wget with password, then DIRAC data management commands).

|-
|VOI-SKA-002
|LOFAR tests for SKA on DIRAC/OpenStack
|AM
|2017-12-31
|Done
|2017-10-18
|GridPP DIRAC SAM tests for skatelescope.eu run at DataCentred. Manchester storage is associated with DataCentred in GridPP DIRAC so that it matches SKA/LOFAR jobs.

|-
|VOI-SKA-003
|Add more sites with the skatelescope.eu VO / GridPP DIRAC
|AM
|2017-12-31
|Open
|2017-10-03
|QMUL joins Manchester, Imperial and Cambridge in passing SAM tests and providing storage. More volunteer sites welcome: we need 10-20TB of grid storage that is set up in GridPP DIRAC.

|-
|VOI-SKA-004
|File replication across the 100Gb/s JBO-to-London link
|AM
|2018-03-31
|Open
|2017-10-31
|Plan: set up the endpoint machines as DIRAC SEs; do DIRAC file replications between them over the 100Gb/s link, registered in the DIRAC Replica Catalogue. Use the DIRAC DMS for this directly at first, then use the DMS to schedule RAL FTS for it.

|-
|VOI-SKA-005
|Run DIRAC jobs in large (whole-node?) VMs
|AM/RJ
|2018-03-31
|Open
|2018-03-06
|Provide VMs which can run SKA DIRAC jobs with "lots" of memory, probably >= 48 GB.

|-
|VOI-SKA-006
|Activate/use skatelescope.eu in GGUS
|AM/RJ
|2018-03-31
|Open
|2018-03-13
|Test ticket from the GGUS team processed. RJ will use GGUS to communicate with sites, as part of the AENEAS evaluation of infrastructure tools.

|-
|VOI-SKA-007
|Test of the Transformation System
|DB/SF/RJ
|2018-12-31
|Open
|2018-07-03
|Test Transformation System setup for SKA on the DIRAC test server. Still hunting down bugs wrt v6r20, and waiting for the full multi-VO implementation, possibly in v6r21.

|}
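The "wget with password, then DIRAC data management commands" pattern mentioned in VOI-SKA-001 might look roughly like the sketch below. The URL, LFN, credentials, and storage element name are all placeholders (not real values from the project), and the `DRYRUN` wrapper only prints the commands so the sketch can be reviewed without grid credentials.

```shell
# Hedged sketch of the mass data-input pattern described in VOI-SKA-001:
# fetch a file from Groningen over password-protected HTTP, then upload and
# register it with the DIRAC data management CLI. The URL, LFN, credentials
# and storage element below are placeholders, not real values.
SRC_URL="https://lofar-archive.example.org/obs/L123456.tar"  # placeholder URL
LFN="/skatelescope.eu/user/l/lofar/L123456.tar"              # placeholder LFN
SE="UKI-NORTHGRID-MAN-HEP-disk"                              # placeholder SE
LOFAR_USER="${LOFAR_USER:-changeme}"
LOFAR_PASS="${LOFAR_PASS:-changeme}"

# With DRYRUN=1 (the default here) commands are printed, not executed,
# since wget and dirac-dms-add-file need real services and credentials.
run() {
    if [ "${DRYRUN:-1}" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run wget --user "$LOFAR_USER" --password "$LOFAR_PASS" \
    -O "$(basename "$LFN")" "$SRC_URL"
run dirac-dms-add-file "$LFN" "$(basename "$LFN")" "$SE"
```

`dirac-dms-add-file <LFN> <local file> <SE>` uploads the file to the storage element and registers it in the DIRAC File Catalog in one step.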
  
==t2k.org==
Contacts: Sophie King at King's and Lukas Koch at RAL. They know their way around the grid. Daniela can take messages.
  
==<span style="color:green">snoplus.snolab.ca</span>==
<span style="color:green">In production</span> <br>
* ''PoC: Pete Gronbech'' (PG)
* ''SNO+: Karin Gilje'' (snoplus_vosupport - at - snolab.ca)

{|border="1" cellpadding="1"
|-style="background:#7C8AAF;color:white"
!Action ID
!Action description
!Owner
!Target date
!Status
!Last update
!Notes

|-
|VOI-SNO+-001
|Check on progress via the GridPP-Support list.
|PG
|2016-02-17
|Closed
|2016-03-10
|See VOI-SNO+-003 - success, closing this for now.

|-
|VOI-SNO+-002
|MM to join the GridPP Storage meeting
|PG
|2016-02-24
|Closed
|2016-02-24
|MM joined the meeting to discuss requirements and various options. See minutes.

|-
|VOI-SNO+-003
|MM to transfer files out of the SNO+ cavern via an FTP server.
|PG
|2016-02-17
|Closed
|2016-03-10
|Success, after fantastic support/discussion on the GRIDPP-SUPPORT mailing list.

|-
|VOI-SNO+-004
|Check on progress via the GridPP-Support list (16th May).
|PG
|2016-05-31
|Closed. Will be reopened if there is demand.
|2017-06-06
|

|-
|VOI-SNO+-005
|SNO+ are making increasing use of resources at RAL. Tape allocation increased to 100TB; CPU usage ~300 HS06-hours per month.
|PG
|2018-05-22
|Informational
|2018-05-22
|

|-
|VOI-SNO+-006
|SNO+ need to migrate away from the LFC. They (Karin Gilje) have been put in contact with Alastair at the Tier-1 for help.
|PG
|2018-09-01
|Informational
|2018-10-16
|

|}
 
 +
==SuperNEMO==
 +
Paolo Franchini  and Julia Sedgbeer at Imperial.
 +
Daniela as the liaison (as I get asked anyway).
 +
Supernemo has no plans to use grid resources at this time, except for transfers to the Imperial SE, after which they intend to use xrootd.
 +
 
 +
==DEAP3600==

''PoC: Jeremy Coles (JC)''
* Update requested February 2016.
* No response as of 21st March 2016.
* 24th May: Awaiting the main local user at RHUL to begin activities.
* 23rd Aug: DEAP3600 will generate around 10TB of (calibrated) data per year for 5 years, starting this year (I think). The original (much larger) raw data are backed up on tape in Canada, but the calibrated data are not. For reasons of backup and access, we were hoping it would be possible to get these 50TB of calibrated data stored at the Tier 0 at RAL. The model would be to use RAL only for custodial data storage, and to copy data as needed to Tier 2 sites such as RHUL for analysis. There will also be around 60TB (possibly x2 generations) of MC, which will be kept only at a Tier 2 because it can be regenerated if lost.
* 25th Sept 17: No current activity. Query sent.
  
 
==UKQCD==

''PoC: Jeremy Coles (JC)''
* Update requested February 2016.
* Update 21st March: "Hoping to do more with the GridPP resources".
** Preliminary result in conference proceedings - "Investigating Some Technical Improvements to Glueball Calculations", e-Print: arXiv:1511.09303.
* 24th May: Will try to leverage some of the International Lattice Data Grid work. Nothing immediate planned.
* 22nd August: No recent activity or planned activities.
* 25th Sept 17: No further work planned.

Latest revision as of 12:14, 27 April 2020

This page is for monitoring the progress of new(ish) GridPP VOs.

  • PoC - Point of Contact

Please consider archiving Closed/Completed issues, so that we can see what we need to work on. It is straight forward to create a page for your VO.


All new VOs

These tasks will need to be completed for all

Action ID Action description Owner Target date Status Date closed Notes
VOI-GEN-001 Deploy test software to RVO CernVM-FS repositories. Duncan, Daniela, Gareth, Alessandra, Ewan 2015-05-26 Closed 2016-08-23 New users in the Regional VOs will need to run test jobs using software in the RVO CernVM-FS repositories. This test software will need to be uploaded by the RVO admins. Instructions for doing this can be found here. Tested for vo.londongrid.ac.uk (--Daniela)
VOI-GEN-002 Write up the VO registration procedure Tom 2015-05-31 Closed 2016-08-23 Guide started here - comments and feedback appreciated. Use gridpp guide.


vo.DiRAC.ac.uk

PoC: Jens Jensen, Brian Davies

All Closed actions can be found on the Vo.DiRAC.ac.uk_archived_actions page.

Action ID Action description Owner Target date Status Date closed Notes
VOI-DIRAC010 Track Data transfer from further Sites BD 2016-11-29 Open Following vo.dirac.ac.uk transfer working group meeting. dirac have identified initial data from non Durham sites which need to be transferred. ( ~700TB per site this needs verifying.)

07/02/17 First test transfers from Cambridge and Leicester now succeeding.

23/05/2017 Edinburgh now transferring ~2TB/day. Durham testing recall from RAL to Durham.

25/09/2017 In last three months, Cambridge/Edinburgh/Leicester have transferred 49/114/2633 files; equating to volumes of 6.5/10.4/41.6 TB of data. Issue raised at GRIDPP39 for mechanism of how to update data needs to be investigated.

31/01/2018 Current Usage Durham/Cambridge/Edinburgh/Leicester are using 149/45/41/40 8TB tapes respectively each. Approx 2.2PB of space used Last date for when data was ingested form each site is 10-07-2017/17-08-2017/17-11-2017/04-12-2017 Errata: This may be flalse as timestamps on tape are "peculiar"

Since CERN have switched to grafana monitoring tool, we have lost production of centralized plots for all small VOs.

27/02/2018 Polled DiRAC sites for any outstanding issues. Non reported. 20/08/2018 No reported problems.

VOI-DIRAC011 Track Tape Volume usage at Tier 1 . BD 2018-04-25 Open Old RT ticket in tier 1 helpdesk to cover this: https://helpdesk.gridpp.rl.ac.uk/Ticket/Display.html?id=177668 Adding here to track.
VOI-DIRAC012 New page to show current FTS data transfers. BD 2018-04-25 Open Since CERN deleted dashboard pages which showed non WLCG VO usage on FTS servers, info of how transfers succeed is missing. Need to find a new method.


EUCLID

Wholly dealt with by IRIS.

GalDyn

  • PoC: Tom Whyntie (TW) Matt Doidge
  • UCLan: Adam Clarke (AC) Victor Debattista
Action ID Action description Owner Target date Status Date closed Notes
VOI-GAL-001 Assist GalDyn users with CernVM creation and testing. TW 2015-02-18 Closed 2015-02-18 GalDyn users have successfully instantiated CernVMs for accessing the grid.
VOI-GAL-002 Assist GalDyn users with running test jobs on the Imperial DIRAC instance. TW 2015-02-23 Closed 2015-02-23 GalDyn users have successfully run test jobs on the Imperial DIRAC instance via a GridPP CernVM.
VOI-GAL-003 Assist GalDyn users with compiling user software on the CernVM. TW 2015-03-08 Closed 1016-02-15 The code compiles and runs, but needs to be put in a grid/CernVM-FS-friendly format.
VOI-GAL-004 Create the GalDyn CernVM-FS repository TW, CC 2015-05-05 Closed 2015-05-05 New CernVM-FS repository galdyn.egi.eu has been created on the RAL Stratum-1 for the GalDyn VO.
VOI-GAL-005 (Re)new grid certificate for AC AC 2015-02-22 Closed 2015-03-15 UK CA managed to renew the old certificate. Work on hold - user preparing for PhD viva!
VOI-GAL-006 Creating an account for a UCLan student on the Lancaster cluster TW/Robin Long (Lancaster) 2016-09-15 Defunct The student (visiting from China) will pick up Adam's work on grid deployment for an upcoming paper. They will have a UCLan computing account but an account on the Lancaster cluster would further speed things up. Under discussion. 2016-11-16: TW emailed Victor D (group head) for an update.
VOI-GAL-007 Recontact GalDyn once new student starts Matt D 2017-09-27 Closed 2018-10-30 Victor D from GalDyn was contacted and Victor was interested in continuing the GridPP work. Waiting on a new student to start to take up the work. Will contact GalDyn towards the end of October to build some momentum. Feb 2018 update - Galdyn's work has been delayed, but would like to compliment their dirac workflow with gridpp resources. Closed after new postdoc started and GalDyn recontacted.
VOI-GAL-008 Discuss and define GalDyn's needs with using GridPP resources to suppliment DiRAC (the HPC) work. Matt D/Jeremy 2018-02-26 Open Victor has stated that he would like to use our resources to supplement their HPC (DiRAC) work. The orbit calculations described for this work are single core jobs. 27/4 - Victor contacted us asking for help getting started on submitting jobs.
VOI-GAL-009 Keep communication with GalDyn Open Matt D 2018-10-30 Open Now a new postdoc has started (from the first) we await him being able to move the work onto Grid resources - keep pressure on and be ready to answer questions.

LIGO

PoC: Catalin Condurache (CC)
LIGO: Paul Hopkins (PH)


{| border="1"
|-
! Action ID !! Action description !! Owner !! Target date !! Status !! Date closed !! Notes
|-
| VOI-LIGO-001 || Create the LIGO CernVM-FS repository || CC || 2014-12-01 || Closed || 2015-02-15 || New CernVM-FS repository ligo.egi.eu has been created on the RAL Stratum-1 for the LIGO VO.
|-
| VOI-LIGO-002 || Assist LIGO users with using Condor + NorduGrid to access the ARC-CE at RAL. || AL, CC || 2015-12 || Closed || 2016-02-12 || Test jobs submitted from the LIGO Condor instance to the ARC-CE service at RAL were successful.
|-
| VOI-LIGO-003 || Plan to run full analysis jobs with the scientists' involvement || PH, CC || 2016-02-12 || Open || || 2016-05-24: Still chasing scientists to run analysis jobs. Some promises.
|-
| VOI-LIGO-004 || Get file storage working via the GridPP CernVM. || PH, CC || 2016-02-24 || Closed || 2016-03-08 || PH managed to get file transfers working with the GridPP CernVM by using bridged networking and getting the VM registered on the university network.
|-
| VOI-LIGO-005 || Enable the OSG VO on RAL CEs and the batch system. || AL || 2017-05-06 || Closed || 2017-05-23 || Jobs are successfully being submitted to RAL, but we have requested that single-core pilots are used rather than multi-core pilots, as LIGO are only running single-core jobs. We don't need to do anything else, however.
|-
| VOI-LIGO-006 || Get access to LIGO data via secure CVMFS working on RAL worker nodes || CC, AL || 2017-06 || Open || ||
|}

==LOFAR==

''PoC: George Ryall (GR), from April 2016 - Alex Dibbo (AD)''

* 21/03/16: LOFAR should be in a position to perform an analysis run on a limited number of VMs with real data in the next few weeks. (GR)
* 25/05/16: Note that LOFAR is a VO supported under 'STFC', not GridPP. Communication with SCD cloud users is good. A recent issue with the cloud storage/CEPH has led to less recent activity.
* 31/01/18: The last update from them was that they thought they had enough to get running in January 2017. The last few times I have tried to get in touch I have received no response. They don't appear to have done anything since January 2017.

==LSST==

The LSST UK page.
All Closed action items can be found in the LSST_archived_actions page.

''PoC: Alessandra Forti (AF)''<br>
''Other people: Joe Zuntz (JZ), Andy Washbrook (AW), Steve Jones (SJ), Catalin Condurache (CC), Daniela Bauer (DB), Marcus Ebert (ME), Kashif Mohammad (KM), Dan Traynor (DT), Gareth Roy (GR), Matt Doidge (MD), Peter Love (PL), Pavlo Svirin (PS), Rob Currie (RC)''

{| border="1"
|-
! Action ID !! Action description !! Owner !! Target date !! Status !! Last update !! Notes
|-
| VOI-LSST-028 || Help James Perry to run DC2 || AF, DB, SF, UE, RC || || Closed || 2018-06-01 || James is running DC2 jobs at a handful of UK sites after installing the software on CVMFS and building his jobs to run multicore.
|-
| VOI-LSST-029 || Help JP to increase the resources for running DC2 || AF, DB, SF, UE, RC || || Closed || 2018-08-17 || George Beckett asked for more resources on 2018-07-20 because the LSST jobs are running quite well on the grid. Giving more resources to LSST had some problems due to the way the jobs were brokered. The principal problems were: DIRAC uses a closest-storage concept, so jobs were brokered only to the sites with the data (Manchester, QMUL, Imperial); the default OS is SL6, so none of the CentOS 7 resources were used; and the jobs are multicore, but DIRAC doesn't target queues based on this. To solve the data problem, Ganga had to be modified to accept data LFNs in the input sandbox. Some sites, like Manchester, had to modify queue ACLs to allow flat brokering across all queues, and some others had to increase the LSST priority. The number of sites that ran at least 2 jobs this week is now 8, among which is RAL.
|-
| VOI-LSST-030 || Reconfigure sites to point to the SLAC VOMS rather than FNAL || AF || || Open || 2018-10-22 || FNAL is trying to decommission its VOMS server and LSST has decided, as a long-term solution, to move to SLAC. Sites need to be reconfigured. I've added the information to the approved VO page: https://www.gridpp.ac.uk/wiki/GridPP_approved_VOs#LSST
|}

==LZ==

In Production: GridPP LZ page.

==DUNE==

The GridPP DUNE page.

''PoC: Andrew McNab (AM)''<br>
''Others: Peter Clarke (PC), Stephen Jones (SJ), Raja Nandakumar (RN), Jaroslaw Nowak (JN), Andrew Washbrook (AW)''

{| border="1"
|-
! Action ID !! Action description !! Owner !! Target date !! Status !! Last update !! Notes
|-
| VOI-DUNE-001 || Review job submission to the UK from FNAL || AM || 2018-05-31 || Completed || 2018-05-15 || Existing Imperial and Sheffield jobs are centrally managed MC and some MC data processing.
|-
| VOI-DUNE-002 || protoDUNE data storage estimate || AM/PC || 2018-05-31 || Completed || 2018-05-15 || DUNE centrally would like ~2PB in the UK, which we believe to be feasible using GridPP+IRIS capacity.
|-
| VOI-DUNE-003 || Get DUNE jobs running on ARC || SJ/AM || 2018-05-31 || Completed || 2018-06-22 || Production jobs working at LIV, MAN, and RAL via ARC.
|-
| VOI-DUNE-004 || Get DUNE storage access working || AM || 2018-05-31 || Completed || 2018-06-22 || Enabled LIV and MAN storage and tested from FNAL.
|-
| VOI-DUNE-005 || Recruit more sites for compute || AM || 2018-12-25 || Open || 2018-06-22 || DUNE needs more sites for MC, and CPU-only is sufficient (currently LIV, MAN, RAL, LANC, IC, ED).
|-
| VOI-DUNE-006 || Provide 2PB of storage and 1500 processors for protoDUNE data taking || AM || 2018-12-25 || Completed || 2018-11-06 || Continue testing and recruiting sites willing to provide storage. ~1PB at Manchester. RAL and ED coming online. More needed during November to reach 2PB.
|}


==na62.vo.gridpp.ac.uk==

In production: NA62 Monitoring server.
They welcome more sites supporting them.

* July 2018: NA62 wants to use Mainz. Tracked in GGUS 135805. Waiting for Mainz to configure their site.
* October 2017: NA62 wanted to use resources at CERN, which required the GridPP DIRAC to submit to HTCondor. This required major changes to the DIRAC server and liaising with CERN to get NA62 enabled correctly. This is now done; as it turned out, the problem wasn't HTCondor on DIRAC, but CERN's difficulties in installing a new VO.

==PRaVDA==

* PoC: Mark Slater/Matt Williams (MS/MW)
* End User: Tony Price (TP)
* Update requested 3rd Feb., 19th Feb. 2016 by TW. TP replied 2016-03-21 - they have been busy building the actual device!
* TP in the process of changing roles. Need to finalise the new end user.
* 20th Feb: Asked TP about status - currently processing data, so the need for simulations is reduced. Will pick up again when more simulation is required.
{| border="1"
|-
! Action ID !! Action description !! Owner !! Target date !! Status !! Date closed !! Notes
|-
| VOI-PRA-001 || Get PRaVDA up and running with DIRAC and Ganga. || MS/MW || 2015-10-01 || Closed || 2016-03-21 || TP has successfully got simulations running using DIRAC and Ganga.
|-
| VOI-PRA-002 || Issues with DIRAC, Ganga and LFN names when copying data back. || MS/MW || 2015-03-21 || Closed || 2015-08-01 || MS/MW assisting on the Ganga side.
|-
| VOI-PRA-003 || TP changing roles. Need to make contact with the new end user. || MS || 2016-05-23 || Closed || 2017-02-20 || The end user did not make contact. TP is still working with them, so we will use him as PoC for the moment.
|-
| VOI-PRA-004 || Waiting on data processing to be completed and more simulations to be required. || MS || 2017-02-20 || Closed || 2017-11-06 ||
|-
| VOI-PRA-005 || Got an update from TP. PRaVDA still want to do more work on the grid, but at present there is no manpower. Will check again in a few months. || MS || 2017-11-06 || In Progress || ||
|}

==SKA Regional Centre==

{| border="1"
|-
! Action ID !! Action description !! Owner !! Target date !! Status !! Last update !! Notes
|-
| VOI-SKA-001 || LOFAR tests for SKA with DIRAC || RJ/AM || 2017-12-31 || Open || 2017-10-10 || The LOFAR application now runs from CVMFS at LCG.UKI-NORTHGRID-MAN-HEP.uk, with jobs matched to input data in the GridPP DIRAC File Catalog, stored on DPM. Working on mass input of data from Groningen via grid jobs (wget with a password, then DIRAC data management commands).
|-
| VOI-SKA-002 || LOFAR tests for SKA on DIRAC/OpenStack || AM || 2017-12-31 || Done || 2017-10-18 || GridPP DIRAC SAM tests for skatelescope.eu run at DataCentred. Manchester storage is associated with DataCentred in GridPP DIRAC so that it matches SKA/LOFAR jobs.
|-
| VOI-SKA-003 || Add more sites with the skatelescope.eu VO / GridPP DIRAC || AM || 2017-12-31 || Open || 2017-10-03 || QMUL joins Manchester, Imperial and Cambridge in passing SAM tests and providing storage. More volunteer sites welcome: we need 10-20TB of grid storage that is set up in GridPP DIRAC.
|-
| VOI-SKA-004 || File replication across the 100Gb/s JBO to London link || AM || 2018-03-31 || Open || 2017-10-31 || Plan: set up the endpoint machines as DIRAC SEs; do DIRAC file replications between them over the 100Gb/s link, registered in the DIRAC Replica Catalogue. Use DIRAC DMS for this directly at first, then use DMS to schedule RAL FTS for it.
|-
| VOI-SKA-005 || Run DIRAC jobs in large (whole-node?) VMs || AM/RJ || 2018-03-31 || Open || 2018-03-06 || Provide VMs which can run SKA DIRAC jobs with "lots" of memory. Probably >= 48 GB.
|-
| VOI-SKA-006 || Activate/use skatelescope.eu in GGUS || AM/RJ || 2018-03-31 || Open || 2018-03-13 || Test ticket from the GGUS team processed. RJ will use GGUS to communicate with sites, as part of the AENEAS evaluation of infrastructure tools.
|-
| VOI-SKA-007 || Test of the Transformation System || DB/SF/RJ || 2018-12-31 || Open || 2018-07-03 || Test Transformation System setup for SKA on the DIRAC test server. Still hunting down bugs with respect to v6r20 and waiting for the full multi-VO implementation, possibly in v6r21.
|}
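
The ingest and replication flows in VOI-SKA-001 and VOI-SKA-004 (wget with a password, then DIRAC data management commands) can be sketched as below. This is a dry-run sketch only: the LFN, URL and storage element names are hypothetical placeholders, and the script echoes the commands rather than executing them, since the real ones need a valid grid proxy for skatelescope.eu.

```shell
#!/bin/sh
# Dry-run sketch of the SKA/LOFAR data flow (hypothetical LFN, URL and SE names).
LFN=/skatelescope.eu/user/t/test/obs001.tar   # hypothetical catalogue entry
SRC_SE=UKI-NORTHGRID-MAN-HEP-disk             # hypothetical source SE name
DST_SE=UKI-LT2-QMUL-disk                      # hypothetical destination SE name

# Ingest (VOI-SKA-001): fetch from Groningen with a password, then upload and
# register the file in the DIRAC File Catalog at the first storage element.
echo "wget --user=USER --ask-password https://example.org/lofar/obs001.tar"
echo "dirac-dms-add-file $LFN obs001.tar $SRC_SE"

# Replication (VOI-SKA-004): copy the registered replica to a second SE.
echo "dirac-dms-replicate-lfn $LFN $DST_SE"
```

In a real job the echoed commands would run directly, with FTS scheduled via the DMS at a later stage as the plan in VOI-SKA-004 describes.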

==t2k.org==

Sophie King at King's and Lukas Koch at RAL. They know their way around the grid. Daniela can take messages.

==snoplus.snolab.ca==

In production

* PoC: Pete Gronbech (PG)
* SNO+: Karin Gilje (snoplus_vosupport - at - snolab.ca)
{| border="1"
|-
! Action ID !! Action description !! Owner !! Target date !! Status !! Last update !! Notes
|-
| VOI-SNO+-001 || Check on progress via the GridPP-Support list. || PG || 2016-02-17 || Closed || 2016-03-10 || See VOI-SNO+-003 - success, closing this for now.
|-
| VOI-SNO+-002 || MM to join the GridPP Storage meeting || PG || 2016-02-24 || Closed || 2016-02-24 || MM joined the meeting to discuss requirements and various options. See minutes.
|-
| VOI-SNO+-003 || MM to transfer files out of the SNO+ cavern via an FTP server. || PG || 2016-02-17 || Closed || 2016-03-10 || Success after fantastic support/discussion on the GRIDPP-SUPPORT mailing list.
|-
| VOI-SNO+-004 || Check on progress via the GridPP-Support list (16th May). || PG || 2016-05-31 || Closed. Will be reopened if there is demand. || 2017-06-06 ||
|-
| VOI-SNO+-005 || SNO+ are making increasing use of resources at RAL. Tape allocation increased to 100TB; CPU ~300 HS06 hours per month. || PG || 2018-05-22 || Informational || 2018-05-22 ||
|-
| VOI-SNO+-006 || SNO+ need to migrate away from the LFC. They (Karin Gilje) have been put in contact with Alastair at the Tier-1 for help. || PG || 2018-09-01 || Informational || 2018-10-16 ||
|}

==SuperNEMO==

Paolo Franchini and Julia Sedgbeer at Imperial, with Daniela as the liaison (as I get asked anyway). SuperNEMO have no plans to use grid resources at this time, except for transfers to the Imperial SE, after which they intend to use xrootd.

==DEAP3600==

''PoC: Jeremy Coles (JC)''

* Update requested February 2016.
* No response as of 21st March 2016.
* 24th May: Awaiting the main local user at RHUL to begin activities.
* 23rd Aug: DEAP3600 will generate around 10TB of (calibrated) data per year for 5 years, starting this year, I think. The original (much larger) raw data are backed up on tape in Canada, but the calibrated data are not. For reasons of backup and access, we were hoping it would be possible to get these 50TB of calibrated data stored at the Tier0 at RAL.

The model would be to use RAL only for custodial data storage and to copy data as needed to Tier-2 sites such as RHUL for analysis. There will also be around 60TB (possibly x 2 generations) of MC, which will be kept only at a Tier-2 because it can be regenerated if it is lost.

* 25th Sept 17: No current activity. Query sent.
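
The storage figures quoted above can be sanity-checked with a few lines of arithmetic; this is a minimal sketch using only the numbers stated in the updates (10TB/year of calibrated data for 5 years, and 60TB of MC per generation for up to 2 generations).

```shell
#!/bin/sh
# Sanity check of the DEAP3600 storage figures quoted above.
CAL_TB_PER_YEAR=10
YEARS=5
CUSTODIAL_TB=$((CAL_TB_PER_YEAR * YEARS))        # custodial copy requested at RAL

MC_TB_PER_GEN=60
MC_GENERATIONS=2                                 # "possibly x 2 generations"
MC_TB_MAX=$((MC_TB_PER_GEN * MC_GENERATIONS))    # Tier-2 only, regenerable

echo "Calibrated data for custodial storage: ${CUSTODIAL_TB}TB"   # 50TB
echo "Maximum MC held at a Tier-2: ${MC_TB_MAX}TB"                # 120TB
```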

==UKQCD==

''PoC: Jeremy Coles (JC)''

* Update requested February 2016.
* Update 21st March: "Hoping to do more with the gridpp resources."
** Preliminary result in conference proceedings - "Investigating Some Technical Improvements to Glueball Calculations", e-Print: arXiv:1511.09303.
* 24th May: Will try to leverage some of the international lattice data grid work. Nothing immediate planned.
* 22nd August: No recent activity or planned activities.
* 25th Sept 17: No further work planned.