Difference between revisions of "GridPP VO Incubator"

This page is for monitoring the progress of new(ish) GridPP [[Virtual Organisation|VOs]].

* PoC - Point of Contact <br>
Please consider archiving Closed/Completed issues, so that we can see what we need to work on. It is straightforward to create a page for your VO. <br>
 
  
 
==All new VOs==

These tasks will need to be completed for all new VOs.

{|border="1" cellpadding="1"
|+

|-style="background:#7C8AAF;color:white"
!Action ID
!Action description
!Owner
!Target date
!Status
!Date closed
!Notes

|-
|VOI-GEN-001
|Deploy test software to RVO CernVM-FS repositories.
|Duncan, Daniela, Gareth, Alessandra, Ewan
|2015-05-26
|Closed
|2016-08-23
|New users in the [[Regional VO]]s will need to run test jobs using software in the [[Regional VO|RVO]] CernVM-FS repositories. This test software will need to be uploaded by the [[Regional VO|RVO]] admins. Instructions for doing this can be found [[A_quick_guide_to_CVMFS|here]]. Tested for vo.londongrid.ac.uk (--Daniela)

|-
|VOI-GEN-002
|Write up the VO registration procedure
|Tom
|2015-05-31
|Closed
|2016-08-23
|Guide started [[Start_Here_-_Creating_a_new_VO|here]] - comments and feedback appreciated. Use gridpp guide.
|}
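For reference, the upload step behind VOI-GEN-001 is normally done on the repository's Stratum-0 release manager using the standard CernVM-FS transaction/publish cycle. A minimal sketch follows; the repository name and source directory are placeholders, and each command is prefixed with echo so this is a dry run rather than a live publish:

```shell
# Placeholders - substitute the real RVO repository name and payload.
REPO="vo.example.gridpp.ac.uk"
SRC="$HOME/test-software"

# Standard CernVM-FS publish cycle on a Stratum-0 release manager:
# open a transaction, copy files under /cvmfs/<repo>, then publish.
# The "echo" prefixes make this a dry run; drop them on a real
# release-manager node.
echo cvmfs_server transaction "$REPO"
echo cp -r "$SRC" "/cvmfs/$REPO/"
echo cvmfs_server publish "$REPO"
```

Grid jobs then see the files under /cvmfs/&lt;repository&gt;/ once the Stratum-1 servers have picked up the new revision.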
  
 
==vo.DiRAC.ac.uk==

''PoC: Jens Jensen, Brian Davies''

* 03/05/16: Durham have now moved 940TB in 6 months. Expect ~2.5PB in total from Durham.
* 26/09/17: The number of 8TB tapes being used by Cambridge/Durham/Edinburgh/Leicester are 45/149/41/16.
* 26/09/17: Mark Wilkinson gave a talk at GRIDPP39: https://indico.cern.ch/event/656544/contributions/2710500/attachments/1523457/2381245/DiRAC_GridPP_Meeting.pdf

All '''Closed''' actions can be found on the [[Vo.DiRAC.ac.uk_archived_actions]] page.

{|border="1" cellpadding="1"
|+

|-style="background:#7C8AAF;color:white"
!Action ID
!Action description
!Owner
!Target date
!Status
!Date closed
!Notes
  
 
|-
|VOI-DIRAC010
|Track Data transfer from further Sites
|BD
|2016-11-29
|Open
|
|Following the vo.dirac.ac.uk transfer working group meeting, DiRAC have identified initial data from non-Durham sites which need to be transferred (~700TB per site; this needs verifying).

07/02/17 First test transfers from Cambridge and Leicester now succeeding.

23/05/2017 Edinburgh now transferring ~2TB/day. Durham testing recall from RAL to Durham.

25/09/2017 In the last three months, Cambridge/Edinburgh/Leicester have transferred 49/114/2633 files, equating to volumes of 6.5/10.4/41.6 TB of data. Issue raised at GRIDPP39: the mechanism for how to update data needs to be investigated.

31/01/2018 Current usage: Durham/Cambridge/Edinburgh/Leicester are using 149/45/41/40 8TB tapes respectively. Approx 2.2PB of space used.
The last date on which data was ingested from each site is 10-07-2017/17-08-2017/17-11-2017/04-12-2017.
Errata: this may be false, as timestamps on tape are "peculiar".

Since CERN switched to the Grafana monitoring tool, we have lost production of centralized plots for all small VOs.

27/02/2018 Polled DiRAC sites for any outstanding issues. None reported.

20/08/2018 No reported problems.

|-
|VOI-DIRAC011
|Track Tape Volume usage at Tier 1.
|BD
|2018-04-25
|Open
|
|Old RT ticket in the Tier 1 helpdesk to cover this: https://helpdesk.gridpp.rl.ac.uk/Ticket/Display.html?id=177668 Adding here to track.

|-
|VOI-DIRAC012
|New page to show current FTS data transfers.
|BD
|2018-04-25
|Open
|
|Since CERN deleted the dashboard pages which showed non-WLCG VO usage on FTS servers, information on how transfers succeed is missing. Need to find a new method.
|}


==EUCLID==

Wholly dealt with by IRIS.
  
 
==GalDyn==

* ''PoC: <strike>Tom Whyntie (TW)</strike> Matt Doidge''
* ''UCLan: <strike>Adam Clarke (AC)</strike> Victor Debattista''

{|border="1" cellpadding="1"
|+

|-style="background:#7C8AAF;color:white"
!Action ID
!Action description
!Owner
!Target date
!Status
!Date closed
!Notes

|-
|VOI-GAL-001
|Assist GalDyn users with CernVM creation and testing.
|TW
|2015-02-18
|Closed
|2015-02-18
|GalDyn users have successfully instantiated CernVMs for accessing the grid.

|-
|VOI-GAL-002
|Assist GalDyn users with running test jobs on the Imperial DIRAC instance.
|TW
|2015-02-23
|Closed
|2015-02-23
|GalDyn users have successfully run test jobs on the Imperial DIRAC instance via a GridPP CernVM.

|-
|VOI-GAL-003
|Assist GalDyn users with compiling user software on the CernVM.
|TW
|2015-03-08
|Closed
|2016-02-15
|The code compiles and runs, but needs to be put in a grid/CernVM-FS-friendly format.

|-
|VOI-GAL-004
|Create the GalDyn CernVM-FS repository
|TW, CC
|2015-05-05
|Closed
|2015-05-05
|New CernVM-FS repository galdyn.egi.eu has been created on the RAL Stratum-1 for the GalDyn VO.

|-
|VOI-GAL-005
|(Re)new grid certificate for AC
|AC
|2015-02-22
|Closed
|2015-03-15
|UK CA managed to renew the old certificate. ''Work on hold - user preparing for PhD viva!''

|-
|VOI-GAL-006
|Creating an account for a UCLan student on the Lancaster cluster
|TW/Robin Long (Lancaster)
|2016-09-15
|Defunct
|
|The student (visiting from China) will pick up Adam's work on grid deployment for an upcoming paper. They will have a UCLan computing account, but an account on the Lancaster cluster would further speed things up. Under discussion. 2016-11-16: TW emailed Victor D (group head) for an update.

|-
|VOI-GAL-007
|Recontact GalDyn once new student starts
|Matt D
|2017-09-27
|Closed
|2018-10-30
|Victor D from GalDyn was contacted and was interested in continuing the GridPP work. Waiting on a new student to start to take up the work. Will contact GalDyn towards the end of October to build some momentum. Feb 2018 update - GalDyn's work has been delayed, but they would like to complement their DiRAC workflow with GridPP resources. Closed after the new postdoc started and GalDyn were recontacted.

|-
|VOI-GAL-008
|Discuss and define GalDyn's needs for using GridPP resources to supplement DiRAC (the HPC) work.
|Matt D/Jeremy
|2018-02-26
|Open
|
|Victor has stated that he would like to use our resources to supplement their HPC (DiRAC) work. The orbit calculations described for this work are single-core jobs. 27/4 - Victor contacted us asking for help getting started on submitting jobs.

|-
|VOI-GAL-009
|Keep communication with GalDyn open
|Matt D
|2018-10-30
|Open
|
|Now that a new postdoc has started (from the first), we await him being able to move the work onto Grid resources - keep pressure on and be ready to answer questions.
|}
==LIGO==

''PoC: Catalin Condurache (CC)''<br>
''LIGO: Paul Hopkins (PH)''<br>

{|border="1" cellpadding="1"

|-
|PH, CC
|2016-02-12
|Open
|
|[2016-05-24] Still chasing scientists to run analysis jobs. Some promises.

|-
|VOI-LIGO-004
|Get file storage working via the GridPP CernVM.
|PH, CC
|2016-02-24
|Closed
|2016-03-08
|PH managed to get file transfers working with the GridPP CernVM by using bridged networking and getting the VM registered on the university network.

|-
|VOI-LIGO-005
|Enable the OSG VO on RAL CEs and batch system.
|AL
|2017-05-06
|Closed
|2017-05-23
|Jobs successfully being submitted to RAL, but we have requested that single-core pilots are used rather than multi-core pilots, as LIGO are only running single-core jobs. We don't need to do anything else however.

|-
|VOI-LIGO-006
|Get access to LIGO data via secure CVMFS working on RAL worker nodes
|CC, AL
|2017-06
|Open
|
|}
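Actions like VOI-LIGO-006 amount to CernVM-FS client configuration on the worker nodes. As a rough illustration only - the repository name and proxy below are hypothetical, and secure/authenticated repositories additionally need keys and an authorisation helper - a minimal client setup looks like:

```text
# /etc/cvmfs/default.local - minimal client configuration sketch.
# Hypothetical values; a secure repository needs extra key/authz setup.
CVMFS_REPOSITORIES=ligo.example.osgstorage.org
CVMFS_HTTP_PROXY="http://squid.example.ac.uk:3128"
```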
==LOFAR==

''PoC: George Ryall (GR), from April 2016 - Alex Dibbo (AD)''<br>

21/03/16: LOFAR should be in a position to perform an analysis run on a limited number of VMs with real data in the next few weeks. ''(GR)''

25/05/16: Note that LOFAR is a VO supported under 'STFC', not GridPP. Communication with SCD cloud users is good. A recent issue with the cloud storage/CEPH has led to less recent activity.

31/01/18: The last update from them was that they thought they had enough to get running in January 2017. The last few times I have tried to get in touch I have received no response. They don't appear to have done anything since January 2017.
  
 
==LSST==

The [https://www.gridpp.ac.uk/wiki/LSST_UK LSST UK] page. <br>
All '''Closed''' action items can be found in the [[LSST_archived_actions]] page.

''PoC: Alessandra Forti (AF)''<br>
''Other people: Joe Zuntz (JZ), Andy Washbrook (AW), Steve Jones (SJ), Catalin Condurache (CC), Daniela Bauer (DB), Marcus Ebert (ME), Kashif Mohammad (KM), Dan Traynor (DT), Gareth Roy (GR), Matt Doidge (MD), Peter Love (PL), Pavlo Svirin (PS), Rob Currie (RC)''

{|border="1" cellpadding="1"
|+

|-style="background:#7C8AAF;color:white"
!Action ID
!Action description
!Owner
!Target date
!Status
!Last update
!Notes

|-
| VOI-LSST-028
| Help James Perry to run DC2
| AF, DB, SF, UE, RC
|
| Closed.
| 2018-06-01
| James is running DC2 jobs at a handful of UK sites after installing the software on CVMFS; he built his jobs to run multicore.

|-
| VOI-LSST-029
| Help JP to increase the resources to run DC2
| AF, DB, SF, UE, RC
|
| Closed.
| 2018-08-17
| George Beckett asked for more resources on 2018-07-20 because the LSST jobs are running quite well on the grid. Giving more resources to LSST has had some problems due to the way the jobs were brokered. The principal problems were: DIRAC uses the closest-storage concept, so jobs were brokered only to sites with the data (Man, QM, IC); the default OS is SL6, so none of the C7 resources were used; and jobs are multicore, but DIRAC doesn't target queues based on this. To solve the data problem, ganga had to be modified to accept data LFNs in the input sandbox. Some sites like Manchester had to modify the queue ACLs to fix the flat brokering across all the queues, and some others had to increase the LSST priority. The number of sites that ran at least 2 jobs this week is now 8, among which RAL.

|-
| VOI-LSST-030
| Reconfigure sites to point to SLAC VOMS rather than FNAL
| AF
|
| Open
| 2018-10-22
| FNAL is trying to decommission its VOMS server, and LSST has decided to move to SLAC as the long-term solution. Sites need to be reconfigured. I've added the information to the approved VO page. https://www.gridpp.ac.uk/wiki/GridPP_approved_VOs#LSST
|}
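Re-pointing a site at a different VOMS server (VOI-LSST-030) mostly means updating the site's vomses and LSC files. As an illustration of the /etc/vomses line format only - the host, port and DN below are hypothetical placeholders, not SLAC's real values, which should be taken from the GridPP approved VOs page linked above:

```text
# /etc/vomses entry format: "<alias>" "<host>" "<port>" "<server DN>" "<vo>"
# Hypothetical values - consult the approved VOs page for the real ones.
"lsst" "voms.example.slac.stanford.edu" "15003" "/DC=org/DC=example/OU=Services/CN=voms.example.slac.stanford.edu" "lsst"
```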


==<span style="color: green"> LZ </span>==

<span style="color: green">In Production:</span> GridPP [https://www.gridpp.ac.uk/wiki/LZ LZ] page.


==DUNE==

The [[DUNE|GridPP DUNE]] page. <br>

PoC: Andrew McNab (AM)<br>
Others: Peter Clarke (PC), Stephen Jones (SJ), Raja Nandakumar (RN), Jaroslaw Nowak (JN), Andrew Washbrook (AW)

{|border="1" cellpadding="1"
|+

|-style="background:#7C8AAF;color:white"
!Action ID
!Action description
!Owner
!Target date
!Status
!Last update
!Notes

|-
| VOI-DUNE-001
| Review job submission to UK from FNAL
| AM
| 2018-05-31
| Completed
| 2018-05-15
| Existing Imperial and Sheffield jobs are centrally managed MC and some MC data processing

|-
| VOI-DUNE-002
| protoDUNE data storage estimating
| AM/PC
| 2018-05-31
| Completed
| 2018-05-15
| DUNE centrally would like ~2PB in the UK, which we believe to be feasible using GridPP+IRIS capacity

|-
| VOI-DUNE-003
| Get DUNE jobs running on ARC
| SJ/AM
| 2018-05-31
| Completed
| 2018-06-22
| Production jobs working at LIV, MAN, and RAL via ARC.

|-
| VOI-DUNE-004
| Get DUNE storage access working
| AM
| 2018-05-31
| Completed
| 2018-06-22
| Enabled LIV and MAN storage and tested from FNAL

|-
| VOI-DUNE-005
| Recruit more sites for compute
| AM
| 2018-12-25
| Open
| 2018-06-22
| DUNE needs more sites for MC, and CPU-only is sufficient (currently LIV, MAN, RAL, LANC, IC, ED)

|-
| VOI-DUNE-006
| Provide 2PB of storage and 1500 processors for protoDUNE data taking
| AM
| 2018-12-25
| Completed
| 2018-11-06
| Continue testing and recruiting sites willing to provide storage. ~1PB at Manchester. RAL and ED coming online. More needed during November to reach 2PB.
|}
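Storage-access checks like VOI-DUNE-004 are typically exercised with the gfal2 command-line tools. A sketch of such a test transfer is below; both endpoints are hypothetical placeholders, and the commands are prefixed with echo so this is a dry run (the real thing needs gfal2 clients and a valid VOMS proxy on a UI):

```shell
# Hypothetical endpoints - substitute a real source file and site SE URL.
SRC="https://source.example.fnal.gov:2880/dune/test/testfile"
DST="root://se.example.ac.uk:1094//dune/test/testfile"

# Dry run: remove the "echo" prefixes on a UI with gfal2 clients
# and a valid VOMS proxy.
echo gfal-copy "$SRC" "$DST"   # copy the file to the site SE
echo gfal-ls -l "${DST%/*}"    # list the destination directory
```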
  
==<span style="color:green">na62.vo.gridpp.ac.uk</span>==

<span style="color:green">In production:</span>
[https://na62.gla.ac.uk/index.php?task=stats&view=sites NA62 Monitoring server.] <br>
They welcome more sites supporting them. <br>
July 2018: NA62 wants to use Mainz. Tracked in GGUS 135805. Waiting for Mainz to configure their site. <br>
October 2017: NA62 wanted to use resources at CERN, which required the GridPP DIRAC to submit to HTCondor. This required major changes to the DIRAC server and liaising with CERN to get NA62 enabled correctly. This is now done; as it turned out, the problem wasn't HTCondor on DIRAC, but CERN's difficulties in installing a new VO.
  
 
==PRaVDA==

* Update requested 3rd Feb., 19th Feb. 2016 by TW. TP replied 2016-03-21 - they have been busy building the actual device!

* TP in the process of changing roles. Need to finalise the new end user.

* 20th Feb: Asked TP about status - currently processing data, so the need for simulations is reduced. Will pick up again when more simulation is required.
  
 
{|border="1" cellpadding="1"

|MS/MW
|2015-03-21
|Closed
|2015-08-01
|MS/MW assisting on the Ganga side.

|-
|VOI-PRA-003
|TP changing roles. Need to make contact with new end user.
|MS
|2016-05-23
|Closed
|2017-02-20
|End user did not make contact. TP is still working with them, so will use him as PoC for the moment.

|-
|VOI-PRA-004
|Waiting on data processing to be completed and more simulations to be required.
|MS
|2017-02-20
|Closed
|2017-11-06
|

|-
|VOI-PRA-005
|Got an update from TP. PRaVDA still want to do more work on the grid, but at present there is no manpower. Will check again in a few months.
|MS
|2017-11-06
|In Progress
|
|
|}
  
==SKA Regional Centre==

* [[SKA Regional Centre|GridPP SKA regional centre information]]
* PoC: Alessandra Forti (AF)
* Previous contacts: Rohini Joshi (RJ), Andrew McNab (AM)
* VO: skatelescope.eu

{|border="1" cellpadding="1"
|+

|-style="background:#7C8AAF;color:white"
!Action ID
!Action description
!Owner
!Target date
!Status
!Last update
!Notes

|-
|VOI-SKA-001
|LOFAR tests for SKA with DIRAC
|RJ/AM
|2017-12-31
|Open
|2017-10-10
|LOFAR application now running from cvmfs at LCG.UKI-NORTHGRID-MAN-HEP.uk, with input jobs matched to input data in the GridPP DIRAC File Catalog, stored on DPM. Working on mass input of data from Groningen via grid jobs (wget with password, then DIRAC data management commands)

|-
|VOI-SKA-002
|LOFAR tests for SKA on DIRAC/OpenStack
|AM
|2017-12-31
|Done
|2017-10-18
|GridPP DIRAC SAM tests for skatelescope.eu run at DataCentred. Manchester storage associated with DataCentred in GridPP DIRAC so it matches SKA/LOFAR jobs.

|-
|VOI-SKA-003
|Add more sites with skatelescope.eu VO / GridPP DIRAC
|AM
|2017-12-31
|Open
|2017-10-03
|QMUL joins Manchester, Imperial, Cambridge in passing SAM tests and providing storage. More volunteer sites welcome: need 10-20TB of grid storage that is set up in GridPP DIRAC.

|-
|VOI-SKA-004
|File replication across the 100Gb/s JBO to London link
|AM
|2018-03-31
|Open
|2017-10-31
|Plan: set up endpoint machines as DIRAC SEs; do DIRAC file replications between them over the 100Gb/s link registered in the DIRAC Replica Catalogue. Use DIRAC DMS for this directly at first, then use DMS to schedule RAL FTS for this.

|-
|VOI-SKA-005
|Run DIRAC jobs in large (whole node?) VMs
|AM/RJ
|2018-03-31
|Open
|2018-03-06
|Provide VMs which can run SKA DIRAC jobs with "lots" of memory. Probably >= 48 GB.

|-
|VOI-SKA-006
|Activate/use skatelescope.eu in GGUS
|AM/RJ
|2018-03-31
|Open
|2018-03-13
|Test ticket from GGUS team processed. RJ will use GGUS to communicate with sites, as part of the AENEAS evaluation of infrastructure tools.

|-
|VOI-SKA-007
|Test of the Transformation System
|DB/SF/RJ
|2018-12-31
|Open
|2018-07-03
|Test Transformation System setup for SKA on the DIRAC test server. Still hunting down bugs w.r.t. v6r20 and waiting for the full multi-VO implementation, possibly in v6r21.
|}


==t2k.org==
Sophie King at Kings and Lukas Koch at RAL.
They know their way around the grid.
Daniela can take messages.


==<span style="color:green">snoplus.snolab.ca</span>==
<span style="color:green">In production</span> <br>

* ''PoC: Pete Gronbech'' (PG)
* ''SNO+: Karin Gilje'' (snoplus_vosupport - at - snolab.ca)
  
 
{|border="1" cellpadding="1"

|2016-03-10
|Success after fantastic support/discussion on the GRIDPP-SUPPORT mailing list.

|-
|VOI-SNO+-004
|Check on progress via GridPP-Support list (16th May).
|PG
|2016-05-31
|Closed. Will be reopened if there is demand.
|2017-06-06
|

|-
|VOI-SNO+-005
|SNO+ are making increasing use of resources at RAL. Tape allocation increased to 100TB, CPU ~300HS06hrs per month.
|PG
|2018-05-22
|Informational.
|2018-05-22
|

|-
|VOI-SNO+-006
|SNO+ need to migrate away from the LFC. They (Karin Gilje) have been put in contact with Alastair at the Tier-1 for help.
|PG
|2018-09-01
|Informational.
|2018-10-16
|
|}
==SuperNEMO==
Paolo Franchini and Julia Sedgbeer at Imperial.
Daniela as the liaison (as I get asked anyway).
SuperNEMO has no plans to use grid resources at this time, except for transfers to the Imperial SE, after which they intend to use xrootd.


==DEAP3600==

''PoC: Jeremy Coles (JC)''
* Update requested February 2016.
* No response as of 21st March 2016.
* 24th May: Awaiting main local user at RHUL to begin activities.
* 23rd Aug: DEAP3600 will generate around 10TB of (calibrated) data per year for 5 years, starting this year I think. The original (much larger) raw data are backed up on tape in Canada, but the calibrated data are not. For reasons of backup and access, we were hoping it would be possible to get these 50TB of calibrated data stored at the Tier0 at RAL. The model would be to use RAL only for custodial data storage and to copy data as needed to Tier2 sites such as RHUL for analysis. There will also be around 60 TB (possibly x 2 generations) of MC, which will be kept only at a Tier2 because it can be regenerated in case it is lost.
* 25th Sept 17: No current activity. Query sent.
  
 
==UKQCD==

* Update 21st March: "Hoping to do more with the gridpp resources".
** Preliminary result in conference proceedings - "Investigating Some Technical Improvements to Glueball Calculations" e-Print: arXiv:1511.09303.
* 24th May: Will try and leverage some of the international lattice data grid stuff. Nothing immediate planned.
* 22nd August: No recent activity or planned activities.
* 25th Sept 17: No further work planned.

Latest revision as of 12:14, 27 April 2020

This page is for monitoring the progress of new(ish) GridPP VOs.

  • PoC - Point of Contact

Please consider archiving Closed/Completed issues, so that we can see what we need to work on. It is straight forward to create a page for your VO.


All new VOs

These tasks will need to be completed for all

Action ID Action description Owner Target date Status Date closed Notes
VOI-GEN-001 Deploy test software to RVO CernVM-FS repositories. Duncan, Daniela, Gareth, Alessandra, Ewan 2015-05-26 Closed 2016-08-23 New users in the Regional VOs will need to run test jobs using software in the RVO CernVM-FS repositories. This test software will need to be uploaded by the RVO admins. Instructions for doing this can be found here. Tested for vo.londongrid.ac.uk (--Daniela)
VOI-GEN-002 Write up the VO registration procedure Tom 2015-05-31 Closed 2016-08-23 Guide started here - comments and feedback appreciated. Use gridpp guide.


vo.DiRAC.ac.uk

PoC: Jens Jensen, Brian Davies

All Closed actions can be found on the Vo.DiRAC.ac.uk_archived_actions page.

Action ID Action description Owner Target date Status Date closed Notes
VOI-DIRAC010 Track Data transfer from further Sites BD 2016-11-29 Open Following vo.dirac.ac.uk transfer working group meeting. dirac have identified initial data from non Durham sites which need to be transferred. ( ~700TB per site this needs verifying.)

07/02/17 First test transfers from Cambridge and Leicester now succeeding.

23/05/2017 Edinburgh now transferring ~2TB/day. Durham testing recall from RAL to Durham.

25/09/2017 In last three months, Cambridge/Edinburgh/Leicester have transferred 49/114/2633 files; equating to volumes of 6.5/10.4/41.6 TB of data. Issue raised at GRIDPP39 for mechanism of how to update data needs to be investigated.

31/01/2018 Current Usage Durham/Cambridge/Edinburgh/Leicester are using 149/45/41/40 8TB tapes respectively each. Approx 2.2PB of space used Last date for when data was ingested form each site is 10-07-2017/17-08-2017/17-11-2017/04-12-2017 Errata: This may be flalse as timestamps on tape are "peculiar"

Since CERN switched to the Grafana monitoring tool, we have lost the production of centralized plots for all small VOs.

27/02/2018 Polled DiRAC sites for any outstanding issues; none reported. 20/08/2018 No reported problems.

VOI-DIRAC011 Track tape volume usage at Tier 1. BD 2018-04-25 Open Old RT ticket in the Tier 1 helpdesk to cover this: https://helpdesk.gridpp.rl.ac.uk/Ticket/Display.html?id=177668 Adding here to track.
VOI-DIRAC012 New page to show current FTS data transfers. BD 2018-04-25 Open Since CERN deleted the dashboard pages which showed non-WLCG VO usage on the FTS servers, information on how transfers are succeeding is missing. Need to find a new method.


EUCLID

Wholly dealt with by IRIS.

GalDyn

  • PoC: Tom Whyntie (TW) Matt Doidge
  • UCLan: Adam Clarke (AC) Victor Debattista
Action ID Action description Owner Target date Status Date closed Notes
VOI-GAL-001 Assist GalDyn users with CernVM creation and testing. TW 2015-02-18 Closed 2015-02-18 GalDyn users have successfully instantiated CernVMs for accessing the grid.
VOI-GAL-002 Assist GalDyn users with running test jobs on the Imperial DIRAC instance. TW 2015-02-23 Closed 2015-02-23 GalDyn users have successfully run test jobs on the Imperial DIRAC instance via a GridPP CernVM.
VOI-GAL-003 Assist GalDyn users with compiling user software on the CernVM. TW 2015-03-08 Closed 2016-02-15 The code compiles and runs, but needs to be put in a grid/CernVM-FS-friendly format.
VOI-GAL-004 Create the GalDyn CernVM-FS repository TW, CC 2015-05-05 Closed 2015-05-05 New CernVM-FS repository galdyn.egi.eu has been created on the RAL Stratum-1 for the GalDyn VO.
VOI-GAL-005 (Re)new grid certificate for AC AC 2015-02-22 Closed 2015-03-15 UK CA managed to renew the old certificate. Work on hold - user preparing for PhD viva!
VOI-GAL-006 Creating an account for a UCLan student on the Lancaster cluster TW/Robin Long (Lancaster) 2016-09-15 Defunct The student (visiting from China) will pick up Adam's work on grid deployment for an upcoming paper. They will have a UCLan computing account but an account on the Lancaster cluster would further speed things up. Under discussion. 2016-11-16: TW emailed Victor D (group head) for an update.
VOI-GAL-007 Recontact GalDyn once new student starts Matt D 2017-09-27 Closed 2018-10-30 Victor D from GalDyn was contacted and was interested in continuing the GridPP work. Waiting on a new student to start to take up the work. Will contact GalDyn towards the end of October to build some momentum. Feb 2018 update - GalDyn's work has been delayed, but they would like to complement their DiRAC workflow with GridPP resources. Closed after the new postdoc started and GalDyn was recontacted.
VOI-GAL-008 Discuss and define GalDyn's needs for using GridPP resources to supplement DiRAC (the HPC) work. Matt D/Jeremy 2018-02-26 Open Victor has stated that he would like to use our resources to supplement their HPC (DiRAC) work. The orbit calculations described for this work are single-core jobs. 27/4 - Victor contacted us asking for help getting started on submitting jobs.
VOI-GAL-009 Keep communication with GalDyn open Matt D 2018-10-30 Open Now that a new postdoc has started (from the 1st), we await him being able to move the work onto grid resources - keep the pressure on and be ready to answer questions.

LIGO

PoC: Catalin Condurache (CC)
LIGO: Paul Hopkins (PH)


Action ID Action description Owner Target date Status Date closed Notes
VOI-LIGO-001 Create the LIGO CernVM-FS repository CC 2014-12-01 Closed 2015-02-15 New CernVM-FS repository ligo.egi.eu has been created on the RAL Stratum-1 for the LIGO VO.
VOI-LIGO-002 Assist LIGO users with using Condor + nordugrid to access ARC-CE@RAL. AL, CC 2015-12 Closed 2016-02-12 Test jobs submitted from LIGO Condor instance to ARC-CE service at RAL were successful.
VOI-LIGO-003 Plan to run proper analysis jobs with the scientists' involvement PH, CC 2016-02-12 Open [2016-05-24] Still chasing scientists to run analysis jobs. Some promises.
VOI-LIGO-004 Get file storage working via the GridPP CernVM. PH, CC 2016-02-24 Closed 2016-03-08 PH managed to get file transfers working with the GridPP CernVM using bridged networking and getting the VM registered on the university network.
VOI-LIGO-005 Enable the OSG VO on RAL CEs and batch system. AL 2017-05-06 Closed 2017-05-23 Jobs are successfully being submitted to RAL, but we have requested that single-core pilots are used rather than multi-core pilots, as LIGO are only running single-core jobs. We don't need to do anything else, however.
VOI-LIGO-006 Get access to LIGO data via secure CVMFS working on RAL worker nodes CC,AL 2017-06 Open

LOFAR

PoC: George Ryall (GR), from April 2016 - Alex Dibbo (AD)
  • 21/03/16: LOFAR should be in a position to perform an analysis run on a limited number of VMs with real data in the next few weeks. (GR)
  • 25/05/16: Note that LOFAR is a VO supported under 'STFC', not GridPP. Communication with SCD cloud users is good. A recent issue with the cloud storage/CEPH has led to less recent activity.
  • 31/01/18: The last update from them was that they thought they had enough to get running in January 2017. The last few times I have tried to get in touch I have received no response. They don't appear to have done anything since January 2017.

LSST

The LSST UK page.
All Closed action items can be found in the LSST_archived_actions page.

PoC: Alessandra Forti (AF)
Other people: Joe Zuntz (JZ), Andy Washbrook (AW), Steve Jones (SJ), Catalin Condurache (CC), Daniela Bauer (DB), Marcus Ebert (ME), Kashif Mohammad (KM), Dan Traynor (DT), Gareth Roy (GR), Matt Doidge (MD), Peter Love (PL), Pavlo Svirin (PS), Rob Currie (RC)

Action ID Action description Owner Target date Status Last update Notes
VOI-LSST-028 Help James Perry to run DC2 AF, DB, SF, UE, RC Closed. 2018-06-01 James is running DC2 jobs at a handful of UK sites after installing the software on CVMFS and building his jobs to run multicore.
VOI-LSST-029 Help JP to increase the resources to run DC2 AF, DB, SF, UE, RC Closed. 2018-08-17 George Beckett asked for more resources on 2018-07-20 because the LSST jobs are running quite well on the grid. Giving more resources to LSST ran into problems with the way the jobs were brokered. The principal problems were: DIRAC uses a closest-storage concept, so jobs were brokered only to sites with the data (Man, QM, IC); the default OS is SL6, so none of the C7 resources were used; and the jobs are multicore, which DIRAC does not use when targeting queues. To solve the data problem, Ganga had to be modified to accept data LFNs in the input sandbox. Some sites, such as Manchester, had to modify their queue ACLs to fix the flat brokering across all queues, and some others had to increase the LSST priority. The number of sites that ran at least 2 jobs this week is now 8, among which is RAL.
VOI-LSST-030 Reconfigure sites to point to the SLAC VOMS rather than FNAL AF Open 2018-10-22 FNAL is trying to decommission its VOMS server, and LSST has decided to move to SLAC as a long-term solution. Sites need to be reconfigured. I've added the information to the approved VO page: https://www.gridpp.ac.uk/wiki/GridPP_approved_VOs#LSST

LZ

In Production: GridPP LZ page.

DUNE

The GridPP DUNE page.

PoC: Andrew McNab (AM)
Others: Peter Clarke (PC), Stephen Jones (SJ), Raja Nandakumar (RN), Jaroslaw Nowak (JN), Andrew Washbrook (AW)

Action ID Action description Owner Target date Status Last update Notes
VOI-DUNE-001 Review job submission to UK from FNAL AM 2018-05-31 Completed 2018-05-15 Existing Imperial and Sheffield jobs are centrally managed MC and some MC data processing
VOI-DUNE-002 protoDUNE data storage estimating AM/PC 2018-05-31 Completed 2018-05-15 DUNE centrally would like ~2PB in UK, which we believe to be feasible using GridPP+IRIS capacity
VOI-DUNE-003 Get DUNE jobs running on ARC SJ/AM 2018-05-31 Completed 2018-06-22 Production jobs working at LIV, MAN, and RAL via ARC.
VOI-DUNE-004 Get DUNE storage access working AM 2018-05-31 Completed 2018-06-22 Enabled LIV and MAN storage and tested from FNAL
VOI-DUNE-005 Recruit more sites for compute AM 2018-12-25 Open 2018-06-22 DUNE needs more sites for MC, and CPU-only is sufficient (Currently, LIV, MAN, RAL, LANC, IC, ED)
VOI-DUNE-006 Provide 2PB of storage and 1500 processors for protoDUNE data taking AM 2018-12-25 Completed 2018-11-06 Continue testing and recruiting sites willing to provide storage. ~1PB at Manchester. RAL and ED coming online. More needed during November to reach 2PB.


na62.vo.gridpp.ac.uk

In production: NA62 Monitoring server.
They welcome more sites supporting them.
July 2018: NA62 wants to use Mainz. Tracked in GGUS 135805. Waiting for Mainz to configure their site.
October 2017: NA62 wants to use resources at CERN, which requires the GridPP DIRAC to submit to HTCondor. This required major changes to the DIRAC server and liaising with CERN to get NA62 enabled correctly. This is now done; it turned out the problem wasn't HTCondor on DIRAC, but CERN's difficulties in installing a new VO.

PRaVDA

  • PoC: Mark Slater/Matt Williams (MS/MW)
  • End User: Tony Price (TP)
  • Update requested 3rd Feb., 19th Feb. 2016 by TW. TP replied 2016-03-21 - they have been busy building the actual device!
  • TP in the process of changing roles. Need to finalise the new end user.
  • 20th Feb: Asked TP about status - currently processing data so need of simulations reduced. Will pick up again when more simulation required.
Action ID Action description Owner Target date Status Date closed Notes
VOI-PRA-001 Get PRaVDA up and running with DIRAC and Ganga. MS/MW 2015-10-01 Closed 2016-03-21 TP has successfully got simulations running using DIRAC and Ganga.
VOI-PRA-002 Issues with DIRAC, Ganga and LFN names when copying data back. MS/MW 2015-03-21 Closed 2015-08-01 MS/MW assisting on the Ganga side.
VOI-PRA-003 TP changing roles. Need to make contact with new end user. MS 2016-05-23 Closed 2017-02-20 End user did not make contact. TP still working with them so will use him as PoC for the moment
VOI-PRA-004 Waiting on data processing to be completed and more simulations to be required. MS 2017-02-20 Closed 2017-11-06
VOI-PRA-005 Got an update from TP. PRaVDA still want to do more work on the grid, but at present there is no manpower. Will check again in a few months. MS 2017-11-06 In Progress

SKA Regional Centre

Action ID Action description Owner Target date Status Last update Notes
VOI-SKA-001 LOFAR tests for SKA with DIRAC RJ/AM 2017-12-31 Open 2017-10-10 LOFAR application now running from cvmfs at LCG.UKI-NORTHGRID-MAN-HEP.uk, with input jobs matched to input data in GridPP DIRAC File Catalog, stored on DPM. Working on mass input of data from Groningen via grid jobs (wget with password then DIRAC data management commands)
VOI-SKA-002 LOFAR tests for SKA on DIRAC/OpenStack AM 2017-12-31 Done 2017-10-18 GridPP DIRAC SAM tests for skatelescope.eu run at DataCentred. Manchester storage associated with DataCentred in GridPP DIRAC so it matches SKA/LOFAR jobs.
VOI-SKA-003 Add more sites with skatelescope.eu VO / GridPP DIRAC AM 2017-12-31 Open 2017-10-03 QMUL joins Manchester, Imperial, Cambridge in passing SAM tests and providing storage. More volunteer sites welcome: need 10-20TB on grid storage that is set up in GridPP DIRAC.
VOI-SKA-004 File replication across 100Gb/s JBO to London link AM 2018-03-31 Open 2017-10-31 Plan: set up endpoint machines as DIRAC SEs; do DIRAC file replications between them, over the 100Gb/s link registered in the DIRAC Replica Catalogue. Use DIRAC DMS for this directly at first, then use DMS to schedule RAL FTS for this.
VOI-SKA-005 Run DIRAC jobs in large (whole node?) VMs AM/RJ 2018-03-31 Open 2018-03-06 Provide VMs which can run SKA DIRAC jobs with "lots" of memory. Probably >= 48 GB.
VOI-SKA-006 Activate/use skatelescope.eu in GGUS AM/RJ 2018-03-31 Open 2018-03-13 Test ticket from GGUS team processed. RJ will use GGUS to communicate with sites, as part of AENEAS evaluation of infrastructure tools
VOI-SKA-007 Test of the Transformation System DB/SF/RJ 2018-12-31 Open 2018-07-03 Test transformation system setup for SKA on dirac test server. Still hunting down bugs wrt v6r20 and waiting for full multi-VO implementation, possibly in v6r21
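The data-ingest flow in VOI-SKA-001 (wget with a password, then DIRAC data management commands) and the replication plan in VOI-SKA-004 can be sketched roughly as below. This is a hedged sketch, not the SKA team's actual procedure: the URL, LFN path and SE names are placeholders, and the commands are printed as a dry run since the DIRAC client tools are only available inside a configured DIRAC UI environment.

```shell
# Dry-run sketch of the ingestion/replication flow described above.
# Placeholders: the source URL, LFN path, SE names and username are
# illustrative only.
run() { echo "+ $*"; }   # swap 'echo' for real execution inside a DIRAC UI

LOFAR_USER=alice                              # placeholder username
LFN=/skatelescope.eu/lofar/obs123/data.tar    # placeholder LFN

# 1. Pull a file from the password-protected source (e.g. Groningen):
run wget --user "$LOFAR_USER" --ask-password https://example.org/obs123/data.tar

# 2. Upload the file and register it in the DIRAC File Catalog:
run dirac-dms-add-file "$LFN" data.tar SOURCE-SE-disk

# 3. Replicate the registered file to a second storage element
#    (DIRAC DMS directly at first, FTS scheduling later):
run dirac-dms-replicate-lfn "$LFN" DEST-SE-disk
```

Inside a real DIRAC UI, redefining `run` to execute its arguments turns the dry run into the actual transfer.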

t2k.org

Sophie King at King's and Lukas Koch at RAL. They know their way around the grid. Daniela can take messages.

snoplus.snolab.ca

In production

  • PoC: Pete Gronbech (PG)
  • SNO+: Karin Gilje (snoplus_vosupport - at - snolab.ca)
Action ID Action description Owner Target date Status Last update Notes
VOI-SNO+-001 Check on progress via GridPP-Support list. PG 2016-02-17 Closed 2016-03-10 See VOI-SNO+-003 - success, closing this for now.
VOI-SNO+-002 MM to join GridPP Storage meeting PG 2016-02-24 Closed 2016-02-24 MM joined the meeting to discuss requirements and various options. See minutes.
VOI-SNO+-003 MM to transfer files out of the SNO+ cavern via an FTP server. PG 2016-02-17 Closed 2016-03-10 Success after fantastic support/discussion on the GRIDPP-SUPPORT mailing list.
VOI-SNO+-004 Check on progress via GridPP-Support list (16thMay). PG 2016-05-31 Closed. Will be reopened if there is demand. 2017-06-06
VOI-SNO+-005 SNO+ are making increasing use of resources at RAL. Tape allocation increased to 100 TB; CPU ~300 HS06 hours per month. PG 2018-05-22 Informational. 2018-05-22
VOI-SNO+-006 SNO+ need to migrate away from the LFC. They (Karin Gilje) have been put in contact with Alastair at the Tier-1 for help. PG 2018-09-01 Informational. 2018-10-16

SuperNEMO

Paolo Franchini and Julia Sedgbeer at Imperial. Daniela as the liaison (as I get asked anyway). SuperNEMO have no plans to use grid resources at this time, except for transfers to the Imperial SE, after which they intend to use xrootd.

DEAP3600

PoC: Jeremy Coles (JC)

  • Update requested February 2016.
  • No response as of 21st March 2016
  • 24th May: Awaiting main local user at RHUL to begin activities.
  • 23rd Aug: DEAP3600 will generate around 10 TB of (calibrated) data per year for 5 years, starting this year I think. The original (much larger) raw data are backed up on tape in Canada, but the calibrated data are not. For reasons of backup and access, we were hoping it would be possible to get these 50 TB of calibrated data stored at the Tier0 at RAL.

The model would be to use RAL only for custodial data storage and to copy data as needed to Tier2 sites such as RHUL for analysis. There will also be around 60 TB (possibly x 2 generations) of MC, which will be kept only at a Tier2 because it can be regenerated if lost.

  • 25th Sept 17: No current activity. Query sent.
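The storage estimates above can be cross-checked with quick arithmetic; the per-year rate and MC volume are exactly the figures quoted in the update, nothing here is new data:

```python
# Back-of-envelope check of the DEAP3600 storage figures quoted above.
calibrated_tb_per_year = 10            # ~10 TB of calibrated data per year
years = 5
calibrated_total_tb = calibrated_tb_per_year * years   # custodial copy at RAL

mc_tb = 60                             # ~60 TB of MC
mc_total_tb = mc_tb * 2                # "possible x 2 generations", Tier2 only

print(calibrated_total_tb)             # 50
print(mc_total_tb)                     # 120
```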

UKQCD

PoC: Jeremy Coles (JC)

  • Update requested February 2016.
  • Update 21st March: "Hoping to do more with the gridpp resources".
    • Preliminary result in conference proceedings - "Investigating Some Technical Improvements to Glueball Calculations" e-Print: arXiv:1511.09303.
  • 24th May: Will try and leverage some of the international lattice data grid stuff. Nothing immediate planned.
  • 22nd August: No recent activity or planned activities.
  • 25th Sept 17: No further work planned.