
RAL Tier1 Operations Report for 27th November 2019

Review of Issues during the week 20th November 2019 to the 26th November 2019.


Current operational status and issues
Notable Changes made since the last meeting.
  • NTR
Entries in GOC DB starting since the last report.
Service ID Scheduled? Outage/At Risk Start End Duration Reason
Declared in the GOC DB
Service ID Scheduled? Outage/At Risk Start End Duration Reason
- - - - - - - -
  • No ongoing downtime
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.


Listing by category:


Open GGUS Tickets
Ticket-ID Type VO Site Priority Responsible Unit Status Last Update Subject Scope
144024 USER cms RAL-LCG2 very urgent NGI_UK waiting for reply 2019-11-15 15:39:00 File Read Issues where files are located at RAL WLCG
144015 USER other RAL-LCG2 less urgent NGI_UK in progress 2019-11-20 10:08:00 Stalled LSST jobs at RAL EGI
143762 TEAM lhcb RAL-LCG2 urgent NGI_UK in progress 2019-10-23 14:12:00 Stop using sl6 queues at RAL WLCG
143669 USER snoplus.snolab.ca RAL-LCG2 urgent NGI_UK on hold 2019-11-18 09:13:00 SNO+ LFC to DFC migration EGI
143323 TEAM lhcb RAL-LCG2 top priority NGI_UK on hold 2019-11-18 14:16:00 File deletion at RAL ECHO WLCG
142350 TEAM lhcb RAL-LCG2 top priority NGI_UK on hold 2019-11-18 14:59:00 Problem accessing some LHCb files at RAL WLCG



GGUS Tickets Closed Last week
Ticket-ID Type VO Site Priority Responsible Unit Status Last Update Subject Scope
143917 USER cms RAL-LCG2 urgent NGI_UK closed 2019-11-19 23:59:00 Transfers failing to T1_UK_RAL_Disk WLCG
143876 USER cms RAL-LCG2 urgent NGI_UK closed 2019-11-15 23:59:00 T1_UK_RAL HammerCloud cannot reach files via xrootd WLCG
143838 TEAM atlas RAL-LCG2 less urgent NGI_UK closed 2019-11-15 23:59:00 RAL-LCG2: TRANSFER an end-of-file was reached globus_xio: An end of file occurred WLCG
143834 USER cms RAL-LCG2 urgent NGI_UK closed 2019-11-13 23:59:00 transfers failing to T1_UK_RAL_Disk WLCG
143645 TEAM lhcb RAL-LCG2 top priority NGI_UK verified 2019-11-19 08:30:00 Jobs Failed to access files at RAL-LCG2 WLCG


Availability Report


Day Atlas CMS LHCB Alice Comments
2019-11-13 100 100 100 100
2019-11-14 100 100 100 98
2019-11-15 100 100 100 100
2019-11-16 100 100 100 100
2019-11-17 100 100 100 100
2019-11-18 100 99 100 100
2019-11-19 100 100 100 100
Hammercloud Test Report
Target Availability for each site is 97.0%
Day Atlas HC CMS HC Comment
2019-11-13 100 98
2019-11-14 100 99
2019-11-15 100 98
2019-11-16 100 97
2019-11-17 100 98
2019-11-18 96 98
2019-11-19 100 98

Key: Atlas HC = Atlas HammerCloud (Queue RAL-LCG2_UCORE, Template 841); CMS HC = CMS HammerCloud
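
The HammerCloud figures above are daily percentages of successful tests, judged against the 97.0% target. As a minimal illustration (not part of the report's tooling, and using hypothetical test counts rather than figures from the report), such a check could look like:

TARGET = 97.0

def availability(passed, total):
    """Percentage of successful tests out of all tests run that day."""
    return 100.0 * passed / total if total else 0.0

daily_tests = {"2019-11-18": (96, 100)}  # hypothetical (passed, total) counts

for day, (passed, total) in daily_tests.items():
    avail = availability(passed, total)
    flag = "OK" if avail >= TARGET else "below target"
    print(f"{day}: {avail:.1f}% ({flag})")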

Notes from Meeting.

Tier-1 Liaison 20/11/2019

Attendees: RobA, Marcello, Jose, Darren, Brian, Raja, Alistair

QUESTION Raja: when were the xrootd reboots done? During the last liaison meeting. There was an elog entry.

ACTION DM: GGUS#144015. The original issue is resolved. Create a new RT ticket regarding optimising processing and close the original ticket.

ACTION DM: GGUS#143762. It is possible to tailor the queues per VO, but the risk outweighs the gain: we might break something we haven't anticipated, e.g. accounting. This ticket is to be closed as we aren't going to "resolve" it. SL6 is to be supported until the end of November 2020.

ACTION AD: GGUS#143669. No progress.

ACTION Raja: GGUS#143323. Add some statistics to the ticket showing improvement (or not!).

GGUS#142350: double-read issue, no apparent improvement. GGUS#143645 is solved; however, the ticket listed three possible solutions and the working solution(s) have not been identified.

QUESTION Raja: why are some WNs more susceptible than others? No one seems to have an answer. Possibly the more jobs running on a node, the more likely something is to fail. The failure rate can be as much as 20%.

DIRAC is used because of its late-binding functionality; this seems to be hitting LHCb.
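
For context, "late binding" here refers to the pilot-job model: a pilot lands on a worker node first and only then pulls a matching payload from a central queue. The following is a toy Python illustration of that idea, not DIRAC code; the payload names and the SL6 flag are hypothetical.

from dataclasses import dataclass

@dataclass
class Payload:
    name: str
    needs_sl6: bool

# Hypothetical central task queue; in the real system this lives on the DIRAC side.
TASK_QUEUE = [
    Payload("lhcb_user_job", needs_sl6=False),
    Payload("lhcb_prod_job", needs_sl6=True),
]

def pilot(node_has_sl6: bool):
    """Pilot running on a worker node: only now is a payload matched to it."""
    for payload in list(TASK_QUEUE):
        if payload.needs_sl6 and not node_has_sl6:
            continue  # skip payloads this node cannot run
        TASK_QUEUE.remove(payload)
        print(f"running {payload.name}")
        return
    print("no matching work; pilot exits cleanly")

pilot(node_has_sl6=False)  # e.g. a non-SL6 node picks up the non-SL6 payload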

ACTION Raja: Further investigation required to differentiate between user and production jobs.

ACTION Marcello to ensure the caches are working on the gateways.

DUNE ran test jobs at RAL: 1500 jobs were loaded and we got 10. However, there are not enough DUNE jobs to go around! RAL is processing them correctly.

ACTION Raja: How does DUNE allocate jobs to sites?