Difference between revisions of "Tier1 Operations Report 2019-07-10"

From GridPP Wiki

Latest revision as of 10:27, 10 July 2019

RAL Tier1 Operations Report for 10th July 2019

Review of Issues during the week 26th June 2019 to the 3rd July 2019.
  • The only issue of note to report was CMS reporting significant SAM/HammerCloud test and job failures.

Investigation showed this to be the result of AAA and HC tests that had been in error for a number of days during the week. It was traced back to a large number of CMS production jobs ("premix" jobs) that required significant I/O activity. (Possibly also an effect of merge jobs, TBC.)

The strain this load put on RAL disk storage had several effects: a significant number of the production jobs were failing (~50-60%), HC tests were failing, and AAA SAM tests were also failing.

For the avoidance of doubt and to aid clarity, the premix jobs were affecting the SAM tests and HC jobs so heavily because of the following chain of events:

1) A premix job would run on a worker node with heavy I/O over xroot from ECHO.

2) Some event, possibly the "slow request", interrupts the read.

3) The job backs off from reading directly from ECHO and tries AAA.

4) AAA locates the file at RAL and the job carries on its heavy I/O via the AAA proxies.

5) This happens to enough jobs that it knocks the AAA proxies over.

6) SAM and HC errors ensue.

If it were possible to break the chain at link 4, this would make the system more robust.
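One way the chain could be broken at link 4 is to cap how many jobs may fall back to AAA at once, so a transient ECHO read problem cannot cascade into overloading the AAA proxies. The sketch below is purely illustrative: the guard class, the limit of 10, and the reader callables are assumptions, not the real CMS/xrootd client logic.

```python
# Hypothetical sketch: cap concurrent AAA fallback reads so that an ECHO
# read glitch cannot knock the AAA proxies over. Names and the limit are
# illustrative assumptions.
import threading

class FallbackGuard:
    """Allow at most `limit` concurrent AAA fallback reads."""
    def __init__(self, limit=10):
        self._sem = threading.Semaphore(limit)

    def try_acquire(self):
        # Non-blocking: if the proxies are already saturated, refuse the
        # fallback instead of piling on.
        return self._sem.acquire(blocking=False)

    def release(self):
        self._sem.release()

def read_file(path, guard, read_from_echo, read_via_aaa):
    try:
        return read_from_echo(path)      # step 1: direct read from ECHO
    except IOError:                      # step 2: e.g. a "slow request"
        if not guard.try_acquire():      # step 4 capped: fallback refused
            raise                        # fail fast rather than overload AAA
        try:
            return read_via_aaa(path)    # steps 3-4: redirected read via AAA
        finally:
            guard.release()
```

With this guard, a surge of failing premix jobs would mostly fail fast (and could retry ECHO) instead of all migrating their heavy I/O onto the AAA proxies at once.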


Current operational status and issues
Notable Changes made since the last meeting.
  • NTR
Entries in GOC DB starting since the last report.
Service ID Scheduled? Outage/At Risk Start End Duration Reason
- - - - - - - -
Declared in the GOC DB
Service ID Scheduled? Outage/At Risk Start End Duration Reason
- - - - - - - -
  • No ongoing downtime
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Listing by category:

  • DNS servers will be rolled out within the Tier1 network.
Open

GGUS Tickets (Snapshot taken during morning of the meeting).

Ticket-ID Type VO Site Priority Responsible Unit Status Last Update Subject Scope
142155 USER cms RAL-LCG2 urgent NGI_UK in progress 2019-07-09 20:07:00 Transfers are failing from UK to KIPT WLCG
142127 TEAM lhcb RAL-LCG2 urgent NGI_UK in progress 2019-07-09 15:53:00 2 files cannot be staged WLCG
140447 USER dteam RAL-LCG2 less urgent NGI_UK on hold 2019-07-09 15:03:00 packet loss outbound from RAL-LCG2 over IPv6 EGI
140220 USER mice RAL-LCG2 less urgent NGI_UK waiting for reply 2019-07-04 17:55:00 mice LFC to DFC transition EGI


GGUS Tickets Closed Last week
Ticket-ID Type VO Site Priority Responsible Unit Status Last Update Subject Scope
141990 USER cms RAL-LCG2 urgent NGI_UK solved 2019-07-09 14:11:00 Intermittent HC failures at T1_UK_RAL WLCG
141968 USER cms RAL-LCG2 very urgent NGI_UK solved 2019-07-04 14:20:00 SAM (CE) and Hammer Cloud Failures at T1_UK_RAL WLCG
141771 USER cms RAL-LCG2 urgent NGI_UK closed 2019-07-08 23:59:00 file read error at T1_UK_RAL WLCG
140870 USER t2k.org RAL-LCG2 less urgent NGI_UK solved 2019-07-09 14:18:00 Files vanished from RAL tape? EGI
139672 USER other RAL-LCG2 urgent NGI_UK solved 2019-07-09 11:57:00 No LIGO pilots running at RAL EGI

Availability Report

Day Atlas CMS LHCB Alice Comments
2019-07-03 100 100 100 100
2019-07-04 100 100 100 100
2019-07-05 100 100 100 100
2019-07-06 100 100 100 100
2019-07-07 100 100 100 100
2019-07-08 100 100 100 100
2019-07-09 100 100 69 71
Hammercloud Test Report
Target Availability for each site is 97.0%
Day Atlas HC CMS HC Comment
2019-06-03 100 n/a
2019-06-04 100 n/a
2019-06-05 100 n/a
2019-06-06 100 n/a
2019-06-07 100 n/a
2019-07-09 100 45



Key: Atlas HC = Atlas HammerCloud (Queue RAL-LCG2_UCORE, Template 841); CMS HC = CMS HammerCloud
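The 97.0% target above can be checked mechanically against the table rows. This small helper is an assumption for illustration, not part of the report tooling; the sample rows are taken from the table above.

```python
# Flag HammerCloud results below the stated 97.0% availability target.
# Helper and row format are illustrative assumptions.
TARGET = 97.0

def below_target(rows, target=TARGET):
    """rows: (day, vo, availability) tuples; availability is None for n/a."""
    return [(day, vo, avail) for day, vo, avail in rows
            if avail is not None and avail < target]

rows = [
    ("2019-07-09", "Atlas HC", 100.0),
    ("2019-07-09", "CMS HC", 45.0),
]
# below_target(rows) -> [("2019-07-09", "CMS HC", 45.0)]
```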

Notes from Meeting.