Difference between revisions of "Tier1 Operations Report 2019-07-10"


Latest revision as of 10:27, 10 July 2019

RAL Tier1 Operations Report for 10th July 2019

Review of Issues during the week 3rd July 2019 to the 10th July 2019.
  • The only issue of note to report was CMS seeing significant SAM/HammerCloud test failures and job failures.

After investigation, this appears to be the result of AAA and HC tests being in error for a number of days during the week. It was traced back to a large number of CMS production jobs ("premix" jobs) that require significant I/O activity (possibly also an effect of merge jobs, to be confirmed).

The strain this load put on RAL disk storage had several effects: a significant number of the production jobs were failing (roughly 50-60%), HC tests were failing, and the AAA SAM tests were also failing.

For the avoidance of doubt and to aid clarity, the reason the premix jobs were affecting the SAM tests and HC jobs so much was the following chain of events:

1) A premix job runs on a worker node with heavy I/O over xroot from ECHO.
2) Some event, possibly the "slow request", interrupts the read.
3) The job backs off from reading directly from ECHO and tries AAA.
4) AAA locates the file at RAL and the job carries on its heavy I/O via the AAA proxies.
5) This happens to enough jobs that it knocks the AAA proxies over.
6) SAM and HC errors ensue.

If it is possible to break the chain at link 4, this would make the system more robust.
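To make the fallback behaviour in steps 1 to 4 concrete, the following is a minimal, hypothetical sketch (in Python) of the read logic a job might follow. All names here (read_from_echo, read_via_aaa, SlowRequestError, the allow_aaa_fallback flag) are illustrative stand-ins rather than the real CMS, XRootD or ECHO interfaces; the flag simply marks where the chain could be broken at link 4.

<pre>
"""Hypothetical sketch of the read/fallback chain described above (illustration only).
None of these names are real CMS, XRootD or ECHO interfaces."""


class SlowRequestError(Exception):
    """Stands in for the 'slow request' event that interrupts a direct read."""


def read_from_echo(path):
    """Pretend direct xroot read from the local ECHO gateway (step 1).
    Here it always simulates the interruption seen in production (step 2)."""
    raise SlowRequestError(f"direct read of {path} interrupted")


def read_via_aaa(path):
    """Pretend read through the AAA redirector and proxies (step 4)."""
    return f"payload of {path} fetched via AAA proxies"


def read_event_data(path, allow_aaa_fallback=True):
    try:
        return read_from_echo(path)          # step 1: heavy I/O direct from ECHO
    except SlowRequestError:
        if not allow_aaa_fallback:
            raise                            # breaking the chain at link 4
        # Steps 3-4: back off from ECHO; AAA locates the file (back at RAL) and
        # the heavy I/O continues through the AAA proxies. With enough jobs in
        # this state the proxies are overloaded (steps 5-6).
        return read_via_aaa(path)


print(read_event_data("/store/mc/premix/example.root"))
</pre>

In this sketch, calling read_event_data(path, allow_aaa_fallback=False) corresponds to breaking the chain at link 4: the job fails fast (or could retry locally) instead of routing its heavy I/O through the AAA proxies.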


Current operational status and issues
Notable Changes made since the last meeting.
  • NTR
Entries in GOC DB starting since the last report.
{| border=1 align=center
|- bgcolor="#7c8aaf"
! Service !! ID !! Scheduled? !! Outage/At Risk !! Start !! End !! Duration !! Reason
|-
| - || - || - || - || - || - || - || -
|}
Declared in the GOC DB
{| border=1 align=center
|- bgcolor="#7c8aaf"
! Service !! ID !! Scheduled? !! Outage/At Risk !! Start !! End !! Duration !! Reason
|-
| - || - || - || - || - || - || - || -
|}
  • No ongoing downtime
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Listing by category:

  • DNS servers will be rolled out within the Tier1 network.
Open GGUS Tickets (Snapshot taken during the morning of the meeting).

{| border=1 align=center
|- bgcolor="#7c8aaf"
! Ticket-ID !! Type !! VO !! Site !! Priority !! Responsible Unit !! Status !! Last Update !! Subject !! Scope
|-
| 142155 || USER || cms || RAL-LCG2 || urgent || NGI_UK || in progress || 2019-07-09 20:07:00 || Transfers are failing from UK to KIPT || WLCG
|-
| 142127 || TEAM || lhcb || RAL-LCG2 || urgent || NGI_UK || in progress || 2019-07-09 15:53:00 || 2 files cannot be staged || WLCG
|-
| 140447 || USER || dteam || RAL-LCG2 || less urgent || NGI_UK || on hold || 2019-07-09 15:03:00 || packet loss outbound from RAL-LCG2 over IPv6 || EGI
|-
| 140220 || USER || mice || RAL-LCG2 || less urgent || NGI_UK || waiting for reply || 2019-07-04 17:55:00 || mice LFC to DFC transition || EGI
|}


GGUS Tickets Closed Last week
{| border=1 align=center
|- bgcolor="#7c8aaf"
! Ticket-ID !! Type !! VO !! Site !! Priority !! Responsible Unit !! Status !! Last Update !! Subject !! Scope
|-
| 141990 || USER || cms || RAL-LCG2 || urgent || NGI_UK || solved || 2019-07-09 14:11:00 || Intermittent HC failures at T1_UK_RAL || WLCG
|-
| 141968 || USER || cms || RAL-LCG2 || very urgent || NGI_UK || solved || 2019-07-04 14:20:00 || SAM (CE) and Hammer Cloud Failures at T1_UK_RAL || WLCG
|-
| 141771 || USER || cms || RAL-LCG2 || urgent || NGI_UK || closed || 2019-07-08 23:59:00 || file read error at T1_UK_RAL || WLCG
|-
| 140870 || USER || t2k.org || RAL-LCG2 || less urgent || NGI_UK || solved || 2019-07-09 14:18:00 || Files vanished from RAL tape? || EGI
|-
| 139672 || USER || other || RAL-LCG2 || urgent || NGI_UK || solved || 2019-07-09 11:57:00 || No LIGO pilots running at RAL || EGI
|}

Availability Report

{| border=1 align=center
|- bgcolor="#7c8aaf"
! Day !! Atlas !! CMS !! LHCB !! Alice !! Comments
|-
| 2019-07-03 || 100 || 100 || 100 || 100 ||
|-
| 2019-07-04 || 100 || 100 || 100 || 100 ||
|-
| 2019-07-05 || 100 || 100 || 100 || 100 ||
|-
| 2019-07-06 || 100 || 100 || 100 || 100 ||
|-
| 2019-07-07 || 100 || 100 || 100 || 100 ||
|-
| 2019-07-08 || 100 || 100 || 100 || 100 ||
|-
| 2019-07-09 || 100 || 100 || 69 || 71 ||
|}
Hammercloud Test Report
Target Availability for each site is 97.0%
{| border=1 align=center
|- bgcolor="#7c8aaf"
! Day !! Atlas HC !! CMS HC !! Comment
|-
| 2019-06-03 || 100 || n/a ||
|-
| 2019-06-04 || 100 || n/a ||
|-
| 2019-06-05 || 100 || n/a ||
|-
| 2019-06-06 || 100 || n/a ||
|-
| 2019-06-07 || 100 || n/a ||
|-
| 2019-07-09 || 100 || 45 ||
|-
| 2019-07-09 || 100 || 45 ||
|}
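As a rough illustration of how the daily figures relate to the 97.0% target, the sketch below (Python, illustrative arithmetic only) compares a simple mean of the LHCb column from the Availability Report above against the target. The real availability and HammerCloud numbers are produced by the experiments' monitoring frameworks, not by this calculation.

<pre>
# Illustrative arithmetic only: compare daily figures with the 97.0% target
# using a simple mean over the reported days.
lhcb_daily = [100, 100, 100, 100, 100, 100, 69]   # LHCb column, 2019-07-03..09

average = sum(lhcb_daily) / len(lhcb_daily)
target = 97.0
print(f"mean availability = {average:.1f}% (target {target}%): "
      f"{'met' if average >= target else 'not met'}")
# mean availability = 95.6% (target 97.0%): not met
</pre>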



Key: Atlas HC = Atlas HammerCloud (Queue RAL-LCG2_UCORE, Template 841); CMS HC = CMS HammerCloud

Notes from Meeting.