Revision as of 13:50, 9 April 2019

RAL Tier1 Operations Report for 8th April 2019

Review of Issues during the week 1st April 2019 to 8th April 2019.
  • We are seeing high outbound packet loss (~1%) over IPv6. Investigations are ongoing; a sketch of the kind of loss measurement involved follows this list.
  • On Thursday 4th April, around half of the containers (each of which runs a job inside) restarted, killing the jobs inside them. We do not yet understand the cause, although it was confined to a particular subset of machines.
  • On Friday 5th April, gdss700 (LHCb) had a double drive failure and had to be removed from production while the array was rebuilt. It will hopefully be returned to production later today. We are still waiting on LHCb for a DIRAC update before migrating to Echo. [UPDATE 9/4/19] This server is now almost a case of "3 drives out, all out" (the array tolerates two failed drives, so a third failure means total loss). We are currently working on data recovery.
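
As context for the IPv6 packet-loss item above, here is a minimal sketch of the kind of measurement involved, driving iputils ping from Python; the peer hostname is a hypothetical placeholder, not one of the endpoints actually under investigation.

 #!/usr/bin/env python3
 """Minimal sketch: measure outbound IPv6 packet loss to a peer.
 Assumes iputils ping with the -6 flag; the peer below is hypothetical."""
 import re
 import subprocess
 
 PEER = "ipv6-peer.example.org"  # hypothetical endpoint, not from the report
 
 def ipv6_loss(host, count=100):
     """Run ping over IPv6 and return the reported packet-loss percentage."""
     out = subprocess.run(
         ["ping", "-6", "-c", str(count), "-i", "0.2", host],
         capture_output=True, text=True, check=False,
     ).stdout
     match = re.search(r"([\d.]+)% packet loss", out)
     if match is None:
         raise RuntimeError("could not parse ping output")
     return float(match.group(1))
 
 if __name__ == "__main__":
     print("IPv6 packet loss to %s: %.1f%%" % (PEER, ipv6_loss(PEER)))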
Current operational status and issues
Resolved Castor Disk Server Issues
Machine VO DiskPool dxtx Comments
- - - - -
Ongoing Castor Disk Server Issues
Machine VO DiskPool dxtx Comments
gdss700 LHCb lhcb d1t0 Server in a very poor state; almost a case of "3 drives out, all out".
Limits on concurrent batch system jobs.
  • ALICE - 1000
Notable Changes made since the last meeting.
  • NTR (nothing to report)
Entries in GOC DB starting since the last report.
Service ID Scheduled? Outage/At Risk Start End Duration Reason
- - - - - - - -
Declared in the GOC DB
Service ID Scheduled? Outage/At Risk Start End Duration Reason
- - - - - - - -
  • No ongoing downtime
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Listing by category:

  • DNS servers will be rolled out within the Tier1 network.
Open GGUS Tickets (snapshot taken during the morning of the meeting).

{| border=1 align=center
|- bgcolor="#7c8aaf"
! Request id !! Affected vo !! Status !! Priority !! Date of creation !! Last update !! Type of problem !! Subject !! Scope !! Solution
|-
| 140599 || lhcb || in progress || very urgent || 05/04/2019 || 05/04/2019 || File Access || Data access problem at RAL-LCG2 || WLCG ||
|-
| 140589 || lhcb || in progress || very urgent || 04/04/2019 || 04/04/2019 || Local Batch System || Pilots killed at RAL-LCG2 || WLCG ||
|-
| 140577 || lhcb || in progress || less urgent || 04/04/2019 || 09/04/2019 || File Access || LHCb disk only files requested with the wrong service class || EGI ||
|-
| 140447 || dteam || in progress || less urgent || 27/03/2019 || 02/04/2019 || Network problem || packet loss outbound from RAL-LCG2 over IPv6 || EGI ||
|-
| 140220 || mice || in progress || less urgent || 15/03/2019 || 08/04/2019 || Other || mice LFC to DFC transition || EGI ||
|-
| 139672 || other || in progress || urgent || 13/02/2019 || 08/04/2019 || Middleware || No LIGO pilots running at RAL || EGI ||
|-
| 138033 || atlas || in progress || urgent || 01/11/2018 || 09/04/2019 || Other || singularity jobs failing at RAL || EGI ||
|}
GGUS Tickets Closed Last week
{| border=1 align=center
|- bgcolor="#7c8aaf"
! Request id !! Affected vo !! Status !! Priority !! Date of creation !! Last update !! Type of problem !! Subject !! Scope !! Solution
|-
| 140521 || ops || verified || less urgent || 01/04/2019 || 04/04/2019 || Operations || [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer@gridftp.echo.stfc.ac.uk || EGI || The Ceph team have added the problem DN to the local over-ride for the grid map files. The test has cleared.
|-
| 140511 || cms || solved || urgent || 01/04/2019 || 02/04/2019 || CMS_Facilities || T1_UK_RAL SAM job run out of date || WLCG || Issue is related to the SAM dashboard.
|-
| 140283 || atlas || closed || less urgent || 19/03/2019 || 03/04/2019 || File Transfer || UK RAL-LCG2: DESTINATION SRM_PUT_TURL error || WLCG || Corrected ownership of the CASTOR directory to atlas001:atlas.
|-
| 140278 || cms || closed || urgent || 19/03/2019 || 05/04/2019 || CMS_Data Transfers || Transfers failing from FNAL_Buffer to RAL_Disk || WLCG || Echo was mistakenly returning the correct size and checksum for files that had failed transfers and were corrupt, so PhEDEx marked them as good and they lay latent until they were accessed or transferred out. Transfers have recovered.
|-
| 140210 || atlas || closed || top priority || 14/03/2019 || 02/04/2019 || File Transfer || Cannot access some files || WLCG || These files were lost from RAL-LCG2-ECHO_DATADISK, probably due to an FTS bug which should now be fixed. They were identified here because they were needed by the production team; the ticket submitter has now marked them lost in Rucio. Independently, we have been analysing the results of the Rucio consistency checker, which identified 163 files lost from this disk. The 5 files listed in this ticket are among the 163; we will mark the remaining files lost in Rucio.
|-
| 139990 || cms || closed || urgent || 01/03/2019 || 03/04/2019 || CMS_AAA WAN Access || T1_UK_RAL xrootd segfaulted || WLCG || The issue is intermittent and occasional. This segfault is most likely a known problem, which will be addressed in xrootd version 4.9 along with other fixes; RAL will install this when available.
|}
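
Ticket 140278 above hinged on trusting the size and checksum reported by the destination storage. As an illustration only (not RAL's or PhEDEx's actual tooling), here is a minimal sketch of independent post-transfer verification against catalogue values, using adler32, the checksum commonly used for grid transfers; the catalogue entries and paths are hypothetical.

 import os
 import zlib
 
 # Hypothetical catalogue: path -> (expected size in bytes, adler32 hex).
 CATALOGUE = {
     "/data/store/file1.root": (1048576, "0f3a1c2b"),
 }
 
 def adler32_of(path, chunk=1 << 20):
     """Stream the file and return its adler32 checksum as zero-padded hex."""
     value = 1  # adler32 initial value
     with open(path, "rb") as fh:
         while True:
             block = fh.read(chunk)
             if not block:
                 break
             value = zlib.adler32(block, value)
     return "%08x" % (value & 0xFFFFFFFF)
 
 def verify(path):
     """Check the size first (cheap), then the checksum (full read)."""
     size, checksum = CATALOGUE[path]
     if os.path.getsize(path) != size:
         return False
     return adler32_of(path) == checksum
 
 for path in CATALOGUE:
     print(path, "OK" if verify(path) else "CORRUPT")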

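Ticket 140210 mentions the Rucio consistency checker, which identified 163 lost files. At its core such a check is a set comparison between a catalogue dump and a storage namespace dump; the sketch below is schematic, with hypothetical dump file names and an assumed format of one file name per line.

 # Schematic consistency check: compare a catalogue dump with a storage
 # namespace dump (both hypothetical; one file name per line).
 
 def read_dump(path):
     with open(path) as fh:
         return {line.strip() for line in fh if line.strip()}
 
 catalogue = read_dump("rucio_datadisk_dump.txt")  # hypothetical dump
 storage = read_dump("echo_namespace_dump.txt")    # hypothetical dump
 
 lost = catalogue - storage  # catalogued but missing from storage ("lost")
 dark = storage - catalogue  # on storage but not catalogued ("dark" data)
 
 print("%d lost files, %d dark files" % (len(lost), len(dark)))
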
Availability Report

Day Atlas Atlas-Echo CMS LHCb Alice OPS Comments
2019-04-02 100 100 92 100 100 100
2019-04-03 100 100 100 100 100 100
2019-04-04 100 100 100 100 100 100
2019-04-05 100 100 100 100 100 100
2019-04-06 100 100 100 100 100 84
2019-04-07 100 100 100 100 100 100
2019-04-08 100 100 100 100 100 100
2019-04-09 100 100 100 100 100 100
Hammercloud Test Report
Target Availability for each site is 97.0% (Red < 90%, Orange < 97%); a small sketch of this rule follows the table.
Day Atlas HC CMS HC Comment
2019-03-26 100 99
2019-03-27 100 96
2019-03-28 100 100
2019-03-29 100 100
2019-03-30 100 99
2019-03-31 100 99
2019-04-01 100 100
2019-04-02 100 100
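
The traffic-light rule quoted above the table (Red below 90%, Orange below the 97% target) reduces to a small classifier; a minimal sketch:

 def availability_flag(percent):
     """Traffic-light rule from the report: Red < 90%, Orange < 97% target."""
     if percent < 90.0:
         return "Red"
     if percent < 97.0:
         return "Orange"
     return "OK"
 
 assert availability_flag(96) == "Orange"  # e.g. CMS HC on 2019-03-27
 assert availability_flag(100) == "OK"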

Key: Atlas HC = Atlas HammerCloud (Queue RAL-LCG2_UCORE, Template 841); CMS HC = CMS HammerCloud

Notes from Meeting.