Tier1 Operations Report 2019-02-05

From GridPP Wiki

RAL Tier1 Operations Report for 5th February 2019

Review of Issues during the week 28th January to 5th February 2019.
  • CPU efficiencies have improved for CMS (>80%), although they are still fluctuating considerably. ATLAS efficiency remains at 60-70%; the ATLAS liaison has investigated, and the fluctuations appear to be mostly the result of a changing mix of job types, especially failed jobs and group production jobs, which have a lower efficiency. The overall efficiency is similar to, perhaps slightly better than, this time last year.
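The efficiency figures above are conventionally the ratio of CPU time consumed to allocated wall-clock time (wall time multiplied by cores). A minimal sketch of that aggregation; the job records below are illustrative, not real accounting data:

```python
# Sketch: aggregate CPU efficiency across a batch of jobs.
# Efficiency = total CPU seconds / total (wall seconds * cores).
# The job tuples below are made-up examples, not real accounting data.

def cpu_efficiency(jobs):
    """jobs: iterable of (cpu_seconds, wall_seconds, cores) tuples."""
    total_cpu = sum(cpu for cpu, _, _ in jobs)
    total_wall = sum(wall * cores for _, wall, cores in jobs)
    return total_cpu / total_wall if total_wall else 0.0

jobs = [
    (3600, 4000, 1),  # well-behaved single-core job
    (200, 3000, 1),   # failed job: little CPU, lots of wall time
    (7000, 2000, 4),  # multicore group-production job
]
print(f"{cpu_efficiency(jobs):.1%}")  # prints 72.0%
```

This illustrates why a higher fraction of failed or group-production jobs drags the average down even when individual production jobs are efficient.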
Current operational status and issues
  • The system drive in a disk server for LHCb failed on Thursday afternoon. This was a generation-14 machine (dual-purpose for Ceph). The operating system was installed on the SSD (to leave the other disks for capacity), which is attached to the underside of the motherboard. Fabric is going to perform open-heart surgery today to install another disk.
  • The disk buffer in front of our new Castor tape instance almost filled up. We do not yet know the exact cause, but on 25th January (after several months of working perfectly) the garbage-collection daemon stopped clearing files promptly and properly, removing only a few files an hour. We have been manually wiping files from the tape buffer to keep space clear while we understand the problem.
  • While investigating the full buffer we found that NA62 has been writing files to the “disk” endpoint on wlcgTape. This endpoint does not get written to tape and was designed for a small number of functional test files (e.g. SAM tests, which get copied in and immediately deleted). There are ~197k files using up 11TB of space that, as things currently stand, will never be migrated to tape (and, if they are not used, will eventually be deleted). Most of these files have been written in the last few weeks. The Tier-1 manager has started an urgent conversation with NA62 to find out how important these files are.
  • ARC-CE04 has stopped working again. We are not sure whether this is related to the number of LHCb jobs submitted to this CE. We have rolled out an updated version of the software to arc-ce05 for testing; this should fix the problem, and at the very least it will mean the ARC developers need to look at the error. Unfortunately it is likely to break backward compatibility with some VOs. It would be desirable for LHCb to submit their jobs more evenly across our CEs (the current ratio is 0:25:25:50 for ARC-CE0[1-4] respectively).
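The manual wipe of the tape buffer mentioned above amounts to picking the oldest files until enough space would be freed. A hedged sketch of that selection logic; paths, sizes, and the free-space target are illustrative, and a real Castor buffer would additionally need to confirm each file has been migrated to tape before removal:

```python
# Sketch of the manual garbage-collection pass described above:
# choose the oldest files until the projected free space reaches a
# target. File names, sizes, and thresholds are made-up examples; a
# real Castor buffer would also need to check that each file has
# already been migrated to tape before it is removed.

def files_to_delete(files, free_bytes, target_free):
    """files: iterable of (path, mtime, size_bytes) tuples.
    Returns the paths to remove, oldest first, until the projected
    free space reaches target_free."""
    doomed = []
    for path, _, size in sorted(files, key=lambda f: f[1]):
        if free_bytes >= target_free:
            break
        doomed.append(path)
        free_bytes += size
    return doomed

buffer_files = [
    ("/buffer/run1.raw", 1548000000, 2 * 1024**3),  # oldest
    ("/buffer/run2.raw", 1548500000, 3 * 1024**3),
    ("/buffer/run3.raw", 1549000000, 2 * 1024**3),  # newest
]
# Pretend 1 GiB is free and we want 5 GiB free:
print(files_to_delete(buffer_files, 1 * 1024**3, 5 * 1024**3))
# prints ['/buffer/run1.raw', '/buffer/run2.raw']
```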
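The ~197k-file / 11TB figure for the NA62 "disk" endpoint is the kind of number obtained by tallying a namespace listing. A sketch assuming a flat dump of "path size" lines; the dump format and paths are illustrative, not Castor's actual listing output:

```python
# Sketch: tally file count and total size per top-level directory
# from a flat namespace dump of "path size_in_bytes" lines.
# The dump format and paths are illustrative examples.
from collections import defaultdict

def tally(dump_lines):
    counts = defaultdict(int)
    sizes = defaultdict(int)
    for line in dump_lines:
        path, size = line.rsplit(None, 1)
        top = "/".join(path.split("/")[:3])  # e.g. /castor/na62
        counts[top] += 1
        sizes[top] += int(size)
    return counts, sizes

dump = [
    "/castor/na62/raw/file1.dat 60000000",
    "/castor/na62/raw/file2.dat 55000000",
    "/castor/ops/sam-test.dat 1024",
]
counts, sizes = tally(dump)
print(counts["/castor/na62"], sizes["/castor/na62"])  # prints 2 115000000
```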
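The desired even spread of LHCb submissions across the four CEs amounts to proportional allocation of jobs by weight. A minimal sketch using largest-remainder rounding; CE names and the mapping of ratio values to CEs are illustrative, and this is not how LHCb's submission machinery actually works:

```python
# Sketch: proportional (largest-remainder) allocation of job
# submissions across CEs. CE names and weights are illustrative;
# this is not LHCb's actual submission mechanism.

def allocate(n_jobs, weights):
    """Split n_jobs across CEs in proportion to weights."""
    total = sum(weights.values())
    quotas = {ce: n_jobs * w / total for ce, w in weights.items()}
    alloc = {ce: int(q) for ce, q in quotas.items()}
    leftover = n_jobs - sum(alloc.values())
    # Hand the remaining jobs to the CEs with the largest fractional parts.
    for ce in sorted(quotas, key=lambda c: quotas[c] - alloc[c],
                     reverse=True)[:leftover]:
        alloc[ce] += 1
    return alloc

# Skewed ratio like the 0:25:25:50 reported above:
print(allocate(100, {"arc-ce01": 0, "arc-ce02": 25,
                     "arc-ce03": 25, "arc-ce04": 50}))
# Desired even spread:
print(allocate(100, {ce: 1 for ce in
                     ("arc-ce01", "arc-ce02", "arc-ce03", "arc-ce04")}))
```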
Resolved Castor Disk Server Issues
Machine VO DiskPool dxtx Comments
- - - - -
Ongoing Castor Disk Server Issues
Machine VO DiskPool dxtx Comments
gdss811 LHCb lhcbDst d1t0 -
Limits on concurrent batch system jobs.
  • ALICE - 1000
Notable Changes made since the last meeting.
  • NTR
Entries in GOC DB starting since the last report.
Service ID Scheduled? Outage/At Risk Start End Duration Reason
- - - - - - - -
Declared in the GOC DB
Service Scheduled? Outage/At Risk Start End Duration Reason
- - - - - - -
  • No ongoing downtime
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Listing by category:

  • DNS servers will be rolled out within the Tier1 network.
Open

GGUS Tickets (Snapshot taken during morning of the meeting).

Request id Affected vo Status Priority Date of creation Last update Type of problem Subject Scope
139477 ops in progress less urgent 01/02/2019 04/02/2019 Operations [Rod Dashboard] Issue detected : org.nordugrid.ARC-CE-submit-ops@arc-ce04.gridpp.rl.ac.uk EGI
139476 mice in progress less urgent 01/02/2019 04/02/2019 Other LFC dump EGI
139306 dteam in progress less urgent 24/01/2019 29/01/2019 Monitoring perfsonar hosts need updating EGI
138891 ops on hold less urgent 17/12/2018 05/02/2019 Operations [Rod Dashboard] Issue detected : egi.eu.lowAvailability-/RAL-LCG2@RAL-LCG2_Availability EGI
138665 mice on hold urgent 04/12/2018 30/01/2019 Middleware Problem accessing LFC at RAL EGI
138500 cms on hold urgent 26/11/2018 30/01/2019 CMS_Data Transfers Transfers failing from T2_PL_Swierk to RAL WLCG
138361 t2k.org in progress less urgent 19/11/2018 31/01/2019 Other RAL-LCG2: t2k.org LFC to DFC transition EGI
138033 atlas in progress urgent 01/11/2018 31/01/2019 Other singularity jobs failing at RAL EGI
137897 enmr.eu on hold urgent 23/10/2018 31/01/2019 Workload Management enmr.eu accounting at RAL EGI
GGUS Tickets Closed Last week
Request id Affected vo Status Priority Date of creation Last update Type of problem Subject Scope
139538 cms solved urgent 05/02/2019 05/02/2019 CMS_Data Transfers Some transfers failing to RAL - SRM_AUTHORIZATION_FAILURE WLCG
139414 lhcb verified very urgent 30/01/2019 05/02/2019 Other Jobs Failed with Segmentation fault at RAL-LCG2 WLCG
139405 ops verified less urgent 30/01/2019 05/02/2019 Operations [Rod Dashboard] Issue detected : org.bdii.GLUE2-Validate@site-bdii.gridpp.rl.ac.uk EGI
139404 none verified top priority 30/01/2019 01/02/2019 Other This TEST ALARM has been raised for testing GGUS alarm work flow after a new GGUS release. WLCG
139380 cms solved urgent 29/01/2019 31/01/2019 CMS_Facilities T1_UK_RAL failing SAM tests inside Singularity WLCG
139375 atlas solved urgent 29/01/2019 04/02/2019 Other RAL-LCG2 transfers fail with "the server responded with an error 500" WLCG
139328 cms solved urgent 25/01/2019 29/01/2019 CMS_Facilities T1_UK_RAL SRM tests failing WLCG
139312 cms solved urgent 25/01/2019 29/01/2019 CMS_Data Transfers Corrupted files at RAL_Buffer? WLCG
139245 cms solved urgent 21/01/2019 04/02/2019 CMS_Data Transfers Transfers failing from CNAF_Disk to RAL_Buffer WLCG

Availability Report

Day Atlas Atlas-Echo CMS LHCB Alice OPS Comments
2019-01-29 100 100 97 100 100 -1
2019-01-30 100 100 98 100 100 100
2019-01-31 100 100 100 100 100 100
2019-02-01 100 100 100 100 100 100
2019-02-02 100 100 98 100 100 100
2019-02-03 100 100 100 100 100 100
2019-02-04 100 100 100 100 100 100
2019-02-05 100 100 100 100 100 100
2019-02-06 100 100 100 100 100 100
Hammercloud Test Report
Target availability for each site is 97.0% (Red: <90%; Orange: <97%)
Day Atlas HC CMS HC Comment
2019-01-23 100 98
2019-01-24 100 98
2019-01-25 100 98
2019-01-26 100 91
2019-01-27 100 97
2019-01-28 100 93
2019-01-29 100 98

Key: Atlas HC = Atlas HammerCloud (Queue RAL-LCG2_UCORE, Template 841); CMS HC = CMS HammerCloud

Notes from Meeting.