Tier1 Operations Report 2018-08-13

From GridPP Wiki
Jump to: navigation, search

RAL Tier1 Operations Report for 13th August 2018

Review of Issues during the week 6th August to the 13th August 2018.
  • The Tier-1 at RAL has had an unscheduled outage of Echo since Friday evening (10/8/18). The initial cause is thought to be a combination of client I/O and cluster I/O (moving data onto new hardware), which caused excessive memory usage. This in turn caused machines to swap heavily, placing even more load on the cluster. All I/O operations on the cluster have been paused to reduce memory usage while the cluster is restored to a healthy state and measures are put in place to reduce memory usage on the storage nodes.
  • While this investigation and work are carried out, the ongoing downtime has been extended until 12:00 on 16/08/2018.
  • CMS jobs were experiencing periodic failures when accessing files. The problem appeared to be network related and was tracked down to Docker containers losing all connectivity; this was also causing some ATLAS jobs to fail with "lost heartbeat" errors. The fix requires some machines to be re-installed, and the batch farm is at half capacity while this is being done.
  • CMS AAA - the service is heavily loaded (there is a lot of popular data on Echo), causing machines to go down or to slow to the point that SAM tests start failing. Restarting the machines (via a cron job) helps; work continues on adding throttling.
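In spirit, the throttling being added amounts to a concurrency cap: only a fixed number of requests are served at once and the rest queue, smoothing the load. A minimal illustrative sketch in Python (the class name and the limit of 4 are assumptions for illustration, not the production AAA code):

```python
import threading

class TransferThrottle:
    """Cap the number of concurrent transfers; extra requests wait their turn."""

    def __init__(self, max_concurrent):
        # BoundedSemaphore blocks acquirers once max_concurrent slots are taken.
        self._slots = threading.BoundedSemaphore(max_concurrent)

    def run(self, transfer):
        with self._slots:  # blocks while max_concurrent transfers are in flight
            return transfer()

# Usage: ten simulated transfers, at most 4 running at any one time.
throttle = TransferThrottle(max_concurrent=4)
results = []
threads = [
    threading.Thread(target=lambda i=i: results.append(throttle.run(lambda: i)))
    for i in range(10)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # all ten transfers complete: [0, 1, ..., 9]
```

The point of the cap is that overload degrades into queueing delay rather than machine failure, which is the behaviour the cron-driven restarts are currently substituting for.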
Current operational status and issues
  • Requirement to purchase a new SIM (and an associated contract) to replace the failed EE SIM in the SMS system.
Resolved Castor Disk Server Issues
Machine VO DiskPool dxtx Comments
gdss690 LHCb lhcbDst d1t0 Back in production (read-only).
Ongoing Castor Disk Server Issues
Machine VO DiskPool dxtx Comments
gdss747 Atlas atlasStripInput d1t0 Currently in intervention.
Limits on concurrent batch system jobs.
  • CMS Multicore 550
Notable Changes made since the last meeting.
  • None.
Entries in GOC DB starting since the last report.
Service ID Scheduled? Outage/At Risk Start End Duration Reason
ECHO 25871 UNSCHEDULED OUTAGE OUTAGE 10/08/2018 20:00 16/08/2018 12:00 7 days Major problems with the ECHO instance at RAL
Declared in the GOC DB
Service ID Scheduled? Outage/At Risk Start End Duration Reason
LFC 25853 SCHEDULED OUTAGE OUTAGE 15/08/2018 08:00 15/08/2018 17:00 1 day Major upgrade of systems
ECHO 25871 UNSCHEDULED OUTAGE OUTAGE 10/08/2018 20:00 16/08/2018 12:00 7 days Major problems with the ECHO instance at RAL
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Listing by category:

  • Castor:
    • Update systems to use SL7, configured by Quattor/Aquilon. (Tape servers done.)
    • Move to generic Castor headnodes.
  • Internal
    • DNS servers will be rolled out within the Tier1 network.
Open GGUS Tickets (snapshot taken during the morning of the meeting).

Request id Affected vo Status Priority Date of creation Last update Type of problem Subject Scope
136701 lhcb in progress very urgent 14/08/2018 14/08/2018 File Transfer background of transfer errors WLCG
136665 cms in progress urgent 11/08/2018 14/08/2018 CMS_SAM tests T1_UK_RAL is down > 12h WLCG
136563 cms in progress urgent 06/08/2018 06/08/2018 CMS_Data Transfers Possibly corrupted files at RAL WLCG
136366 mice waiting for reply less urgent 25/07/2018 08/08/2018 Local Batch System Remove MICE Queue from RAL T1 Batch EGI
136358 cms on hold urgent 25/07/2018 13/08/2018 CMS_Facilities T1_UK_RAL WN-xrootd-access failure WLCG
136199 lhcb in progress very urgent 18/07/2018 07/08/2018 File Transfer Lots of submitted transfers on RAL FTS WLCG
136028 cms in progress top priority 10/07/2018 13/08/2018 CMS_AAA WAN Access Issues reading files at T1_UK_RAL_Disk WLCG
124876 ops in progress less urgent 07/11/2016 23/07/2018 Operations [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk EGI
GGUS Tickets Closed Last Week
Request id Affected vo Status Priority Date of creation Last update Type of problem Subject Scope
136537 ops verified less urgent 03/08/2018 06/08/2018 Operations [Rod Dashboard] Issue detected : org.bdii.GLUE2-Validate@site-bdii.gridpp.rl.ac.uk EGI
136359 none closed top priority 25/07/2018 08/08/2018 Other This TEST ALARM has been raised for testing GGUS alarm work flow after a new GGUS release. WLCG
136270 cms closed urgent 22/07/2018 07/08/2018 CMS_Facilities T1_UK_RAL xrootd access failures WLCG
134685 dteam solved less urgent 23/04/2018 08/08/2018 Middleware please upgrade perfsonar host(s) at RAL-LCG2 to CentOS7 EGI

Availability Report

Target Availability for each site is 97.0% Red <90% Orange <97%
Day Atlas Atlas-Echo CMS LHCB Alice OPS Comments
2018-08-06 100 100 99 100 100 100
2018-08-07 100 100 94 100 100 100
2018-08-08 100 100 89 100 100 100
2018-08-09 100 100 93 100 100 100
2018-08-10 100 100 95 100 100 100
2018-08-11 100 100 31 100 100 100 Caused by Echo outage
2018-08-12 100 100 7 100 100 100 Caused by Echo outage
2018-08-13 100 100 0 100 100 100 Caused by Echo outage
Hammercloud Test Report
Target Availability for each site is 97.0% Red <90% Orange <97%
Day Atlas HC CMS HC Comment
2018-08-06 100 100
2018-08-07 100 99
2018-08-08 91 98
2018-08-09 100 93
2018-08-10 98 87
2018-08-11 67 91
2018-08-12 100 97
2018-08-13 0 100

Key: Atlas HC = Atlas HammerCloud (Queue RAL-LCG2_UCORE, Template 841); CMS HC = CMS HammerCloud

Notes from Meeting.
  • Major incidents:
    • Echo outage. Rob reported on this. I have not noted the details here as they are recorded elsewhere. There was a discussion about the way that LHCb loaded files into the FTS for transfer into Echo, which led to bursty access. It was thought the best way to manage this would be to limit the maximum number of channels in the FTS to smooth the transfer rate (although it was also noted that LHCb are using both the CERN and RAL FTS systems to do this).
    • Batch farm. The capacity has been significantly reduced (by around 50%) while many nodes are reinstalled to correct a filesystem configuration problem that could affect Docker containers.

  • It was noted that the migration of the LFC from an Oracle backend database to MySQL is underway.
  • LHCb – Some issues were seen when using ARC-CE01. As the ARC CEs are being rebuilt, it is not worth pursuing the specific problem on ARC-CE01; this CE will be dropped from LHCb usage. The Tier1 is to remove the LHCb tags from this CE in both the GOCDB and the BDII.
  • DUNE: Regarding copying files to us from the Tier-2s, this needs an update to DPM (which most Tier-2s use). However, Atlas have found a bug in the update, so it is on hold at the sites.