Tier1 Operations Report 2017-08-30

From GridPP Wiki

RAL Tier1 Operations Report for 30th August 2017

Review of Issues during the week 23rd to 30th August 2017.
  • There have been problems with Echo since last weekend. Although now resolved, the incident led to the loss of around 22,000 Atlas files (of which 3285 were unique). One Ceph "placement group" was lost, most likely because some low-level disk errors were not handled cleanly. While the problem was ongoing it also exposed an issue with the Echo gateways (including those on the worker nodes), which created many threads and blocked activity. (A sketch of how placement-group health can be checked follows this list.)
  • Four files were reported lost to CMS from a tape. During a CMS mass recall the monitoring showed that one tape was generating errors and causing tape drives to go down. The tape was put into repack. After various attempts at repacking and at reading files directly from the tape, four files remained unreadable.
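
For reference, the sort of check used to spot unhealthy placement groups can be scripted against the standard "ceph" CLI. The following is a minimal sketch only (it assumes an admin/monitor node with the ceph client and keyring, and that the JSON layout may differ between Ceph releases); it is illustrative, not the procedure the Ceph team actually used:

    import json
    import subprocess

    def unhealthy_pgs():
        # "ceph pg dump pgs_brief" reports the state of every placement group;
        # "--format json" makes the output machine-readable.
        out = subprocess.check_output(
            ["ceph", "pg", "dump", "pgs_brief", "--format", "json"])
        pgs = json.loads(out.decode("utf-8"))
        if isinstance(pgs, dict):
            # Some releases wrap the list of PG records (e.g. under "pg_stats").
            pgs = pgs.get("pg_stats", [])
        # Anything not "active+clean" deserves a closer look.
        return [pg for pg in pgs if pg.get("state") != "active+clean"]

    for pg in unhealthy_pgs():
        print("{0}  {1}".format(pg.get("pgid"), pg.get("state")))
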
Resolved Disk Server Issues
  • GDSS753 (AtlasDataDisk - D1T0) failed in the early evening of Wednesday 23rd August. It was returned to service on Friday morning (25th). Two disk drives were replaced.
Current operational status and issues
  • We are still seeing failures of the CMS SAM tests against the SRM. These are affecting our (CMS) availabilities. CMS are also looking at file access performance and have turned off "lazy-download". The CMS SRM SAM test success rate has improved since the Castor 2.1.16 upgrade on the 25th May, although it is still not 100%. Our investigations are ongoing. (A simple manual probe of the SRM endpoint is sketched after this list.)
  • There is a problem with the site firewall that is affecting some specific data flows. Discussions have been taking place with the vendor. The issue affects data that flows through the firewall (such as traffic to/from the worker nodes).
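
During the investigation it can be useful to probe the SRM endpoint by hand, independently of the SAM framework. The following is a rough sketch using the gfal2 Python bindings; the SURL is a placeholder, a valid grid proxy is assumed, and this is not the actual SAM probe:

    import gfal2

    ctx = gfal2.creat_context()
    # Placeholder SURL - not a real path.
    surl = "srm://srm-cms.gridpp.rl.ac.uk/..."
    try:
        info = ctx.stat(surl)
        print("SRM responded, file size = {0}".format(info.st_size))
    except gfal2.GError as err:
        print("SRM request failed: {0}".format(err))
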
Ongoing Disk Server Issues
  • None
Limits on concurrent batch system jobs.
  • CMS Multicore 550
Notable Changes made since the last meeting.
  • Further services were migrated from our old Windows HyperV2008 infrastructure to hypervisors running HyperV2012 as we prepare to fully decommission the HyperV2008 systems.
Declared in the GOC DB
Service Scheduled? Outage/At Risk Start End Duration Reason
srm-superb.gridpp.rl.ac.uk SCHEDULED OUTAGE 20/07/2017 16:00 30/08/2017 13:00 40 days, 21 hours SuperB no longer supported on Castor storage. Retiring endpoint.
srm-hone.gridpp.rl.ac.uk SCHEDULED OUTAGE 20/07/2017 16:00 30/08/2017 13:00 40 days, 21 hours H1 no longer supported on Castor storage. Retiring endpoint.
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Pending - but not yet formally announced:

  • Upgrade the FTS3 service to a version that will no longer support the SOAP interface. The SOAP interface was disabled on Monday (17th July).
  • Re-distribute the data in Echo onto the 2015 capacity hardware. (Ongoing)

Listing by category:

  • Castor:
    • Move to generic Castor headnodes.
  • Echo:
    • Re-distribute the data in Echo onto the remaining 2015 capacity hardware.
  • Networking:
    • Extend the number of services on the production network with IPv6 dual stack. (Done for Perfsonar, all squids and the CVMFS Stratum-1 servers).
  • Services:
    • The production FTS will be updated now that the requirement to support the deprecated SOAP interface has been removed (a sketch of a REST-based submission follows below).
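
Once the SOAP interface is withdrawn, transfer submissions to the updated FTS go through the REST API. The following is a minimal sketch only, using the FTS3 REST "easy" Python bindings; the endpoint and file URLs are placeholders (not the production configuration) and a valid grid proxy is assumed:

    import fts3.rest.client.easy as fts3

    # Placeholder endpoint and URLs.
    context = fts3.Context("https://fts3-service.example.org:8446")

    transfer = fts3.new_transfer("gsiftp://source.example.org/path/to/file",
                                 "gsiftp://destination.example.org/path/to/file")
    job = fts3.new_job([transfer], verify_checksum=True, retry=3)

    job_id = fts3.submit(context, job)
    print(fts3.get_job_status(context, job_id))
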
Entries in GOC DB starting since the last report.
Service Scheduled? Outage/At Risk Start End Duration Reason
gridftp.echo.stfc.ac.uk, s3.echo.stfc.ac.uk, xrootd.echo.stfc.ac.uk UNSCHEDULED OUTAGE 25/08/2017 12:00 25/08/2017 15:00 3 hours continued problems with Echo instance at RAL
gridftp.echo.stfc.ac.uk, s3.echo.stfc.ac.uk, xrootd.echo.stfc.ac.uk UNSCHEDULED OUTAGE 24/08/2017 11:23 25/08/2017 12:00 1 day, 37 minutes Ceph team need to take the Echo service offline while dealing with ongoing issues
gridftp.echo.stfc.ac.uk, s3.echo.stfc.ac.uk, xrootd.echo.stfc.ac.uk UNSCHEDULED WARNING 23/08/2017 11:00 25/08/2017 16:00 2 days, 5 hours Ongoing problems with Ceph, service is still degraded
gridftp.echo.stfc.ac.uk, s3.echo.stfc.ac.uk, xrootd.echo.stfc.ac.uk UNSCHEDULED WARNING 22/08/2017 10:03 23/08/2017 10:00 23 hours and 57 minutes Ongoing problems with Ceph, service is degraded
srm-hone.gridpp.rl.ac.uk SCHEDULED OUTAGE 20/07/2017 16:00 30/08/2017 13:00 40 days, 21 hours H1 no longer supported on Castor storage. Retiring endpoint.
srm-superb.gridpp.rl.ac.uk SCHEDULED OUTAGE 20/07/2017 16:00 30/08/2017 13:00 40 days, 21 hours SuperB no longer supported on Castor storage. Retiring endpoint.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID Level Urgency State Creation Last Update VO Subject
130207 Green Urgent In Progress 2017-08-24 2017-08-25 MICE Timeouts when copying MICE reco data to CASTOR
130197 Green Urgent Waiting for Reply 2017-08-23 2017-08-24 SNO+ LFC registration failing
130193 Green Urgent In Progress 2017-08-23 2017-08-24 CMS Staging from RAL tape systems
128991 Green Less Urgent On Hold 2017-06-16 2017-08-24 Solid solidexperiment.org CASTOR tape support
127597 Red Urgent On Hold 2017-04-07 2017-06-14 CMS Check networking and xrootd RAL-CERN performance
124876 Red Less Urgent On Hold 2016-11-07 2017-01-01 OPS [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
117683 Red Less Urgent On Hold 2015-11-18 2017-07-06 CASTOR at RAL not publishing GLUE 2.
Availability Report (availabilities in %)
Day OPS Alice Atlas CMS LHCb Atlas-Echo Comment
23/08/17 100 100 98 100 100 100 One SRM test failure (Unable to issue PrepareToPut request to Castor).
24/08/17 96.9 100 100 100 100 100
25/08/17 100 100 100 100 100 100
26/08/17 100 100 100 100 100 100
27/08/17 100 100 100 100 100 100
28/08/17 100 100 100 100 100 100
29/08/17 100 100 100 100 100 100
HammerCloud Test Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC Echo = Atlas Echo (Template 841); CMS HC = CMS HammerCloud

Day Atlas HC Atlas HC Echo CMS HC Comment
23/08/17 100 29 99
24/08/17 100 0 99
25/08/17 100 57 100
26/08/17 100 100 100
27/08/17 100 100 100
28/08/17 100 100 100
29/08/17 100 99 100
Notes from Meeting.
  • None yet