Tier1 Operations Report 2016-05-11


RAL Tier1 Operations Report for 11th May 2016

Review of Issues during the week 4th to 11th May 2016.
  • At 16:30 on Monday 9th May the air conditioning to the machine room failed: both chillers and pumps stopped. In response the Tier1 stopped any new batch work from starting and paused all running batch jobs (a sketch of this type of action appears after this list). This, along with actions by other users of the machine room, stabilised the temperatures. After around 30 minutes the air conditioning (pumps and chillers) was restarted and temperatures fell back. The running batch jobs were then unpaused, although to minimise load through the night no new batch jobs were started until the following morning. See the blog post at: http://www.gridpp.rl.ac.uk/blog/2016/05/10/r89-water-pump-outage/
  • There are ongoing problems with the Tier1 tape robot. Since last week one of the two "elevators" within the robot has been out of action. At the end of Monday afternoon the second elevator also stopped working and some tape mounts began to fail. The problem became more severe at 17:45 yesterday (Tuesday 10th May), when the robot stopped working completely. This remains the situation at the time of this meeting.
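The report does not record the exact commands used to pause the farm. As a minimal sketch, assuming an HTCondor batch system (which the RAL Tier1 ran in 2016) and its Python bindings, the response could look like the following; the constraints and the resume step are illustrative, not the Tier1's documented procedure.

    # Hedged sketch: pausing a batch farm during a cooling incident, assuming
    # HTCondor and its Python bindings. Not the Tier1's actual runbook.
    import htcondor

    schedd = htcondor.Schedd()

    # Keep queued work from starting: hold idle jobs (JobStatus 1 = Idle).
    schedd.act(htcondor.JobAction.Hold, "JobStatus == 1")

    # Pause work already on the worker nodes (JobStatus 2 = Running).
    schedd.act(htcondor.JobAction.Suspend, "JobStatus == 2")

    # Once temperatures are stable again, reverse both actions.
    schedd.act(htcondor.JobAction.Continue, "JobStatus == 2")  # resume suspended jobs
    schedd.act(htcondor.JobAction.Release, "JobStatus == 5")   # release held jobs (5 = Held)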
Resolved Disk Server Issues
  • GDSS620 (GenTape - D0T1) was reporting problems and was taken out of production on the 19th April. It had all of its disks and its RAID card swapped and was put through a week of acceptance testing before being returned to service yesterday (10th May).
Current operational status and issues
  • LHCb see a low but persistent rate of failures when copying the results of batch jobs to Castor. A further problem sometimes occurs when these (failed) writes are then attempted to storage at other sites. A recent modification has reduced, but not completely fixed, this.
  • The intermittent, low-level, load-related packet loss seen over external connections is still being tracked. Likewise, we are working to understand a remaining low level of packet loss seen within part of our Tier1 network (an illustrative loss-probe sketch appears after this list).
  • Draining Castor disk servers for Atlas using the inbuilt Castor tool is very slow. We need to drain a few old servers to provide spares, as part of the deployment plan and server lifecycle. A workaround for this has now been found.
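The report does not name the tools used to chase the packet loss. Purely as an illustration, a long-running, low-rate loss probe can be as simple as the sketch below; the target host, sample size, and interval are hypothetical.

    # Illustrative only: a crude packet-loss sampler of the kind used to
    # chase low-level loss. Target and parameters are hypothetical.
    import re
    import subprocess
    import time

    TARGET = "probe-target.example.ac.uk"  # hypothetical host

    while True:
        out = subprocess.run(
            ["ping", "-c", "100", "-i", "0.2", TARGET],
            capture_output=True, text=True,
        ).stdout
        m = re.search(r"([\d.]+)% packet loss", out)
        if m and float(m.group(1)) > 0:
            stamp = time.strftime("%Y-%m-%d %H:%M:%S")
            print(stamp, f"{m.group(1)}% loss to {TARGET}")
        time.sleep(60)  # one 100-packet sample per minute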
Ongoing Disk Server Issues
  • The two disk servers listed below have each failed more than once and are being put through a week-long acceptance test.
  • GDSS635 (AtlasTape - D0T1) failed in the early hours of the 20th April.
  • GDSS619 (GenTape - D0T1) failed on 26th April only a few hours after being put back into service after a previous failure.
Notable Changes made since the last meeting.
  • Tuesday 3rd May: upgraded the FTS3 "test" instance to FTS 3.4.3. The production FTS was updated the following morning (Wednesday 4th May). (A post-upgrade version-check sketch appears after this list.)
  • The migration of Atlas data from "C" to "D" tapes is underway. (Around 300 tapes done, 1000 to go).
  • Modified the Atlas xrootd monitoring configuration, as per a request from ATLAS central.
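After an upgrade such as the FTS3 one above, it is easy to confirm what version a server reports. The sketch below assumes the FTS3 REST interface, which reports an API version document at its root URL; the hostname, port, and proxy paths are placeholders, not the RAL production values.

    # Hedged sketch: asking an FTS3 server what version it is running via
    # its REST root resource. Hostname and credential paths are placeholders.
    import requests

    FTS = "https://fts3.example.ac.uk:8446/"   # placeholder endpoint
    PROXY = "/tmp/x509up_u500"                 # grid proxy, used as cert and key

    r = requests.get(FTS, cert=(PROXY, PROXY),
                     verify="/etc/grid-security/certificates")
    r.raise_for_status()
    print(r.json())  # the response should show the server at 3.4.3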
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
lcgwms06.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 01/06/2016 11:00 | 30/06/2016 11:00 | 29 days | Server lcgwms06.gridpp.rl.ac.uk decommissioning
All SRMs (except disk-only) | UNSCHEDULED | WARNING | 10/05/2016 19:00 | 11/05/2016 19:00 | 24 hours | Problem still ongoing with tape robot affecting some tape mounts. (Engineer returning tomorrow.)
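Declarations like these can also be read programmatically from the GOC DB's public API. The sketch below uses the get_downtime method of the GOCDB programmatic interface with the RAL-LCG2 site as the top entity; the XML element and attribute names are quoted from memory of that interface and worth verifying against its documentation.

    # Hedged sketch: listing ongoing RAL-LCG2 downtimes via the public
    # GOCDB programmatic interface. XML field names should be verified.
    import urllib.request
    import xml.etree.ElementTree as ET

    URL = ("https://goc.egi.eu/gocdbpi/public/"
           "?method=get_downtime&topentity=RAL-LCG2&ongoing_only=yes")

    with urllib.request.urlopen(URL) as resp:
        root = ET.fromstring(resp.read())

    for dt in root.findall("DOWNTIME"):
        print(dt.get("CLASSIFICATION"),   # e.g. UNSCHEDULED
              dt.findtext("SEVERITY"),    # e.g. WARNING
              dt.findtext("DESCRIPTION"))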
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
  • The Castor 2.1.15 update is pending. Testing has shown a database-related performance issue, which is being followed up. We await successful resolution of that problem and completion of testing before scheduling the upgrade. Following advice from the developers we will not upgrade the SRMs before the Castor 2.1.15 upgrade.
  • Decommissioning of "GEN Scratch" storage in Castor. (Formally announced by EGI broadcast).
  • Decommissioning of lcgwms06. This will leave two WMS systems remaining in service.

Listing by category:

  • Databases:
    • Switch LFC/3D to new Database Infrastructure.
  • Castor:
    • Update SRMs to new version (includes updating to SL6).
    • Update to Castor version 2.1.15.
    • Migration of data from T10KC to T10KD tapes (Affects Atlas & LHCb data).
  • Networking:
    • Replace the UKLight Router. Then upgrade the 'bypass' link to the RAL border routers to 2*40Gbit.
  • Fabric
    • Firmware updates on the remaining EMC disk arrays (Castor, LFC).
  • Grid Services
    • Once the use of the load balancer (HAProxy) has been proven for the test FTS service, it will be extended to other services (an illustrative backend health check appears below).
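The report does not describe how the balancer is monitored. As an illustration, HAProxy's admin ("stats") socket can confirm that the servers behind a balanced frontend are up; the socket path and backend name below are hypothetical.

    # Illustrative only: read HAProxy's "show stat" CSV from its admin
    # socket and report the state of each server in one backend.
    # Socket path and backend name are hypothetical.
    import csv
    import io
    import socket

    SOCK = "/var/lib/haproxy/stats"   # hypothetical admin socket path

    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(SOCK)
    s.sendall(b"show stat\n")
    data = b""
    while chunk := s.recv(4096):
        data += chunk
    s.close()

    # The CSV header begins "# pxname,svname,..."; strip the comment marker.
    rows = csv.DictReader(io.StringIO(data.decode().lstrip("# ")))
    for row in rows:
        if row["pxname"] == "fts-test" and row["svname"] != "BACKEND":
            print(row["svname"], row["status"])   # expect UP for each server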
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
All Castor SRMs (except disk only) | UNSCHEDULED | WARNING | 10/05/2016 19:00 | 11/05/2016 19:00 | 24 hours | Problem still ongoing with tape robot affecting some tape mounts. (Engineer returning tomorrow.)
All Castor SRMs (except disk only) | UNSCHEDULED | WARNING | 10/05/2016 12:00 | 10/05/2016 19:00 | 7 hours | Ongoing problem with tape robot affecting some tape mounts. (Engineer expected today.)
All CEs | UNSCHEDULED | OUTAGE | 09/05/2016 18:45 | 10/05/2016 08:49 | 14 hours and 4 minutes | Following a cooling problem the batch system will not be running overnight.
All Castor SRMs (except disk only) | UNSCHEDULED | WARNING | 09/05/2016 17:00 | 10/05/2016 12:00 | 19 hours | Problem with tape robot affecting some tape mounts.
Whole Site | UNSCHEDULED | OUTAGE | 09/05/2016 16:30 | 09/05/2016 18:45 | 2 hours and 15 minutes | Problem with cooling in the datacentre.
srm-lhcb.gridpp.rl.ac.uk | UNSCHEDULED | OUTAGE | 04/05/2016 13:30 | 04/05/2016 17:00 | 3 hours and 30 minutes | Problem with tape robot. Engineer working on it.
All Castor SRMs (except disk only) | UNSCHEDULED | WARNING | 04/05/2016 13:30 | 04/05/2016 17:00 | 3 hours and 30 minutes | Problem with tape robot. Engineer working on it. No tape access; disk-only access OK.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
121412 | Green | Less Urgent | In Progress | 2016-05-11 | 2016-05-11 | Alice | Mismatch between GOCDB and VO feed
121322 | Green | Less Urgent | In Progress | 2016-05-10 | 2016-05-10 | SNO+ | Unable to access grid storage
121147 | Green | Very Urgent | Waiting Reply | 2016-04-29 | 2016-04-29 | CMS | Failures reading files at T1_UK_RAL
120954 | Green | Less Urgent | In Progress | 2016-04-21 | 2016-05-11 | LHCb | SRM endpoint entry in GOCDB
120920 | Green | Less Urgent | In Progress | 2016-04-19 | 2016-05-06 | SNO+ | XRootD issues at RAL
120810 | Green | Urgent | On Hold | 2016-04-13 | 2016-04-27 | Biomed | Decommissioning of SE srm-biomed.gridpp.rl.ac.uk - forbid write access for biomed users
120350 | Green | Less Urgent | In Progress | 2016-03-22 | 2016-05-06 | LSST | Enable LSST at RAL
119841 | Amber | Less Urgent | On Hold | 2016-03-01 | 2016-04-26 | LHCb | HTTP support for lcgcadm04.gridpp.rl.ac.uk
117683 | Yellow | Less Urgent | On Hold | 2015-11-18 | 2016-04-05 | | CASTOR at RAL not publishing GLUE 2
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 729); CMS HC = CMS HammerCloud

Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment
04/05/16 | 100 | 100 | 94 | 100 | 100 | 98 | 100 | Single SRM test failure (user timeout)
05/05/16 | 97.9 | 100 | 100 | 100 | 100 | 100 | 100 | Assorted test failures
06/05/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
07/05/16 | 100 | 100 | 100 | 100 | 96 | 100 | N/A | Single SRM test failure (No such file or directory)
08/05/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
09/05/16 | 100 | 100 | 100 | 100 | 100 | 93 | 100 |
10/05/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |