Tier1 Operations Report 2016-01-13


RAL Tier1 Operations Report for 13th January 2016

Review of Issues during the week 6th to 13th January 2016.
  • The CMSTape instance has become busy and there have been problems with two of its five disk servers. (Both disk servers had double disk failures - a possible consequence of the high load on these older servers. Both were taken out of service on Monday (11th). One was returned to service the following day; the other remains out.) Some changes were made to the servers in this class to try and improve performance: the Linux I/O scheduler was changed to 'noop' (a change that is planned to be rolled out everywhere - see the sketch below), and one of the Castor parameters was adjusted. The CMS SRM SAM test runs against CMSTape, and this led to a lot of test failures (with error 'user timeout').
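
The scheduler change itself is a one-line sysfs setting per block device. The following is a minimal, illustrative sketch (in Python, with placeholder device names) of applying and verifying such a change; it is not the deployment tooling actually used on the disk servers.

  # Illustrative only: switch the I/O scheduler to 'noop' on a set of data
  # disks and report the active scheduler before and after. The device names
  # are placeholders, not the real Castor disk server layout. Requires root.
  DEVICES = ["sdb", "sdc"]

  def current_scheduler(dev):
      # The kernel marks the active scheduler with [brackets].
      with open("/sys/block/%s/queue/scheduler" % dev) as f:
          names = f.read().split()
      active = [n for n in names if n.startswith("[")]
      return active[0].strip("[]") if active else None

  def set_scheduler(dev, name="noop"):
      with open("/sys/block/%s/queue/scheduler" % dev, "w") as f:
          f.write(name)

  if __name__ == "__main__":
      for dev in DEVICES:
          before = current_scheduler(dev)
          set_scheduler(dev)
          print("%s: %s -> %s" % (dev, before, current_scheduler(dev)))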


Resolved Disk Server Issues
  • GDSS674 (CMSTape - D0T1) was taken out of service on Monday morning (11th Jan) as it had a double disk failure. It was returned to service yesterday (12th Jan) after the first disk had been replaced and the RAID array rebuilt.
Current operational status and issues
  • There is a problem, seen by LHCb, of a low but persistent rate of failures when copying the results of batch jobs to Castor. There is also a further problem that sometimes occurs when these (failed) writes are then attempted to storage at other sites. A recent modification has improved, but not completely fixed, this.
  • The intermittent, low-level, load-related packet loss seen over external connections is still being tracked. Likewise we have been working to understand a remaining low level of packet loss seen within part of our Tier1 network. (An illustrative sketch of a simple loss probe follows this list.)
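
As a rough illustration only (not the monitoring actually in use at the Tier1), a packet-loss probe can be as simple as running the system 'ping' periodically and recording the reported loss. The target host, packet count and sampling interval below are assumptions.

  # Periodic packet-loss probe sketch: run 'ping', parse the loss percentage,
  # and print one sample per interval. Hypothetical target and parameters.
  import re
  import subprocess
  import time

  TARGET = "probe-target.example.org"   # placeholder probe target
  COUNT = 100                           # packets per sample
  INTERVAL = 300                        # seconds between samples

  def sample_loss(target, count):
      try:
          out = subprocess.check_output(["ping", "-q", "-c", str(count), target])
      except subprocess.CalledProcessError as err:
          out = err.output              # ping exits non-zero on 100% loss
      match = re.search(r"([\d.]+)% packet loss", out.decode())
      return float(match.group(1)) if match else None

  if __name__ == "__main__":
      while True:
          loss = sample_loss(TARGET, COUNT)
          print("%s loss=%s%%" % (time.strftime("%Y-%m-%d %H:%M:%S"), loss))
          time.sleep(INTERVAL)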
Ongoing Disk Server Issues
  • GDSS620 (GenTape - D0T1) failed on the 1st January. This server had also failed recently, before this incident. Investigations are ongoing.
  • GDSS677 (CMSTape - D0T1) failed on the evening of the 5th Jan and was returned to service on the 7th Jan. However, a double disk failure was found on Monday (11th Jan) and the server was taken out of production while the first disk was rebuilt. Although that completed successfully, the server has since shown another problematic disk and will not be returned to service until that has also been replaced and the RAID array rebuilt.
Notable Changes made since the last meeting.
  • Following problems, the 'bypass' network link has been dropped back to a single 10Gbit link.
  • The new batch draining script has been modified to eliminate the load it was placing on Condor (a rough sketch of the general idea follows this list).
  • The quarterly UPS/Generator load test took place successfully this morning.
  • A power supply in the disk array that hosts the Castor standby databases was replaced on Monday 4th Jan; it had reported problems over the holiday period.
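
The report does not describe how the draining script was changed; one plausible reading, shown purely as a hedged sketch below, is that per-node queries against the HTCondor collector were replaced by a single query whose result is reused. The machine list and selection logic are placeholders, not the actual Tier1 script.

  # Sketch of a draining helper: one condor_status query, reused for every
  # decision, followed by condor_drain for the selected worker nodes.
  # Hypothetical target list; not the production draining script.
  import subprocess

  TO_DRAIN = ["wn0001.example.org"]   # placeholder worker node names

  def startd_states():
      # Single collector query: machine name and state, one line per slot.
      out = subprocess.check_output(["condor_status", "-af", "Machine", "State"])
      states = {}
      for line in out.decode().splitlines():
          if line.strip():
              machine, state = line.split()[:2]
              states[machine] = state
      return states

  if __name__ == "__main__":
      states = startd_states()
      for machine in TO_DRAIN:
          if states.get(machine) not in (None, "Drained"):
              # Graceful drain: running jobs finish, no new jobs start.
              subprocess.check_call(["condor_drain", machine])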
Declared in the GOC DB

None

Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
  • Upgrade of remaining Castor disk servers (those in tape-backed service classes) to SL6. This will be transparent to users.

Listing by category:

  • Databases:
    • Switch LFC/3D to new Database Infrastructure.
  • Castor:
    • Update SRMs to new version (includes updating to SL6).
    • Update disk servers in tape-backed service classes to SL6 (ongoing)
    • Update to Castor version 2.1.15.
  • Networking:
    • Make routing changes to allow the removal of the UKLight Router.
  • Fabric:
    • Firmware updates on remaining EMC disk arrays (Castor, LFC).
Entries in GOC DB starting since the last report.
Service Scheduled? Outage/At Risk Start End Duration Reason
srm-atlas.gridpp.rl.ac.uk UNSCHEDULED WARNING 01/01/2016 04:30 01/01/2016 12:27 7 hours and 57 minutes one of four machines in DNS alias down, so some transfer failures possible
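
The entry above refers to an SRM endpoint served by a DNS alias spread over four machines, so one host being down shows up as intermittent transfer failures. As an illustration only, the sketch below resolves such an alias and attempts a TCP connection to each member; the port (8443, the usual SRM port) and the check itself are assumptions rather than anything taken from the report.

  # Resolve a DNS round-robin alias and test TCP reachability of each member.
  # Illustrative sketch; port and timeout are assumptions.
  import socket

  ALIAS = "srm-atlas.gridpp.rl.ac.uk"
  PORT = 8443

  def members(alias):
      infos = socket.getaddrinfo(alias, PORT, socket.AF_INET, socket.SOCK_STREAM)
      return sorted({info[4][0] for info in infos})

  def reachable(addr, port, timeout=5):
      try:
          socket.create_connection((addr, port), timeout=timeout).close()
          return True
      except socket.error:
          return False

  if __name__ == "__main__":
      for addr in members(ALIAS):
          print("%s %s" % (addr, "up" if reachable(addr, PORT) else "down"))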
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID Level Urgency State Creation Last Update VO Subject
118809 Green Urgent On Hold 2016-01-05 2016-01-06 Towards a recommendation on how to configure memory limits for batch jobs
118722 Green Very Urgent In Progress 2016-01-08 2016-01-11 CMS Please check on this file for me
118549 Green Urgent Waiting for Reply 2015-12-30 2016-01-12 CMS Volume Idle about 100% of Volume Requested at T1_UK_RAL
118494 Green Urgent In Progress 2015-12-23 2015-12-24 CMS Xrootd problems???
118209 Green Less Urgent In Progress 2015-12-15 2015-12-18 Enabling CVMFS for the vo.neugrid.eu VO
118044 Yellow Less Urgent Waiting Reply 2015-11-30 2016-01-05 Atlas gLExec hammercloud jobs failing at RAL-LCG2 since October
117846 Green Urgent In Progress 2015-11-23 2016-01-11 Atlas ATLAS request- storage consistency checks
117683 Green Less Urgent In Progress 2015-11-18 2016-01-05 CASTOR at RAL not publishing GLUE 2
116866 Red Less Urgent In Progress 2015-10-12 2016-01-07 SNO+ snoplus support at RAL-LCG2 (pilot role)
116864 Red Urgent In Progress 2015-10-12 2015-12-16 CMS T1_UK_RAL AAA opening and reading test failing again...
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud

Day OPS Alice Atlas CMS LHCb Atlas HC CMS HC Comment
06/01/16 100 100 64 60 60 94 100 Problem on the Bypass (non-OPN) link. (Problem overnight. Fixed in the morning.)
07/01/16 100 100 100 90 100 93 100 A pair of test failures. (User timeout on Put).
08/01/16 100 100 100 65 100 97 100 The CMS SRM tests run against CMSTape, which was under high load.
09/01/16 100 100 100 92 100 91 100 Continuation of above
10/01/16 100 100 100 100 100 93 100
11/01/16 100 100 100 96 100 97 100 Single SRM test failure on GET (User timeout).
12/01/16 100 100 100 96 100 93 100 Single SRM test failure on GET (User timeout).