Tier1 Operations Report 2016-04-06

RAL Tier1 Operations Report for 6th April 2016

Review of Issues during the week 30th March to 6th April 2016.
  • As reported last week, there was a network problem on the evening of Tuesday 30th March between around 18:30 and 21:00. One of the RAL site routers had a problem and the link between it and the primary of our Tier1 router pair was flapping. There should have been an automatic failover to our secondary router but, owing to a configuration error (since understood), that did not happen. Staff attended on site to force the failover, which resolved the problem. Following the fix to the RAL site router, we reverted to using the primary Tier1 router this morning (6th April).
Resolved Disk Server Issues
  • GDSS635 (AtlasTape - D0T1) was taken out of production on the 26th March following a crash. There were no files awaiting migration to tape. It was returned to service on Thursday (31st March), although no specific cause for the crash had been identified.
Current operational status and issues
  • LHCb see a low but persistent rate of failures when copying the results of batch jobs to Castor. A further problem sometimes occurs when these (failed) writes are then attempted to storage at other sites. A recent modification has improved, but not completely fixed, this.
  • The intermittent, low-level, load-related packet loss seen over external connections is still being tracked. Likewise, we have been working to understand a remaining low level of packet loss seen within part of our Tier1 network.
  • The draining of Castor disk servers is very slow. We need to drain a few old servers to provide spares as part of the deployment plan and server lifecycle. This does not yet impact services but is being followed up.
Ongoing Disk Server Issues
  • GDSS635 (AtlasTape - D0T1) crashed again this morning. It is currently under investigation.
Notable Changes made since the last meeting.
  • A load balancer (a pair of systems running "HAProxy") has been introduced in front of the "test" FTS3 instance which is used by Atlas. At present this handles only some 20% of the requests to the service; it will gradually be ramped up to handle all requests. An illustrative configuration sketch is given after this list.
  • Eight disk servers, each with 111 TB of storage (around 0.9 PB in total), have been deployed to AtlasDataDisk.
  • Atlas have now been swung over to writing new data to the T10KD drives.
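The report does not give the actual load balancer configuration. As a minimal sketch only, assuming a TCP pass-through in front of the FTS3 REST port (8446) and using hypothetical host names, an HAProxy setup of this kind could look like the following. The fraction of client requests sent via the load balancer (currently around 20%) would be controlled outside this fragment, for example at the DNS level.

    # Illustrative haproxy.cfg fragment; host names are hypothetical and this is
    # not the production RAL configuration.
    frontend fts3_rest
        bind *:8446
        mode tcp
        option tcplog
        default_backend fts3_servers

    backend fts3_servers
        mode tcp
        balance roundrobin
        # TCP health checks: a node that fails its check is taken out of rotation.
        server fts3-node-a fts3-node-a.example.org:8446 check
        server fts3-node-b fts3-node-b.example.org:8446 check
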
Declared in the GOC DB

None

Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
  • The Castor 2.1.15 update is pending. Testing has shown a database-related performance issue which is being followed up. We await successful resolution of that problem and completion of testing before scheduling. In the meantime, we plan to carry out the update to the Castor SRMs.
  • Decommissioning of "GEN Scratch" storage in Castor.

Listing by category:

  • Databases:
    • Switch LFC/3D to new Database Infrastructure.
  • Castor:
    • Update SRMs to new version (includes updating to SL6).
    • Update to Castor version 2.1.15.
    • Migration of data from T10KC to T10KD tapes (affects Atlas & LHCb data).
  • Networking:
    • Replace the UKLight Router. Then upgrade the 'bypass' link to the RAL border routers to 2*10Gbit.
  • Fabric:
    • Firmware updates on remaining EMC disk arrays (Castor, LFC).
  • Grid Services:
    • A Load Balancer (HAProxy) will be used in front of the FTS service.
Entries in GOC DB starting since the last report.
  • None
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID Level Urgency State Creation Last Update VO Subject
120624 Green Urgent Waiting Reply 2016-04-05 2016-04-05 CMS Consistency Check for T1_UK_RAL
120350 Green Less Urgent In Progress 2016-03-22 2016-04-05 LSST Enable LSST at RAL
119841 Green Less Urgent In Progress 2016-03-01 2016-03-22 LHCb HTTP support for lcgcadm04.gridpp.rl.ac.uk
117683 Green Less Urgent On Hold 2015-11-18 2016-04-05 CASTOR at RAL not publishing GLUE 2
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud

Day OPS Alice Atlas CMS LHCb Atlas HC CMS HC Comment
30/03/16 100 100 97 100 100 N/A 96 Single SRM Test failure (Unable to schedule transfer)
31/03/16 100 100 100 100 100 N/A 100
01/04/16 100 100 100 100 100 N/A 100
02/04/16 100 100 100 100 96 N/A 100 Single SRM Test failure on List: [SRM_INVALID_PATH] No such file or directory
03/04/16 100 100 100 70 100 N/A N/A Problem with test submission (also affected other sites).
04/04/16 100 100 100 100 100 N/A 100
05/04/16 100 100 97 100 100 N/A 100 Single SRM Test failure on GET (User timeout)