RAL Tier1 Operations Report for 2nd December 2015

Review of Issues during the week 25th November to 2nd December 2015.
  • None
Resolved Disk Server Issues
  • GDSS687 (AtlasDataDisk - D1T0) crashed on Saturday morning (28th Nov). Following initial tests it was returned to service in read-only mode later that day. The type of crash suggests a possible CPU fault. This morning this server was briefly taken down for a BIOS update (which may improve the handling of this type of fault).
  • GDSS722 (AtlasDataDisk - D1T0) also crashed on Saturday morning (28th Nov). Following initial tests it was returned to service in read-only mode the following day (29th Nov). Symptoms similar to GDSS687.
  • GDSS704 (AtlasDataDisk - D1T0) crashed on Sunday evening (29th Nov). Following initial tests it was returned to service in read-only mode on Monday (30th Nov). Symptoms also similar to GDSS687.
Current operational status and issues
  • There is a problem seen by LHCb of a low but persistent rate of failure when copying the results of batch jobs to Castor. There is also a further problem that sometimes occurs when these (failed) writes are attempted to storage at other sites. A recent modification has improved the situation but not completely fixed it.
  • The intermittent, low-level, load-related packet loss seen over external connections is still being tracked. Likewise we have been working to understand some remaining low level of packet loss seen within a part of our Tier1 network.
Ongoing Disk Server Issues

None

Notable Changes made since the last meeting.
  • None.
Declared in the GOC DB
  • None
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
  • Upgrade of remaining Castor disk servers (those in tape-backed service classes) to SL6. This will be transparent to users.
  • Some detailed internal network re-configurations to enable the removal of the old 'core' switch from our network. This includes changing the way the UKLight router connects into the Tier1 network. As part of this we plan to increase the bandwidth available on the bypass route from 10 to 20Gbit/s.
  • Implementing a changed algorithm for the draining of worker nodes to make space for multi-core jobs. The new version allows "pre-emptable" jobs to run in the job slots until they are needed (see the sketch at the end of this section).

Listing by category:

  • Databases:
    • Switch LFC/3D to new Database Infrastructure.
  • Castor:
    • Update SRMs to new version (includes updating to SL6).
    • Update disk servers to SL6 (ongoing)
    • Update to Castor version 2.1.15.
  • Networking:
    • Complete changes needed to remove the old core switch from the Tier1 network.
    • Make routing changes to allow the removal of the UKLight Router.
  • Fabric:
    • Firmware updates on remaining EMC disk arrays (Castor, LFC)
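
The worker-node draining change listed above is, in outline, a scheduling-policy tweak: while a node is being drained to assemble a free multi-core slot, the otherwise idle cores are allowed to run "pre-emptable" backfill jobs that are killed as soon as the multi-core job can start. The sketch below is only an illustration of that idea under assumed parameters (an 8-core node, a whole-node multi-core job); the names used (Job, Node, drain_pass, CORES_PER_NODE) are hypothetical and do not correspond to the actual batch-system configuration.

from dataclasses import dataclass, field
from typing import List

CORES_PER_NODE = 8  # assumed node size for this illustration


@dataclass
class Job:
    name: str
    cores: int
    preemptable: bool = False  # backfill jobs that can be killed at any moment


@dataclass
class Node:
    jobs: List[Job] = field(default_factory=list)
    draining: bool = True  # we only consider a node being drained for multi-core work

    def free_cores(self) -> int:
        return CORES_PER_NODE - sum(j.cores for j in self.jobs)


def drain_pass(node: Node, queue: List[Job]) -> None:
    """One scheduling pass over a single draining node (hypothetical logic)."""
    multicore = next((j for j in queue if j.cores > 1), None)

    # If evicting the pre-emptable backfill would free enough cores,
    # do so and start the waiting multi-core job straight away.
    reclaimable = node.free_cores() + sum(j.cores for j in node.jobs if j.preemptable)
    if multicore is not None and reclaimable >= multicore.cores:
        node.jobs = [j for j in node.jobs if not j.preemptable]  # pre-empt backfill
        node.jobs.append(multicore)
        queue.remove(multicore)
        return

    # Otherwise, rather than leaving the draining cores idle (the old behaviour),
    # fill them with pre-emptable jobs from the queue.
    for job in list(queue):
        if job.preemptable and job.cores <= node.free_cores():
            node.jobs.append(job)
            queue.remove(job)


# Example: a draining node whose last normal job has just finished picks up
# backfill, then hands the whole node to an 8-core job as soon as one arrives.
node = Node()
queue = [Job("backfill-a", 1, preemptable=True),
         Job("backfill-b", 1, preemptable=True)]
drain_pass(node, queue)             # idle cores are filled by pre-emptable backfill
queue.append(Job("mcore-job", 8))
drain_pass(node, queue)             # backfill is evicted, the 8-core job starts
print([j.name for j in node.jobs])  # -> ['mcore-job']

The design point is simply that draining capacity is never left idle: backfill work runs in the gaps and is sacrificed the moment the multi-core slot can be assembled.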
Entries in GOC DB starting since the last report.

None

Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID Level Urgency State Creation Last Update VO Subject
118044 Green Less Urgent In Progress 2015-11-30 2015-11-30 Atlas gLExec hammercloud jobs failing at RAL-LCG2 since October
117846 Green Urgent In Progress 2015-11-23 2015-11-24 Atlas ATLAS request- storage consistency checks
117683 Green Less Urgent In Progress 2015-11-18 2015-11-19 CASTOR at RAL not publishing GLUE 2
116866 Yellow Less Urgent In Progress 2015-10-12 2015-11-30 SNO+ snoplus support at RAL-LCG2 (pilot role)
116864 Amber Urgent In Progress 2015-10-12 2015-11-20 CMS T1_UK_RAL AAA opening and reading test failing again...
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud

Day OPS Alice Atlas CMS LHCb Atlas HC CMS HC Comment
25/11/15 100 100 100 100 96 97 100 Single SRM test failure “[SRM_INVALID_PATH] No such file or directory”.
26/11/15 100 100 100 96 100 100 100 Single SRM test failure on PUT ("Input/output error")
27/11/15 100 100 100 100 100 100 100
28/11/15 100 100 100 100 96 100 100 Single SRM test failure with "[SRM_INVALID_PATH] No such file or directory"
29/11/15 100 100 100 100 100 100 100
30/11/15 100 100 100 95 100 97 N/A Looks like SUM monitoring failure.
01/12/15 100 100 100 100 100 100 100