Tier1 Operations Report 2015-10-21

RAL Tier1 Operations Report for 21st October 2015

Review of Issues during the week 14th to 21st October 2015.
  • We have been seeing problems with very high load on the AtlasTape Castor instance. This was leading to a large backlog of files waiting to go to tape, as well as long delays in files being brought back online. Additional disk servers have been added to this instance (the number of servers doubled from five to ten). At the time of the meeting Atlas have also throttled back the write requests.
  • LHCb have reported an ongoing low but persistent rate of failure when copying the results of batch jobs to other sites. They have also reported a current problem that sometimes occurs when writing these files to our Castor storage.
  • ILC reported that a CVMFS repository (/cvmfs/ilc.desy.de) was not available on the worker nodes. This was traced to a missing public key. This has now been fixed across the worker nodes.
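The missing-key failure above can be caught with a simple worker-node probe. Below is a minimal sketch in Python, assuming the conventional CVMFS layout (public keys under /etc/cvmfs/keys, repositories mounted under /cvmfs); the key filename and this check are illustrative, not the actual fix that was applied.

```python
#!/usr/bin/env python
# Hypothetical worker-node probe: verify that a CVMFS repository has its
# public key installed and that the repository mounts. The key location
# follows the usual CVMFS convention and is an assumption here.
import os
import sys

def check_repo(repo):
    key = "/etc/cvmfs/keys/%s.pub" % repo   # conventional key path (assumed)
    mount = "/cvmfs/%s" % repo
    ok = True
    if not os.path.isfile(key):
        print("missing public key: %s" % key)
        ok = False
    try:
        os.listdir(mount)                   # autofs mounts the repo on access
        print("mount OK: %s" % mount)
    except OSError as err:
        print("mount FAILED: %s (%s)" % (mount, err))
        ok = False
    return ok

if __name__ == "__main__":
    sys.exit(0 if check_repo("ilc.desy.de") else 1)
```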
Resolved Disk Server Issues
  • GDSS657 (LHCb_RAW - D0T1) was returned to service on Thursday (8th Oct). The server had failed on Saturday (3rd October) and, after checks, had been returned to service the following day in read-only mode. However, subsequent investigations to fix a problem on this server uncovered a further fault. As reported last week, four files from the time of the initial failure were found to be corrupt (partially written); these were reported to LHCb.
Current operational status and issues
  • The intermittent, low-level, load-related packet loss seen over external connections is still being tracked. Likewise, we have been working to understand a remaining low level of packet loss seen within part of our Tier1 network.
  • Long-standing CMS issues. The two items that remain are CMS Xroot (AAA) redirection and file open times. Work on the Xroot redirection is ongoing, with a new server added in recent weeks. File open times using Xroot remain slow, but this is a less significant problem.
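Low-level packet loss of the kind tracked above is usually quantified by sending repeated probes and trending the loss fraction. A minimal sketch, assuming the system ping utility is on the path; the host name, probe count and interval are placeholders rather than the actual Tier1 monitoring setup.

```python
#!/usr/bin/env python3
# Illustrative packet-loss probe: send a burst of ICMP echo requests via
# the system ping utility and report the loss percentage it prints.
# The host below is a placeholder, not a real Tier1 endpoint.
import re
import subprocess

def loss_percent(host, count=20):
    out = subprocess.run(
        ["ping", "-c", str(count), "-i", "0.2", host],
        capture_output=True, text=True,
    ).stdout
    m = re.search(r"(\d+(?:\.\d+)?)% packet loss", out)
    return float(m.group(1)) if m else None

if __name__ == "__main__":
    host = "gw1.example.gridpp.rl.ac.uk"   # placeholder host
    print("%s: %s%% loss" % (host, loss_percent(host)))
```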
Ongoing Disk Server Issues
  • None
Notable Changes made since the last meeting.
  • The final step in the update of the Castor Oracle databases to version 11.2.0.4 took place on Tuesday (13th Oct). This was the switch-over of the production and standby versions of the "pluto" database to return them to their correct configuration. This completes the upgrades to 11.2.0.4 of all Oracle databases used by the Tier1.
  • Five additional disk servers were added to atlasTape on Tuesday (13th Oct) in order to ease the high load seen on this instance.
  • Disk server GDSS758 was deployed into lhcbDst (D1T0). This is a server that had been out of service for some time.
  • The update of the final batch of worker nodes to the new configuration has been done.
Declared in the GOC DB
  • None
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
  • Upgrade of remaining Castor disk servers (those in tape-backed service classes) to SL6. This will be transparent to users.
  • Some detailed internal network re-configurations to enable the removal of the old 'core' switch from our network. This includes changing the way the UKLight router connects into the Tier1 network.

Listing by category:

  • Databases:
    • Switch LFC/3D to new Database Infrastructure.
  • Castor:
    • Update SRMs to new version (includes updating to SL6).
    • Update disk servers to SL6 (ongoing).
    • Update to Castor version 2.1.15.
  • Networking:
    • Make routing changes to allow the removal of the UKLight Router.
  • Fabric:
    • Firmware updates on remaining EMC disk arrays (Castor, LFC).
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
All Castor (All SRMs) | UNSCHEDULED | OUTAGE | 13/10/2015 08:00 | 13/10/2015 11:08 | 3 hours and 8 minutes | Outage of all Castor instances during upgrade of the Oracle back-end database.
All Castor (All SRMs) | SCHEDULED | WARNING | 08/10/2015 08:30 | 08/10/2015 15:49 | 7 hours and 19 minutes | Warning (At Risk) on all Castor instances during upgrade of the back-end Oracle database.
Whole site | SCHEDULED | WARNING | 07/10/2015 10:00 | 07/10/2015 11:30 | 1 hour and 30 minutes | Warning on site during quarterly UPS/Generator load test.
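The Duration column is simply End minus Start; as a quick sanity check, the first row's timestamps can be verified with a few lines of Python:

```python
# Sanity-check the GOC DB duration for the unscheduled Castor outage
# above (13/10/2015 08:00 to 11:08).
from datetime import datetime

fmt = "%d/%m/%Y %H:%M"
start = datetime.strptime("13/10/2015 08:00", fmt)
end = datetime.strptime("13/10/2015 11:08", fmt)
minutes = int((end - start).total_seconds() // 60)
hours, mins = divmod(minutes, 60)
print("%d hours and %d minutes" % (hours, mins))  # -> 3 hours and 8 minutes
```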
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
116866 | Green | Less Urgent | In Progress | 2015-10-12 | 2015-10-13 | SNO+ | snoplus support at RAL-LCG2 (pilot role)
116864 | Green | Urgent | In Progress | 2015-10-12 | 2015-10-12 | CMS | T1_UK_RAL AAA opening and reading test failing again...
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud

Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment
14/10/15 | 100 | 100 | 100 | 100 | 100 | 97 | n/a |
15/10/15 | 100 | 100 | 98 | 100 | 100 | 83 | 100 | Single SRM test failure (could not open connection to srm-atlas.gridpp.rl.ac.uk:8443); see the probe sketch below.
16/10/15 | 100 | 100 | 100 | 98 | 100 | 91 | 100 | Short problem with glexec in the early hours of the morning.
17/10/15 | 100 | 100 | 100 | 100 | 100 | 85 | 100 |
18/10/15 | 100 | 100 | 100 | 100 | 100 | 92 | n/a |
19/10/15 | 100 | 100 | 100 | 100 | 100 | 90 | 100 |
20/10/15 | 100 | 100 | 100 | 100 | 100 | 94 | 100 |
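The single SRM test failure on 15/10 ("could not open connection to srm-atlas.gridpp.rl.ac.uk:8443") is the kind of transient that a plain TCP reachability check can help classify. A minimal sketch: the endpoint is taken from the test message, the timeout is an arbitrary choice, and the probe only confirms that the port accepts connections; it does not speak the SRM protocol itself.

```python
#!/usr/bin/env python3
# Minimal TCP reachability probe for the SRM endpoint named in the
# 15/10 test failure. Checks only that the port accepts connections.
import socket

def port_open(host, port, timeout=5.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    host, port = "srm-atlas.gridpp.rl.ac.uk", 8443
    state = "reachable" if port_open(host, port) else "unreachable"
    print("%s:%d %s" % (host, port, state))
```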