Tier1 Operations Report 2015-09-30


RAL Tier1 Operations Report for 30th September 2015

Review of Issues during the week 23rd to 30th September 2015.
  • Problems with the production FTS service, reported last week, are resolved. As reported at last week's meeting, the various problems had been fixed but we were then waiting for the service to return to normal. A specific problem with one of the FTS servers (lcgfts01) was reported; on Wednesday (23rd) this node was drained of FTS transfers, which cleared a set of stalled transfers, and the node was restarted later that day.
  • We have been working to understand and reduce the now-lower level of packet loss seen within part of our Tier1 network. As part of this, some of our worker nodes (the 2013 batches) were taken out of service while detailed network re-configurations were made. These systems have now all been returned to service. (A simple packet-loss sampling sketch appears after this list.)
  • LHCb have reported an ongoing low but persistent rate of failures when copying the results of batch jobs to other sites. They have also reported an intermittent problem when writing these files to our Castor storage.
  • We were informed by JANET that there was a short (few minute) break in the Primary OPN link on Friday afternoon (25th Sep). This had no effect on Tier1 operations, and we can see there was some traffic on the backup link at this time.
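
For illustration of the packet-loss investigation above, the following is a minimal sketch of a loss-sampling check: it wraps the system ping utility and parses the loss summary it prints. This is not the monitoring actually deployed here, and the host names are hypothetical placeholders rather than real Tier1 node names.

    #!/usr/bin/env python
    # Minimal sketch: sample packet loss to a set of hosts by wrapping the
    # system "ping" utility and parsing its summary line. Host names are
    # hypothetical placeholders, not actual Tier1 node names.
    import re
    import subprocess

    HOSTS = ["wn-2013-001.example.ac.uk", "wn-2013-002.example.ac.uk"]  # placeholders
    COUNT = 100  # pings per host

    def packet_loss(host, count=COUNT):
        """Return the percentage packet loss reported by ping, or None."""
        try:
            out = subprocess.check_output(
                ["ping", "-q", "-c", str(count), host],
                stderr=subprocess.STDOUT, universal_newlines=True)
        except subprocess.CalledProcessError as exc:
            out = exc.output  # ping exits non-zero when packets are lost
        match = re.search(r"([\d.]+)% packet loss", out)
        return float(match.group(1)) if match else None

    for host in HOSTS:
        loss = packet_loss(host)
        if loss is None:
            print("%s: could not parse ping output" % host)
        else:
            print("%s: %.1f%% packet loss" % (host, loss))
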
Resolved Disk Server Issues
  • None
Current operational status and issues
  • The intermittent, low-level, load-related packet loss seen over external connections is still being tracked.
  • Long-standing CMS issues. The two items that remain are CMS Xroot (AAA) redirection and file open times. Work on the Xroot redirection is ongoing, with a new server having been added in recent weeks. File open times using Xroot remain slow, but this is a less significant problem (a simple file-open timing sketch follows this list).
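
As an illustration of the file-open-latency item above, the sketch below times a single file open through an xrootd redirector using the XRootD Python bindings. The redirector host and file path are hypothetical placeholders; this is a probe sketch under those assumptions, not the actual AAA test used by CMS.

    #!/usr/bin/env python
    # Minimal sketch: time a single file open through an xrootd redirector,
    # in the spirit of the AAA file-open-latency issue noted above. Requires
    # the XRootD Python bindings. The redirector host and file path below
    # are hypothetical placeholders, not the actual test endpoints.
    import time
    from XRootD import client

    URL = "root://xrootd.example.ac.uk:1094//store/test/example.root"  # placeholder

    f = client.File()
    start = time.time()
    status, _ = f.open(URL)
    elapsed = time.time() - start

    if status.ok:
        print("open succeeded in %.2f s" % elapsed)
        f.close()
    else:
        print("open failed after %.2f s: %s" % (elapsed, status.message))
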
Ongoing Disk Server Issues
  • None
Notable Changes made since the last meeting.
  • The network link from the Tier1 routers into the RAL core network was upgraded to a resilient pair of 40Gbit connections this morning (Wed 30th Sep).
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
All Castor | SCHEDULED | OUTAGE | 13/10/2015 08:00 | 13/10/2015 16:00 | 8 hours | Outage of All Castor instances during upgrade of Oracle back end database.
All Castor | SCHEDULED | WARNING | 08/10/2015 08:30 | 08/10/2015 20:30 | 12 hours | Warning (At Risk) on All Castor instances during upgrade of back end Oracle database.
Whole site | SCHEDULED | WARNING | 07/10/2015 10:00 | 07/10/2015 11:30 | 1 hour and 30 minutes | Warning on site during quarterly UPS/Generator load test.
All Castor | SCHEDULED | OUTAGE | 06/10/2015 08:00 | 06/10/2015 20:30 | 12 hours and 30 minutes | Outage of All Castor instances during upgrade of Oracle back end database.
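
For reference, downtime declarations like those above can also be read back programmatically from the GOC DB. The sketch below queries the public GOC DB programmatic interface for this site; the endpoint, parameters and XML element names are assumptions based on the PI documentation of the time and should be checked against the current schema.

    #!/usr/bin/env python
    # Minimal sketch: read back downtime declarations for a site from the
    # public GOC DB programmatic interface. The endpoint, parameters and
    # XML element names are assumptions from the GOC DB PI documentation
    # of the time; verify them before relying on this.
    import urllib2  # Python 2 era; use urllib.request on Python 3
    import xml.etree.ElementTree as ET

    SITE = "RAL-LCG2"
    URL = ("https://goc.egi.eu/gocdbpi/public/"
           "?method=get_downtime&topentity=" + SITE)

    root = ET.fromstring(urllib2.urlopen(URL).read())
    for dt in root.findall("DOWNTIME"):
        print("%s %s %s -> %s : %s" % (
            dt.findtext("CLASSIFICATION"),   # SCHEDULED / UNSCHEDULED
            dt.findtext("SEVERITY"),         # OUTAGE / WARNING
            dt.findtext("FORMATED_START_DATE"),
            dt.findtext("FORMATED_END_DATE"),
            dt.findtext("DESCRIPTION")))
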
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
  • Upgrade of remaining Castor disk servers (those in tape-backed service classes) to SL6. This will be transparent to users.
  • Upgrade of the Oracle databases behind Castor to version 11.2.0.4. The first two steps of this multi-step upgrade have been carried out. Further dates are declared in the GOC DB.
  • Some detailed internal network re-configurations to enable the removal of the old 'core' switch from our network. This includes changing the way the UKLight router connects into the Tier1 network.
  • Continuing the rollout of the new worker node configuration. The penultimate batch is being upgraded at the moment.

Listing by category:

  • Databases:
    • Switch LFC/3D to new Database Infrastructure.
    • Update to Oracle 11.2.0.4. This will affect all services that use Oracle databases: Castor and Atlas Frontier (LFC done).
  • Castor:
    • Update SRMs to new version (includes updating to SL6).
    • Update the Oracle databases behind Castor to version 11.2.0.4. This will require some downtimes (see above).
    • Update disk servers to SL6 (ongoing)
    • Update to Castor version 2.1.15.
  • Networking:
    • Make routing changes to allow the removal of the UKLight Router.
  • Fabric:
    • Firmware updates on remaining EMC disk arrays (Castor, LFC)
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
Whole site | SCHEDULED | WARNING | 30/09/2015 07:00 | 30/09/2015 09:00 | 2 hours | At Risk during upgrade of network connection from the Tier1 into RAL site core network.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
116094 | Yellow | Very Urgent | Waiting Reply | 2015-09-07 | 2015-09-24 | Atlas | Many thousands file transfers failed to NL, ND, IT, UK in the last 4 hours
115290 | Amber | Less Urgent | In Progress | 2015-07-28 | 2015-09-29 | | FTS3@RAL: missing proper host names in subjectAltName of FTS agent nodes
108944 | Green | Less Urgent | In Progress | 2014-10-01 | 2015-09-29 | CMS | AAA access test failing at T1_UK_RAL
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud

Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment
23/09/15 | 100 | 100 | 100 | 100 | 100 | 98 | N/A |
24/09/15 | 100 | 100 | 100 | 100 | 100 | 98 | 100 |
25/09/15 | 100 | 100 | 98 | 96 | 100 | 100 | 100 | In both cases a single SRM test error on PUT.
26/09/15 | 100 | 100 | 100 | 100 | 100 | 98 | 100 |
27/09/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
28/09/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
29/09/15 | 100 | 100 | 100 | 100 | 100 | 98 | 100 |
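
As a worked illustration of how a day's figure relates to individual test results (for example, CMS's 96% on 25/09 arising from a single failed SRM test), the sketch below computes a simple availability percentage from pass/fail samples. The real SAM/dashboard computation weights tests by the time between them; the equal-weight version here is a simplifying assumption.

    #!/usr/bin/env python
    # Minimal sketch: compute a daily availability percentage from a list
    # of pass/fail test samples. The real SAM/dashboard computation weights
    # by time between tests; this equal-weight version is a simplifying
    # assumption for illustration only.

    def availability(samples):
        """samples: iterable of booleans (True = test passed). Returns percent."""
        samples = list(samples)
        if not samples:
            return None
        return 100.0 * sum(1 for ok in samples if ok) / len(samples)

    # Example: 24 hourly tests with a single failure -> 96% (cf. CMS on 25/09).
    print("%.0f%%" % availability([True] * 23 + [False]))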