Tier1 Operations Report 2015-09-23

From GridPP Wiki

RAL Tier1 Operations Report for 23rd September 2015

Review of Issues during the week 16th to 23rd September 2015.
  • On the 9th September it was found that xrootd transfers for ALICE had not been working since the disk server SL6 upgrade on 26/27 August. This was due to an incorrect xrootd configuration.
  • At the end of last week there was a problem with the Atlas Castor instance, which became unresponsive several times, each for some tens of minutes.
  • There have been some problems with the production FTS service, which has been struggling under very high load. On the 15th September a significant portion of the Atlas traffic being handled by this server was moved to our "test" FTS instance. We are monitoring the service to check that the system's performance recovers as the backlog reduces.
Resolved Disk Server Issues
  • None
Current operational status and issues
  • The intermittent, low-level, load-related packet loss seen over external connections is still being tracked.
  • Over recent weeks there has been some work on the long-standing CMS issues. Batch job efficiencies now look good (e.g. taking last month's figures). A new server has been provided for the CMS Xroot (AAA) redirection to Castor, but the problems remain. File open times using Xroot are still slow, although this is a less significant problem.
Ongoing Disk Server Issues
  • None
Notable Changes made since the last meeting.
  • The first step of the update of the Castor Oracle databases to version 11.2.0.4 was successfully carried out on the 15th September.
  • The production FTS service was upgraded to version 3.3.1 on the morning of the 16th September.
  • Some detailed internal re-connection work in the lower levels of the Tier1 network is ongoing, with the aim of removing the old 'core' switch. One of these steps required the re-connection of systems in the "UPS" room, which was done during a site 'warning' on the morning of Thursday 3rd September.
  • The roll-out of the new worker node configuration continues.
Declared in the GOC DB
Service Scheduled? Outage/At Risk Start End Duration Reason
All Castor SCHEDULED OUTAGE 13/10/2015 08:00 13/10/2015 16:00 8 hours Outage of All Castor instances during upgrade of Oracle back end database.
All Castor SCHEDULED WARNING 08/10/2015 08:30 08/10/2015 20:30 12 hours Warning (At Risk) on All Castor instances during upgrade of back end Oracle database.
All Castor SCHEDULED OUTAGE 06/10/2015 08:00 06/10/2015 20:30 12 hours and 30 minutes Outage of All Castor instances during upgrade of Oracle back end database.
Whole site SCHEDULED WARNING 30/09/2015 07:00 30/09/2015 09:00 2 hours At Risk during upgrade of network connection from the Tier1 into RAL site core network.
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
  • Upgrade of remaining Castor disk servers (those in tape-backed service classes) to SL6. This will be transparent to users.
  • Upgrade of the Oracle databases behind Castor to version 11.2.0.4. The first step of this multi-step upgrade has been carried out. Further dates are declared in the GOC DB.
  • Some detailed internal network re-configurations to be tackled now that the routers are stable, notably:
    • Change the way the UKLIGHT router connects into the Tier1 network.
    • Changes to increase the bandwidth of the link from the Tier1 into the RAL internal site network. This will take place on the morning of 30th September.
  • Extending the rollout of the new worker node configuration.

Listing by category:

  • Databases:
    • Switch LFC/3D to new Database Infrastructure.
    • Update to Oracle 11.2.0.4. This will affect all services that use Oracle databases: Castor, Atlas Frontier (LFC done)
  • Castor:
    • Update SRMs to new version (includes updating to SL6).
    • Update the Oracle databases behind Castor to version 11.2.0.4. Will require some downtimes (see above).
    • Update disk servers to SL6 (ongoing)
    • Update to Castor version 2.1.15.
  • Networking:
    • Increase bandwidth of the link from the Tier1 into the RAL internal site network to 40Gbit.
    • Make routing changes to allow the removal of the UKLight Router.
  • Fabric
    • Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC)
Entries in GOC DB starting since the last report.
Service Scheduled? Outage/At Risk Start End Duration Reason
Atlas & GEN Castor instances. SCHEDULED WARNING 22/09/2015 08:30 22/09/2015 20:30 12 hours Warning (At Risk) on Atlas and GEN Castor instances during upgrade of back end Oracle database.
lcgfts3.gridpp.rl.ac.uk SCHEDULED WARNING 16/09/2015 11:00 16/09/2015 12:00 1 hour Upgrade of FTS server to version 3.3.1.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID Level Urgency State Creation Last Update VO Subject
116295 Green Urgent In progress 2015-09-18 2015-09-23 CMS Transfers in FTS3@RAL are slow
116269 Green Less Urgent Waiting Reply 2015-09-17 2015-09-18 Atlas RAL-LCG2: deletion errors
116262 Green Urgent In progress 2015-09-17 2015-09-17 SNO+ Job submitted to non-SNO+ CE
116180 Green Less Urgent In progress 2015-09-14 2015-09-14 DiRAC Failing fts transmissions
116094 Green Very Urgent In progress 2015-09-07 2015-09-15 Atlas Many thousands file transfers failed to NL, ND, IT, UK in the last 4 hours
115290 Yellow Less Urgent On Hold 2015-07-28 2015-07-29 FTS3@RAL: missing proper host names in subjectAltName of FTS agent nodes
108944 Green Less Urgent In progress 2014-10-01 2015-09-14 CMS AAA access test failing at T1_UK_RAL
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud

Day OPS Alice Atlas CMS LHCb Atlas HC CMS HC Comment
16/09/15 100 100 100 100 100 100 100
17/09/15 100 100 94 100 100 100 100 Three SRM test failures on Put
18/09/15 100 100 98 100 100 98 100 Single SRM test failure on Put
19/09/15 100 100 100 100 100 100 100
20/09/15 100 100 100 100 100 100 100
21/09/15 100 100 100 100 100 98 N/A
22/09/15 100 100 100 100 100 100 100