Tier1 Operations Report 2015-09-23


RAL Tier1 Operations Report for 23rd September 2015

Review of Issues during the week 16th to 23rd September 2015.
  • Problems with the production FTS service, reported last week, have continued. As the backlog (and queue length) has reduced, the database load has dropped. CMS have also submitted many transfers, with a large number destined for a single, slow site. However, the updated FTS3 software applied to the production system last week introduced a memory leak, which in turn caused a set of problems (swapping etc.) that have affected performance. A fix for this has been supplied by the developers and applied yesterday (22nd September). At the moment the remaining problems have not yet cleared.
  • There is ongoing work to move switches off the old 'core' network switch ahead of that being decommissioned. This work continued during last week, including moving switches hosting some worker nodes. These changes introduced some packet loss into part of our Tier1 network, rising to around 10% for systems on one of the switches (worker nodes). A number of steps were taken at the end of last week, including reducing traffic on a couple of switches by stopping some worker nodes (2013 generation) and rebooting particular switches. This significantly reduced the packet loss (to figures of around 0.5%). Work is ongoing to check the detailed network configuration in the area of the packet loss; loss figures of this kind can be followed with a simple probe such as the one sketched below.
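
The residual loss figures quoted above can be tracked with a simple active probe from a monitoring host. The sketch below is illustrative only: it assumes a standard Linux ping is available, and the host names are placeholders rather than the actual nodes behind the affected switches.

  #!/usr/bin/env python3
  # Minimal packet-loss probe: ping each host and report the loss percentage.
  # Sketch only - the host names below are placeholders, not the actual
  # Tier1 worker nodes or switches referred to above.
  import re
  import subprocess

  HOSTS = ["wn-example-01.example.ac.uk", "wn-example-02.example.ac.uk"]
  COUNT = 100  # pings per host; more pings give a steadier loss estimate

  for host in HOSTS:
      result = subprocess.run(["ping", "-q", "-c", str(COUNT), host],
                              capture_output=True, text=True)
      match = re.search(r"([\d.]+)% packet loss", result.stdout)
      loss = match.group(1) if match else "unknown"
      print("%s: %s%% packet loss over %d pings" % (host, loss, COUNT))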
Resolved Disk Server Issues
  • None
Current operational status and issues
  • The intermittent, low-level, load-related packet loss seen over external connections is still being tracked.
  • Long-standing CMS issues. The two items that remain are CMS Xroot (AAA) redirection and file open times. Work on the Xroot redirection is ongoing, with a new server having been added in recent weeks. File open times using Xroot remain slow, but this is a less significant problem.
Ongoing Disk Server Issues
  • None
Notable Changes made since the last meeting.
  • The second step of the update of the Castor Oracle databases to version 11.2.0.4 was successfully carried out yesterday (22nd September). This has re-enabled the Oracle DataGuard copy for the Atlas & GEN Castor stager databases.
  • Some detailed internal re-connection in the lower levels of the Tier1 network is ongoing, with the aim of removing the old 'core' switch.
  • The roll-out of the new worker node configuration continues.
  • Updating the first batch of the remaining Castor disk servers (those in tape-backed service classes) to SL6. This will be done a handful of servers at a time and is transparent to the VOs.
Declared in the GOC DB
Service Scheduled? Outage/At Risk Start End Duration Reason
All Castor SCHEDULED OUTAGE 13/10/2015 08:00 13/10/2015 16:00 8 hours Outage of All Castor instances during upgrade of Oracle back end database.
All Castor SCHEDULED WARNING 08/10/2015 08:30 08/10/2015 20:30 12 hours Warning (At Risk) on All Castor instances during upgrade of back end Oracle database.
All Castor SCHEDULED OUTAGE 06/10/2015 08:00 06/10/2015 20:30 12 hours and 30 minutes Outage of All Castor instances during upgrade of Oracle back end database.
Whole site SCHEDULED WARNING 30/09/2015 07:00 30/09/2015 09:00 2 hours At Risk during upgrade of network connection from the Tier1 into RAL site core network.
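
For reference, these declarations can also be retrieved programmatically. The snippet below is a sketch assuming the public GOCDB programmatic interface (the get_downtime method and topentity filter of the EGI GOC DB); the exact URL, parameters and XML element names should be checked against the current GOCDB documentation before use.

  #!/usr/bin/env python3
  # Sketch: list downtimes declared for RAL-LCG2 via the public GOCDB interface.
  # The endpoint, parameters and element names below are assumptions based on
  # the EGI GOCDB programmatic interface - verify against its documentation.
  import urllib.request
  import xml.etree.ElementTree as ET

  URL = ("https://goc.egi.eu/gocdbpi/public/"
         "?method=get_downtime&topentity=RAL-LCG2")

  with urllib.request.urlopen(URL) as response:
      root = ET.parse(response).getroot()

  for downtime in root.findall("DOWNTIME"):
      print(downtime.findtext("SEVERITY"),
            downtime.findtext("START_DATE"),
            downtime.findtext("END_DATE"),
            downtime.findtext("DESCRIPTION"))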
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
  • Upgrade of remaining Castor disk servers (those in tape-backed service classes) to SL6. This will be transparent to users.
  • Upgrade of the Oracle databases behind Castor to version 11.2.0.4. The first two steps of this multi-step upgrade have been carried out. Further dates are declared in the GOC DB.
  • Some detailed internal network re-configurations to be tackled now that the routers are stable. Notably:
    • Change the way the UKLIGHT router connects into the Tier1 network.
    • Changes to increase the bandwidth of the link from the Tier1 into the RAL internal site network. This will take place on the morning of 30th September.
  • Extending the rollout of the new worker node configuration.

Listing by category:

  • Databases:
    • Switch LFC/3D to new Database Infrastructure.
    • Update to Oracle 11.2.0.4. This will affect all services that use Oracle databases: Castor, Atlas Frontier (LFC done)
  • Castor:
    • Update SRMs to new version (includes updating to SL6).
    • Update the Oracle databases behind Castor to version 11.2.0.4. Will require some downtimes (See above)
    • Update disk servers to SL6 (ongoing)
    • Update to Castor version 2.1.15.
  • Networking:
    • Increase bandwidth of the link from the Tier1 into the RAL internal site network to 40Gbit.
    • Make routing changes to allow the removal of the UKLight Router.
  • Fabric
    • Firmware updates on remaining EMC disk arrays (Castor, LFC)
Entries in GOC DB starting since the last report.
Service Scheduled? Outage/At Risk Start End Duration Reason
Atlas & GEN Castor instances. SCHEDULED WARNING 22/09/2015 08:30 22/09/2015 20:30 12 hours Warning (At Risk) on Atlas and GEN Castor instances during upgrade of back end Oracle database.
lcgfts3.gridpp.rl.ac.uk SCHEDULED WARNING 16/09/2015 11:00 16/09/2015 12:00 1 hour Upgrade of FTS server to version 3.3.1.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID Level Urgency State Creation Last Update VO Subject
116295 Green Urgent In progress 2015-09-18 2015-09-23 CMS Transfers in FTS3@RAL are slow
116269 Green Less Urgent Waiting Reply 2015-09-17 2015-09-18 Atlas RAL-LCG2: deletion errors
116262 Green Urgent In progress 2015-09-17 2015-09-17 SNO+ Job submitted to non-SNO+ CE
116180 Green Less Urgent In progress 2015-09-14 2015-09-14 DiRAC Failing fts transmissions
116094 Green Very Urgent In progress 2015-09-07 2015-09-15 Atlas Many thousands file transfers failed to NL, ND, IT, UK in the last 4 hours
115290 Yellow Less Urgent On Hold 2015-07-28 2015-07-29 FTS3@RAL: missing proper host names in subjectAltName of FTS agent nodes
108944 Green Less Urgent In progress 2014-10-01 2015-09-14 CMS AAA access test failing at T1_UK_RAL
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud

Day OPS Alice Atlas CMS LHCb Atlas HC CMS HC Comment
16/09/15 100 100 100 100 100 100 100
17/09/15 100 100 94 100 100 100 100 Three SRM test failures on Put
18/09/15 100 100 98 100 100 98 100 Single SRM test failure on Put
19/09/15 100 100 100 100 100 100 100
20/09/15 100 100 100 100 100 100 100
21/09/15 100 100 100 100 100 98 N/A
22/09/15 100 100 100 100 100 100 100
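
As a rough cross-check of the Atlas SRM figures above (94% with three test failures, 98% with one), the daily percentages are consistent with the availability tests running roughly every half hour; the 48-tests-per-day figure below is an assumption, not taken from this report.

  # Rough cross-check of the daily Atlas availability figures, assuming the
  # SRM tests run about every 30 minutes (~48 per day) - an assumption only.
  TESTS_PER_DAY = 48
  for failures in (3, 1):  # 17/09 and 18/09 SRM test failures on Put
      availability = 100.0 * (TESTS_PER_DAY - failures) / TESTS_PER_DAY
      print("%d failure(s): ~%.0f%% availability" % (failures, availability))
  # 3 failure(s): ~94% availability; 1 failure(s): ~98% availability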