Tier1 Operations Report 2015-05-13

RAL Tier1 Operations Report for 13th May 2015

Review of Issues during the week 6th to 13th May 2015.
  • At the end of last week the Atlas queue ANALY_RAL_SL6 was being set offline and back online by Atlas several times per day. This was traced to a configuration problem on a new batch of worker nodes. These nodes are temporarily disabled pending a fix.
  • Problems with Castor xroot response for CMS have been very acute; in particular, the time required to open files is often excessively long. Work is ongoing to understand this problem. It is clear that many of the affected files are located on two particularly 'hot' disk servers, which may account for most of the effect, although this is not the only cause. In order to eliminate possible causes, the CMS AAA xroot redirector was stopped on Friday (8th May). (A minimal file-open timing sketch follows this list.)
  • As reported last week, the network latency difference between the two directions over the OPN and the balancing of the paired link to the UKLight router have been fixed and remained good this last week. However, intermittent, low-level, load-related packet loss over the OPN to CERN is still being seen and tracked (see the loss-probe sketch below).
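A minimal sketch of the kind of file-open timing measurement involved, assuming the XRootD Python bindings (pyxrootd) are available on the client; the hostname and file path below are placeholders, not the actual RAL endpoints:

    import time
    from XRootD import client
    from XRootD.client.flags import OpenFlags

    # Placeholder URL - substitute the xroot endpoint and a known test file.
    URL = "root://xrootd.example.rl.ac.uk//castor/test/file.root"

    for attempt in range(5):
        f = client.File()
        start = time.time()
        status, _ = f.open(URL, OpenFlags.READ)
        print("open %d: ok=%s, %.2f s" % (attempt, status.ok, time.time() - start))
        if status.ok:
            f.close()

Repeating the probe against files known to sit on the suspected 'hot' disk servers, and against files elsewhere, would help confirm how much of the delay those two servers account for.
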
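For the OPN packet loss, a crude burst-ping probe gives a rough loss figure from a Tier1 host; this is only an illustration (the target hostname is a placeholder) and is not a substitute for the monitoring already used to track the issue:

    import re
    import subprocess

    TARGET = "opn-gw.example.ch"  # placeholder for the CERN-side OPN endpoint
    out = subprocess.run(["ping", "-c", "100", "-i", "0.2", TARGET],
                         capture_output=True, text=True).stdout
    match = re.search(r"([\d.]+)% packet loss", out)
    print("loss to %s: %s%%" % (TARGET, match.group(1) if match else "unknown"))
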
Resolved Disk Server Issues
  • None
Current operational status and issues
  • We are running with a single router connecting the Tier1 network to the site network, rather than a resilient pair.
  • Castor xroot performance problems seen by CMS - particularly in file open times.
  • The post-mortem review of the network incident on the 8th April is being finalised.
Ongoing Disk Server Issues
  • None
Notable Changes made since the last meeting.
  • The gfal2 and davix RPMs are being updated across the worker nodes; this should be completed this week (a quick sanity-check sketch follows this list).
  • A start has been made on updating the Castor tape servers to SL6.
  • On Thursday (7th May) the production FTS3 server was upgraded to version 3.2.33 and gfal was upgraded to 2.9.1 (see the version-check sketch below).
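As a quick post-update sanity check of the gfal2 bindings on a worker node, a stat of a remote file through the Python API can be used. This is a sketch only; the SURL below is a placeholder rather than a real test file:

    import gfal2

    # Placeholder SURL - point this at a known test file on the storage.
    SURL = "srm://srm-atlas.gridpp.rl.ac.uk/castor/ads.rl.ac.uk/test/file"

    ctx = gfal2.creat_context()      # 'creat_context' is the gfal2 spelling
    info = ctx.stat(SURL)
    print("size=%d bytes mode=%o" % (info.st_size, info.st_mode))
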
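To confirm the FTS3 upgrade took effect, the REST interface can be queried for its version. The sketch below assumes the server reports this at the root path on port 8446 and that a valid grid proxy is available; the proxy and CA paths are placeholders:

    import requests

    resp = requests.get("https://lcgfts3.gridpp.rl.ac.uk:8446/",
                        cert="/tmp/x509up_u500",          # placeholder proxy path
                        verify="/etc/grid-security/certificates")
    resp.raise_for_status()
    print(resp.json())   # expected to include the deployed API/server version
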
Declared in the GOC DB
Service Scheduled? Outage/At Risk Start End Duration Reason
cream-ce01, cream-ce02 SCHEDULED OUTAGE 05/05/2015 12:00 02/06/2015 12:00 28 days Decommissioning of CREAM CEs (cream-ce01.gridpp.rl.ac.uk, cream-ce02.gridpp.rl.ac.uk).
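The declared downtimes can be cross-checked against the GOC DB programmatic interface; the sketch below assumes the public get_downtime method and its usual XML field names:

    import requests
    import xml.etree.ElementTree as ET

    URL = ("https://goc.egi.eu/gocdbpi/public/"
           "?method=get_downtime&topentity=RAL-LCG2&ongoing_only=yes")
    root = ET.fromstring(requests.get(URL).text)
    for dt in root.findall("DOWNTIME"):
        print(dt.findtext("SEVERITY"), dt.findtext("START_DATE"),
              dt.findtext("DESCRIPTION"))
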
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
  • Turn off ARC-CE05. This will leave four ARC CEs (as planned). This fifth CE was set up as a temporary workaround for a specific problem and is no longer required.
  • Progressive upgrading of Castor Tape Servers to SL6.
  • Upgrade Tier1 Castor Oracle Databases to version 11.2.0.4. Proposed timetable (delayed by one week since last week's report):
    • Week 26-28 May: Install software on Database Systems (some 'At Risks')
    • Tuesday 2nd June: Switchover and upgrade Neptune (ATLAS and GEN downtime)
    • Monday 8th June: Upgrade Neptune's standby (ATLAS and GEN at risk)
    • Wednesday 10th June: Switchover Neptune and Pluto, and upgrade Pluto (All Tier1 Castor downtime)
    • Tuesday 16th June: Upgrade Pluto's standby (Tier1 at risk)
    • Thursday 18th June: Switchover Pluto (All Tier1 Castor downtime)

Listing by category:

  • Databases:
    • Switch LFC/3D to new Database Infrastructure.
    • Update to Oracle 11.2.0.4. This will affect all services that use Oracle databases (Castor, LFC and Atlas Frontier)
  • Castor:
    • Update SRMs to new version (includes updating to SL6).
    • Update the Oracle databases behind Castor to version 11.2.0.4. This will require some downtimes (see above).
    • Update disk servers to SL6.
    • Update to Castor version 2.1.15.
  • Networking:
    • Separate some non-Tier1 services from our network so that the router problems can be investigated more easily.
    • Resolve problems with the primary Tier1 router.
    • Enable the RIP protocol for updating routing tables on the Tier1 routers (requires a patch to the router software).
    • Increase the bandwidth of the link from the Tier1 into the RAL internal site network to 40 Gbit/s.
    • Make routing changes to allow the removal of the UKLight Router.
    • Cabling/switch changes to the network in the UPS room to improve resilience.
  • Fabric
    • Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC)
Entries in GOC DB starting since the last report.
Service Scheduled? Outage/At Risk Start End Duration Reason
lcgfts3.gridpp.rl.ac.uk SCHEDULED WARNING 07/05/2015 11:00 07/05/2015 12:00 1 hour Update of Production FTS3 Server to version 3.2.33
cream-ce01, cream-ce02 SCHEDULED OUTAGE 05/05/2015 12:00 02/06/2015 12:00 28 days Decommissioning of CREAM CEs (cream-ce01.gridpp.rl.ac.uk, cream-ce02.gridpp.rl.ac.uk).
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID Level Urgency State Creation Last Update VO Subject
113591 Green Top Priority Waiting Reply 2015-05-07 2015-05-12 Atlas RAL-LCG2 DATATAPE, all tranfers are failing at destination
113320 Green Urgent In Progress 2015-04-27 2015-05-12 CMS Data transfer issues between T1_UK_RAL_MSS, T1_UK_RAL_Buffer and T1_UK_RAL_Disk
112866 Green Less Urgent In Progress 2015-04-02 2015-04-07 CMS Many jobs are failed/aborted at T1_UK_RAL
112819 Green Less Urgent In Progress 2015-04-02 2015-04-07 SNO+ ArcSync hanging
112721 Green Less Urgent In Progress 2015-03-28 2015-04-16 Atlas RAL-LCG2: SOURCE Failed to get source file size
109694 Red Urgent In Progress 2014-11-03 2015-05-13 SNO+ gfal-copy failing for files at RAL
108944 Red Less Urgent In Progress 2014-10-01 2015-05-11 CMS AAA access test failing at T1_UK_RAL
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud

Day OPS Alice Atlas CMS LHCb Atlas HC CMS HC Comment
06/05/15 100 100 100 100 100 100 100
07/05/15 100 100 100 100 100 97 88
08/05/15 100 100 100 100 100 100 92
09/05/15 100 100 100 100 100 100 100
10/05/15 100 100 100 100 100 100 99
11/05/15 100 100 100 100 96.0 100 98 Single SRM test failure on a list operation (HANDLING TIMEOUT).
12/05/15 100 100 98.0 100 100 100 99 A single SRM test failure. ([SRM_INVALID_PATH] No such file or directory)