Tier1 Operations Report 2015-05-06

RAL Tier1 Operations Report for 6th May 2015

Review of Issues during the three weeks from 15th April to 6th May 2015.
  • Between approximately 16 and 24 April there were problems with some heavily accessed ('hot') CMS files.
  • On Tuesday 28th April we failed some CE SAM tests owing to a problem with Argus. However, the problem cleared itself before staff intervened.
  • We have seen many zero-sized files created for Alice in Castor. This appears to be caused by a Castor timeout affecting files written over a period of more than two hours; the files show up as checksum validation errors. This morning (6th May) a fix was put in place (the xroot timeout was increased) and we will track whether this resolves the problem. A sketch of the kind of check that could be run while tracking this follows this list.
  • There have been some network problems over this period. We have seen a latency difference between the two directions over the OPN, along with some low-level packet loss on this connection. In addition there have been problems with the balancing of the paired link to the UKLight router (the route our data takes to and from the site). Work has gone on, involving CERN, to fix the latency mismatch, as well as work on the physical connections to the UKLight router. As of this morning (6th May) both the latency problem and the balancing of the link to the UKLight router are looking much better, but we will continue to track this; a simple round-trip probe sketch also follows this list.
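
The following is a minimal sketch (Python) of the kind of check that could be run while tracking the zero-sized Alice files: it reads a namespace dump listing path, size and recorded adler32 checksum, and flags zero-sized or mismatched entries. The dump format and the option to recompute checksums of locally readable copies are assumptions for illustration, not the actual Castor tooling.

    #!/usr/bin/env python
    # Minimal sketch: flag zero-sized files and adler32 mismatches from a
    # namespace dump.  The dump format (path, size, recorded adler32 per
    # line, whitespace separated) is a hypothetical example.
    import sys
    import zlib

    def adler32_of(path):
        """Recompute the adler32 of a locally readable copy of the file."""
        value = 1  # adler32 starts at 1
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(1 << 20), b''):
                value = zlib.adler32(chunk, value)
        return value & 0xffffffff

    def main(dump_file, recompute=False):
        with open(dump_file) as dump:
            for line in dump:
                fields = line.split()
                if len(fields) < 3:
                    continue
                path, size, recorded = fields[0], int(fields[1]), fields[2]
                if size == 0:
                    print("ZERO-SIZED: %s" % path)
                elif recompute:
                    actual = "%08x" % adler32_of(path)
                    if actual != recorded.lower():
                        print("CHECKSUM MISMATCH: %s (recorded %s, actual %s)"
                              % (path, recorded, actual))

    if __name__ == '__main__':
        # Usage: check_zero_sized.py <dump file> [--recompute]
        main(sys.argv[1], recompute='--recompute' in sys.argv)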
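
While the OPN link continues to be tracked, the sketch below (again Python; the target hostname, interval and ping count are placeholders) records round-trip time and packet loss from periodic pings. The one-way latency asymmetry we observed needs dedicated tooling (e.g. perfSONAR); this only illustrates a basic round-trip and loss check.

    #!/usr/bin/env python
    # Minimal sketch: periodic round-trip latency / packet-loss probe.
    # The target host below is a placeholder, not the actual OPN endpoint.
    import re
    import subprocess
    import time

    TARGET = "lhcopn-peer.example.org"   # placeholder hostname
    INTERVAL = 300                       # seconds between probes
    COUNT = 20                           # pings per probe

    def probe(host, count):
        """Return (packet loss %, average RTT in ms) from one ping run."""
        try:
            out = subprocess.check_output(["ping", "-c", str(count), host])
        except subprocess.CalledProcessError as err:
            out = err.output             # ping exits non-zero on total loss
        out = out.decode()
        loss = re.search(r"(\d+(?:\.\d+)?)% packet loss", out)
        rtt = re.search(r"= ([\d.]+)/([\d.]+)/([\d.]+)", out)
        return (float(loss.group(1)) if loss else None,
                float(rtt.group(2)) if rtt else None)

    if __name__ == "__main__":
        while True:
            loss, avg_rtt = probe(TARGET, COUNT)
            print("%s loss=%s%% avg_rtt=%s ms"
                  % (time.strftime("%Y-%m-%d %H:%M:%S"), loss, avg_rtt))
            time.sleep(INTERVAL)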
Resolved Disk Server Issues
  • GDSS633 (AtlasTape - D0T1) failed on Sunday 3rd May. It was returned to service late in the evening of the following day (i.e. Bank Holiday Monday, 4th May).
Current operational status and issues
  • We are running with a single router connecting the Tier1 network to the site network, rather than a resilient pair.
  • The post-mortem review of the network incident of 8th April is being finalised.
Ongoing Disk Server Issues
  • None
Notable Changes made since the last meeting.
  • gfal2 and davix rpms are in the process of being updated across the worker nodes (a spot-check sketch follows this list).
  • On 21st April the CVMFS stratum-0 server was updated to v2.1.20.
  • On Tuesday 28th April the Oracle database behind the LFC service was upgraded to version 11.2.0.4.
  • A start has been made on updating the Castor tape servers to SL6. (One server for each of the 'C' and 'D' drives was updated on 1st May.)
  • Yesterday (5th May) the two remaining CREAM CEs were put into draining mode as the next step in their decommissioning.
  • Yesterday (5th May) the "test" FTS3 instance was upgraded to version 3.2.33.
  • Yesterday (5th May) the regular Oracle patches were applied to the database hosting the Atlas Conditions database (behind Frontier).
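
As an illustration of how the gfal2/davix roll-out mentioned above can be spot-checked, the sketch below (Python) queries the installed rpm versions on a list of worker nodes over ssh and reports any node still carrying an old version. The node-list file and the "expected" version numbers are placeholders; the roll-out itself is handled separately and this is only a quick verification aid.

    #!/usr/bin/env python
    # Minimal sketch: spot-check gfal2/davix rpm versions on worker nodes.
    # The expected versions below are hypothetical placeholders.
    import subprocess
    import sys

    EXPECTED = {"gfal2": "2.9.1", "davix": "0.4.0"}  # hypothetical targets

    def installed_versions(node):
        """Return {package: version} as reported by rpm on the node."""
        cmd = ["ssh", "-o", "BatchMode=yes", node,
               "rpm -q --qf '%{NAME} %{VERSION}\\n' " + " ".join(EXPECTED)]
        out = subprocess.check_output(cmd).decode()
        versions = {}
        for line in out.splitlines():
            parts = line.split()
            if len(parts) == 2:
                versions[parts[0]] = parts[1]
        return versions

    if __name__ == "__main__":
        for node in open(sys.argv[1]):   # one worker node hostname per line
            node = node.strip()
            if not node:
                continue
            try:
                found = installed_versions(node)
            except subprocess.CalledProcessError:
                print("%s: rpm query failed (package missing or ssh error)" % node)
                continue
            for pkg, want in EXPECTED.items():
                have = found.get(pkg, "unknown")
                status = "OK" if have == want else "NEEDS UPDATE"
                print("%s: %s %s (%s)" % (node, pkg, have, status))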
Declared in the GOC DB
Service Scheduled? Outage/At Risk Start End Duration Reason
lcgfts3.gridpp.rl.ac.uk SCHEDULED WARNING 07/05/2015 11:00 07/05/2015 12:00 1 hour Update of Production FTS3 Server to version 3.2.33
cream-ce01.gridpp.rl.ac.uk, cream-ce02.gridpp.rl.ac.uk SCHEDULED OUTAGE 05/05/2015 12:00 02/06/2015 12:00 28 days Decommissioning of CREAM CEs (cream-ce01.gridpp.rl.ac.uk, cream-ce02.gridpp.rl.ac.uk).
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
  • Turn off ARC-CE05. This will leave four ARC CEs (as planned). The fifth was set up as a temporary workaround for a specific problem and is no longer required.
  • Upgrade Tier1 Castor Oracle Databases to version 11.2.0.4. Proposed timetable:
    • Week 18-21 May: Install software on Database Systems (some 'At Risk' periods)
    • Tuesday 26th May: Switchover and upgrade Neptune (ATLAS and GEN downtime)
    • Monday 1st June: Upgrade Neptune's standby (ATLAS and GEN at risk)
    • Wednesday 3rd June: Switchover Neptune and Pluto, and upgrade Pluto (All Tier1 Castor downtime)
    • Tuesday 9th June: Upgrade Pluto's standby (Tier1 at risk)
    • Thursday 11th June: Switchover Pluto (All Tier1 Castor downtime)

Listing by category:

  • Databases:
    • Switch LFC/3D to new Database Infrastructure.
    • Update to Oracle 11.2.0.4. This will affect all services that use Oracle databases (Castor, LFC and Atlas Frontier)
  • Castor:
    • Update SRMs to new version (includes updating to SL6).
    • Update the Oracle databases behind Castor to version 11.2.0.4. This will require some downtime (see above).
    • Update disk servers to SL6.
    • Update to Castor version 2.1.15.
  • Networking:
    • Separate some non-Tier1 services from our network so that the router problems can be investigated more easily.
    • Resolve problems with primary Tier1 Router
    • Enable the RIP protocol for updating routing tables on the Tier1 routers. (Install patch to Router software).
    • Increase the bandwidth of the link from the Tier1 into the RAL internal site network to 40 Gbit/s.
    • Make routing changes to allow the removal of the UKLight Router.
  • Fabric
    • Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC)
Entries in GOC DB starting since the last report.
Service Scheduled? Outage/At Risk Start End Duration Reason
cream-ce01.gridpp.rl.ac.uk, cream-ce02.gridpp.rl.ac.uk SCHEDULED OUTAGE 05/05/2015 12:00 02/06/2015 12:00 28 days Decommissioning of CREAM CEs (cream-ce01.gridpp.rl.ac.uk, cream-ce02.gridpp.rl.ac.uk).
lcgft-atlas.gridpp.rl.ac.uk SCHEDULED WARNING 05/05/2015 09:00 05/05/2015 11:55 2 hours and 55 minutes Regular patching of Oracle Database behind the Atlas Frontier service.
lfc.gridpp.rl.ac.uk SCHEDULED OUTAGE 28/04/2015 09:00 28/04/2015 14:03 5 hours and 3 minutes Update of Oracle Database behind the LFC service. There will be an initial outage of up to 90 minutes. Following this the LFC will be available READ-ONLY for some (up to five) hours. At the end of the upgrade the LFC will again be unavailable for a period of up to 90 minutes.
All Castor (all SRM endpoints) SCHEDULED OUTAGE 15/04/2015 09:30 15/04/2015 11:57 2 hours and 27 minutes Castor upgrade to 2.1.14-15 as this was postponed last week.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID Level Urgency State Creation Last Update VO Subject
113320 Green Urgent In Progress 2015-04-27 2015-05-05 CMS Data transfer issues between T1_UK_RAL_MSS, T1_UK_RAL_Buffer and T1_UK_RAL_Disk
112866 Green Less Urgent In Progress 2015-04-02 2015-04-07 CMS Many jobs are failed/aborted at T1_UK_RAL
112819 Green Less Urgent In Progress 2015-04-02 2015-04-07 SNO+ ArcSync hanging
112721 Green Less Urgent In Progress 2015-03-28 2015-04-16 Atlas RAL-LCG2: SOURCE Failed to get source file size
111699 Red Less Urgent In Progress 2015-02-10 2015-03-23 Atlas gLExec hammercloud jobs keep failing at RAL-LCG2 & RALPP
109694 Red Urgent In Progress 2014-11-03 2015-03-31 SNO+ gfal-copy failing for files at RAL
108944 Red Less Urgent In Progress 2014-10-01 2015-04-30 CMS AAA access test failing at T1_UK_RAL
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud

Day OPS Alice Atlas CMS LHCb Atlas HC CMS HC Comment
15/04/15 97.9 100 90.0 90.0 90.0 100 98 Castor 2.1.14-15 upgrade.
16/04/15 100 100 94.0 100 100 100 99 A couple of SRM test errors around half an hour apart.
17/04/15 100 100 100 100 100 100 100
18/04/15 100 100 100 100 100 100 100
19/04/15 100 100 100 100 100 98 100
20/04/15 100 100 100 100 100 100 n/a
21/04/15 100 100 100 100 100 100 n/a
22/04/15 100 100 100 100 100 100 n/a
23/04/15 100 100 100 100 100 100 n/a
24/04/15 100 100 100 100 100 100 n/a
25/04/15 100 100 100 100 100 100 n/a
26/04/15 100 100 100 100 100 100 n/a
27/04/15 100 100 100 99.0 99.0 100 100 Argus problem - job submissions failed for a while.
28/04/15 100 100 100 100 100 100 100
29/04/15 100 100 100 100 100 100 100
30/04/15 100 100 100 100 100 100 100
01/05/15 100 100 100 100 96.0 95 100 Single SRM test failure. Error listing file.
02/05/15 100 100 100 100 100 96 100
03/05/15 100 100 100 100 100 83 100
04/05/15 100 100 100 100 100 90 99
05/05/15 100 100 100 52.0 100 93 100 CMS CE SAM tests failed at many sites due to an expired proxy.