
Revision as of 10:17, 29 July 2015

RAL Tier1 Operations Report for 29th July 2015

Review of Issues during the week 22nd to 29th July 2015.
  • On Friday (17th July) there was a short (roughly ten-minute) break in network connectivity to much of the RAL site when a core router rebooted.
  • On Monday (20th July) there was a problem with very high load on the myproxy server; there had also been a similar short-lived problem the day before. This was traced to the activity of a single user, and temporary rate limiting of requests has been put in place to protect the server.
  • A race condition has been uncovered in Castor whereby some stored files do not have the correct on-disk location recorded in the database. Investigations show this affects very few files (10 files for Atlas in the last 200 days), and a method of finding them has been established so that they can be patched up.
  • The deployment of additional disk servers for CMS & LHCb is underway. It had been hoped these would enter service this week, but this has been delayed by around a week.
  • The Castor re-balancer has been stopped for a while: for a few files it was creating multiple bad copies. The VOs do not see this problem (Castor will still use the original good copy), but it needs to be resolved.
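The report does not describe how the myproxy rate limiting mentioned above was implemented; a per-user token bucket is one common way to achieve it. A minimal sketch follows (the class, parameter values, and user-identifier format are illustrative assumptions, not taken from the actual deployment):

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: on average at most `rate` requests per
    second are allowed, with bursts of up to `capacity` requests."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # start with a full bucket
        self.last = time.monotonic()

    def allow(self):
        # Refill tokens in proportion to the time elapsed, then try to
        # spend one token for this request.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per user, keyed on an identifier such as the certificate DN
# (illustrative; the real service may key on something else entirely).
buckets = {}

def allow_request(user_id, rate=5, capacity=10):
    """Look up (or create) the bucket for this user and test the request."""
    bucket = buckets.setdefault(user_id, TokenBucket(rate, capacity))
    return bucket.allow()
```

A single heavy user then exhausts only their own bucket and is throttled, while other users' requests continue to be served.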
Resolved Disk Server Issues
  • GDSS657 (lhcbRawRdst, D0T1) which had failed on Saturday (11th July) was returned to service around midday on Thursday (16th July).
  • GDSS615 (AliceDisk - D1T0) was out of service for two hours on Thursday 16th July while the battery on the RAID card was replaced.
Current operational status and issues
  • We are running with a single router connecting the Tier1 network to the site network, rather than a resilient pair.
  • The post-mortem review of the network incident on the 8th April is being finalised.
  • The intermittent, low-level, load-related packet loss over the OPN to CERN is still being tracked.
  • There are some ongoing issues for CMS: a problem with the Xroot (AAA) redirection accessing Castor; slow file-open times using Xroot; and poor batch-job efficiencies.
Ongoing Disk Server Issues
  • None
Notable Changes made since the last meeting.
  • The test of the updated worker node configuration (with grid middleware delivered via CVMFS) continues on one whole batch of worker nodes.
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
Whole site | SCHEDULED | OUTAGE | 04/08/2015 08:30 | 04/08/2015 15:00 | 6 hours and 30 minutes | Site outage during investigation of problem with network router.
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
  • Vendor intervention on Tier1 Router scheduled for Tuesday 4th August.

Listing by category:

  • Databases:
    • Switch LFC/3D to new Database Infrastructure.
    • Update to Oracle 11.2.0.4. This will affect all services that use Oracle databases: Castor, Atlas Frontier (LFC done)
  • Castor:
    • Update SRMs to new version (includes updating to SL6).
    • Update the Oracle databases behind Castor to version 11.2.0.4. This will require some downtimes (see above).
    • Update disk servers to SL6.
    • Update to Castor version 2.1.15.
  • Networking:
    • Resolve problems with primary Tier1 Router. Need to schedule a roughly half-day outage for the vendor to carry out investigations.
    • Enable the RIP protocol for updating routing tables on the Tier1 routers. (Install patch to Router software).
    • Increase bandwidth of the link from the Tier1 into the RAL internal site network to 40Gbit.
    • Make routing changes to allow the removal of the UKLight Router.
    • Cabling/switch changes to the network in the UPS room to improve resilience.
  • Fabric
    • Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC)
Entries in GOC DB starting since the last report.
  • None
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
115165 | Green | Less Urgent | In Progress | 2015-07-21 | 2015-07-21 | SNO+ | File status
114786 | Green | Less Urgent | On Hold | 2015-07-02 | 2015-07-07 | OPS | [Rod Dashboard] Issue detected : egi.eu.lowAvailability-/RAL-LCG2@RAL-LCG2_Availability
113836 | Yellow | Less Urgent | In Progress | 2015-05-20 | 2015-06-24 | | GLUE 1 vs GLUE 2 mismatch in published queues
108944 | Red | Less Urgent | In Progress | 2014-10-01 | 2015-07-17 | CMS | AAA access test failing at T1_UK_RAL
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud

Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment
22/07/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
23/07/15 | 100 | 100 | 94.0 | 100 | 100 | 98 | 100 | SRM test failures (error message: __main__.TimeoutException)
24/07/15 | 100 | 100 | 98.0 | 100 | 100 | 100 | 100 | Single SRM test failure (as above).
25/07/15 | 100 | 100 | 100 | 100 | 100 | 94 | 100 |
26/07/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
27/07/15 | 84.0 | 100 | 98.0 | 100 | 100 | 100 | 75 | OPS: failures of ARC CE tests for the OPS VO affected many sites. Atlas: as on 24/07.
28/07/15 | 70.5 | 100 | 93.0 | 94.0 | 94.0 | 96 | 100 | LHC VOs: problem with Tier1 network router affected tests. OPS: continuation of previous day's problem.
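For a rough week-at-a-glance figure, the daily percentages in a column of the table can be averaged. Note this is only an illustration: official WLCG availability is computed from the individual test results over the whole period, not by averaging daily percentages. Using the OPS column above:

```python
# Daily OPS availability (%), 22nd to 28th July, copied from the table above.
ops_daily = [100, 100, 100, 100, 100, 84.0, 70.5]

def weekly_mean(daily):
    """Arithmetic mean of a list of daily availability percentages."""
    return sum(daily) / len(daily)

print(round(weekly_mean(ops_daily), 1))  # 93.5
```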