Tier1 Operations Report 2014-02-26

RAL Tier1 Operations Report for 26th February 2014

Review of Issues during the week 19th to 26th February 2014.
  • Problems with the CEs overnight Thursday/Friday (20/21 Feb) were traced to a problem on one of the site BDIIs, which was restarted.
Resolved Disk Server Issues
  • None
Current operational status and issues
  • The intermittent failures of Castor access via the SRM (as seen in the availability tests), reported last week, are still present and have been seen across multiple Castor instances. The Castor team are actively working on this and have been in contact with the Castor developers at CERN to try to find a solution; a number of approaches have been tried.
  • We are participating in an extensive FTS3 test with Atlas and CMS.
Ongoing Disk Server Issues
  • None
Notable Changes made this last week.
  • Two new FTS3 servers have been added into the configuration.
  • Disk servers from the 2013 procurement (not yet in production) are now connected using the new Tier1 mesh network.
Declared in the GOC DB
  • None
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
  • Move of the Tier1 to use the new site firewall - most likely on Monday 17th March. There will be some interruption to services.

Listing by category:

  • Databases:
    • Switch LFC/FTS/3D to new Database Infrastructure.
  • Castor:
    • Castor 2.1.14 testing is largely complete. We are starting to look at possible dates for rolling this out (probably around April).
  • Networking:
    • Implementation of new site firewall.
    • Update the core Tier1 network and change its connection to the site network and the OPN, including:
      • Install new Routing layer for Tier1 & change the way the Tier1 connects to the RAL network.
      • These changes will lead to the removal of the UKLight Router.
  • Fabric:
    • We are phasing out the use of the software server used by the small VOs.
    • Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC).
    • There will be circuit testing of the remaining (i.e. non-UPS) circuits in the machine room during 2014.
    • The floor in the machine room in the Atlas building is being replaced. Production services on hypervisors located there have been moved elsewhere.
Entries in GOC DB starting between the 19th and 26th February 2014.
Service Scheduled? Outage/At Risk Start End Duration Reason
arc-ce01.gridpp.rl.ac.uk, SCHEDULED WARNING 26/02/2014 10:00 26/02/2014 12:00 2 hours At Risk during software upgrade to version 13.11 / 4.0.0.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID Level Urgency State Creation Last Update VO Subject
101532 Green Less Urgent In Progress 2014-02-25 2014-02-25 Publishing default value for Max CPU Time
101079 Yellow Urgent In Progress 2014-02-09 2014-02-25 ARC CEs have VOViews with a default SE of "0"
101052 Yellow Urgent In Progress 2014-02-06 2014-02-14 Biomed Can't retrieve job result file from cream-ce02.gridpp.rl.ac.uk
100114 Red Less Urgent Waiting Reply 2014-01-08 2014-02-11 Jobs failing to get from RAL WMS to Imperial
99556 Red Very Urgent In Progress 2013-12-06 2014-02-13 NGI Argus requests for NGI_UK
98249 Red Urgent On Hold 2013-10-21 2014-01-29 SNO+ please configure cvmfs stratum-0 for SNO+ at RAL T1
97025 Red Less Urgent On Hold 2013-09-03 2014-02-25 Myproxy server certificate does not contain hostname
Availability Report
Day OPS Alice Atlas CMS LHCb Comment
19/02/14 100 100 100 91.9 100 CMS: Single SRM test failure ("Stager did not answer within the requested time.")
20/02/14 100 100 100 94.6 95.1 CMS: One SRM test failure: "User timeout"; LHCb - Site BDII problem (see below).
21/02/14 95.8 88.9 100 95.2 93.4 Failing CE tests overnight with 'no compatible resources'; traced to the problematic site BDII.
22/02/14 100 100 100 100 100
23/02/14 100 100 100 100 100
24/02/14 100 100 100 96.5 100 Job Submission problem for all three ARC CEs.
25/02/14 100 100 100 100 100