RAL Tier1 Operations Report for 3rd June 2015

Review of Issues during the week 27th May to 3rd June 2015.
  • There have been some severe problems with the CMS Castor instance that are triggered by particular high loads. The amount of CMS work has been throttled back to mitigate this.
  • A memory fault in one of the hypervisors has led to a short planned outage of some services on Thursday (see GOC DB entries).
Resolved Disk Server Issues
  • GDSS630 (AtlasDataDisk - D1T0) failed on Sunday morning, 31st May. (This server was, and remains, in 'readonly' mode.) Following checks it was returned to service the following day (1st June).
Current operational status and issues
  • We are running with a single router connecting the Tier1 network to the site network, rather than a resilient pair.
  • Castor xroot performance problems seen by CMS - particularly in very long file open times. The unbalanced data sets (which were appearing as hot files on two specific disk servers) have been redistributed. Other investigations are ongoing, although we have not seen the particular problems of late - possibly the relevant CMS workflow is not running. (A simple client-side probe is sketched after this list.)
  • The post mortem review of the network incident on the 8th April is being finalised.
  • The intermittent, low-level, load-related packet loss over the OPN to CERN is still being tracked.
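The xroot file-open latency mentioned above can be probed from the client side. The following is a minimal sketch, not part of the report: the endpoint name and file paths are hypothetical placeholders, and it assumes the standard XRootD command-line client (xrdfs) is installed. A stat call only exercises the namespace lookup, so a slow response here is a rough proxy for the full open path, which also involves redirection to a disk server.

#!/usr/bin/env python3
# Minimal sketch: time namespace lookups against a Castor xroot endpoint.
# The endpoint and file paths below are hypothetical placeholders.
import subprocess
import time

ENDPOINT = "root://castorcms.example.rl.ac.uk"   # placeholder endpoint
TEST_FILES = ["/castor/cms/test/file1",          # placeholder paths
              "/castor/cms/test/file2"]

for path in TEST_FILES:
    start = time.perf_counter()
    # 'xrdfs ... stat' forces a namespace lookup; a slow response is a
    # rough proxy for the long file-open times reported by CMS.
    result = subprocess.run(["xrdfs", ENDPOINT, "stat", path],
                            capture_output=True, text=True, timeout=300)
    elapsed = time.perf_counter() - start
    print(f"{path}: {'ok' if result.returncode == 0 else 'FAILED'} "
          f"in {elapsed:.2f}s")

Running this against a handful of the previously 'hot' files before and after a redistribution would show whether open times have improved.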
Ongoing Disk Server Issues
  • None
Notable Changes made since the last meeting.
  • On Thursday 28th May there was a network intervention in which routing for some non-Tier1 services was separated off our network, so that we can more easily investigate the router problems.
Declared in the GOC DB
Service Scheduled? Outage/At Risk Start End Duration Reason
arc-ce05.gridpp.rl.ac.uk SCHEDULED OUTAGE 10/06/2015 12:00 08/07/2015 12:00 28 days This particular ARC CE will be decommissioned. (Four other ARC CEs remain in production).
lcgfts3.gridpp.rl.ac.uk SCHEDULED WARNING 04/06/2015 12:00 04/06/2015 13:00 1 hour Intervention on two (out of eight) nodes in the FTS cluster.
lcgwms06.gridpp.rl.ac.uk SCHEDULED OUTAGE 04/06/2015 12:00 04/06/2015 13:00 1 hour Outage for hardware maintenance.
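As a quick sanity check, the declared 28-day duration of the arc-ce05 outage can be verified directly from the start and end timestamps quoted in the table; a minimal sketch:

#!/usr/bin/env python3
# Minimal sketch: confirm the declared duration of the arc-ce05 outage
# from the timestamps in the GOC DB table above.
from datetime import datetime

fmt = "%d/%m/%Y %H:%M"
start = datetime.strptime("10/06/2015 12:00", fmt)
end = datetime.strptime("08/07/2015 12:00", fmt)
print(end - start)   # prints '28 days, 0:00:00', matching the table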
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
  • A short (fifteen-minute) test needs to take place to confirm we have successfully separated the non-Tier1 traffic off the router pair. This will require a break in connectivity to the Tier1; a simple reachability logger for confirming the break is sketched below. Once this has been done a longer (few-hour) intervention can be scheduled to investigate the router problems.
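For the fifteen-minute test, a per-second reachability log makes it easy to confirm afterwards exactly when connectivity was lost and restored. A minimal sketch, assuming a Linux ping and with a placeholder target host (substitute whatever Tier1 endpoint is being watched):

#!/usr/bin/env python3
# Minimal sketch: log reachability once per second around a planned break.
# The target host is a placeholder, not taken from the report.
import subprocess
import time
from datetime import datetime

TARGET = "lcgwww.gridpp.rl.ac.uk"   # placeholder target

while True:
    # One ICMP echo with a 1-second deadline; returncode 0 means a reply.
    rc = subprocess.run(["ping", "-c", "1", "-W", "1", TARGET],
                        stdout=subprocess.DEVNULL,
                        stderr=subprocess.DEVNULL).returncode
    stamp = datetime.now().isoformat(timespec="seconds")
    print(f"{stamp} {'up' if rc == 0 else 'DOWN'}", flush=True)
    time.sleep(1)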

Listing by category:

  • Databases:
    • Switch LFC/3D to new Database Infrastructure.
    • Update to Oracle 11.2.0.4. This will affect all services that use Oracle databases: Castor, Atlas Frontier (LFC done)
  • Castor:
    • Update SRMs to new version (includes updating to SL6).
    • Update the Oracle databases behind Castor to version 11.2.0.4. This will require some downtimes (see above).
    • Update disk servers to SL6.
    • Update to Castor version 2.1.15.
  • Networking:
    • Resolve problems with the primary Tier1 router.
    • Enable the RIP protocol for updating routing tables on the Tier1 routers. (Install patch to Router software).
    • Increase bandwidth of the link from the Tier1 into the RAL internal site network to 40Gbit.
    • Make routing changes to allow the removal of the UKLight Router.
    • Cabling/switch changes to the network in the UPS room to improve resilience.
  • Fabric:
    • Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC)
Entries in GOC DB starting since the last report.
  • None
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID Level Urgency State Creation Last Update VO Subject
113910 Green Less Urgent In Progress 2015-05-26 2015-05-26 SNO+ RAL data staging
113836 Green Less Urgent In Progress 2015-05-20 2015-05-20 GLUE 1 vs GLUE 2 mismatch in published queues
112721 Yellow Less Urgent In Progress 2015-03-28 2015-05-14 Atlas RAL-LCG2: SOURCE Failed to get source file size
109694 Red Urgent In Progress 2014-11-03 2015-05-19 SNO+ gfal-copy failing for files at RAL
108944 Red Less Urgent In Progress 2014-10-01 2015-05-26 CMS AAA access test failing at T1_UK_RAL
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud

Day OPS Alice Atlas CMS LHCb Atlas HC CMS HC Comment
27/05/15 100 100 100 100 100 100 n/a
28/05/15 100 100 100 100 100 100 100
29/05/15 100 100 100 100 96.0 100 100 Single SRM test failure
30/05/15 100 100 100 88.0 100 100 100 Intermittent SRM test failures. Instance under load.
31/05/15 100 100 100 83.0 100 100 29 Intermittent SRM test failures. Instance under load.
01/06/15 100 100 100 83.0 100 100 89 Intermittent SRM test failures. Instance under load.
02/06/15 100 100 100 100 100 100 100
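To put the lower figures in context, a daily availability percentage can be converted into an equivalent amount of unavailable time, assuming the percentage reflects the fraction of the 24-hour day for which tests passed (a simplification; the actual SAM calculation weights individual test results). A minimal sketch using the lower daily values from the table above:

#!/usr/bin/env python3
# Minimal sketch: equivalent downtime implied by a daily availability
# figure, using the lower daily values from the table above.
for availability in (96.0, 88.0, 83.0):
    lost_hours = 24 * (100 - availability) / 100
    print(f"{availability:.1f}% available ~= {lost_hours:.1f} h of failures")

So the 83.0% CMS days correspond to roughly four hours of failed SRM tests each.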