Revision as of 10:48, 27 May 2015

RAL Tier1 Operations Report for 27th May 2015

Review of Issues during the week 20th to 27th May 2015.
  • On Wednesday 20th May the rack containing the standby database systems for Castor was moved to the Atlas building. There were some difficulties in getting the systems to work correctly following the move; this was fixed during the following day. During this period we were running without Oracle Dataguard copying the data to the standby rack, and without backups to tape. There was a possibility that we would stop Castor if the problems became further extended; such an outage was added to the GOC DB but removed before it became active. Castor was in a 'warning' state for some time.
  • On Thursday morning it was found that one of the pair of LFC front end systems (lcglfc01) was not responding. Fixed by a restart.
Resolved Disk Server Issues
  • GDSS649 (LHCbUser - D1T0) failed on Saturday 16th May when the system hung up. Following tests a faulty drive was replaced. It was returned to service on Monday morning (18th May).
Current operational status and issues
  • We are running with a single router connecting the Tier1 network to the site network, rather than a resilient pair.
  • Castor xroot performance problems seen by CMS - particularly very long file open times. In order to eliminate possible causes of the problems, the CMS AAA xroot redirector was stopped for a while, although it has since been restarted. At least part of the problem is caused by two 'hot' disk servers, and we are in the process of re-distributing the files from the frequently accessed datasets (the job to do this is currently running).
  • The post-mortem review of the network incident on the 8th April is being finalised.
  • The intermittent, low-level, load-related packet loss over the OPN to CERN is still being tracked.
Ongoing Disk Server Issues
  • None
Notable Changes made since the last meeting.
  • The Castor tape servers are being updated to SL6.
  • Last week a problem with the configuration of a batch of new worker nodes was reported. Most of this batch have now been reset to the usual worker node configuration.
Declared in the GOC DB
Service Scheduled? Outage/At Risk Start End Duration Reason
arc-ce05.gridpp.rl.ac.uk SCHEDULED OUTAGE 10/06/2015 12:00 08/07/2015 12:00 28 days This particular ARC CE will be decommissioned. (Four other ARC CEs remain in production).
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
  • (Ongoing at time of meeting). The Castor standby Oracle database system is being moved to the Atlas building. This is expected to take most of the working day, during which time we are running Castor with a reduced level of backup.
  • On Thursday morning, 28th May, there will be a short network intervention to separate some non-Tier1 services off our network so we can more easily investigate the router problems.
  • Turn off ARC-CE05. This will leave four ARC CEs (as planned). This fifth CE was set up as a temporary workaround for a specific problem and is no longer required.
  • Progressive upgrading of Castor Tape Servers to SL6.
  • Upgrade Tier1 Castor Oracle Databases to version 11.2.0.4. Proposed timetable (delayed by one week since last week's report):
    • Week 26-28 May: Install software on Database Systems (some 'At Risks')
    • Tuesday 2nd June: Switchover and upgrade Neptune (ATLAS and GEN downtime - likely to be around one working day)
    • Monday 8th June: Upgrade Neptune's standby (ATLAS and GEN at risk)
    • Wednesday 10th June: Switchover Neptune and Pluto, and upgrade Pluto (All Tier1 Castor downtime - likely to be around one working day)
    • Tuesday 16th June: Upgrade Pluto's standby (Tier1 at risk)
    • Thursday 18th June: Switchover Pluto (All Tier1 Castor downtime - less than a working day)

Listing by category:

  • Databases:
    • Switch LFC/3D to new Database Infrastructure.
    • Update to Oracle 11.2.0.4. This will affect all services that use Oracle databases (Castor, LFC and Atlas Frontier)
  • Castor:
    • Update SRMs to new version (includes updating to SL6).
    • Update the Oracle databases behind Castor to version 11.2.0.4. Will require some downtimes (see above).
    • Update disk servers to SL6.
    • Update to Castor version 2.1.15.
  • Networking:
    • Resolve problems with primary Tier1 Router
    • Enable the RIP protocol for updating routing tables on the Tier1 routers. (Install patch to Router software).
    • Increase bandwidth of the link from the Tier1 into the RAL internal site network to 40Gbit.
    • Make routing changes to allow the removal of the UKLight Router.
    • Cabling/switch changes to the network in the UPS room to improve resilience.
  • Fabric
    • Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC)
Entries in GOC DB starting since the last report.
Service Scheduled? Outage/At Risk Start End Duration Reason
All Castor (All SRMs endpoints). UNSCHEDULED WARNING 21/05/2015 10:00 22/05/2015 11:11 1 day, 1 hour and 11 minutes Warning on Castor Service during ongoing investigation into a problem.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID Level Urgency State Creation Last Update VO Subject
113910 Green Less Urgent In Progress 2015-05-26 2015-05-26 SNO+ RAL data staging
113836 Green Less Urgent In Progress 2015-05-20 2015-05-20 GLUE 1 vs GLUE 2 mismatch in published queues
112721 Yellow Less Urgent In Progress 2015-03-28 2015-05-14 Atlas RAL-LCG2: SOURCE Failed to get source file size
109694 Red Urgent In Progress 2014-11-03 2015-05-19 SNO+ gfal-copy failing for files at RAL
108944 Red Less Urgent In Progress 2014-10-01 2015-05-26 CMS AAA access test failing at T1_UK_RAL
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud

Day OPS Alice Atlas CMS LHCb Atlas HC CMS HC Comment
20/05/15 100 100 98.0 100 100 100 100 Single SRM Put Test error: Error reading token data header: Connection closed
21/05/15 100 100 100 100 96.0 100 100 Single SRM test failure: [SRM_INVALID_PATH] No such file or directory
22/05/15 100 100 100 100 100 100 96
23/05/15 100 100 100 100 100 100 99
24/05/15 100 100 100 100 96.0 100 100 Single SRM test failure: [SRM_INVALID_PATH] No such file or directory
25/05/15 100 100 100 100 100 100 100
26/05/15 100 100 100 100 100 100 99