RAL Tier1 Operations Report for 27th May 2015
Review of Issues during the week 20th to 27th May 2015.
- On Wednesday 20th May the rack containing the standby database systems for Castor was moved to the Atlas building. There were some difficulties in getting the systems to work correctly following the move, which were fixed during the following day. During this period we were running without Oracle Data Guard copying the data to the standby rack and without backups to tape. There was a possibility that we would stop Castor if the problems became more protracted; such an outage was added to the GOC DB but was removed before it became active. Castor was in a 'warning' state for some time.
- On Thursday morning it was found that one of the pair of LFC front-end systems (lcglfc01) was not responding. This was fixed by a restart.
Resolved Disk Server Issues
- GDSS692 (CMSDisk - D1T0) was taken out of production for around seven hours during Friday (22nd May). It had a double disk failure and the rebuild of the first disk was proceeding very slowly. The server was removed from production to allow the rebuild to complete.
- GDSS682 (AtlasDataDisk - D1T0) was taken out of production on Monday (25th May) following an error from the FSProbe test. Following checks it was returned to service the following day.
Current operational status and issues
- We are running with a single router connecting the Tier1 network to the site network, rather than a resilient pair.
- Castor xroot performance problems seen by CMS - particularly in very long file open times. The unbalanced data sets (which were appearing as hot files on two specific disk servers) have been redistributed. Other investigations are ongoing, although we have not seen the particular problems of late - possibly because the relevant CMS workflow is not running. (A sketch of how such file-open times can be spot-checked is given after this list.)
- The post mortem review of the network incident on the 8th April is being finalised.
- The intermittent, low-level, load-related packet loss over the OPN to CERN is still being tracked.
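The file-open latency mentioned in the CMS item above can be spot-checked with a few timed xroot opens. The sketch below is illustrative only: it assumes the pyxrootd bindings are available, and the redirector hostname and file paths are placeholders rather than actual RAL endpoints or CMS datasets.

```python
# Illustrative sketch only: time the xroot "open" step for a few test files.
# Assumes the pyxrootd bindings are installed; the redirector and the file
# paths below are placeholders, not real RAL endpoints or CMS datasets.
import time

from XRootD import client
from XRootD.client.flags import OpenFlags

REDIRECTOR = "root://xrootd.example.rl.ac.uk"        # placeholder host
TEST_FILES = [
    "/castor/example.rl.ac.uk/cms/test/file1.root",  # placeholder paths
    "/castor/example.rl.ac.uk/cms/test/file2.root",
]

for path in TEST_FILES:
    url = REDIRECTOR + "/" + path
    f = client.File()
    start = time.time()
    status, _ = f.open(url, OpenFlags.READ)
    elapsed = time.time() - start
    if status.ok:
        print("%-50s opened in %6.1f s" % (path, elapsed))
        f.close()
    else:
        print("%-50s failed after %6.1f s: %s" % (path, elapsed, status.message))
```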
Ongoing Disk Server Issues
- None
Notable Changes made since the last meeting.
- All CASTOR Tier 1 Tape Servers that are not due for decommissioning are now running on SL6.
- Updated errata have been rolled out across the worker nodes.
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
arc-ce05.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 10/06/2015 12:00 | 08/07/2015 12:00 | 28 days | This particular ARC CE will be decommissioned. (Four other ARC CEs remain in production). |
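Entries like the one above can also be pulled programmatically. The sketch below is a minimal illustration only: it assumes the public GOC DB programmatic interface with a get_downtime method and the RAL-LCG2 site name, and the XML element names used are assumptions to verify rather than a tested recipe.

```python
# Illustrative only: list downtimes declared for the site via the GOC DB
# programmatic interface. The URL, method name, site name and XML element
# names are assumptions about the public PI, not taken from this report.
import xml.etree.ElementTree as ET

try:
    from urllib.request import urlopen   # Python 3
except ImportError:
    from urllib2 import urlopen          # Python 2

URL = "https://goc.egi.eu/gocdbpi/public/?method=get_downtime&topentity=RAL-LCG2"

tree = ET.parse(urlopen(URL))
for downtime in tree.getroot().findall("DOWNTIME"):
    severity = downtime.findtext("SEVERITY", default="?")
    description = downtime.findtext("DESCRIPTION", default="?")
    print("%-10s %s" % (severity, description))
```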
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
- (Ongoing at time of meeting). The Castor standby Oracle database system is being moved to the Atlas building. This is expected to take most of the working day during which time we are running Castor with a reduced level of backup.
- On Thursday morning, 28th May, there will be a short network intervention to separate some non-Tier1 services off our network so we can more easily investigate the router problems.
- Turn off ARC-CE05. This will leave four ARC CEs (as planned). This fifth CE was set up as a temporary workaround for a specific problem and is no longer required.
- Progressive upgrading of Castor Tape Servers to SL6.
- Upgrade Tier1 Castor Oracle Databases to version 11.2.0.4. Proposed timetable (delayed by one week since last week's report):
- Week 26-28 May: Install software on Database Systems (some 'At Risks')
- Tuesday 2nd June: Switchover and upgrade Neptune (ATLAS and GEN downtime - likely to be around one working day)
- Monday 8th June: Upgrade Neptune's standby (ATLAS and GEN at risk)
- Wednesday 10th June: Switchover Neptune and Pluto, and upgrade Pluto (All Tier1 Castor downtime - likely to be around one working day)
- Tuesday 16th June: Upgrade Pluto's standby (Tier1 at risk)
- Thursday 18th June: Switchover Pluto (All Tier1 Castor downtime - less than a working day)
Listing by category:
- Databases:
- Switch LFC/3D to new Database Infrastructure.
- Update to Oracle 11.2.0.4. This will affect all services that use Oracle databases (Castor, LFC and Atlas Frontier)
- Castor:
- Update SRMs to new version (includes updating to SL6).
- Update the Oracle databases behind Castor to version 11.2.0.4. This will require some downtimes (see above).
- Update disk servers to SL6.
- Update to Castor version 2.1.15.
- Networking:
- Resolve problems with the primary Tier1 Router.
- Enable the RIP protocol for updating routing tables on the Tier1 routers. (Install patch to Router software).
- Increase bandwidth of the link from the Tier1 into the RAL internal site network to 40Gbit.
- Make routing changes to allow the removal of the UKLight Router.
- Cabling/switch changes to the network in the UPS room to improve resilience.
- Fabric:
- Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC)
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
All Castor (all SRM endpoints). | UNSCHEDULED | WARNING | 21/05/2015 10:00 | 22/05/2015 11:11 | 1 day, 1 hour and 11 minutes | Warning on Castor Service during ongoing investigation into a problem. |
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
113910 | Green | Less Urgent | In Progress | 2015-05-26 | 2015-05-26 | SNO+ | RAL data staging |
113836 | Green | Less Urgent | In Progress | 2015-05-20 | 2015-05-20 | | GLUE 1 vs GLUE 2 mismatch in published queues |
112721 | Yellow | Less Urgent | In Progress | 2015-03-28 | 2015-05-14 | Atlas | RAL-LCG2: SOURCE Failed to get source file size |
109694 | Red | Urgent | In Progress | 2014-11-03 | 2015-05-19 | SNO+ | gfal-copy failing for files at RAL |
108944 | Red | Less Urgent | In Progress | 2014-10-01 | 2015-05-26 | CMS | AAA access test failing at T1_UK_RAL |
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|
20/05/15 | 100 | 100 | 98.0 | 100 | 100 | 100 | 100 | Single SRM Put Test error: Error reading token data header: Connection closed |
21/05/15 | 100 | 100 | 100 | 100 | 96.0 | 100 | 100 | Single SRM test failure: [SRM_INVALID_PATH] No such file or directory |
22/05/15 | 100 | 100 | 100 | 100 | 100 | 100 | 96 | |
23/05/15 | 100 | 100 | 100 | 100 | 100 | 100 | 99 | |
24/05/15 | 100 | 100 | 100 | 100 | 96.0 | 100 | 100 | Single SRM test failure: [SRM_INVALID_PATH] No such file or directory |
25/05/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
26/05/15 | 100 | 100 | 100 | 100 | 100 | 100 | 99 |