RAL Tier1 Operations Report for 10th June 2015
Review of Issues during the fortnight 27th May to 10th June 2015.
- There have been some severe problems with the CMS Castor instance that are triggered by particular high loads. The amount of CMS work has been throttled back to mitigate this.
- A memory fault in one of the hypervisors has led to a short planned outage of some services on Thursday (see GOC DB entries).
Resolved Disk Server Issues
- GDSS630 (AtlasDataDisk - D1T0) failed on Sunday morning, 31st May. (This server was, and remains, in read-only mode.) Following checks it was returned to service the following day (1st June).
Current operational status and issues
- We are running with a single router connecting the Tier1 network to the site network, rather than a resilient pair.
- Castor xroot performance problems seen by CMS - particularly very long file open times. The unbalanced datasets (which were appearing as hot files on two specific disk servers) have been redistributed. Other investigations are ongoing, although we have not seen the particular problems of late - possibly because the relevant CMS workflow is not running.
- The post mortem review of the network incident on the 8th April is being finalised.
- The intermittent, low-level, load-related packet loss over the OPN to CERN is still being tracked; an illustrative sketch of this kind of loss measurement follows below.
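A minimal sketch, assuming a standard Linux ping utility and a hypothetical endpoint name (the actual OPN monitoring setup is not described in this report):

```python
#!/usr/bin/env python3
# Illustrative sketch only: sample packet loss to a remote host via the
# system "ping" utility. The endpoint below is a hypothetical placeholder;
# this is not the actual OPN monitoring used at RAL.
import re
import subprocess

def packet_loss(host, count=100):
    """Return the percentage packet loss reported by ping, or None."""
    result = subprocess.run(
        ["ping", "-c", str(count), "-q", host],
        capture_output=True, text=True,
    )
    match = re.search(r"([\d.]+)% packet loss", result.stdout)
    return float(match.group(1)) if match else None

if __name__ == "__main__":
    loss = packet_loss("opn-gw.example.org")  # placeholder endpoint
    print("packet loss: {}%".format(loss))
```

Repeated samples of this kind, taken over time and compared against link load, are enough to expose an intermittent, load-correlated loss pattern such as the one described above.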
Ongoing Disk Server Issues
- None
Notable Changes made since the last meeting.
- On Thursday 28th May there was a network intervention: routing for some non-Tier1 services was separated off our network so that we can more easily investigate the router problems.
- Some changes to the FTS3 servers have been made at the request of the developers to fix a problem with the fts-bringonline process crashing frequently.
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
arc-ce05.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 10/06/2015 12:00 | 08/07/2015 12:00 | 28 days | This particular ARC CE will be decommissioned. (Four other ARC CEs remain in production.) |
lcgfts3.gridpp.rl.ac.uk | SCHEDULED | WARNING | 04/06/2015 12:00 | 04/06/2015 13:00 | 1 hour | Intervention on two (out of eight) nodes in the FTS cluster. |
lcgwms06.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 04/06/2015 12:00 | 04/06/2015 13:00 | 1 hour | Outage for hardware maintenance. |
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
- A short (fifteen minute) test needs to take place to confirm we have successfully separated the non-Tier1 traffic off the router pair. This will require a break in connectivity to the Tier1. Once this has been done, a longer (few-hour) intervention can be scheduled to investigate the router problems.
Listing by category:
- Databases:
- Switch LFC/3D to new Database Infrastructure.
- Update to Oracle 11.2.0.4. This will affect all services that use Oracle databases: Castor and Atlas Frontier (the LFC is already done).
- Castor:
- Update SRMs to new version (includes updating to SL6).
- Update the Oracle databases behind Castor to version 11.2.0.4. This will require some downtimes (see above).
- Update disk servers to SL6.
- Update to Castor version 2.1.15.
- Networking:
- Resolve problems with the primary Tier1 Router.
- Enable the RIP protocol for updating routing tables on the Tier1 routers. (Install patch to Router software).
- Increase bandwidth of the link from the Tier1 into the RAL internal site network to 40Gbit.
- Make routing changes to allow the removal of the UKLight Router.
- Cabling/switch changes to the network in the UPS room to improve resilience.
- Fabric:
- Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC).
Entries in GOC DB starting since the last report.
- None
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
114017 | Green | Very Urgent | In Progress | 2015-06-01 | 2015-06-02 | LHCb | Define LHCb tapesets for RAW and reconstruction data |
114010 | Green | Urgent | In Progress | 2015-06-01 | 2015-06-01 | CMS | T1_UK_RAL pileup reading errors |
114004 | Green | Less Urgent | In Progress | 2015-05-31 | 2015-06-01 | Atlas | RAL-LCG2: bring-online timeout has been exceeded |
113914 | Green | Urgent | In Progress | 2015-05-26 | 2015-06-02 | SNO+ | VOInfoPath undefined |
113910 | Green | Less Urgent | In Progress | 2015-05-26 | 2015-05-26 | SNO+ | RAL data staging |
113836 | Green | Less Urgent | In Progress | 2015-05-20 | 2015-05-20 | | GLUE 1 vs GLUE 2 mismatch in published queues |
112721 | Yellow | Less Urgent | In Progress | 2015-03-28 | 2015-05-14 | Atlas | RAL-LCG2: SOURCE Failed to get source file size |
109694 | Red | Urgent | In Progress | 2014-11-03 | 2015-05-19 | SNO+ | gfal-copy failing for files at RAL |
108944 | Red | Less Urgent | In Progress | 2014-10-01 | 2015-05-26 | CMS | AAA access test failing at T1_UK_RAL |
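As background to tickets 114004 and 109694 above (a bring-online timeout and failing gfal-copy transfers respectively), the sketch below shows what such a transfer looks like through the gfal2 Python bindings. This is a minimal, hedged illustration, not the VOs' actual workflow: both URLs are hypothetical placeholders, and it assumes the gfal2-python package and a valid grid proxy.

```python
#!/usr/bin/env python3
# Minimal sketch of a gfal2 copy of the kind failing in GGUS 109694.
# Assumes the gfal2 Python bindings and a valid grid proxy; both URLs
# are hypothetical placeholders, not real RAL paths.
import gfal2

ctx = gfal2.creat_context()         # "creat" is the library's own spelling
params = ctx.transfer_parameters()
params.overwrite = True             # replace any existing destination file
params.timeout = 300                # give up after five minutes

try:
    ctx.filecopy(
        params,
        "srm://srm-example.gridpp.rl.ac.uk/castor/example/file",  # placeholder
        "file:///tmp/example-file",
    )
except gfal2.GError as err:
    # A reproducible failure here is what tickets like 109694 report.
    print("copy failed: {}".format(err))
```

When the source file sits on tape, a transfer like this first has to stage (bring online) the file from the tape system, which is where a timeout of the kind reported in ticket 114004 arises.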
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|
27/05/15 | 100 | 100 | 100 | 100 | 100 | 100 | n/a | |
28/05/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
29/05/15 | 100 | 100 | 100 | 100 | 96.0 | 100 | 100 | Single SRM test failure |
30/05/15 | 100 | 100 | 100 | 88.0 | 100 | 100 | 100 | Intermittent SRM test failures. Instance under load. |
31/05/15 | 100 | 100 | 100 | 83.0 | 100 | 100 | 29 | Intermittent SRM test failures. Instance under load. |
01/06/15 | 100 | 100 | 100 | 83.0 | 100 | 100 | 89 | Intermittent SRM test failures. Instance under load. |
02/06/15 | 100 | 100 | 100 | 96 | 100 | 100 | 100 | Single SRM Test failure |
03/06/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
04/06/15 | 100 | 100 | 100 | 100 | 100 | 98 | 100 | |
05/06/15 | 49 | 92 | 100 | 92 | 94 | 100 | 100 | For LHC VOs: Argus problem - job submissions failed for a while; For OPS: Central monitoring problem. |
06/06/15 | 0 | 100 | 100 | 96 | 100 | 100 | 95 | CMS: Single SRM Put Test failure; OPS: Central monitoring problem. |
07/06/15 | 0 | 100 | 100 | 100 | 100 | 100 | 100 | Central monitoring problem. |
08/06/15 | 66 | 100 | 100 | 100 | 100 | 100 | 100 | Central monitoring problem. |
09/06/15 | 100 | 100 | 100 | 100 | 100 | 100 | n/a | |