RAL Tier1 Operations Report for 10th June 2015
Review of Issues during the fortnight 27th May to 10th June 2015.
- There have been some severe problems with the CMS Castor instance that are triggered by particular high loads. The amount of CMS work has been throttled back to mitigate this.
- A memory fault in one of the hypervisors led to a short planned outage of some services on Thursday 4th June (See GocDB entries).
- A problem with the Argus server affected batch job submissions for a while during the early evening of Friday 5th June. This can be seen in the availabilities of the LHC VOs on that day.
- A central monitoring problem affected the OPS CE tests for many sites over the weekend 5-8 June.
Resolved Disk Server Issues
- GDSS630 (AtlasDataDisk - D1T0) failed on Sunday morning, 31st May. (The server was, and remains, in 'readonly' mode.) Following checks, it was returned to service the following day (1st June).
Current operational status and issues
- We are running with a single router connecting the Tier1 network to the site network, rather than a resilient pair.
- Castor xroot performance problems seen by CMS - particularly very long file open times. The unbalanced data sets (which were appearing as hot files on two specific disk servers) have been redistributed. Other investigations are ongoing, although we have not seen these particular problems of late - possibly because the relevant CMS workflow is not currently running.
- The post mortem review of the network incident on the 8th April is being finalised.
- The intermittent, low-level, load-related packet loss over the OPN to CERN is still being tracked.
Ongoing Disk Server Issues
- None
Notable Changes made since the last meeting.
- On Thursday 28th May there was an intervention that separated the network routing for some non-Tier1 services away from the Tier1 router pair. This is so we can more easily investigate the router problems.
- Some changes to the FTS3 servers have been made at the request of developers to fix a problem with the fts-bringonline process crashing frequently.
- Various changes have been made to setup the DiRAC VO.
- CASTOR tape pool and service class set up, local gridmap file, tested and working
- Will start testing transfers this week (today?) using "manual" GridFTP; a sketch of such a test copy is given after this list.
- Local user accounts (UI) still being set up
- Awaiting SRM alias and (then) BDII/CIP info, also setting up LFC and FTS
- The second batch of 2014 CPU purchases has been brought online.
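For illustration only, the sketch below shows the kind of manual test copy referred to in the DiRAC item above, written with the gfal2 Python bindings rather than the globus-url-copy command line. The GridFTP endpoint, Castor path, and timeout value are hypothetical placeholders and would need to be replaced with the real DiRAC service class details once the SRM alias and tape pool are confirmed.

```python
import gfal2

# Hypothetical source file and destination URL for a DiRAC test transfer.
src = "file:///tmp/dirac-transfer-test.dat"
dst = "gsiftp://gridftp.example.rl.ac.uk/castor/ads.rl.ac.uk/prod/dirac/test.dat"

ctx = gfal2.creat_context()          # create a GFAL2 context (uses the grid proxy in the environment)
params = ctx.transfer_parameters()   # default copy parameters
params.overwrite = True              # replace the destination if it already exists
params.checksum_check = True         # verify checksums after the copy
params.timeout = 300                 # give up after 5 minutes (placeholder value)

ctx.filecopy(params, src, dst)       # perform the upload to the tape-backed service class
print("copy completed")
```

A valid VO proxy (voms-proxy-init for the DiRAC VO) would be needed before running a test like this; the equivalent command-line check would simply point globus-url-copy or gfal-copy at the same source and destination.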
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
Whole site | SCHEDULED | OUTAGE | 17/06/2015 13:45 | 17/06/2015 15:15 | 1 hour and 30 minutes | During this time window there will be a 15 minute disconnection of the RAL-LCG2 site from the network. This will take place sometime between 13:00 - 13:30 UTC. For this 15-minute period all services will be unavailable. The Castor storage system will be stopped at 12:45 UTC before the network break, and restarted once the 15-minute break is over. The declared time window allows time for Castor and other services to be checked out after the network break. The network outage is for a test of a revised network configuration. |
arc-ce05.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 10/06/2015 12:00 | 08/07/2015 12:00 | 28 days | This particular ARC CE will be decommissioned. (Four other ARC CEs remain in production). |
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
- A short (fifteen minute) test needs to take place to confirm we have successfully separated the non-Tier1 traffic off the router pair. This will require a break in connectivity to the Tier1. (We will also make use of this short outage to stop a small number of additional services while the hypervisor hosting them is physically moved.) Once this test has been done a longer (few hour) intervention can be scheduled to investigate the router problems.
Listing by category:
- Databases:
- Switch LFC/3D to new Database Infrastructure.
- Update to Oracle 11.2.0.4. This will affect all services that use Oracle databases: Castor, Atlas Frontier (LFC done)
- Castor:
- Enable disk server rebalancing.
- Update SRMs to new version (includes updating to SL6).
- Update the Oracle databases behind Castor to version 11.2.0.4. This will require some downtimes (see above).
- Update disk servers to SL6.
- Update to Castor version 2.1.15.
- Networking:
- Resolve problems with primary Tier1 Router
- Enable the RIP protocol for updating routing tables on the Tier1 routers. (Install patch to Router software).
- Increase bandwidth of the link from the Tier1 into the RAL internal site network to 40Gbit.
- Make routing changes to allow the removal of the UKLight Router.
- Cabling/switch changes to the network in the UPS room to improve resilience.
- Fabric:
- Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC)
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
arc-ce05.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 10/06/2015 12:00 | 08/07/2015 12:00 | 28 days | This particular ARC CE will be decommissioned. (Four other ARC CEs remain in production). |
lcgwms06.gridpp.rl.ac.uk, | SCHEDULED | OUTAGE | 04/06/2015 12:00 | 04/06/2015 13:00 | 1 hour | Outage for hardware maintenance |
lcgfts3.gridpp.rl.ac.uk, | SCHEDULED | WARNING | 04/06/2015 12:00 | 04/06/2015 13:00 | 1 hour | Intervention on two (out of eight) nodes in the FTS cluster. |
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
114017 | Green | Very Urgent | Waiting Reply | 2015-06-01 | 2015-06-04 | LHCb | Define LHCb tapesets for RAW and reconstruction data |
113914 | Green | Urgent | Waiting Reply | 2015-05-26 | 2015-06-09 | SNO+ | VOInfoPath undefined |
113910 | Green | Less Urgent | In Progress | 2015-05-26 | 2015-05-28 | SNO+ | RAL data staging |
113836 | Green | Less Urgent | In Progress | 2015-05-20 | 2015-06-08 | | GLUE 1 vs GLUE 2 mismatch in published queues |
112721 | Yellow | Less Urgent | Waiting Reply | 2015-03-28 | 2015-06-08 | Atlas | RAL-LCG2: SOURCE Failed to get source file size |
109694 | Red | Urgent | In Progress | 2014-11-03 | 2015-05-19 | SNO+ | gfal-copy failing for files at RAL |
108944 | Red | Less Urgent | In Progress | 2014-10-01 | 2015-05-26 | CMS | AAA access test failing at T1_UK_RAL |
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|
27/05/15 | 100 | 100 | 100 | 100 | 100 | 100 | n/a | |
28/05/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
29/05/15 | 100 | 100 | 100 | 100 | 96.0 | 100 | 100 | Single SRM test failure |
30/05/15 | 100 | 100 | 100 | 88.0 | 100 | 100 | 100 | Intermittent SRM test failures. Instance under load. |
31/05/15 | 100 | 100 | 100 | 83.0 | 100 | 100 | 29 | Intermittent SRM test failures. Instance under load. |
01/06/15 | 100 | 100 | 100 | 83.0 | 100 | 100 | 89 | Intermittent SRM test failures. Instance under load. |
02/06/15 | 100 | 100 | 100 | 96 | 100 | 100 | 100 | Single SRM Test failure |
03/06/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
04/06/15 | 100 | 100 | 100 | 100 | 100 | 98 | 100 | |
05/06/15 | 49 | 92 | 100 | 92 | 94 | 100 | 100 | For LHC VOs: Argus problem - job submissions failed for a while; For OPS: Central monitoring problem. |
06/06/15 | 0 | 100 | 100 | 96 | 100 | 100 | 95 | CMS: Single SRM Put Test failure; OPS: Central monitoring problem. |
07/06/15 | 0 | 100 | 100 | 100 | 100 | 100 | 100 | Central monitoring problem. |
08/06/15 | 66 | 100 | 100 | 100 | 100 | 100 | 100 | Central monitoring problem. |
09/06/15 | 100 | 100 | 100 | 100 | 100 | 100 | n/a | |