Latest revision as of 12:17, 29 July 2015
RAL Tier1 Operations Report for 29th July 2015
Review of Issues during the week 22nd to 29th July 2015. |
- Early Friday morning (24th July) there was a warning level in the fire suppression system in the machine room. The cause seems to have been the failure of a PDU feeding Tier1 equipment. This had no operational impact.
- There have been two problems with network connectivity caused by a failure of our secondary (and currently only) router. The first occurred on Tuesday (28th July): the outage lasted around 45 minutes and it was found that the router had lost its routing information. The second occurred this morning (Wed. 29th July), when the router failed to restart. The outage began at around 11:15 and (at the time of editing this page) is still ongoing.
Resolved Disk Server Issues |
- None.
Current operational status and issues |
- We are running with a single router connecting the Tier1 network to the site network, rather than a resilient pair.
- The post mortem review of the network incident on the 8th April is being finalised.
- The intermittent, low-level, load-related packet loss over the OPN to CERN is still being tracked.
- There are some ongoing issues for CMS: a problem with Xroot (AAA) redirection accessing Castor; slow file open times using Xroot; and poor batch job efficiencies.
Ongoing Disk Server Issues |
- None.
Notable Changes made since the last meeting. |
- Three disk servers have been deployed into each of CMSDisk and LHCbDst service classes. A further disk server has been added to AtlasDataDisk to replace one that had been drained and withdrawn following a failure.
- The test of the updated worker node configuration (with grid middleware delivered via CVMFS) continues on one whole batch of Worker Nodes.
Declared in the GOC DB |
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
Whole site | SCHEDULED | OUTAGE | 04/08/2015 08:30 | 04/08/2015 15:00 | 6 hours and 30 minutes | Site Outage during investigation of problem with network router. |
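The declared duration in the entry above follows directly from the start and end timestamps; a minimal Python sketch of that cross-check (assuming the DD/MM/YYYY HH:MM format used in the table):

```python
from datetime import datetime

# Start and end of the declared whole-site outage, as listed in the GOC DB entry.
start = datetime.strptime("04/08/2015 08:30", "%d/%m/%Y %H:%M")
end = datetime.strptime("04/08/2015 15:00", "%d/%m/%Y %H:%M")

# Convert the difference into whole hours and remaining minutes.
total_seconds = int((end - start).total_seconds())
hours, remainder = divmod(total_seconds, 3600)
minutes = remainder // 60

print(f"{hours} hours and {minutes} minutes")  # 6 hours and 30 minutes
```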
Advanced warning for other interventions |
The following items are being discussed and are still to be formally scheduled and announced. |
- Vendor intervention on Tier1 Router scheduled for Tuesday 4th August.
Listing by category:
- Databases:
- Switch LFC/3D to new Database Infrastructure.
- Update to Oracle 11.2.0.4. This will affect all services that use Oracle databases: Castor and Atlas Frontier (LFC already done).
- Castor:
- Update SRMs to new version (includes updating to SL6).
- Update the Oracle databases behind Castor to version 11.2.0.4. This will require some downtimes (see above).
- Update disk servers to SL6.
- Update to Castor version 2.1.15.
- Networking:
- Resolve problems with primary Tier1 Router. Need to schedule a roughly half-day outage for the vendor to carry out investigations.
- Enable the RIP protocol for updating routing tables on the Tier1 routers. (Install patch to Router software).
- Increase bandwidth of the link from the Tier1 into the RAL internal site network to 40Gbit.
- Make routing changes to allow the removal of the UKLight Router.
- Cabling/switch changes to the network in the UPS room to improve resilience.
- Fabric:
- Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC).
Entries in GOC DB starting since the last report. |
- None.
Open GGUS Tickets (Snapshot during morning of meeting) |
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
115336 | Green | Less Urgent | In Progress | 2015-07-29 | 2015-07-29 | OPS | [Rod Dashboard] Issue detected : hr.srce.MyProxy-Store-/ops/Role=lcgadmin@myproxy.gridpp.rl.ac.uk |
115290 | Green | Less Urgent | In Progress | 2015-07-28 | 2015-07-28 | | FTS3@RAL: missing proper host names in subjectAltName of FTS agent nodes |
114786 | Green | Less Urgent | On Hold | 2015-07-02 | 2015-07-07 | OPS | [Rod Dashboard] Issue detected : egi.eu.lowAvailability-/RAL-LCG2@RAL-LCG2_Availability |
113836 | Amber | Less Urgent | In Progress | 2015-05-20 | 2015-06-24 | | GLUE 1 vs GLUE 2 mismatch in published queues |
108944 | Red | Less Urgent | In Progress | 2014-10-01 | 2015-07-17 | CMS | AAA access test failing at T1_UK_RAL |
Availability Report |
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|
22/07/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
23/07/15 | 100 | 100 | 94.0 | 100 | 100 | 98 | 100 | SRM test failures (Error message: __main__.TimeoutException) |
24/07/15 | 100 | 100 | 98.0 | 100 | 100 | 100 | 100 | Single SRM test failure (as above). |
25/07/15 | 100 | 100 | 100 | 100 | 100 | 94 | 100 | |
26/07/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
27/07/15 | 84.0 | 100 | 98.0 | 100 | 100 | 100 | 75 | OPS: Failures for ARC CE tests for OPS VO affected many sites. Atlas: As on 24/7. |
28/07/15 | 70.5 | 100 | 93.0 | 94.0 | 94.0 | 96 | 100 | LHC VOs: Problem with Tier1 network router affected tests. OPS: Continuation of previous day's problem. |