Tier1 Operations Report 2016-09-14
From GridPP Wiki
Latest revision as of 11:29, 14 September 2016
RAL Tier1 Operations Report for 14th September 2016
Review of Issues during the week 9th to 14th September 2016.
- Atlas reported a problem with the batch system last Friday (9th Sep). It turned out that there was a problem on one particular worker node (json module missing in python).
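A health check along these lines would catch a worker node whose Python installation is missing a required module before payloads fail on it. A minimal sketch; the module list is illustrative, not the actual ATLAS payload requirement:

```python
import importlib.util

# Modules batch payloads are assumed to need; illustrative only.
REQUIRED_MODULES = ["json", "os", "subprocess"]

def missing_modules(names):
    """Return the subset of names the local Python cannot import."""
    return [n for n in names if importlib.util.find_spec(n) is None]

if __name__ == "__main__":
    missing = missing_modules(REQUIRED_MODULES)
    if missing:
        print("UNHEALTHY: missing modules: " + ", ".join(missing))
    else:
        print("OK: all required modules importable")
```

Run per node (e.g. from the batch system's node health-check hook), a non-empty result would flag the node for draining rather than letting jobs fail on it.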
Resolved Disk Server Issues
- GDSS665 (AtlasTape - D0T1) failed with a read-only filesystem on Sunday 4th Sep. All files awaiting migration were copied to tape. Following investigation a faulty drive was replaced and the server was returned to service yesterday (13th Sep).
- GDSS730 (CMSDisk - D1T0) failed in the early hours of Tuesday morning (13th Sep). Following the replacement of a drive the server was put back in read-only mode later that day.
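Read-only filesystem faults like the one on GDSS665 can be detected by scanning the mount table. A sketch, assuming a Linux /proc/mounts-style table; the sample data is made up:

```python
def readonly_mounts(mounts_text):
    """Parse /proc/mounts-style text and return (device, mountpoint)
    pairs mounted with the 'ro' option."""
    found = []
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) < 4:
            continue
        device, mountpoint, _fstype, options = fields[:4]
        # Options are comma-separated; match 'ro' exactly, not 'errors=...'
        if "ro" in options.split(","):
            found.append((device, mountpoint))
    return found

# Illustrative sample; on a real server read /proc/mounts instead.
SAMPLE = """\
/dev/sda1 / ext4 rw,relatime 0 0
/dev/sdb1 /exportstage xfs ro,noatime 0 0
"""

if __name__ == "__main__":
    for dev, mnt in readonly_mounts(SAMPLE):
        print("read-only: %s on %s" % (dev, mnt))
```

Splitting the options field on commas (rather than substring matching) avoids false positives from options like `errors=remount-ro`.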
Current operational status and issues
- LHCb are seeing a low but persistent rate of failures when copying the results of batch jobs to Castor. A further problem sometimes occurs when these (failed) writes are then attempted against storage at other sites.
- The intermittent, low-level, load-related packet loss seen over external connections is still being tracked.
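Low-rate intermittent copy failures of this kind are typically absorbed client-side with bounded retries and backoff. A minimal sketch; `copy_fn` stands in for whatever transfer call is used and is not LHCb's actual tooling:

```python
import time

def copy_with_retries(copy_fn, src, dest, attempts=3, base_delay=1.0):
    """Call copy_fn(src, dest), retrying on failure with exponential
    backoff. Returns True on success, False once attempts are exhausted."""
    for attempt in range(attempts):
        try:
            copy_fn(src, dest)
            return True
        except OSError:
            if attempt < attempts - 1:
                # Back off 1s, 2s, 4s, ... between attempts.
                time.sleep(base_delay * (2 ** attempt))
    return False
```

Retries mask transient failures from the job but do not remove the underlying fault, so the failure rate still needs tracking on the server side.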
Ongoing Disk Server Issues
- GDSS779 (LHCbDst - D1T0) reported problems earlier this morning (14th Sep). It is currently out of production while the cause is being investigated.
Notable Changes made since the last meeting.
- A number of further services have been moved to the Hyper-V 2012 infrastructure.
- Oracle carried out preventative maintenance and a firmware update on the tape libraries yesterday (Tuesday 13th Sep).
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
arc-ce04.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 23/09/2016 10:00 | 30/09/2016 18:00 | 7 days, 8 hours | ARC-CE04 being drained ahead of a reconfiguration and move to run on different infrastructure. |
arc-ce03.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 15/09/2016 13:00 | 22/09/2016 18:00 | 7 days, 5 hours | ARC-CE03 being drained ahead of a reconfiguration and move to run on different infrastructure. |
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Pending - but not yet formally announced:
- Preventative Maintenance on the Tape Libraries. Tuesday 13th September.
Listing by category:
- Castor:
- Update SRMs to new version, including updating to SL6. This will be done after the Castor 2.1.15 update.
- Update to Castor version 2.1.15. This awaits successful resolution and testing of the new version.
- Migration of LHCb data from T10KC to T10KD tapes.
- Networking:
- Replace the UKLight Router. Then upgrade the 'bypass' link to the RAL border routers to 2*40Gbit.
- Fabric
- Firmware updates on older disk servers.
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
All Castor Tape | SCHEDULED | WARNING | 13/09/2016 08:00 | 13/09/2016 17:00 | 9 hours | Maintenance on Tape Library. Tape access for read will stop. Writes will be buffered on disk and flushed to tape after the maintenance has completed. |
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
123504 | Green | Less Urgent | In Progress | 2016-08-19 | 2016-08-23 | T2K | proxy expiration |
122827 | Green | Less Urgent | In Progress | 2016-07-12 | 2016-08-22 | SNO+ | Disk area at RAL |
122364 | Green | Less Urgent | On Hold | 2016-06-27 | 2016-08-23 | | cvmfs support at RAL-LCG2 for solidexperiment.org |
121687 | Red | Less Urgent | On Hold | 2016-05-20 | 2016-05-23 | | packet loss problems seen on RAL-LCG perfsonar |
120350 | Yellow | Less Urgent | On Hold | 2016-03-22 | 2016-08-09 | LSST | Enable LSST at RAL |
117683 | Amber | Less Urgent | On Hold | 2015-11-18 | 2016-04-05 | | CASTOR at RAL not publishing GLUE 2 |
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 729); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|
07/09/16 | 100 | 100 | 100 | 98 | 100 | N/A | 100 | Single SRM test failure: [SRM_FAILURE] Unable to issue PrepareToPut request to Castor |
08/09/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 | |
09/09/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 | |
10/09/16 | 100 | 100 | 100 | 100 | 96 | N/A | 100 | Single SRM error on listing: [SRM_INVALID_PATH] No such file or directory |
11/09/16 | 100 | 100 | 100 | 98 | 100 | N/A | 100 | Single SRM test failure because of a user timeout error |
12/09/16 | 100 | 100 | 100 | 98 | 100 | N/A | 100 | Single SRM test failure because of a user timeout error |
13/09/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 |