RAL Tier1 Operations Report for 5th October 2016
Review of Issues during the week 28th September to 5th October 2016.
- On Sunday morning (2nd Oct) there was a problem with the LHCb Castor instance: access to LHCb-DISK was not working. LHCb raised a GGUS alarm ticket, and the problem was resolved during Sunday by the Castor on-call. Although there was space in the disk pool as a whole, some of the disks within it had become full, which led to the failures. Some re-balancing of the disks has since been carried out (see the sketch after this list).
- On Tuesday morning there was a recurrence of the problem seen just over a week ago, with all three top-level BDIIs not working. This lasted until early afternoon, when the problem went away (it was not fixed by us). An outage was declared on the Top-BDII alias in the GOC DB. A similar problem affected at least one other site at the same time. Following discussions on the e-mail list, the BDIIs will be upgraded.
- Yesterday morning (Tuesday 4th Oct) there was a problem with a very large queue on AtlasScratch in Castor (other areas in Castor were not affected). Initial attempts to flush the queue did not help. The load on AtlasScratch was reduced (the number of Atlas pilot batch jobs was cut back during the afternoon) and, whether because of this or because the particular jobs ended, AtlasScratch in Castor recovered. The limits on the Atlas pilot jobs were removed this morning (Wed 5th Oct).
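The failure mode behind the LHCb incident (free space in the pool as a whole, while individual servers are full) is easy to illustrate. The sketch below is a minimal illustration only, not Castor tooling: the mount points and the 95% threshold are made-up assumptions. It flags servers that are effectively full even when the aggregate occupancy looks healthy, which is the condition re-balancing corrects.

```python
import shutil

# Hypothetical mount points standing in for disk servers in a pool;
# the real Castor pool layout is not given in this report.
MOUNTS = ["/pool/disk01", "/pool/disk02", "/pool/disk03"]
FULL = 0.95  # assumed threshold: treat a server as effectively full above 95%

def pool_report(mounts):
    stats = [(m, shutil.disk_usage(m)) for m in mounts]
    total = sum(s.total for _, s in stats)
    used = sum(s.used for _, s in stats)
    # Aggregate occupancy can look healthy...
    print(f"pool occupancy: {used / total:.1%}")
    for m, s in stats:
        frac = s.used / s.total
        if frac >= FULL:
            # ...while individual servers can no longer accept new files.
            print(f"  {m} is {frac:.1%} full - candidate for re-balancing")

if __name__ == "__main__":
    pool_report(MOUNTS)
```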
Resolved Disk Server Issues
- GDSS738 (LHCbDst - D1T0) failed late on Friday evening (30th Sep). A single faulty disk drive was found. The server was returned to service, initially read-only, around lunchtime on Sunday (2nd Oct), and to full production on Tuesday (4th Oct).
Current operational status and issues
- There is a problem, seen by LHCb, of a low but persistent rate of failures when copying the results of batch jobs to Castor. A further problem sometimes occurs when these failed writes are then attempted against storage at other sites.
- The intermittent, low-level, load-related packet loss seen over external connections is still being tracked.
Ongoing Disk Server Issues
- GDSS677 (CMSTape - D0T1) failed on Thursday afternoon, 29th Sep. There are no files awaiting migration to tape on the server.
Notable Changes made since the last meeting.
- Firmware updates were carried out on those remaining Castor disk servers from the Clustervision '11 batch for which this had not yet been done.
- Arc-ce04 was re-installed on the Windows Hyper-V 2012 infrastructure with a bigger disk.
- (Ongoing at the time of the meeting: replacement of the UKLight router.)
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
All Storage (all SRM endpoints) | SCHEDULED | WARNING | 05/10/2016 09:00 | 05/10/2016 15:00 | 6 hours | Series of breaks in the external data path to/from our storage while the network router is replaced and tests carried out on the resilience of the links. Internal access to storage unaffected. |
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
- Replace the UKLight Router including upgrading the 'bypass' link to the RAL border routers to 40Gbit. Being scheduled for Wednesday 5th October.
- Intervention on Tape Libraries - early November.
Listing by category:
- Castor:
  - Update SRMs to new version, including updating to SL6. This will be done after the Castor 2.1.15 update.
  - Update to Castor version 2.1.15. Planning to roll out in January 2017.
  - Migration of LHCb data from T10KC to T10KD tapes.
- Networking:
  - Replace the UKLight Router. Then upgrade the 'bypass' link to the RAL border routers to 40Gbit.
- Fabric:
  - Firmware updates on older disk servers.
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
All Storage (all SRM endpoints) | SCHEDULED | WARNING | 05/10/2016 09:00 | 05/10/2016 15:00 | 6 hours | Series of breaks in the external data path to/from our storage while the network router is replaced and tests carried out on the resilience of the links. Internal access to storage unaffected. |
lcgbdii.gridpp.rl.ac.uk | UNSCHEDULED | OUTAGE | 04/10/2016 07:00 | 04/10/2016 13:29 | 6 hours and 29 minutes | We have an ongoing problem affecting all three of our production Top-BDII systems that are behind the alias. |
arc-ce04.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 23/09/2016 10:00 | 30/09/2016 18:00 | 7 days, 8 hours | ARC-CE04 being drained ahead of a reconfiguration and move to run on different infrastructure. |
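For the Top-BDII alias problem recorded in the outage above, a quick first check is whether each machine behind the alias is reachable at all. The sketch below is a reachability probe only, not the monitoring actually used at RAL: it assumes the standard BDII LDAP port (2170) and simply tries a TCP connection to every address the alias resolves to. A successful connect does not prove the published LDAP data is correct.

```python
import socket

ALIAS = "lcgbdii.gridpp.rl.ac.uk"  # the Top-BDII alias from the outage above
PORT = 2170                        # standard LDAP port served by a BDII

# Resolve every address behind the alias, then attempt a TCP connection
# to each. This tests reachability only, not the LDAP content.
infos = socket.getaddrinfo(ALIAS, PORT, proto=socket.IPPROTO_TCP)
for addr in sorted({info[4][0] for info in infos}):
    try:
        with socket.create_connection((addr, PORT), timeout=5):
            print(f"{addr}: port {PORT} open")
    except OSError as exc:
        print(f"{addr}: unreachable ({exc})")
```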
Open GGUS Tickets (snapshot during the morning of the meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
124188 | Green | Less Urgent | In Progress | 2016-10-03 | 2016-10-03 | Atlas | UK Lpad-RAL-LCG224 : Frontier squid down |
123504 | Yellow | Less Urgent | Waiting for Reply | 2016-08-19 | 2016-09-20 | T2K | proxy expiration |
122827 | Green | Less Urgent | Waiting for Reply | 2016-07-12 | 2016-09-14 | SNO+ | Disk area at RAL |
121687 | Red | Less Urgent | On Hold | 2016-05-20 | 2016-09-30 | | packet loss problems seen on RAL-LCG perfsonar |
120350 | Yellow | Less Urgent | On Hold | 2016-03-22 | 2016-08-09 | LSST | Enable LSST at RAL |
117683 | Amber | Less Urgent | On Hold | 2015-11-18 | 2016-04-05 | | CASTOR at RAL not publishing GLUE 2 (Rob & Jens will discuss & update ticket later today) |
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 729); CMS HC = CMS HammerCloud. All availability figures are percentages.
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|
28/09/16 | 100 | 100 | 100 | 98 | 100 | N/A | 100 | Single SRM test failure because of a user timeout error |
29/09/16 | 100 | 100 | 100 | 98 | 100 | N/A | 100 | Single SRM test failure because of a user timeout error |
30/09/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 | |
01/10/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 | |
02/10/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 | |
03/10/16 | 100 | 100 | 100 | 100 | 100 | N/A | N/A | |
04/10/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 | |