RAL Tier1 Operations Report for 28th September 2016
Review of Issues during the week 21st to 28th September 2016.
- At last week's meeting we reported a problem with the OPN, with one of its links not working outbound. This was resolved on Thursday morning (22nd Sep). It was found that the relevant protocol (ECMP) had been reset in the UKLight Router; this was re-configured and the problem fixed.
- On Sunday morning there were problems with all three top-level BDIIs that lasted an hour or so. The automatic re-starters fixed them, although the cause is not understood (see the sketch below).
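The "automatic re-starters" referred to above amount to a periodic health check that restarts the BDII service when it stops answering. The following is a minimal illustrative sketch only, not the actual RAL tooling: the hostname, the retry/interval settings and the `service bdii restart` command are all assumptions; the only grounded detail is that a top-level BDII answers LDAP queries on TCP port 2170.

```python
#!/usr/bin/env python3
# Minimal sketch of a BDII "automatic re-starter" (illustrative only, not the RAL tooling).
# Assumptions: the top-level BDII serves LDAP on TCP port 2170 and the local service
# can be restarted with "service bdii restart".
import socket
import subprocess
import time

BDII_HOST = "localhost"  # hypothetical: probe the BDII running on this host
BDII_PORT = 2170         # standard top-level BDII LDAP port
RETRIES = 3              # consecutive failed probes before restarting
INTERVAL = 60            # seconds between probes


def bdii_responds(host, port, timeout=10.0):
    """Return True if a TCP connection to the BDII port succeeds."""
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
        sock.close()
        return True
    except OSError:
        return False


def main():
    failures = 0
    while True:
        if bdii_responds(BDII_HOST, BDII_PORT):
            failures = 0
        else:
            failures += 1
            if failures >= RETRIES:
                # Restart the local BDII service (assumes a "bdii" init/compat service).
                subprocess.call(["service", "bdii", "restart"])
                failures = 0
        time.sleep(INTERVAL)


if __name__ == "__main__":
    main()
```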
Resolved Disk Server Issues
- GDSS617 (AliceDisk - D1T0) failed in the early hours of Saturday morning (24th Sep). It was returned to service, initially read-only, on the evening of the same day. No specific fault was found.
- GDSS739 (LHCbDst - D1T0) failed in the early hours of Sunday morning (25th Sep). It was returned to service, initially read-only, at the end of Monday afternoon (26th Sep). Two faulty disks were replaced.
Current operational status and issues
- LHCb are seeing a low but persistent rate of failures when copying the results of batch jobs to Castor. A further problem sometimes occurs when these (failed) writes are then attempted to storage at other sites.
- The intermittent, low-level, load-related packet loss seen over external connections is still being tracked (an illustrative loss-sampling sketch follows this list).
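As an aside on how this kind of low-level packet loss can be quantified, the sketch below shells out to the standard Linux `ping` utility and parses the "% packet loss" figure from its summary line. It is illustrative only and is not the monitoring used at RAL; the target hostname and sample size are placeholders.

```python
#!/usr/bin/env python3
# Minimal packet-loss sampler (illustrative only, not the RAL monitoring).
# Runs "ping -c <count> -q <host>" and extracts the "% packet loss" value
# that the Linux ping utility prints in its summary line.
import re
import subprocess

TARGET = "example.org"  # placeholder remote endpoint
COUNT = 100             # echo requests per sample


def sample_packet_loss(host, count=COUNT):
    """Return the packet-loss percentage reported by ping, or None if parsing fails."""
    result = subprocess.run(
        ["ping", "-c", str(count), "-q", host],
        stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True,
    )
    match = re.search(r"(\d+(?:\.\d+)?)% packet loss", result.stdout)
    return float(match.group(1)) if match else None


if __name__ == "__main__":
    loss = sample_packet_loss(TARGET)
    if loss is None:
        print("ping failed or produced unexpected output")
    else:
        print("packet loss in this sample: %.1f%%" % loss)
```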
Ongoing Disk Server Issues
- None
Notable Changes made since the last meeting.
- We continue to move services (including arc-ce03) to the Hyper-V 2012 infrastructure.
- arc-ce03 was re-installed on this infrastructure with a bigger disk.
Declared in the GOC DB
None
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Pending - but not yet formally announced:
- Replace the UKLight Router, including upgrading the 'bypass' link to the RAL border routers to 40Gbit. Being scheduled for Wednesday 5th October.
- Intervention on Tape Libraries - early November.
Listing by category:
- Castor:
  - Update SRMs to a new version, including updating to SL6. This will be done after the Castor 2.1.15 update.
  - Update to Castor version 2.1.15. This awaits successful resolution and testing of the new version.
  - Migration of LHCb data from T10KC to T10KD tapes.
- Networking:
  - Replace the UKLight Router, then upgrade the 'bypass' link to the RAL border routers to 40Gbit.
- Fabric:
  - Firmware updates on older disk servers.
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
arc-ce04.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 23/09/2016 10:00 | 30/09/2016 18:00 | 7 days, 8 hours | ARC-CE04 being drained ahead of a reconfiguration and move to run on different infrastructure. |
arc-ce03.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 15/09/2016 13:00 | 22/09/2016 18:00 | 7 days, 5 hours | ARC-CE03 being drained ahead of a reconfiguration and move to run on different infrastructure. |
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
123504 | Green | Less Urgent | Waiting for Reply | 2016-08-19 | 2016-09-20 | T2K | proxy expiration |
122827 | Green | Less Urgent | Waiting for Reply | 2016-07-12 | 2016-09-14 | SNO+ | Disk area at RAL |
122364 | Green | Less Urgent | On Hold | 2016-06-27 | 2016-08-23 | | cvmfs support at RAL-LCG2 for solidexperiment.org |
121687 | Red | Less Urgent | On Hold | 2016-05-20 | 2016-05-23 | | packet loss problems seen on RAL-LCG perfsonar |
120350 | Yellow | Less Urgent | On Hold | 2016-03-22 | 2016-08-09 | LSST | Enable LSST at RAL |
117683 | Amber | Less Urgent | On Hold | 2015-11-18 | 2016-04-05 | | CASTOR at RAL not publishing GLUE 2 |
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 729); CMS HC = CMS HammerCloud. Figures are daily availability percentages.
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|
21/09/16 | 100 | 100 | 100 | 93 | 100 | N/A | 100 | Several SRM test failures because of a user timeout error |
22/09/16 | 100 | 100 | 100 | 89 | 100 | N/A | 100 | Block of problems with CE tests around lunchtime. |
23/09/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 | |
24/09/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 | |
25/09/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 | |
26/09/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 | |
27/09/16 | 100 | 100 | 100 | 98 | 100 | N/A | 100 | Single SRM test failure because of a user timeout error |
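As a rough guide to what the figures above mean in wall-clock terms (an illustrative calculation, not part of the report itself), a day at 93% availability corresponds to roughly 0.07 × 24 ≈ 1.7 hours of failing tests, and 98% to roughly 0.5 hours:

```python
# Convert a daily availability percentage into approximate hours of failed tests.
# Illustrative arithmetic only, e.g. for the CMS figures of 93% (21/09) and 98% (27/09).
def downtime_hours(availability_percent, hours_in_day=24.0):
    return (100.0 - availability_percent) / 100.0 * hours_in_day

print(downtime_hours(93))  # ~1.68 hours
print(downtime_hours(98))  # ~0.48 hours
```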