Difference between revisions of "Tier1 Operations Report 2016-09-21"
From GridPP Wiki
Revision as of 09:58, 21 September 2016
RAL Tier1 Operations Report for 21st September 2016
Review of Issues during the week 14th to 21st September 2016.
- Atlas reported a problem with the batch system last Friday (9th Sep). It turned out that there was a problem on one particular worker node (json module missing in python).
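The faulty worker node was missing the json module from its Python installation. As an illustration only (this is not the actual RAL tooling), a per-node sanity check of this kind would have flagged the problem before jobs failed:

```python
# Hypothetical worker-node sanity check: verify that Python modules
# batch jobs rely on (such as json) can actually be imported.
import importlib

def missing_modules(names):
    """Return the subset of module names that fail to import on this node."""
    missing = []
    for name in names:
        try:
            importlib.import_module(name)
        except ImportError:
            missing.append(name)
    return missing

# On a healthy node this prints []; on the faulty worker node it would
# have flagged "json".
print(missing_modules(["json", "os"]))
```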
Resolved Disk Server Issues
- GDSS665 (AtlasTape - D0T1) failed with a read-only filesystem on Sunday 4th Sep. All files that were awaiting migration were copied to tape. Following investigation a faulty drive was replaced and the server was returned to service on 13th Sep.
- GDSS730 (CMSDisk - D1T0) failed in the early hours of Tuesday morning (13th Sep). Following the replacement of a drive the server was put back in read-only mode later that day.
Current operational status and issues
- There is a problem seen by LHCb of a low but persistent rate of failure when copying the results of batch jobs to Castor. There is also a further problem that sometimes occurs when these (failed) writes are attempted to storage at other sites.
- The intermittent, low-level, load-related packet loss seen over external connections is still being tracked.
Ongoing Disk Server Issues
- GDSS779 (LHCbDst - D1T0) reported problems on the morning of 14th Sep. It is currently out of production while the cause is being investigated.
Notable Changes made since the last meeting.
- A further batch of services has been moved to the Hyper-V 2012 infrastructure.
- Oracle carried out preventative maintenance and a firmware update on the tape libraries on Tuesday 13th Sep.
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
arc-ce04.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 23/09/2016 10:00 | 30/09/2016 18:00 | 7 days, 8 hours | ARC-CE04 being drained ahead of a reconfiguration and move to run on different infrastructure. |
arc-ce03.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 15/09/2016 13:00 | 22/09/2016 18:00 | 7 days, 5 hours | ARC-CE03 being drained ahead of a reconfiguration and move to run on different infrastructure. |
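As a cross-check, the Duration column is simply end minus start in the DD/MM/YYYY HH:MM format used above; a quick sketch:

```python
# Recompute the GOC DB Duration column from the Start and End timestamps.
from datetime import datetime

FMT = "%d/%m/%Y %H:%M"  # date format used in the GOC DB rows above

def duration(start, end):
    """Return the outage length as 'N days, M hours'."""
    delta = datetime.strptime(end, FMT) - datetime.strptime(start, FMT)
    hours, _ = divmod(delta.seconds, 3600)
    return "%d days, %d hours" % (delta.days, hours)

print(duration("15/09/2016 13:00", "22/09/2016 18:00"))  # 7 days, 5 hours
```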
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Listing by category:
- Preventative Maintenance on the Tape Libraries. Tuesday 13th September.
- Castor:
- Update SRMs to new version, including updating to SL6. This will be done after the Castor 2.1.15 update.
- Update to Castor version 2.1.15. This awaits successful testing of the new version.
- Migration of LHCb data from T10KC to T10KD tapes.
- Networking:
- Replace the UKLight Router. Then upgrade the 'bypass' link to the RAL border routers to 2*40Gbit.
- Fabric:
- Firmware updates on older disk servers.
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
arc-ce03.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 15/09/2016 13:00 | 22/09/2016 18:00 | 7 days, 5 hours | ARC-CE03 being drained ahead of a reconfiguration and move to run on different infrastructure. |
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
123504 | Green | Less Urgent | In Progress | 2016-08-19 | 2016-08-23 | T2K | proxy expiration |
122827 | Green | Less Urgent | In Progress | 2016-07-12 | 2016-08-22 | SNO+ | Disk area at RAL |
122364 | Green | Less Urgent | On Hold | 2016-06-27 | 2016-08-23 | | cvmfs support at RAL-LCG2 for solidexperiment.org
121687 | Red | Less Urgent | On Hold | 2016-05-20 | 2016-05-23 | | packet loss problems seen on RAL-LCG perfsonar
120350 | Yellow | Less Urgent | On Hold | 2016-03-22 | 2016-08-09 | LSST | Enable LSST at RAL |
117683 | Amber | Less Urgent | On Hold | 2015-11-18 | 2016-04-05 | | CASTOR at RAL not publishing GLUE 2
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 729); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|
14/09/16 | 100 | 99 | 100 | 100 | 100 | N/A | 100 | AliEn-SE test failures. Problem seen at other sites too. |
15/09/16 | 100 | 81 | 100 | 92 | 100 | N/A | 100 | ALICE: AliEn-SE test failures. Problem seen at other sites too; CMS: Several SRM test failures because of a user timeout error |
16/09/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 | |
17/09/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 | |
18/09/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 | |
19/09/16 | 100 | 100 | 100 | 96 | 100 | N/A | 100 | Two SRM test failures (could not open connection to srm-cms.gridpp.rl.ac.uk) plus CE test failures. |
20/09/16 | 100 | 100 | 100 | 98 | 50 | N/A | 100 | CMS: Single SRM test failure (user timeout) plus CE test failures; LHCb: One CE was down while the others worked OK, but the SAM results do not seem to account for this correctly.
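The LHCb figure of 50 for 20/09 shows how a single failed endpoint can halve the headline number when availability is taken as the fraction of endpoints passing their tests. This is a deliberately simplified model (the real SAM/ARGO computation combines per-service test results with its own logic), sketched here only to illustrate the arithmetic:

```python
# Simplified, illustrative availability model: percentage of endpoints
# whose tests pass. Not the actual SAM/ARGO algorithm.
def availability(results):
    """results: dict mapping endpoint name -> True (tests pass) / False."""
    if not results:
        return 0.0
    return 100.0 * sum(results.values()) / len(results)

# One CE down out of two endpoints halves the headline number.
print(availability({"ce-a": True, "ce-b": False}))  # 50.0
```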