RAL Tier1 Operations Report for 19th October 2016
Review of Issues during the fortnight 5th to 19th October 2016.
- On Sunday morning (2nd Oct) there was a problem with the LHCb Castor instance, with access to LHCb-DISK not working. LHCb raised a GGUS alarm ticket. The problem was resolved during Sunday by the Castor on-call. Although there was space in the disk pool as a whole, some of the disks in it had become full, which led to the problems. Since then some re-balancing of the disks has been carried out.
- On Tuesday morning there was a recurrence of the problem seen just over a week earlier, with all three top-level BDIIs not working. This lasted until early afternoon, when the problem went away (it was not fixed by us). An outage was declared on the Top-BDII alias in the GOC DB. A similar problem affected at least one other site at the same time. Following discussions on the e-mail list the BDIIs will be upgraded. A minimal per-host check of the alias is sketched after this list.
- On Tuesday 4th Oct morning there was a problem with a very large queue on AtlasScratch in Castor. (This did not affect other areas in Castor.) Initial attempts to flush the queue did not help. The load on AtlasScratch was reduced (the number of Atlas pilot batch jobs was reduced during the afternoon) and, whether by this or by the particular jobs ending, AtlasScratch in Castor recovered. The limits on the Atlas pilot jobs were removed the following morning (Wed 5th Oct).
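Since the BDII failure affected all hosts behind the Top-BDII alias at once, a quick first check is to resolve the alias and test each host individually on the standard BDII LDAP port (2170). The following is a minimal, stdlib-only Python sketch; the alias name is a placeholder, not the production hostname, and a TCP connect only shows the port is answering rather than that the LDAP data is fresh.

```python
# Minimal sketch: check each host behind a top-level BDII DNS alias.
# The alias name below is a placeholder, not the production hostname.
import socket

ALIAS = "top-bdii.example.ac.uk"   # placeholder for the site Top-BDII alias
PORT = 2170                        # standard BDII LDAP port


def hosts_behind_alias(alias):
    """Return the addresses the alias currently resolves to."""
    infos = socket.getaddrinfo(alias, PORT, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})


def port_open(ip, port, timeout=5):
    """True if a TCP connection to ip:port succeeds within the timeout."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    for ip in hosts_behind_alias(ALIAS):
        status = "open" if port_open(ip, PORT) else "NO RESPONSE"
        print(f"{ip}:{PORT} {status}")
```

A fuller check would additionally run an LDAP query (e.g. against the o=grid base) on each host to confirm the information system is actually serving data.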
Resolved Disk Server Issues
- GDSS677 (CMSTape - D0T1) failed on Thursday afternoon, 29th Sep. There were no files awaiting migration to tape on the server. Four disk drives showed various levels of problems and were replaced. It was returned to service on the 6th October.
- GDSS744 (AtlasDataDisk - D1T0) failed in the early hours of Wed. 12th October. After a disk failure the RAID array started a rebuild automatically, but the disk it was rebuilding onto failed just before the rebuild completed. The server was put back read-only later that day while the rebuild took place on another disk.
- GDSS612 (AtlasScratchDisk - D1T0) was taken out of production for an hour or so on the 12th October for a battery replacement.
- GDSS730 (CMSDisk - D1T0) reported problems on the 13th October. The RAID card automatically rebuilt onto a drive that itself showed errors. That drive was then swapped and we rebuilt onto another, good drive. The server was returned to service on the 17th Oct.
- GDSS658 (AtlasScratchDisk - D1T0) reported FSProbe errors on the 14th Oct and was taken out of service. A battery failure was found. The server was returned to production, initially read-only, later that day and was put back read/write on the 18th October.
- GDSS723 (AtlasDataDisk - D1T0) crashed on Saturday (15th Oct). The cause was not understood. It was put back in service read-only the next day (Sun 16th). The server was taken out of production to run memory tests on the Monday (17th). It passed the tests and went back to full production on the Tuesday (18th).
Current operational status and issues
- There is a problem seen by LHCb of a low but persistent rate of failures when copying the results of batch jobs to Castor. A further problem sometimes occurs when these (failed) writes are subsequently attempted to storage at other sites.
- The intermittent, low-level, load-related packet loss seen over the external network connections is still being tracked (a rough illustration of this kind of loss sampling follows this list).
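The packet-loss item is tracked from repeated measurements over time. As a rough illustration only (not the perfSonar-based monitoring actually in use), the sketch below samples loss towards a remote endpoint by running the system ping and parsing its summary line; the target hostname is a placeholder.

```python
# Rough illustration: sample packet loss towards a remote host using the
# system ping. Not the perfSonar-based monitoring actually in use.
import re
import subprocess

TARGET = "remote-host.example.org"   # placeholder target endpoint
COUNT = 100                          # probes per sample


def packet_loss_percent(target, count=COUNT):
    """Run ping and return the reported packet-loss percentage, or None."""
    result = subprocess.run(
        ["ping", "-c", str(count), "-q", target],
        capture_output=True, text=True,
    )
    match = re.search(r"([\d.]+)% packet loss", result.stdout)
    return float(match.group(1)) if match else None


if __name__ == "__main__":
    loss = packet_loss_percent(TARGET)
    print(f"{TARGET}: {loss}% packet loss" if loss is not None else "no result")
```

Repeating such samples and correlating them with transfer load is what makes the intermittent, load-related nature of the loss visible.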
Ongoing Disk Server Issues
- GDSS648 (LHCbUser - D1T0) failed on Saturday evening (15th Oct). It is showing both disk and network problems. It is still under investigation.
Notable Changes made since the last meeting.
- On Wednesday 5th October the UKLight Router was replaced. This has also enabled a 40Gbit connection to the RAL border routers (i.e. giving a data path of up to 40Gbit for the bypass connection).
- Some re-balancing of the disk servers in lhcbDst has been carried out following a problem where some became full.
- Some changes were made to increase the number of CMS batch jobs that we run, in order to bring the number more into line with the pledge. This was both an increase in the number of draining machines (from 10 to 20), to allow more multi-core jobs to start (CMS jobs are purely multi-core), and an increase in the limit in Condor (see the sketch below).
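One way to confirm that the draining and Condor-limit changes translate into more multi-core CMS jobs actually running is to count running jobs by requested core count. The sketch below does this by parsing condor_q output; identifying CMS jobs via the AccountingGroup string is an assumption about local naming, not the production configuration.

```python
# Sketch: count running batch jobs by requested CPU count, to see whether
# the multi-core (e.g. CMS) share has risen after the draining/limit changes.
# Identifying CMS jobs by "cms" appearing in AccountingGroup is an
# assumption about local naming, not the production configuration.
import subprocess
from collections import Counter


def running_jobs_by_cpus():
    """Return a Counter mapping (RequestCpus, is_cms) -> number of running jobs."""
    out = subprocess.run(
        ["condor_q", "-allusers", "-constraint", "JobStatus == 2",
         "-af", "RequestCpus", "AccountingGroup"],
        capture_output=True, text=True, check=True,
    ).stdout
    counts = Counter()
    for line in out.splitlines():
        fields = line.split()
        if not fields or not fields[0].isdigit():
            continue
        cpus = int(fields[0])
        group = fields[1] if len(fields) > 1 else ""
        counts[(cpus, "cms" in group.lower())] += 1
    return counts


if __name__ == "__main__":
    for (cpus, is_cms), n in sorted(running_jobs_by_cpus().items()):
        label = "CMS" if is_cms else "other"
        print(f"{cpus}-core {label}: {n} running")
```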
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
All Castor Endpoints that Use Tape | SCHEDULED | WARNING | 01/11/2016 07:00 | 01/11/2016 16:00 | 9 hours | Tape Library not available during work on the mechanics. Tape access for read will stop. Writes will be buffered on disk and flushed to tape after the work has completed.
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Pending - but not yet formally announced:
- Replace the UKLight Router including upgrading the 'bypass' link to the RAL border routers to 40Gbit. Being scheduled for Wednesday 5th October.
- Intervention on Tape Libraries - early November.
Listing by category:
- Castor:
- Update SRMs to new version, including updating to SL6. This will be done after the Castor 2.1.15 update.
- Update to Castor version 2.1.15. Planning to roll out January 2017.
- Migration of LHCb data from T10KC to T10KD tapes.
- Networking:
- Replace the UKLight Router. Then upgrade the 'bypass' link to the RAL border routers to 40Gbit.
- Fabric:
- Firmware updates on older disk servers.
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
Castor GEN instance (srm-alice, srm-biomed, srm-dteam, srm-hone, srm-ilc, srm-mice, srm-minos, srm-na62, srm-pheno, srm-snoplus, srm-superb, srm-t2k) | UNSCHEDULED | WARNING | 12/10/2016 15:00 | 12/10/2016 16:00 | 1 hour | At risk on some Castor instances while we replace a disk in a headnode.
All Castor storage | SCHEDULED | WARNING | 05/10/2016 09:00 | 05/10/2016 15:00 | 6 hours | Series of breaks in the external data path to/from our storage while the network router is replaced and tests carried out on the resilience of the links. Internal access to storage unaffected.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
124188 | Green | Less Urgent | In Progress | 2016-10-03 | 2016-10-03 | Atlas | UK Lpad-RAL-LCG224 : Frontier squid down
123504 | Yellow | Less Urgent | Waiting for Reply | 2016-08-19 | 2016-09-20 | T2K | proxy expiration
122827 | Green | Less Urgent | Waiting for Reply | 2016-07-12 | 2016-09-14 | SNO+ | Disk area at RAL
121687 | Red | Less Urgent | On Hold | 2016-05-20 | 2016-09-30 | | packet loss problems seen on RAL-LCG perfsonar
120350 | Yellow | Less Urgent | On Hold | 2016-03-22 | 2016-08-09 | LSST | Enable LSST at RAL
117683 | Amber | Less Urgent | On Hold | 2015-11-18 | 2016-04-05 | | CASTOR at RAL not publishing GLUE 2 (Rob & Jens will discuss & update ticket later today)
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 729); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment
05/10/16 | 100 | 100 | 100 | 96 | 100 | N/A | 100 | Two SRM tests failed during the network intervention to replace the UKLight router.
06/10/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 |
07/10/16 | 100 | 100 | 100 | 100 | 100 | N/A | N/A |
08/10/16 | 100 | 100 | 100 | 100 | 100 | N/A | N/A |
09/10/16 | 100 | 100 | 93 | 100 | 100 | N/A | 100 | A few SRM test failures. Although the tests run against AtlasDataDisk they were coincident with severe problems for AtlasScratch.
10/10/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 |
11/10/16 | 100 | 100 | 100 | 98 | 100 | N/A | 100 | Single SRM test failure because of a user timeout error.
12/10/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 |
13/10/16 | 100 | 100 | 100 | 100 | 100 | N/A | N/A |
14/10/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 |
15/10/16 | 100 | 100 | 100 | 100 | 100 | N/A | N/A |
16/10/16 | 100 | 100 | 100 | 98 | 100 | N/A | 93 | Single SRM test failure on GET because of a user timeout error.
17/10/16 | 100 | 100 | 100 | 100 | 96 | N/A | 97 | SRM test failure on list ([SRM_INVALID_PATH] No such file or directory).
18/10/16 | 100 | 91 | 100 | 100 | 100 | N/A | N/A | Some failures of the ALICE AliEn-SE test. We were aware of problems on the Castor GEN instance caused by heavy use by a user from another VO.
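As a simple illustration of the arithmetic, the per-VO daily figures above can be averaged over the fortnight, treating N/A as missing data rather than zero. The values below are copied from the table purely for illustration; this is not the formal availability calculation used for reporting.

```python
# Minimal sketch: mean of the daily availability figures in the table above,
# per VO column, for the period 05/10/16 to 18/10/16 (14 days).
# Values are transcribed from the table for illustration only.
rows = {
    "OPS":   [100] * 14,
    "Alice": [100] * 13 + [91],
    "Atlas": [100, 100, 100, 100, 93, 100, 100, 100, 100, 100, 100, 100, 100, 100],
    "CMS":   [96, 100, 100, 100, 100, 100, 98, 100, 100, 100, 100, 98, 100, 100],
    "LHCb":  [100] * 12 + [96, 100],
}

for name, values in rows.items():
    mean = sum(values) / len(values)
    print(f"{name}: {mean:.1f}% mean over {len(values)} days")
```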