RAL Tier1 Operations Report for 1st June 2016
Review of Issues during the week 25th May to 1st June 2016.
- Last week we reported a problem with the Tape Library control software, which had started crashing. On Thursday (26th May) we switched to a backup server for the connection to the tape libraries; this still crashed, but less frequently, and we have been running in that configuration since. Crashes still occur, but we have a re-starter in place, plus monitoring, so that we are aware of problems and can fix them promptly. The tape libraries have therefore delivered a reasonable service, although only because we are actively managing the crashes. This morning an engineer came to fix a couple of hardware faults that are not show-stoppers but need resolving; it is not clear whether this will help with the software crashes. We have put a 'warning' in the GOC DB for some hours today to cover this work, and we continue to follow up with the vendor.
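The re-starter referred to above is essentially a watchdog around the control software. Purely as an illustrative sketch (the service name and the check/restart commands below are hypothetical placeholders, not the actual Tier1 tooling), such a re-starter might look like this:

```python
#!/usr/bin/env python3
"""Minimal watchdog sketch: poll a service and restart it if it has died.

The service name and commands are hypothetical placeholders; the real
re-starter and monitoring at the Tier1 are site-specific.
"""
import subprocess
import time

SERVICE = "tape-library-control"   # hypothetical service name
CHECK_INTERVAL = 30                # seconds between liveness checks

def is_running(service: str) -> bool:
    """Return True if the service reports as active (systemctl used here)."""
    result = subprocess.run(["systemctl", "is-active", "--quiet", service])
    return result.returncode == 0

def restart(service: str) -> None:
    """Restart the service and log the event so operators stay aware."""
    subprocess.run(["systemctl", "restart", service], check=True)
    print(f"{time.ctime()}: restarted {service}", flush=True)

if __name__ == "__main__":
    while True:
        if not is_running(SERVICE):
            restart(SERVICE)
        time.sleep(CHECK_INTERVAL)
```

A re-starter of this kind only masks the crashes; as noted above, the underlying fault is still being pursued with the vendor.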
Resolved Disk Server Issues
Current operational status and issues
- LHCb see a low but persistent rate of failures when copying the results of batch jobs to Castor. A further problem sometimes occurs when these (failed) writes are then attempted to storage at other sites. A recent modification has improved, but not completely fixed, this.
- The intermittent, low-level, load-related packet loss seen over external connections is still being tracked. Likewise, we have been working to understand a remaining low level of packet loss seen within part of our Tier1 network.
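Purely for illustration, tracking loss of this kind means repeated active probing. The sketch below estimates loss with the system ping; the target host and probe count are hypothetical placeholders (the actual investigation uses the site's perfSONAR infrastructure, per GGUS ticket 121687 below):

```python
#!/usr/bin/env python3
"""Sketch: estimate packet loss to a set of endpoints using system ping.

Target hosts are hypothetical placeholders; the actual investigation
relies on perfSONAR and site network monitoring, not this script.
"""
import re
import subprocess

TARGETS = ["remote-host.example.org"]  # placeholder endpoint(s)
COUNT = 100   # probes per target; more probes resolve lower loss rates

def packet_loss(host: str, count: int = COUNT) -> float:
    """Return the percentage packet loss reported by ping for one host."""
    out = subprocess.run(
        ["ping", "-c", str(count), "-q", host],
        capture_output=True, text=True,
    ).stdout
    match = re.search(r"([\d.]+)% packet loss", out)
    return float(match.group(1)) if match else 100.0

if __name__ == "__main__":
    for host in TARGETS:
        print(f"{host}: {packet_loss(host):.1f}% loss")
```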
Ongoing Disk Server Issues
- GDSS698 (LHCbDst - D1T0) crashed overnight Monday/Tuesday (30/31 May). The cause is being investigated.
- GDSS718 (LHCbDst - D1T0) crashed yesterday evening (Tuesday 31st May). The cause is being investigated.
Notable Changes made since the last meeting.
- The migration of Atlas data from "C" to "D" tapes is ongoing.
- Write access to GenScratch has been stopped.
- One (of two) new batches of Worker Nodes was put into production on Monday/Tuesday of this week.
Declared in the GOC DB

| Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
|---|---|---|---|---|---|---|
| lcgwms06.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 01/06/2016 11:00 | 30/06/2016 11:00 | 29 days | Server lcgwms06.gridpp.rl.ac.uk Decommissioning |
Advance warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
- The Castor 2.1.15 update is pending. Testing has shown a database-related performance issue, which is being followed up; we await successful resolution of that problem and completion of testing before scheduling. Following advice from the developers we will not upgrade the SRMs before the Castor 2.1.15 upgrade.
- Decommissioning of "GEN Scratch" storage in Castor. (Formally announced by EGI broadcast). Write access to this area has now been stopped in preparation for completely stopping access on the 20th June.
- Decommissioning of lcgwms06. This will leave two WMS systems remaining in service.
- The HAProxy Load Balancer will be added in front of the Production FTS3 Service.
Listing by category:
- Databases:
  - Switch LFC/3D to new Database Infrastructure.
- Castor:
  - Update SRMs to new version (includes updating to SL6).
  - Update to Castor version 2.1.15.
  - Migration of data from T10KC to T10KD tapes (affects Atlas & LHCb data).
- Networking:
  - Replace the UKLight Router, then upgrade the 'bypass' link to the RAL border routers to 2*40Gbit.
- Fabric:
  - Firmware updates on remaining EMC disk arrays (Castor, LFC).
- Grid Services:
  - Once the use of the Load Balancer (HAProxy) has been proven for the test FTS service, it will be extended to other services.
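Purely as an illustration of what 'proving' a load-balanced front end can involve, the sketch below checks that a balanced alias and each of its backends accept TCP connections; every hostname and the port are hypothetical placeholders, not the actual FTS3 or HAProxy configuration:

```python
#!/usr/bin/env python3
"""Sketch: verify a load-balanced alias and its backend pool all respond.

All hostnames and the port are hypothetical placeholders; the real
FTS3/HAProxy configuration is not described in this report.
"""
import socket

ALIAS = "fts3-test.example.org"         # balanced front-end (placeholder)
BACKENDS = ["fts3-node1.example.org",   # pool members (placeholders)
            "fts3-node2.example.org"]
PORT = 8446                             # placeholder service port
TIMEOUT = 5.0                           # connection timeout in seconds

def reachable(host: str, port: int = PORT) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=TIMEOUT):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host in [ALIAS] + BACKENDS:
        print(f"{host}:{PORT} {'OK' if reachable(host) else 'UNREACHABLE'}")
```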
Entries in GOC DB starting since the last report.
| Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
|---|---|---|---|---|---|---|
| lcgwms06.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 01/06/2016 11:00 | 30/06/2016 11:00 | 29 days | Server lcgwms06.gridpp.rl.ac.uk Decommissioning |
| All Castor endpoints (SRM) with tape | SCHEDULED | WARNING | 01/06/2016 10:00 | 01/06/2016 15:00 | 5 hours | Expected break in tape access for around two hours during this time window while an engineer attends the tape library. |
| All Castor endpoints (SRM) with tape | UNSCHEDULED | WARNING | 26/05/2016 18:00 | 27/05/2016 17:00 | 23 hours | Ongoing problems with tape library. Tape mounts may be delayed. |
| All Castor endpoints (SRM) with tape | UNSCHEDULED | WARNING | 25/05/2016 19:30 | 26/05/2016 14:00 | 18 hours 30 minutes | At risk on tape system overnight following problem mounting tapes. |
Open GGUS Tickets (Snapshot during morning of meeting)
| GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
|---|---|---|---|---|---|---|---|
| 121687 | Green | Less Urgent | On Hold | 2016-05-20 | 2016-05-23 | | Packet loss problems seen on RAL-LCG perfSONAR |
| 120810 | Amber | Urgent | In Progress | 2016-04-13 | 2016-05-24 | Biomed | Decommissioning of SE srm-biomed.gridpp.rl.ac.uk - forbid write access for biomed users |
| 120350 | Green | Less Urgent | In Progress | 2016-03-22 | 2016-05-06 | LSST | Enable LSST at RAL |
| 119841 | Red | Less Urgent | On Hold | 2016-03-01 | 2016-04-26 | LHCb | HTTP support for lcgcadm04.gridpp.rl.ac.uk |
| 117683 | Yellow | Less Urgent | On Hold | 2015-11-18 | 2016-04-05 | | CASTOR at RAL not publishing GLUE 2 |
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 729); CMS HC = CMS HammerCloud

| Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
|---|---|---|---|---|---|---|---|---|
| 25/05/16 | 100 | 100 | 100 | 100 | 100 | 100 | 95 | |
| 26/05/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
| 27/05/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
| 28/05/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
| 29/05/16 | 100 | 100 | 100 | 100 | 100 | 100 | N/A | |
| 30/05/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
| 31/05/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |