RAL Tier1 Operations Report for 8th June 2016
Review of Issues during the week 1st to 8th June 2016.
- There have been significant problems with the tape library this last week. There was an intervention on Wednesday (1st June) to fix some outstanding hardware issues, which included changing one of the 'handbots'. The library has a feature whereby a handbot can be moved outside a safety door so that it can be worked on while the rest of the library is running. However, what appears to have been a fault in the replacement unit caused a short. The original unit was put back in place; the library passed its power-up tests and ran overnight. A safety issue was then flagged and the library was taken down from Thursday (2nd June) until Friday (3rd) so that it could be checked. Also on Friday the original handbot was swapped out for another replacement unit. By the end of Friday all hardware components of the library were working OK. The library worked reasonably over the weekend, although the control software (which has been running on a spare system) has been crashing a few times per day. Yesterday (Tuesday 7th June) we moved back to running the control software on the 'production' server and restarted services gradually once it had come up. Has / Has not run OK since ????
Resolved Disk Server Issues
- GDSS698 (LHCbDst - D1T0) crashed overnight Monday/Tuesday 30/31 May. It was returned to service on the morning of Wednesday 1st June following BIOS and firmware updates.
- GDSS718 (LHCbDst - D1T0) crashed on the evening of Tuesday 31st May. It was returned to service on the morning of Wednesday 1st June following BIOS and firmware updates.
Current operational status and issues
- There is a problem, seen by LHCb, of a low but persistent rate of failures when copying the results of batch jobs to Castor. There is also a further problem that sometimes occurs when these (failed) writes are then attempted against storage at other sites. A recent modification has improved, but not completely fixed, this. (An illustrative sketch of this retry and fallback pattern is given after this list.)
- The intermittent, low-level, load-related packet loss seen over external connections is still being tracked. Likewise we have been working to understand a remaining low level of packet loss seen within part of our Tier1 network.
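As an illustration of the retry and fallback pattern described above, here is a minimal sketch built around the standard gfal-copy client. All file and endpoint URLs are placeholders rather than real LHCb outputs or storage paths, and the retry count is arbitrary; it is intended only to make the shape of the intermittently failing operation concrete.

```python
#!/usr/bin/env python
"""Illustrative sketch of a copy with retries and a fall-back destination.

All URLs below are placeholders (they are not real LHCb output files or
endpoints); the structure simply mirrors the pattern described above:
write the job output to the local Castor endpoint, retry on failure,
then fall back to storage at another site.
"""
import subprocess

SOURCE = "file:///tmp/job_output.dat"                                   # placeholder local file
PRIMARY = "srm://srm-lhcb.gridpp.rl.ac.uk/castor/example/output.dat"    # illustrative Castor URL
FALLBACK = "srm://storage.other-site.example/lhcb/example/output.dat"   # illustrative remote URL


def copy_with_retries(src, dst, attempts=3):
    """Run gfal-copy up to 'attempts' times; return True on success."""
    for attempt in range(1, attempts + 1):
        if subprocess.call(["gfal-copy", "--force", src, dst]) == 0:
            return True
        print("copy to %s failed (attempt %d of %d)" % (dst, attempt, attempts))
    return False


if __name__ == "__main__":
    if not copy_with_retries(SOURCE, PRIMARY):
        print("all attempts to the local endpoint failed; trying remote storage")
        copy_with_retries(SOURCE, FALLBACK)
```

The real jobs use the LHCb data-management stack rather than a wrapper like this; the sketch is only to show the kind of write and failover that is intermittently failing.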
Ongoing Disk Server Issues
- GDSS658 (LHCbUser - D1T0) crashed on Monday morning. It is undergoing checks ahead of being returned to service.
Notable Changes made since the last meeting.
- The migration of Atlas data from "C" to "D" tapes is ongoing. We have migrated around 400 of the 1300 tapes so far. This migration has carried on through this last week despite the tape library problems.
- The HAProxy load balancers are now fully in use in front of the production FTS3 service (a simple connectivity probe of the kind used to check such a frontend is sketched after this list).
- New tape pools have been set up for Atlas and LHCb.
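For context, a very simple way to check a load-balanced frontend during this kind of change is to open repeated connections to the alias that HAProxy serves. The sketch below does only that; the hostname and port are assumptions standing in for the real production FTS3 alias, not taken from our configuration.

```python
#!/usr/bin/env python
"""Minimal connectivity probe for a service fronted by HAProxy.

The alias and port are placeholders standing in for the real production
FTS3 endpoint; the probe only confirms that the load-balanced frontend
is accepting TCP connections.
"""
import socket
import time

FRONTEND_HOST = "fts3-alias.example.gridpp.rl.ac.uk"   # hypothetical HAProxy frontend alias
FRONTEND_PORT = 8446                                   # assumed FTS3 REST port


def probe(host, port, timeout=5.0):
    """Return the connection time in seconds, or None if the connect fails."""
    start = time.time()
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return time.time() - start
    except (socket.timeout, socket.error):
        return None


if __name__ == "__main__":
    for attempt in range(5):
        elapsed = probe(FRONTEND_HOST, FRONTEND_PORT)
        if elapsed is None:
            print("attempt %d: connection failed" % attempt)
        else:
            print("attempt %d: connected in %.3f s" % (attempt, elapsed))
        time.sleep(2)
```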
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
lcgwms06.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 01/06/2016 11:00 | 30/06/2016 11:00 | 29 days | Server lcgwms06.gridpp.rl.ac.uk Decommissioning
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
- Decommissioning of "GEN Scratch" storage in Castor. (Formally announced by EGI broadcast). Write access to this area has now been stopped in preparation for completely stopping access on the 20th June.
- Decommissioning of lcgwms06. It was stopped from receiving new work on 1st June.
Listing by category:
- Databases:
  - Switch LFC/3D to new Database Infrastructure.
- Castor:
  - Update SRMs to new version, including updating to SL6. This will be done after the Castor 2.1.15 update.
  - Update to Castor version 2.1.15. This awaits successful resolution and testing of the new version.
  - Migration of data from T10KC to T10KD tapes (affects Atlas & LHCb data).
- Networking:
  - Replace the UKLight Router, then upgrade the 'bypass' link to the RAL border routers to 2*40Gbit.
- Fabric:
  - Firmware updates on remaining EMC disk arrays (Castor, LFC).
- Grid Services:
  - Once the use of the Load Balancer (HAProxy) has been proven for the test FTS service it will be extended to other services.
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
lcgwms06.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 01/06/2016 11:00 | 30/06/2016 11:00 | 29 days | Server lcgwms06.gridpp.rl.ac.uk Decommissioning
All Castor endpoints (SRM) with tape. | SCHEDULED | WARNING | 01/06/2016 10:00 | 01/06/2016 15:00 | 5 hours | Expected Break in Tape Access for Around Two Hours During this Time Window while Engineer Attends Tape Library.
All Castor endpoints (SRM) with tape. | UNSCHEDULED | WARNING | 26/05/2016 18:00 | 27/05/2016 17:00 | 23 hours | Ongoing problems with tape library. Tape mounts may be delayed.
All Castor endpoints (SRM) with tape. | UNSCHEDULED | WARNING | 25/05/2016 19:30 | 26/05/2016 14:00 | 18 hours and 30 minutes | At risk on tape system overnight following problem mounting tapes.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
121687 | Green | Less Urgent | On Hold | 2016-05-20 | 2016-05-23 | | packet loss problems seen on RAL-LCG perfsonar
120810 | Amber | Urgent | In Progress | 2016-04-13 | 2016-05-24 | Biomed | Decommissioning of SE srm-biomed.gridpp.rl.ac.uk - forbid write access for biomed users
120350 | Green | Less Urgent | In Progress | 2016-03-22 | 2016-05-06 | LSST | Enable LSST at RAL
119841 | Red | Less Urgent | On Hold | 2016-03-01 | 2016-04-26 | LHCb | HTTP support for lcgcadm04.gridpp.rl.ac.uk
117683 | Yellow | Less Urgent | On Hold | 2015-11-18 | 2016-04-05 | | CASTOR at RAL not publishing GLUE 2
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 729); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment
25/05/16 | 100 | 100 | 100 | 100 | 100 | 100 | 95 |
26/05/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
27/05/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
28/05/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
29/05/16 | 100 | 100 | 100 | 100 | 100 | 100 | N/A |
01/06/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
02/06/16 | 95.8 | 100 | 100 | 100 | 100 | 100 | 100 | Looks like a transitory problem with the site BDII.
03/06/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
04/06/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
05/06/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
06/06/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
07/06/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
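As a quick aid to reading the table, a daily availability percentage converts to downtime as in the small sketch below; for example, the 95.8% OPS figure for 02/06/16 corresponds to roughly an hour, consistent with a transitory BDII problem.

```python
def downtime_hours(availability_percent, period_hours=24.0):
    """Convert a daily availability percentage into hours of downtime."""
    return period_hours * (1.0 - availability_percent / 100.0)

# The 95.8% OPS figure for 02/06/16 corresponds to about an hour of downtime.
print("%.2f hours" % downtime_hours(95.8))   # prints "1.01 hours"
```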