RAL Tier1 Operations Report for 25th January 2017
Review of Issues during the week 18th to 25th January 2017.
- We are still seeing SAM SRM test failures for CMS. These are due to the total load on the instance.
- LHCb have reported a problem accessing some files, and a GGUS ticket was opened about this. A problem was found with the xroot configuration on one disk server. This may now be solved (we are awaiting confirmation).
- Some disk errors had been seen on hypervisors in our High Availability Hyper-V 2012 cluster. Errors were found on two of the network connections supporting the iSCSI links to the disk array. These connections were swapped on Monday (16th Jan) during an unscheduled 'warning'. However, this has not resolved the problem.
- The tape migration queues for Atlas, CMS and LHCb grew from around 6pm on Saturday until Monday morning. It appears a tape was stuck in one drive, and other drives became blocked when asked to mount the same tape.
- On Tuesday (17th Jan) there was a problem with the ALICE Castor instance: many xrootd connections to disk servers but not much activity. At the end of the afternoon the number of ALICE batch jobs was cut back (to 500) as a temporary measure to reduce the load on the instance.
Resolved Disk Server Issues
- GDSS772 (LHCbDst - D1T0) failed on Thursday evening, 19th Jan. Back read-only the following afternoon. A disk drive was replaced. Two files reported lost.
- GDSS667 (AtlasScratchDisk - D1T0) failed on Sunday morning (22nd Jan). It was returned to service read-only the following afternoon. One drive with a lot of media errors was replaced. Eleven files reported lost to Atlas.
- GDSS776 (LHCbDst - D1T0) had problems after the reboots to pick up security patches on Monday (23rd). It was returned to service the following day.
Current operational status and issues
- LHCb are seeing a low but persistent rate of failures when copying the results of batch jobs to Castor. A further problem sometimes occurs when these (failed) writes are then attempted to storage at other sites.
Ongoing Disk Server Issues
- GDSS780 (LHCbDst - D1T0) crashed at around 8am this morning (Wed 25th Jan). The system is under investigation.
Notable Changes made since the last meeting.
- Changes made to the publishing of CPU capacity to the information system (GLUE1 & GLUE2).
- Migration of LHCb data from 'C' to 'D' tapes is ongoing, now a little over 80% done, with around 170 of the 1000 tapes still to do.
- The site-BDIIs have now been put fully behind the load balancers.
- The (internal) Castor "repack" instance was upgraded to Castor version 2.1.15 on Monday (16th). The upgrade of the LHCb stager was ongoing at the time of the meeting.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
Castor CMS instance | SCHEDULED | OUTAGE | 31/01/2017 10:00 | 31/01/2017 16:00 | 6 hours | Castor 2.1.15 Upgrade. Only affecting CMS instance. (CMS stager component being upgraded).
Castor GEN instance | SCHEDULED | OUTAGE | 26/01/2017 10:00 | 26/01/2017 16:00 | 6 hours | Castor 2.1.15 Upgrade. Only affecting GEN instance. (GEN stager component being upgraded).
Castor Atlas instance | SCHEDULED | OUTAGE | 24/01/2017 10:00 | 24/01/2017 16:00 | 6 hours | Castor 2.1.15 Upgrade. Only affecting Atlas instance. (Atlas stager component being upgraded).
Castor LHCb instance | SCHEDULED | OUTAGE | 18/01/2017 10:00 | 18/01/2017 16:00 | 6 hours | Castor 2.1.15 Upgrade. Only affecting LHCb instance. (LHCb stager component being upgraded).
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Pending - but not yet formally announced:
- Merge AtlasScratchDisk into larger Atlas disk pool.
Listing by category:
- Castor:
  - Update to Castor version 2.1.15. Dates announced via GOC DB for early 2017 and ongoing.
  - Update SRMs to new version, including updating to SL6. This will be done after the Castor 2.1.15 update.
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
srm-lhcb.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 18/01/2017 10:00 | 18/01/2017 16:00 | 6 hours | Castor 2.1.15 Upgrade. Only affecting LHCb instance. (LHCb stager component being upgraded).
Most services (not Castor) | UNSCHEDULED | WARNING | 16/01/2017 13:30 | 16/01/2017 14:30 | 1 hour | Warning on site services during short intervention on system supporting VMs.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
125856 | Green | Top Priority | Waiting Reply | 2017-01-06 | 2017-01-18 | LHCb | Permission denied for some files
124876 | Amber | Less Urgent | On Hold | 2016-11-07 | 2017-01-01 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2016-12-07 | | CASTOR at RAL not publishing GLUE 2. We looked at this as planned in December (report).
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 808); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment
18/01/17 | 100 | 100 | 100 | 94 | 72 | N/A | 100 | LHCb: Castor 2.1.15 update; CMS: SRM test failures - User timeout
19/01/17 | 100 | 100 | 100 | 97 | 87 | N/A | 100 | LHCb: Problems after 2.1.15 upgrade; CMS: SRM test failures - User timeout
20/01/17 | 100 | 100 | 100 | 87 | 100 | N/A | 100 | CMS: SRM test failures - User timeout
21/01/17 | 100 | 100 | 100 | 90 | 100 | N/A | 100 | CMS: SRM test failures - User timeout
22/01/17 | 100 | 100 | 100 | 92 | 100 | N/A | 100 | CMS: SRM test failures - User timeout
23/01/17 | 100 | 100 | 100 | 100 | 92 | N/A | 100 | LHCb: Patching nodes in LHCb Castor instance. (New kernel)
24/01/17 | 100 | 100 | 100 | ?? | 100 | N/A | 100 |