RAL Tier1 Operations Report for 28th June 2017
Review of Issues during the week 21st to 28th June 2017.
- There were severe problems with the Atlas SRMs at the end of last week. On Thursday afternoon one of the SRM back-end daemon processes started crashing on each of the Atlas SRMs, and a greatly increased rate of SRM requests was also seen. Work continued through the remainder of Thursday and Friday but failed to resolve the problem. On Sunday a correction was applied to the Atlas SRMs to filter out double slashes ("//") in the incoming requests, re-instating a fix that had been applied to the old SRMs back in 2014. Since then the Atlas SRMs have worked OK. Work is going on to confirm that this really is the solution before the fix is applied to the SRMs for the other Castor instances. The high SRM request rate seen was most likely the Atlas software querying the status of files and transfers during the problem. Atlas Castor was declared down in the GOC DB from Friday afternoon until Sunday morning, when the fix was applied.
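The fix is essentially a normalisation of the request path. As a rough illustration (a hypothetical Python helper, not the actual SRM daemon code; the example paths are illustrative only), filtering out double slashes amounts to:

    import re

    def normalise_request_path(path: str) -> str:
        # Collapse any run of slashes into a single one, so that
        # '/castor/ads.rl.ac.uk//atlasdatadisk/f' is treated the same
        # as '/castor/ads.rl.ac.uk/atlasdatadisk/f'.
        return re.sub(r"/{2,}", "/", path)

Any real implementation would sit in the SRM request-handling path before requests reach the back-end daemon.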
Resolved Disk Server Issues
Current operational status and issues
- We are still seeing failures of the CMS SAM tests against the SRM, which are affecting our CMS availability. CMS are also looking at file access performance and have turned off "lazy-download". The CMS SRM SAM test success rate has improved since the Castor 2.1.16 upgrade on the 25th May, although it is still not 100%. It is still planned to revisit this issue now that Castor has been upgraded.
- There is a problem on the site firewall that is affecting some specific data flows. It was first investigated in connection with videoconferencing problems, but it is expected to also affect our data that flows through the firewall (such as traffic to/from worker nodes).
Ongoing Disk Server Issues
Limits on concurrent batch system jobs.
Notable Changes made since the last meeting.
- Yesterday (Tuesday 27th June) the paired link to R26 was switched from 2*10Gb/s to 2*40Gb/s.
- This morning the OPN link to CERN was increased from 2*10Gb/s to 3*10Gb/s.
Advance warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Pending - but not yet formally announced:
- Firmware updates in OCF 14 disk servers.
- Upgrade the FTS3 service to a version that will no longer support the SOAP interface.
- Increase the number of placement groups in the Atlas Echo CEPH pool.
Listing by category:
- Castor:
- Move to generic Castor headnodes.
- Merge AtlasScratchDisk into larger Atlas disk pool.
- Echo:
- Increase the number of placement groups in the Atlas Echo CEPH pool. (A sketch of the operation follows this listing.)
- Networking:
- Enable the first services on the production network with IPv6, now that the addressing scheme has been agreed. (Perfsonar is already working over IPv6.)
- Services:
- The production FTS needs updating; the new version will no longer support the SOAP interface. (The "test" FTS, used by Atlas, has already been upgraded.)
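For the Echo placement-group increase listed above, the change is a standard Ceph pool operation. A minimal sketch, assuming admin credentials on a Ceph monitor node and using an illustrative pool name and target count (the real values are not stated here):

    import subprocess

    def increase_placement_groups(pool: str, pg_num: int) -> None:
        # Raise the placement-group count; Ceph only allows pg_num
        # to increase, never decrease.
        subprocess.run(
            ["ceph", "osd", "pool", "set", pool, "pg_num", str(pg_num)],
            check=True)
        # pgp_num must be raised to match before data rebalances
        # onto the new placement groups.
        subprocess.run(
            ["ceph", "osd", "pool", "set", pool, "pgp_num", str(pg_num)],
            check=True)

    # Example (pool name and count are hypothetical):
    # increase_placement_groups("atlas", 4096)

In practice such increases are done in steps, to limit the rebalancing load on the cluster.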
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
srm-atlas.gridpp.rl.ac.uk | UNSCHEDULED | OUTAGE | 23/06/2017 16:00 | 25/06/2017 11:59 | 1 day, 19 hours and 59 minutes | Ongoing problems with Atlas SRM nodes - GGUS 129098
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
129098 | Green | Urgent | In Progress | 2017-06-22 | 2017-06-27 | Atlas | RAL-LCG2: source / destination file transfer errors ("Connection timed out")
129072 | Green | Less Urgent | In Progress | 2017-06-20 | 2017-06-20 | | Please remove vo.londongrid.ac.uk from RAL-LCG2 resources
129059 | Green | Very Urgent | In Progress | 2017-06-20 | 2017-06-27 | LHCb | Timeouts on RAL Storage
128991 | Green | Less Urgent | In Progress | 2017-06-16 | 2017-06-16 | Solid | solidexperiment.org CASTOR tape support
127612 | Red | Alarm | In Progress | 2017-04-08 | 2017-06-27 | LHCb | CEs at RAL not responding
127597 | Red | Urgent | On Hold | 2017-04-07 | 2017-06-14 | CMS | Check networking and xrootd RAL-CERN performance
124876 | Red | Less Urgent | On Hold | 2016-11-07 | 2017-01-01 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2017-05-10 | | CASTOR at RAL not publishing GLUE 2.
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC Echo = Atlas Echo (Template 841); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas Echo | Atlas HC | Atlas HC Echo | CMS HC | Comment
21/06/17 | 100 | 100 | 100 | 99 | 100 | 100 | 97 | 100 | N/A | Single SRM test failure: unable to issue PrepareToPut request to Castor.
22/06/17 | 100 | 100 | 88 | 95 | 100 | 100 | 14 | 98 | N/A | Atlas: SRM problems. CMS: a few "User timeout over" errors on the SRM SAM tests and a few "held job" errors on the ARC-CEs.
23/06/17 | 100 | 100 | 67 | 89 | 100 | 100 | 28 | 100 | N/A | Atlas: SRM problems. CMS: problems with a full cmsDisk.
24/06/17 | 100 | 100 | 100 | 61 | 100 | 100 | 0 | 100 | N/A | CMS: problems with a full cmsDisk.
25/06/17 | 100 | 100 | 100 | 71 | 100 | 100 | 100 | 100 | N/A | CMS: problems with a full cmsDisk.
26/06/17 | 100 | 100 | 100 | 97 | 100 | 100 | 100 | 100 | 100 | SRM test failures (user timeout).
27/06/17 | 100 | 100 | 100 | 98 | 100 | 100 | 100 | 100 | 100 | Three SRM test failures: two timeouts, one "Error while searching for end of reply".
- EGI have announced withdrawal of support for the WMS at the end of 2017.
- The capacity storage nodes (to go into Echo) have now had two weeks of acceptance testing.
- Data is now shipping from the Leicester Dirac site.