RAL Tier1 Operations Report for 5th July 2017
Review of Issues during the week 28th June to 5th July 2017.
- There have been some intermittent problems accessing the Atlas Scratch Disk. We received a GGUS ticket from Atlas, but the problem had resolved itself beforehand. However, it has since recurred, and at present the cause is unknown.
- There were problems with xrootd redirection for CMS from last Thursday into Friday. The usual fix, restarting the services, did not work; a full disk area on one of the nodes was found to be the cause. (A sketch of a simple disk-usage check is given after this list.)
- The Castor Gen instance has been failing the OPS SRM tests since Friday (30th June). This appears to be because the tests have started trying to access an area that was decommissioned around a year ago.
- I have not been reporting data loss incidents in this report. Picking this back up for the last week:
  - Following draining of disk servers we reported:
    - When GDSS648 was drained, one file was lost from LHCbUser.
    - When GDSS694 was drained, two files were lost for CMS.
  - One corrupted file on CMSDisk was reported from GDSS801. The file appears to have been written correctly, but a later read failed owing to corruption.
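A minimal sketch of the kind of disk-usage check that might catch a filling partition (such as the one behind the xrootd redirection failure) before it breaks a service. The watched paths and the 90% threshold are illustrative assumptions, not the monitoring actually deployed here.

#!/usr/bin/env python3
# Sketch only: warn when any watched filesystem is nearly full.
import shutil

WATCHED_PATHS = ["/var/log", "/var/spool/xrootd", "/tmp"]  # hypothetical areas
THRESHOLD = 0.90  # warn when a filesystem is more than 90% used

def check(paths=WATCHED_PATHS, threshold=THRESHOLD):
    """Return (path, fraction_used) for every path above the threshold."""
    full = []
    for path in paths:
        usage = shutil.disk_usage(path)        # named tuple: total, used, free
        fraction = usage.used / usage.total
        if fraction > threshold:
            full.append((path, fraction))
    return full

if __name__ == "__main__":
    for path, frac in check():
        print("WARNING: %s is %.0f%% full" % (path, frac * 100))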
Resolved Disk Server Issues
Current operational status and issues
- We are still seeing intermittent failures of the CMS SAM tests against the SRM, which are affecting our CMS availability figures. CMS are also looking at file access performance and have turned off "lazy-download". The CMS SRM SAM test success rate has improved since the Castor 2.1.16 upgrade on 25th May, although it is still not 100%. It is still planned to revisit this issue now that Castor has been upgraded.
- There is a problem on the site firewall that is affecting some specific data flows. It was first investigated in connection with videoconferencing problems. It is expected to be affecting our data flows that pass through the firewall (such as traffic to/from worker nodes).
Ongoing Disk Server Issues
Limits on concurrent batch system jobs.
Notable Changes made since the last meeting.
- The number of placement groups in the Echo CEPH Atlas pool is being steadily increased, in preparation for the increase in storage capacity when new hardware is added. (A sketch of this kind of stepped increase is given below.)
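For illustration, a hedged sketch of raising pg_num/pgp_num on a Ceph pool in small steps, which is the general approach when growing a pool ahead of new capacity. The pool name "atlas", the step size, the target and the pause are assumptions, not the values used on Echo, and the actual procedure there may differ.

#!/usr/bin/env python3
# Sketch only: step a Ceph pool's placement group count up gradually to
# limit rebalancing load. Assumes the standard "ceph osd pool get/set" CLI.
import subprocess
import time

POOL = "atlas"       # hypothetical pool name
TARGET_PGS = 4096    # hypothetical target
STEP = 256           # modest increments between pauses

def current_pg_num(pool):
    out = subprocess.check_output(["ceph", "osd", "pool", "get", pool, "pg_num"])
    return int(out.decode().split(":")[1])   # output looks like "pg_num: 2048"

def raise_pgs(pool, target, step, pause=600):
    pgs = current_pg_num(pool)
    while pgs < target:
        pgs = min(pgs + step, target)
        subprocess.check_call(["ceph", "osd", "pool", "set", pool, "pg_num", str(pgs)])
        # pgp_num must follow pg_num before data actually rebalances
        subprocess.check_call(["ceph", "osd", "pool", "set", pool, "pgp_num", str(pgs)])
        time.sleep(pause)   # let the cluster settle between steps

if __name__ == "__main__":
    raise_pgs(POOL, TARGET_PGS, STEP)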
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Pending - but not yet formally announced:
- Firmware updates in the OCF 14 disk servers. The likely timescale is within the next fortnight.
- Upgrade the FTS3 service to a version that will no longer support the SOAP interface.
- Increase the number of placement groups in the Atlas Echo CEPH pool.
Listing by category:
- Castor:
- Move to generic Castor headnodes.
- Merge AtlasScratchDisk into larger Atlas disk pool.
- Echo:
- Increase the number of placement groups in the Atlas Echo CEPH pool.
- Networking:
- Enable first services on production network with IPv6 now that the addressing scheme has been agreed. (Perfsonar already working over IPv6).
- Services:
- The production FTS needs updating; the new version will no longer support the SOAP interface. (The "test" FTS, used by Atlas, has already been upgraded.) A hedged sketch of querying the REST interface instead is given after this list.
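As an illustration of life without SOAP, a hedged sketch of querying an FTS3 service over its REST interface. The host name is made up, and the port 8446, the "/jobs" endpoint, the proxy path and the returned field names are assumptions based on typical FTS3 REST deployments; the FTS3 documentation for the service actually deployed should be checked.

#!/usr/bin/env python3
# Sketch only: list the caller's transfer jobs via the FTS3 REST interface,
# authenticating with an X.509 grid proxy as the client certificate.
import requests

FTS_REST = "https://fts3.example.ac.uk:8446"   # hypothetical endpoint
PROXY = "/tmp/x509up_u1000"                    # hypothetical proxy file

def list_jobs():
    """Fetch the caller's transfer jobs as JSON (assumed /jobs endpoint)."""
    response = requests.get(
        FTS_REST + "/jobs",
        cert=(PROXY, PROXY),                          # proxy as cert and key
        verify="/etc/grid-security/certificates",     # CA directory, if present
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    for job in list_jobs():
        print(job.get("job_id"), job.get("job_state"))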
Entries in GOC DB starting since the last report.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
129342 | Green | Urgent | Assigned to someone else | 2017-07-04 | 2017-07-04 | | [Rod Dashboard] Issue detected : org.sam.SRM-Put-ops@srm-mice.gridpp.rl.ac.uk
129299 | Green | Urgent | In Progress | 2017-06-30 | 2017-07-03 | CMS | bad data was encountered errors in some transfers from RAL
129228 | Green | Urgent | Waiting for Reply | 2017-06-28 | 2017-06-30 | CMS | Low HC xrootd success rates at T1_UK_RAL
129211 | Green | Urgent | In Progress | 2017-06-27 | 2017-06-29 | Atlas | FR TOKYO-LCG2: DDM transfer errors with "A system call failed: Connection refused"
129098 | Green | Urgent | In Progress | 2017-06-22 | 2017-06-27 | Atlas | RAL-LCG2: source / destination file transfer errors ("Connection timed out")
129059 | Green | Very Urgent | Waiting for Reply | 2017-06-20 | 2017-06-28 | LHCb | Timeouts on RAL Storage
128991 | Green | Less Urgent | In Progress | 2017-06-16 | 2017-06-16 | Solid | solidexperiment.org CASTOR tape support
127597 | Red | Urgent | On Hold | 2017-04-07 | 2017-06-14 | CMS | Check networking and xrootd RAL-CERN performance
124876 | Red | Less Urgent | On Hold | 2016-11-07 | 2017-01-01 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2017-05-10 | | CASTOR at RAL not publishing GLUE 2.
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC Echo = Atlas Echo (Template 841); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas Echo | Atlas HC | Atlas HC Echo | CMS HC | Comment
28/06/17 | 100 | 100 | 100 | 99 | 100 | 100 | 100 | 100 | 100 | Single SRM test failure with User Timeout.
29/06/17 | 100 | 100 | 98 | 100 | 100 | 100 | 99 | 100 | 100 | Single SRM test failure '[SRM_FAILURE] Unable to receive header'.
30/06/17 | 100 | 100 | 100 | 100 | 100 | 100 | 96 | 100 | 100 |
01/07/17 | 100 | 100 | 100 | 99 | 100 | 100 | 100 | 100 | 100 | Single SRM test failure with User Timeout.
02/07/17 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
03/07/17 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 98 | 100 |
04/07/17 | 100 | 100 | 100 | 97 | 100 | 100 | 100 | 100 | 92 | Two SRM test failures and a brief set of failures of Job submission tests.
- We are working to get jobs for LSST and LIGO running here. The VOs have been enabled before, but issues with actually running jobs successfully are being worked through.