RAL Tier1 Operations Report for 12th July 2017
Review of Issues during the week 5th to 12th July 2017.
- There have been intermittent problems accessing the Atlas Scratch Disk. We received a GGUS ticket from Atlas, but the problem had resolved itself before the ticket arrived. However, it has since recurred, and at present the cause is unknown.
- There were problems with xroot redirection for CMS from last Thursday into Friday. The usual fix - restarting the services - didn't work; a full disk area on one of the nodes was found to be the cause.
- The Castor Gen instance has been failing OPS SRM tests since Friday (30th June). This appears to be because the tests have started trying to access an area that was decommissioned around a year ago.
- I have not been reporting data loss incidents in this report. Picking this back up for the last week:
  - Following the draining of disk servers we reported:
    - When GDSS648 was drained, one file was lost from LHCbUser.
    - When GDSS694 was drained, two files were lost for CMS.
  - One corrupted file from CMSDisk was reported from GDSS801. The file appears to have been written correctly, but a later read failed owing to corruption. (A checksum-verification sketch follows this list.)
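As an aside on the corruption case above: Castor stores an adler32 checksum for each file, so a copy can be re-verified against the catalogue value after a transfer. The sketch below is illustrative only; the file path and expected checksum are hypothetical placeholders, not values from this incident.

    # Minimal sketch: verify a copied file against its catalogue checksum.
    # The path and expected value are hypothetical placeholders.
    import zlib

    def adler32_of(path, chunk_size=1 << 20):
        """Compute the adler32 checksum of a file, reading it in chunks."""
        value = 1  # adler32 seed value
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                value = zlib.adler32(chunk, value)
        return format(value & 0xFFFFFFFF, "08x")

    expected = "0a1b2c3d"  # checksum from the catalogue (hypothetical)
    actual = adler32_of("/tmp/copied_file")
    if actual != expected:
        print("corruption detected: %s != %s" % (actual, expected))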
Resolved Disk Server Issues
Current operational status and issues
- We are still seeing a rate of failures of the CMS SAM tests against the SRM, and these are affecting our CMS availabilities. CMS are also looking at file access performance and have turned off "lazy-download". The CMS SRM SAM test success rate has improved since the Castor 2.1.16 upgrade on the 25th May, although it is still not 100%. It is still planned to revisit this issue now that Castor has been upgraded.
- There is a problem with the site firewall that is affecting some specific data flows. It was first investigated in connection with videoconferencing problems, and it is expected to be affecting our data that flows through the firewall (such as traffic to/from worker nodes).
Ongoing Disk Server Issues
Limits on concurrent batch system jobs.
Notable Changes made since the last meeting.
- The number of placement groups in the Echo CEPH Atlas pool continues to be increased, in preparation for the increase in storage capacity when new hardware is added. (A sketch of the general procedure follows below.)
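For context, placement-group counts in Ceph are raised in steps, with pgp_num following pg_num so that data actually rebalances; large jumps cause heavy backfill. The sketch below illustrates the general pattern only; the pool name, target and step size are assumptions, not the actual Echo settings.

    # Hedged sketch of stepping up placement groups on a Ceph pool.
    # Pool name, target and step size are assumptions for illustration.
    import subprocess

    POOL = "atlas"   # hypothetical pool name
    TARGET = 2048    # hypothetical final pg count
    STEP = 256       # raise in modest steps to limit rebalancing load

    def current_pg_num(pool):
        # "ceph osd pool get <pool> pg_num" prints e.g. "pg_num: 1024"
        out = subprocess.check_output(["ceph", "osd", "pool", "get", pool, "pg_num"])
        return int(out.decode().split()[-1])

    pg = current_pg_num(POOL)
    while pg < TARGET:
        pg = min(pg + STEP, TARGET)
        subprocess.check_call(["ceph", "osd", "pool", "set", POOL, "pg_num", str(pg)])
        # pgp_num must follow pg_num before placement actually changes;
        # in practice one waits for the cluster to settle between steps.
        subprocess.check_call(["ceph", "osd", "pool", "set", POOL, "pgp_num", str(pg)])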
Advance warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Pending - but not yet formally announced:
- Firmware updates on the OCF 14 disk servers. These will be done next week.
- Upgrade the FTS3 service to a version that will no longer support the SOAP interface. The SOAP interface will be disabled on the 17th July. (A sketch of an equivalent REST query follows this list.)
- Increase the number of placement groups in the Atlas Echo CEPH pool.
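On the FTS3 point above: once SOAP is disabled, the REST interface remains, and a client can talk to it directly over HTTPS with a grid proxy. A minimal sketch, assuming a hypothetical endpoint and proxy path (the real RAL service name and port may differ):

    # Hedged sketch: query the FTS3 REST interface instead of SOAP.
    # Endpoint, port and proxy path are assumptions for illustration.
    import requests

    FTS_REST = "https://fts3.example.gridpp.rl.ac.uk:8446"  # hypothetical endpoint
    PROXY = "/tmp/x509up_u500"                              # hypothetical grid proxy

    # /whoami returns the identity FTS3 derives from the client credential
    resp = requests.get(FTS_REST + "/whoami",
                        cert=PROXY,
                        verify="/etc/grid-security/certificates")
    resp.raise_for_status()
    print(resp.json())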
Listing by category:
- Castor:
  - Move to generic Castor headnodes.
  - Merge AtlasScratchDisk into the larger Atlas disk pool.
- Echo:
  - Increase the number of placement groups in the Atlas Echo CEPH pool.
- Networking:
  - Enable the first services on the production network with IPv6, now that the addressing scheme has been agreed. (Perfsonar is already working over IPv6; a basic reachability check is sketched after this list.)
- Services:
  - The production FTS needs updating; the updated version will no longer support the SOAP interface. (The "test" FTS, used by Atlas, has already been upgraded.)
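Ahead of the IPv6 enablement listed above, a simple readiness check is to confirm that a service both resolves to an AAAA record and accepts connections over IPv6. A minimal sketch, with a placeholder host and port:

    # Minimal sketch: check that a host is reachable over IPv6.
    # Host and port are placeholders, not actual RAL services.
    import socket

    HOST = "perfsonar.example.org"  # hypothetical host
    PORT = 443

    # Restricting getaddrinfo to AF_INET6 returns only AAAA results;
    # it raises socket.gaierror if the host has none.
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            HOST, PORT, socket.AF_INET6, socket.SOCK_STREAM):
        with socket.socket(family, socktype, proto) as sock:
            sock.settimeout(5)
            sock.connect(addr)
            print("IPv6 connect OK:", addr[0])
            break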
Entries in GOC DB starting since the last report.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
129342 | Green | Urgent | In Progress | 2017-07-04 | 2017-07-10 | | [Rod Dashboard] Issue detected : org.sam.SRM-Put-ops@srm-mice.gridpp.rl.ac.uk
129059 | Yellow | Very Urgent | Waiting for Reply | 2017-06-20 | 2017-06-28 | LHCb | Timeouts on RAL Storage
128991 | Green | Less Urgent | In Progress | 2017-06-16 | 2017-07-05 | Solid | solidexperiment.org CASTOR tape support
127597 | Red | Urgent | On Hold | 2017-04-07 | 2017-06-14 | CMS | Check networking and xrootd RAL-CERN performance
124876 | Red | Less Urgent | On Hold | 2016-11-07 | 2017-01-01 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2017-07-06 | | CASTOR at RAL not publishing GLUE 2.
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC Echo = Atlas Echo (Template 841); CMS HC = CMS HammerCloud. All availabilities in the table below are percentages.
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas Echo | Atlas HC | Atlas HC Echo | CMS HC | Comment
05/07/17 | 100 | 100 | 100 | 98 | 100 | 100 | 100 | 100 | 100 | Few SRM test failures (mostly User timeout)
06/07/17 | 100 | 100 | 100 | 97 | 100 | 100 | 97 | 100 | 100 | Few SRM test failures (User timeout)
07/07/17 | 100 | 100 | 100 | 99 | 100 | 100 | 100 | 99 | 100 | Single SRM test failure (User timeout)
08/07/17 | 100 | 100 | 87 | 89 | 100 | 100 | 91 | 100 | 99 | Atlas: problem on two of the four SRMs; on-call team intervened. CMS: mix of SRM test failures.
09/07/17 | 100 | 100 | 100 | 92 | 100 | 100 | 100 | 100 | 99 | SRM test failures
10/07/17 | 100 | 100 | 83 | 95 | 92 | 100 | 75 | 100 | 99 | Atlas: failures of the SRM SAM test during the evening; on-call team intervened. CMS: block of failed tests for the CEs and sporadic SRM test failures. LHCb: some SRM test failures ('Communication error' and 'SRM_FILE_BUSY')
11/07/17 | 100 | 100 | 100 | 97 | 96 | 100 | 100 | 100 | 100 | CMS: SRM test failures. LHCb: single SRM test failure: could not open connection to srm-lhcb.gridpp.rl.ac.uk:8443