Revision as of 11:20, 16 August 2017
RAL Tier1 Operations Report for 16th August 2017
Review of Issues during the week 9th to 16th August 2017.
- There was a problem with Atlas Castor during the afternoon and early evening of Thursday 10th August. Atlas Castor was restarted and the problem cleared during the evening; however, the cause is not understood. We have previously had some problems with the Atlas Castor SRMs, but the symptoms of this failure appeared different to those.
Resolved Disk Server Issues
- GDSS753 (AtlasDataDisk - D1T0) failed in the early hours of Thursday 10th August and was returned to production early on Friday afternoon (11th). Following a disk drive failure, the RAID card attempted a rebuild, which also failed (either the spare drive was bad, or the failed drive came back online and initially appeared good).
Current operational status and issues
- We are still seeing failures of the CMS SAM tests against the SRM, which affect our CMS availability figures. CMS are also looking at file access performance and have turned off "lazy-download". The CMS SRM SAM test success rate has improved since the Castor 2.1.16 upgrade on 25th May, although it is still not 100%. Our investigations are ongoing.
- A problem on the site firewall is affecting some specific data flows, including data that passes through the firewall (such as to/from the worker nodes). Discussions have been taking place with the vendor.
Ongoing Disk Server Issues
Limits on concurrent batch system jobs.
Notable Changes made since the last meeting.
- On Monday (7th August) the AtlasScratchDisk was merged into the larger AtlasDataDisk pool in Castor.
- The planned increases in the number of placement groups in the Echo CEPH Atlas pool have been completed. The remaining third of the 2015 storage purchases has been placed into Echo, and the process of moving data so that this hardware is used has been started.
- All squid nodes are now IPv4/IPv6 dual stack.
- The "Test" FTS3 instance (used by Atlas) was updated to version 3.6.10. This was an emergency update: it was the first server to reach 2 billion transfers and, because an internal 32-bit integer was used to count them, it stopped working completely at that point.
- Power work was carried out in building R26 (the Atlas building) over the weekend of 29/30 July. This had no impact on our operational services.
- There was a successful UPS/Generator load test this morning (9th Aug). These are done quarterly and this was the first regular test since the building UPS was replaced. (It had been tested shortly after installation).
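The FTS3 counter failure above is a classic signed 32-bit overflow: such a counter tops out at 2^31 - 1 = 2,147,483,647, just over 2 billion, so an auto-incrementing transfer ID stored in a 32-bit field stops being assignable at roughly the point described. A minimal sketch of the failure mode (a generic illustration only; the function name and behaviour are assumptions, not FTS3's actual code):

```python
# Illustration only: why a signed 32-bit transfer counter fails near
# 2 billion entries. Generic sketch, not FTS3's actual implementation.

INT32_MAX = 2**31 - 1  # 2,147,483,647 -- largest signed 32-bit value


def next_transfer_id(current_id: int) -> int:
    """Simulate incrementing a transfer ID stored in a signed 32-bit field."""
    if current_id >= INT32_MAX:
        # A real database column would error or wrap at this point; either
        # way, no new transfers can be recorded.
        raise OverflowError("32-bit transfer counter exhausted")
    return current_id + 1


print(f"Counter limit: {INT32_MAX:,}")  # Counter limit: 2,147,483,647
```

Moving the counter to a 64-bit integer (limit about 9.2 x 10^18) removes the problem for any realistic transfer volume.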
Declared in the GOC DB

Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
srm-superb.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 20/07/2017 16:00 | 30/08/2017 13:00 | 40 days, 21 hours | SuperB no longer supported on Castor storage. Retiring endpoint.
srm-hone.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 20/07/2017 16:00 | 30/08/2017 13:00 | 40 days, 21 hours | H1 no longer supported on Castor storage. Retiring endpoint.
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Pending - but not yet formally announced:
- Upgrade the FTS3 service to a version that will no longer support the SOAP interface. The SOAP interface was disabled on Monday (17th July).
- Re-distribute the data in Echo onto the 2016 capacity hardware. (Ongoing)
Listing by category:
- Castor:
- Move to generic Castor headnodes.
- Echo:
- Re-distribute the data in Echo onto the remaining 2015 capacity hardware.
- Networking
- Enable first services on production network with IPv6 now that the addressing scheme has been agreed. (Perfsonar and all squids now working over IPv6).
- Services
- The production FTS will be updated now the requirement to support the deprecated SOAP interface has been removed.
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
srm-hone.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 20/07/2017 16:00 | 30/08/2017 13:00 | 40 days, 21 hours | H1 no longer supported on Castor storage. Retiring endpoint.
srm-superb.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 20/07/2017 16:00 | 30/08/2017 13:00 | 40 days, 21 hours | SuperB no longer supported on Castor storage. Retiring endpoint.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
129883 | Green | Urgent | In Progress | 2017-08-01 | 2017-08-03 | CMS | Low HC xrootd success rates at T1_UK_RAL
128991 | Green | Less Urgent | On Hold | 2017-06-16 | 2017-07-20 | Solid | solidexperiment.org CASTOR tape support
127597 | Red | Urgent | On Hold | 2017-04-07 | 2017-06-14 | CMS | Check networking and xrootd RAL-CERN performance
124876 | Red | Less Urgent | On Hold | 2016-11-07 | 2017-01-01 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2017-07-06 | | CASTOR at RAL not publishing GLUE 2.
Availability Report

Day | OPS | Alice | Atlas | CMS | LHCb | Atlas Echo | Comment
09/08/17 | 100 | 100 | 100 | 92 | 100 | 100 | Sporadic SRM test failures with "user timeout".
10/08/17 | 100 | 100 | 77 | 76 | 100 | 100 | Atlas Castor problems during the evening; CMS: sporadic SRM test failures with "user timeout".
11/08/17 | 100 | 100 | 100 | 83 | 100 | 100 | Sporadic SRM test failures with "user timeout".
12/08/17 | 100 | 100 | 100 | 81 | 100 | 100 | Sporadic SRM test failures with "user timeout".
13/08/17 | 100 | 100 | 100 | 86 | 100 | 100 | Sporadic SRM test failures with "user timeout".
14/08/17 | 100 | 100 | 100 | 84 | 100 | 100 | A lot of SRM test failures with "user timeout".
15/08/17 | 96.2 | 100 | 100 | 97 | 100 | 100 | CMS: a small number of SRM test failures with "user timeout".
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC Echo = Atlas Echo (Template 841); CMS HC = CMS HammerCloud
Day | Atlas HC | Atlas HC Echo | CMS HC | Comment
09/08/17 | 95 | 96 | 100 |
10/08/17 | 71 | 88 | 100 |
11/08/17 | 96 | 100 | 100 |
12/08/17 | 96 | 100 | 100 |
13/08/17 | 100 | 100 | 100 |
14/08/17 | 100 | 100 | 100 |
15/08/17 | 100 | 100 | 100 |