RAL Tier1 Operations Report for 31st May 2017
Review of Issues during the week 24th to 31st May 2017.
- ARP poisoning on the Tier1 network is affecting some of the monitoring used by Ceph ECHO. The cause is now believed to be understood and the fix (a setting change in the OPN router) will be applied at the earliest practical time.
- There is a problem on the site firewall that is disrupting some specific data flows. It was found during investigation of videoconferencing problems. It is not yet clear whether this is affecting any of our services.
- Atlas reported a problem deleting files in ECHO on Wednesday evening last week. This was resolved by Friday.
Resolved Disk Server Issues
- GDSS773 (LHCbDst - D1T0) crashed on Sunday (21st May). It was returned to service on Thursday morning (25th), although no fault was found during the diagnostic testing.
- GDSS804 (LHCbDst - D1T0) was taken out of production on Tuesday 23rd as it was showing memory errors. However, the memory tests then failed to find anything and it was returned to service the following day.
Current operational status and issues
- We are still seeing failures of the CMS SAM tests against the SRM. These are affecting our CMS availability figures. CMS are also looking at file access performance and have turned off "lazy-download". This will be re-addressed once we have upgraded to Castor 2.1.16.
- LHCb Castor performance has been OK since the 2.1.16 update, although it has not yet been under high load. A load test (mimicking the stripping/merging campaign) is being carried out with LHCb today (24th May).
Ongoing Disk Server Issues
- GDSS658 (AtlasScratchDisk - D1T0) crashed yesterday afternoon (30th May). It is still undergoing tests.
Limits on concurrent batch system jobs.
Notable Changes made since the last meeting.
- CMS Castor instance updated to Castor version 2.1.16-13 and the CMS SRMs also upgraded to version 2.1.16.
- For LHCb Castor the xroot manager was installed on the (LHCb) stager. This resolves a problem where a TURL returned by the SRM did not always work for xroot access owing to an incorrect hostname.
- The CEs are being migrated to use the load balancers in front of the Argus service.
- Work ongoing to enable batch access for CCP4, LIGO and MICE.
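The Argus load-balancer change above means clients reach the service through an alias rather than a fixed host. A generic sketch of listing the backend addresses an alias resolves to (the hostname and port below are placeholders for illustration, not the actual Argus endpoint):

```python
import socket

def backend_addresses(alias: str, port: int = 443) -> set[str]:
    """Resolve an alias and return the set of backend IP addresses behind it."""
    infos = socket.getaddrinfo(alias, port, proto=socket.IPPROTO_TCP)
    return {info[4][0] for info in infos}

# Placeholder alias for illustration; a real deployment would query the
# load-balanced service hostname here.
print(backend_addresses("localhost"))
```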
| Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
| Castor GEN instance (srm-alice, srm-biomed, srm-dteam, srm-ilc, srm-mice, srm-na62, srm-pheno, srm-snoplus, srm-t2k) | SCHEDULED | OUTAGE | 31/05/2017 10:00 | 31/05/2017 15:00 | 5 hours | Upgrade of Castor GEN instance to version 2.1.16. |
Advance warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Pending - but not yet formally announced:
- Update Castor (including SRMs) to version 2.1.16. Central nameserver and LHCb and Atlas stagers done. Current plan: CMS stager and SRMs on Thursday 25th May. GEN to follow.
- Update Castor SRMs - CMS & GEN still to do. These are being done at the same time as the Castor 2.1.16 updates.
Listing by category:
- Castor:
- Update SRMs to new version, including updating to SL6.
- Update Castor to version 2.1.16 (ongoing)
- Merge AtlasScratchDisk into larger Atlas disk pool.
- Networking
- Increase OPN link to CERN from 2*10Gbit to 3*10Gbit links.
- Enable first services on production network with IPv6 now that the addressing scheme has been agreed. (Perfsonar already working over IPv6).
- Services
- Put the Argus systems behind load balancers to improve resilience.
- The production FTS needs updating. The new version will no longer support the SOAP interface. (The "test" FTS, used by Atlas, has already been upgraded.)
Entries in GOC DB starting since the last report.
| Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
| Castor GEN instance (srm-alice, srm-biomed, srm-dteam, srm-ilc, srm-mice, srm-na62, srm-pheno, srm-snoplus, srm-t2k) | SCHEDULED | OUTAGE | 31/05/2017 10:00 | 31/05/2017 15:00 | 5 hours | Upgrade of Castor GEN instance to version 2.1.16. |
| srm-cms-disk.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 25/05/2017 10:00 | 25/05/2017 13:03 | 3 hours and 3 minutes | Upgrade of CMS Castor instance to version 2.1.16. |
Open GGUS Tickets (Snapshot during morning of meeting)
| GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
| 128398 | Green | Top Priority | Waiting for Reply | 2017-05-18 | 2017-05-24 | LHCb | File cannot be opened using xroot at RAL |
| 128308 | Green | Urgent | In Progress | 2017-05-14 | 2017-05-15 | CMS | T1_UK_RAL in error for about 6 hours |
| 127967 | Green | Less Urgent | On Hold | 2017-04-27 | 2017-04-28 | MICE | Enabling pilot role for mice VO at RAL-LCG2 |
| 127612 | Yellow | Alarm | In Progress | 2017-04-08 | 2017-05-19 | LHCb | CEs at RAL not responding |
| 127597 | Yellow | Urgent | Waiting for Reply | 2017-04-07 | 2017-05-16 | CMS | Check networking and xrootd RAL-CERN performance |
| 127240 | Red | Urgent | In Progress | 2017-03-21 | 2017-05-15 | CMS | Staging Test at UK_RAL for Run2 |
| 124876 | Red | Less Urgent | On Hold | 2016-11-07 | 2017-01-01 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk |
| 117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2017-05-10 | | CASTOR at RAL not publishing GLUE 2. |
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC ECHO = Atlas ECHO (Template 841); CMS HC = CMS HammerCloud
| Day | OPS | Alice | Atlas | CMS | LHCb | Atlas ECHO | Atlas HC | Atlas HC ECHO | CMS HC | Comment |
| 24/05/17 | 100 | 100 | 100 | 97 | 100 | 100 | 99 | 99 | 100 | SRM test failures (user timeout) |
| 25/05/17 | 100 | 100 | 100 | 87 | 100 | 100 | 100 | 100 | 98 | SRM test failures (user timeout) |
| 26/05/17 | 100 | 100 | 98 | 98 | 100 | 100 | 100 | 100 | 100 | SRM test failures. Atlas: one error 'User belonging to VO not authorized to access space token ATLASDATADISK'; CMS: user timeout. |
| 27/05/17 | 100 | 100 | 100 | 98 | 100 | 100 | 100 | 100 | 100 | SRM test failures (user timeout) |
| 28/05/17 | 100 | 100 | 100 | 97 | 100 | 100 | 100 | 100 | 100 | SRM test failures (user timeout) |
| 29/05/17 | 100 | 100 | 100 | 98 | 100 | 100 | 98 | 100 | 100 | SRM test failures (user timeout) |
| 30/05/17 | 100 | 100 | 100 | 94 | 100 | 100 | 92 | 100 | 100 | SRM test failures (user timeout) |
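The percentages in the table above reflect the fraction of SAM tests that passed each day. As a rough illustration only (a simple pass-rate; the official availability algorithm is more involved, so this is an assumption, not the production calculation):

```python
def availability(results: list[bool]) -> int:
    """Percentage of passed tests for one day, rounded to a whole percent."""
    if not results:
        return 100  # assumption: a day with no tests counts as fully available
    return round(100 * sum(results) / len(results))

# e.g. 94 passes out of 96 tests in a day:
print(availability([True] * 94 + [False] * 2))  # → 98
```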