RAL Tier1 Operations Report for 6th December 2017
Review of Issues during the week 30th November to 6th December 2017.
- The most recent Trust Anchor release included a change to the way the UK CA certificate is signed. This caused problems for some older code that checks certificates. The Tier1 systems were updated and then, on learning of problems elsewhere, downgraded. The FTS service showed problems with the upgraded certificate, and downgrading cured most (but not all) of them. The Tier1 will redo the upgrade next Tuesday (12th). We will keep a careful watch on services, such as the FTS, that may have a problem with the new UK CA certificate.
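The failure mode here is a chain-verification error in older validation code. The sketch below reproduces that style of check with a throwaway CA (all names and paths are illustrative, not the actual UK eScience CA details); the `openssl verify` call at the end is the same check that, on a grid node, would be pointed at the trust anchors under /etc/grid-security/certificates/.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Toy CA key and self-signed CA certificate (stands in for the UK CA).
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.pem \
    -subj "/CN=Toy UK eScience CA" -days 1

# Host key and certificate request, then sign the request with the toy CA.
openssl req -newkey rsa:2048 -nodes -keyout host.key -out host.csr \
    -subj "/CN=host.example.gridpp.rl.ac.uk"
openssl x509 -req -in host.csr -CA ca.pem -CAkey ca.key -CAcreateserial \
    -out host.pem -days 1

# The validation step: exits non-zero (and prints an error) if the chain
# does not verify, which is how an unsupported signature scheme surfaces.
openssl verify -CAfile ca.pem host.pem
```

Against the real trust anchors the same invocation is `openssl verify -CAfile <CA file> <host cert>`; a service that fails with the new release and passes with the old one has the problem described above.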
- There was a problem of high packet loss on Monday (4th) for traffic to/from the Tier1 that passed through the RAL core network (and firewall). The problem started at midnight and was fixed around 15:30.
- LHCb D1T0 disk space in Castor has been very full during the past week. Discussions with LHCb about the next steps are ongoing.
Current operational status and issues
Resolved Disk Server Issues
- GDSS746 (AtlasDataDisk - D1T0) - Back in production
- GDSS753 (AtlasDataDisk - D1T0) - Back in production
Ongoing Disk Server Issues
- GDSS757 (CMSDisk - D1T0) - Connection refused or timed out
Limits on concurrent batch system jobs.
Notable Changes made since the last meeting.
- The Echo allocation for ATLAS was increased to 4.1PB: they now have 4PB in datadisk and 100TB in scratchdisk. This is part of the gradual increase of their usage to 5.1PB.
- The maximum number of GridFTP connections to each Echo gateway has been increased to 200 (from 100).
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
srm-alice.gridpp.rl.ac.uk, srm-atlas.gridpp.rl.ac.uk, srm-biomed.gridpp.rl.ac.uk, srm-cert.gridpp.rl.ac.uk, srm-cms-disk.gridpp.rl.ac.uk, srm-cms.gridpp.rl.ac.uk, srm-dteam.gridpp.rl.ac.uk, srm-ilc.gridpp.rl.ac.uk, srm-mice.gridpp.rl.ac.uk, srm-minos.gridpp.rl.ac.uk, srm-na62.gridpp.rl.ac.uk, srm-pheno.gridpp.rl.ac.uk, srm-preprod.gridpp.rl.ac.uk, srm-snoplus.gridpp.rl.ac.uk, srm-solid.gridpp.rl.ac.uk, srm-t2k.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 06/12/2017 13:00 | 06/12/2017 15:00 | 2 hours | Upgrade of non-LHCb SRM to version 2.1.16-18
lcgfts3.gridpp.rl.ac.uk | SCHEDULED | WARNING | 05/12/2017 11:00 | 05/12/2017 13:00 | 2 hours | FTS update to v3.7.7
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Ongoing or Pending - but not yet formally announced:
Listing by category:
- Castor:
- Update systems (initially tape servers) to SL7, configured by Quattor/Aquilon.
- Move to generic Castor headnodes.
- Echo:
- Update to the next Ceph version ("Luminous").
- Networking:
- Extend the number of services on the production network with IPv6 dual stack. (Done for Perfsonar, FTS3, all squids and the CVMFS Stratum-1 servers.)
- Services:
- Internal:
- DNS servers will be rolled out within the Tier1 network.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
132356 | Green | Very Urgent | Waiting for Reply | 2017-12-07 | 2017-12-11 | Ops | [Rod Dashboard] Issue detected : org.nagios.GLUE2-Check@site-bdii.gridpp.rl.ac.uk
132336 | Green | Less Urgent | In Progress | 2017-12-05 | 2017-12-06 | Ops | [Rod Dashboard] Issue detected : org.nagios.GLUE2-Check@site-bdii.gridpp.rl.ac.uk
132314 | Green | Less Urgent | In Progress | 2017-12-05 | 2017-12-11 | Ops | [Rod Dashboard] Issue detected : org.nordugrid.ARC-CE-SRM-result-ops@arc-ce02.gridpp.rl.ac.uk
132222 | Green | Urgent | In Progress | 2017-11-30 | 2017-12-05 | CMS | Transfers failing to T1_UK_RAL_Disk
131840 | Green | Urgent | Waiting for Reply | 2017-11-14 | 2017-12-05 | Other | solidexperiment.org CASTOR tape copy fails
131815 | Green | Less Urgent | In Progress | 2017-11-13 | 2017-12-01 | T2K.Org | Extremely long download times for T2K files on tape at RAL
130207 | Red | Urgent | On Hold | 2017-08-24 | 2017-11-13 | MICE | Timeouts when copying MICE reco data to CASTOR
127597 | Red | Urgent | On Hold | 2017-04-07 | 2017-10-05 | CMS | Check networking and xrootd RAL-CERN performance
124876 | Red | Less Urgent | On Hold | 2016-11-07 | 2017-11-13 | Ops | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2017-11-06 | None | CASTOR at RAL not publishing GLUE 2
|
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas Echo | Comment
29/11/17 | 100 | 100 | 100 | 100 | 100 | 100 |
30/11/17 | 100 | 100 | 100 | 100 | 100 | 100 |
1/12/17 | 100 | 100 | 100 | 100 | 100 | 100 |
2/12/17 | 100 | 100 | 100 | 100 | 100 | 100 |
3/12/17 | 100 | 100 | 100 | 100 | 100 | 100 |
4/12/17 | 100 | 100 | 96 | 88 | 95 | 100 | Problems caused by packet loss across the RAL network.
5/12/17 | 100 | 100 | 100 | 100 | 76 | 100 | Non-zero return from test of voms-proxy-info on some CEs.
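The LHCb dip on 5/12 came from test jobs receiving a non-zero exit status from voms-proxy-info on some CEs. A minimal sketch of that style of probe is below; the lifetime threshold is illustrative, and the snippet is guarded so it degrades gracefully on hosts without the VOMS clients installed.

```shell
# Probe sketch: treat a non-zero exit from voms-proxy-info as a failure,
# mirroring the CE test behind the availability figure above.
if command -v voms-proxy-info >/dev/null 2>&1; then
    # --exists --valid H:M exits non-zero unless a proxy with at least
    # that much lifetime remaining is present.
    if voms-proxy-info --exists --valid 0:10 >/dev/null 2>&1; then
        echo "proxy check passed"
    else
        echo "proxy check failed (non-zero return)"
    fi
else
    echo "voms-proxy-info not installed; proxy check skipped"
fi
```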
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC Echo = Atlas Echo (Template 841); CMS HC = CMS HammerCloud
Day | Atlas HC | Atlas HC Echo | CMS HC | Comment
29/11/17 | 85 | 100 | 100 |
30/11/17 | 98 | 100 | 100 |
1/12/17 | 100 | 100 | 100 |
2/12/17 | 100 | 100 | 100 |
3/12/17 | 99 | 100 | 100 |
4/12/17 | 99 | 100 | 86 |
5/12/17 | 100 | 100 | 100 |
- EGI will withdraw support for the WMS at the end of 2017. Our WMS service will be stopped on the same timescale.
- There is a problem with Perfsonar measurements using IPv6 to nodes accessed via JANET.
- There was a discussion about how best to bring files back online from tape. The MICE VO needs a better (bulk) solution than the one they use at present.
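One direction for the bulk-recall discussion is to submit staging requests in batches rather than one per file, so the tape system can order its mounts efficiently. The dry-run sketch below batches SURLs to gfal-bringonline (from gfal2-util); the leading "echo" only prints the command, and the file list, batch size and timeout are purely illustrative, not the MICE production layout.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Hypothetical list of tape-resident files to recall, one SURL per line.
printf 'srm://srm-mice.gridpp.rl.ac.uk/castor/ads.rl.ac.uk/prod/mice/reco/run%d.tar\n' \
    1 2 3 > files.txt

# Batch the requests (here up to 100 SURLs per invocation) instead of looping
# file-by-file. On a real UI, drop the leading "echo" so gfal-bringonline
# actually issues the bring-online requests; -t sets the request timeout.
xargs -n 100 echo gfal-bringonline -t 86400 < files.txt
```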