RAL Tier1 Operations Report for 14th December 2016
Review of Issues during the week 7th to 14th December 2016.
- There was a problem with the Atlas Frontier service on Wednesday (7th). The excess load was caused by a particular Atlas user. The services on the squid systems needed several restarts through the day and evening.
- Since yesterday (Tuesday 13th Dec) we have been seeing high load on CMSTape in Castor, and are failing SAM tests as a result.
Resolved Disk Server Issues
- GDSS657 (lhcbRawRdst - D0T1) failed on Saturday morning, 10th Dec. It was put back read-only later that day and the eight files awaiting migration to tape were flushed off. The server was then taken down on Monday (12th) for further investigation, which was transparent to the VO. No faults were found and the server was returned to service yesterday (13th Dec).
Current operational status and issues
- There is a problem seen by LHCb of a low but persistent rate of failure when copying the results of batch jobs to Castor. There is also a further problem that sometimes occurs when these (failed) writes are attempted to storage at other sites.
- The intermittent, low-level, load-related packet loss that has been seen over external connections is still being monitored. The replacement of the UKLight router appears to have reduced this. However, we are closely monitoring the links to confirm that any remaining error rates are low and typical for this type of wide area link.
Ongoing Disk Server Issues
Notable Changes made since the last meeting.
- Nothing particular to report.
Declared in the GOC DB

Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
Castor CMS instance | SCHEDULED | OUTAGE | 31/01/2017 10:00 | 31/01/2017 16:00 | 6 hours | Castor 2.1.15 Upgrade. Only affecting CMS instance. (CMS stager component being upgraded).
Castor GEN instance | SCHEDULED | OUTAGE | 26/01/2017 10:00 | 26/01/2017 16:00 | 6 hours | Castor 2.1.15 Upgrade. Only affecting GEN instance. (GEN stager component being upgraded).
Castor Atlas instance | SCHEDULED | OUTAGE | 24/01/2017 10:00 | 24/01/2017 16:00 | 6 hours | Castor 2.1.15 Upgrade. Only affecting Atlas instance. (Atlas stager component being upgraded).
Castor LHCb instance | SCHEDULED | OUTAGE | 17/01/2017 10:00 | 17/01/2017 16:00 | 6 hours | Castor 2.1.15 Upgrade. Only affecting LHCb instance. (LHCb stager component being upgraded).
All Castor (all SRM endpoints) | SCHEDULED | OUTAGE | 10/01/2017 10:00 | 10/01/2017 16:00 | 6 hours | Castor 2.1.15 Upgrade. Upgrade of Nameserver component. All instances affected.
gridftp.echo.stfc.ac.uk, s3.echo.stfc.ac.uk, xrootd.echo.stfc.ac.uk | SCHEDULED | OUTAGE | 15/12/2016 09:00 | 15/12/2016 17:00 | 8 hours | Re-install of Echo Cluster.
srm-atlas.gridpp.rl.ac.uk, srm-cms-disk.gridpp.rl.ac.uk, srm-lhcb.gridpp.rl.ac.uk | SCHEDULED | WARNING | 14/12/2016 10:00 | 14/12/2016 17:00 | 7 hours | Warning while some disk servers have rolling firmware updates and are rebooted. Temporary loss of access to files.
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Pending - but not yet formally announced:
- Firmware update on Clustervision '13 disk servers. These are distributed as follows: AtlasDataDisk: 12; CMSDisk: 5; LHCbDst: 12.
- Merge AtlasScratchDisk and LHCbUser into larger disk pools. For LHCbUser this will be done on Thursday 8th Dec.
Listing by category:
- Castor:
- Merge AtlasScratchDisk and LHCbUser into larger disk pools.
- Update to Castor version 2.1.15. Planning to roll out January 2017. (Proposed dates: 10th Jan: Nameserver; 17th Jan: First stager (LHCb); 24th Jan: Stager (Atlas); 26th Jan: Stager (GEN); 31st Jan: Final stager (CMS)).
- Update SRMs to new version, including updating to SL6. This will be done after the Castor 2.1.15 update.
- Fabric
- Firmware updates on older disk servers.
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
srm-atlas.gridpp.rl.ac.uk, srm-cms-disk.gridpp.rl.ac.uk, srm-lhcb.gridpp.rl.ac.uk | SCHEDULED | WARNING | 14/12/2016 10:00 | 14/12/2016 17:00 | 7 hours | Warning while some disk servers have rolling firmware updates and are rebooted. Temporary loss of access to files.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
125348 | Green | Top Priority | In Progress | 2016-12-05 | 2016-12-05 | CMS | Request to update phedex
125157 | Green | Less Urgent | In Progress | 2016-11-24 | 2016-12-07 | | Creation of a repository within the EGI CVMFS infrastructure
124876 | Green | Less Urgent | On Hold | 2016-11-07 | 2016-11-21 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
124606 | Red | Top Priority | In Progress | 2016-10-24 | 2016-12-02 | CMS | Consistency Check for T1_UK_RAL
124478 | Green | Less Urgent | In Progress | 2016-11-18 | 2016-11-18 | NA62 | Jobs submitted via RAL WMS stuck in state READY forever and ever and ever
122827 | Green | Less Urgent | In Progress | 2016-07-12 | 2016-12-01 | SNO+ | Disk area at RAL
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2016-10-05 | | CASTOR at RAL not publishing GLUE 2. Plan to revisit week starting 19th.
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 808); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment
07/12/16 | 100 | 100 | 100 | 98 | 100 | N/A | 100 | Single SRM test failure: User timeout over
08/12/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 |
09/12/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 |
10/12/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 |
11/12/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 |
12/12/16 | 100 | 100 | 100 | 97 | 100 | N/A | 100 | Some user timeout failures on SRM tests.
13/12/16 | 100 | 100 | 100 | 47 | 100 | N/A | 100 | Load problem on CMS_Tape causing multiple test failures.