RAL Tier1 Operations Report for 30th November 2016
Review of Issues during the week 23rd to 30th November 2016.
- It was found that some worker nodes were being put offline owing to clock drift (around 10% of them on Monday). This was traced to a problem within the NTP daemon and was fixed by a configuration change.
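The report does not say what the configuration problem was, but drift of this kind is typically spotted by checking peer offsets in `ntpq -pn` output. A minimal, purely illustrative sketch of such a check (the host and the 100 ms threshold are assumptions, not taken from the report):

```python
# Illustrative sketch: flag a node whose NTP peer offset indicates clock
# drift, by parsing the peer table printed by `ntpq -pn`.
# The sample host and the 100 ms threshold are hypothetical.

def peer_offsets_ms(ntpq_output: str) -> list[float]:
    """Extract the offset column (in milliseconds) from `ntpq -pn` peer lines."""
    offsets = []
    for line in ntpq_output.splitlines()[2:]:  # skip the two header lines
        fields = line.split()
        if len(fields) >= 10:
            offsets.append(float(fields[8]))  # offset is the 9th column
    return offsets

def drifted(ntpq_output: str, threshold_ms: float = 100.0) -> bool:
    """True if any peer's offset exceeds the (illustrative) drift threshold."""
    return any(abs(o) > threshold_ms for o in peer_offsets_ms(ntpq_output))
```

In practice such a check would be run per worker node (e.g. from the monitoring system) and the node taken offline when it returns true.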
Resolved Disk Server Issues
Current operational status and issues
- LHCb see a low but persistent rate of failures when copying the results of batch jobs to Castor. A further problem sometimes occurs when these (failed) writes are then attempted to storage at other sites.
- The intermittent, low-level, load-related packet loss that has been seen over external connections is still being tracked. The replacement of the UKLight router appears to have reduced this - but we are allowing more time to pass before drawing any conclusions.
Ongoing Disk Server Issues
Notable Changes made since the last meeting.
- Maintenance was carried out on the UPS and generator in R89 yesterday.
- There was a restart test of the ECHO Ceph system yesterday; this was done to understand how best to carry out such a restart and to set up appropriate operating procedures.
- LHCb are now writing to the 'D' tapes. The migration of their data from 'C' to 'D' tapes is underway, with around 300 of the tapes (some 30%) done.
- An update to the FTS3 service (to version ???) took place this morning.
Declared in the GOC DB
None
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Pending - but not yet formally announced:
- Firmware update on Clustervision '13 disk servers. These are distributed as follows: AtlasDataDisk: 12; CMSDisk: 5; LHCbDst: 12.
Listing by category:
- Castor:
- Merge AtlasScratchDisk and LhcbUser into larger disk pools
- Update to Castor version 2.1.15. Planning to roll out January 2017. (Proposed dates: 10th Jan: Nameserver; 17th Jan: First stager (LHCb); 24th Jan: Stager (Atlas); 26th Jan: Stager (GEN); 31st Jan: Final stager (CMS)).
- Update SRMs to new version, including updating to SL6. This will be done after the Castor 2.1.15 update.
- Fabric:
- Firmware updates on older disk servers.
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
lcgfts3.gridpp.rl.ac.uk | SCHEDULED | WARNING | 30/11/2016 11:00 | 30/11/2016 13:00 | 2 hours | Upgrade of FTS3 service
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
125126 | Green | Urgent | In Progress | 2016-11-22 | 2016-11-23 | MICE | Problems connecting to srm-mice.gridpp.rl.ac.uk
125116 | Green | Less Urgent | In Progress | 2016-11-21 | 2016-11-23 | SNO+ | DNS configuration problem
124876 | Green | Less Urgent | On Hold | 2016-11-07 | 2016-11-21 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
124785 | Red | Urgent | Reopened | 2016-11-02 | 2016-11-09 | CMS | Configuration updated AAA - CMS Site Name missing
124606 | Red | Top Priority | In Progress | 2016-10-24 | 2016-11-01 | CMS | Consistency Check for T1_UK_RAL
124487 | Green | Less Urgent | Waiting for Reply | 2016-11-18 | 2016-11-18 | | Jobs submitted via RAL WMS stuck in state READY forever and ever and ever
122827 | Green | Less Urgent | In Progress | 2016-07-12 | 2016-10-11 | SNO+ | Disk area at RAL
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2016-10-05 | | CASTOR at RAL not publishing GLUE 2 (Updated. There are ongoing discussions with GLUE & WLCG)
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 808); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment
23/11/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 |
24/11/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 |
25/11/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 |
26/11/16 | 100 | 100 | 100 | 98 | 100 | N/A | 98 | Two SRM 'GET' test failures, both with a user timeout error.
27/11/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 |
28/11/16 | 100 | 100 | 100 | 100 | 100 | N/A | 98 |
29/11/16 | 96.5 | 100 | 100 | 100 | 100 | N/A | 99 | Central monitoring problem; this affected other sites too.
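As a sanity check on the figures above, a daily availability percentage maps directly to minutes of unavailability. A generic arithmetic sketch (not tied to the monitoring system itself):

```python
# Convert a daily availability percentage into minutes counted unavailable.
# Plain arithmetic; nothing here is specific to the SAM/monitoring framework.

def downtime_minutes(availability_pct: float, minutes_in_day: int = 1440) -> float:
    """Minutes of a day counted unavailable for a given availability percentage."""
    return (100.0 - availability_pct) / 100.0 * minutes_in_day
```

For example, the 96.5% OPS figure for 29/11/16 corresponds to roughly 50 minutes of failed tests over the day.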