Revision as of 12:28, 30 November 2016
RAL Tier1 Operations Report for 30th November 2016
Review of Issues during the week 23rd to 30th November 2016.
- It was found that some worker nodes (around 10% of them on Monday) were being put offline owing to clock drift. This was traced to a problem within the NTP daemon and was fixed by a configuration change.
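The sort of check that catches this can be sketched as follows. This is a hypothetical illustration, not the actual Tier1 monitoring code: the 500 ms threshold and the `ntpq -p` column layout it parses are assumptions.

```python
# Sketch: parse `ntpq -p` style output and flag peers whose clock offset
# exceeds a threshold. The 500 ms limit is an assumed value, not the
# configured Tier1 limit.
OFFSET_LIMIT_MS = 500.0

def drifting_peers(ntpq_output: str, limit_ms: float = OFFSET_LIMIT_MS):
    """Return peer names whose reported offset (in ms) exceeds limit_ms."""
    flagged = []
    for line in ntpq_output.splitlines():
        fields = line.split()
        # Data rows in the `ntpq -p` billboard have 10 columns;
        # the offset (milliseconds) is the 9th field.
        if len(fields) == 10:
            try:
                offset = abs(float(fields[8]))
            except ValueError:
                continue  # header line, not a data row
            if offset > limit_ms:
                # Strip the leading tally code (*, +, -, #, ~) from the name.
                flagged.append(fields[0].lstrip('*+-#~ '))
    return flagged
```

A node whose offset drifts past the limit would be reported for draining rather than silently left in production.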
Resolved Disk Server Issues
- GDSS750 (LHCbDst – D1T0) was taken out of service after reporting 'FSProbe' problems on Sunday morning (20th Nov). Two disks were replaced and the server was put back in service the following day.
Current operational status and issues
- There is a problem seen by LHCb of a low but persistent rate of failure when copying the results of batch jobs to Castor. There is also a further problem that sometimes occurs when these (failed) writes are attempted to storage at other sites.
- The intermittent, low-level, load-related packet loss that has been seen over external connections is still being tracked. The replacement of the UKLight router appears to have reduced this - but we are allowing more time to pass before drawing any conclusions.
Ongoing Disk Server Issues
- None
Notable Changes made since the last meeting.
- Additional disk servers have been added to Castor: for Alice, 5 extra servers of 100TB each; for LHCb, 12 additional servers of 120TB each. This will enable both an increase in capacity and the withdrawal of some older (smaller capacity) disk servers.
- There was an intervention on the ECHO Ceph system last week to enable a reconfiguration of its underlying network.
- LHCb are now writing to the 'D' tapes. The migration of their data from 'C' to 'D' tapes is underway, with around 200 tapes (some 20%) done.
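The figures in the two items above can be sanity-checked with simple arithmetic. This is a back-of-envelope sketch based only on the numbers quoted here, not on any Tier1 accounting data:

```python
# Capacity added to Castor, from the figures quoted above.
alice_added_tb = 5 * 100    # 5 new Alice servers at 100 TB each -> 500 TB
lhcb_added_tb = 12 * 120    # 12 new LHCb servers at 120 TB each -> 1440 TB
total_added_tb = alice_added_tb + lhcb_added_tb  # 1940 TB combined

# Tape migration: ~200 tapes done is quoted as "some 20%", which implies
# roughly 1000 'C' tapes in the LHCb migration overall (an estimate).
tapes_total_estimate = round(200 / 0.20)
```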
Declared in the GOC DB
- None
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Listing by category:
- Castor:
- Merge AtlasScratchDisk and LhcbUser into larger disk pools
- Update to Castor version 2.1.15. Planning to roll out January 2017. (Proposed dates: 10th Jan: Nameserver; 17th Jan: First stager (LHCb); 24th Jan: Stager (Atlas); 26th Jan: Stager (GEN); 31st Jan: Final stager (CMS)).
- Update SRMs to new version, including updating to SL6. This will be done after the Castor 2.1.15 update.
- Fabric
- Firmware updates on older disk servers.
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
lcgfts3.gridpp.rl.ac.uk | SCHEDULED | WARNING | 30/11/2016 11:00 | 30/11/2016 13:00 | 2 hours | Upgrade of FTS3 service |
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
125126 | Green | Urgent | In Progress | 2016-11-22 | 2016-11-23 | MICE | Problems connecting to srm-mice.gridpp.rl.ac.uk |
125116 | Green | Less Urgent | In Progress | 2016-11-21 | 2016-11-23 | SNO+ | DNS configuration problem |
124876 | Green | Less Urgent | On Hold | 2016-11-07 | 2016-11-21 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk |
124785 | Red | Urgent | Reopened | 2016-11-02 | 2016-11-09 | CMS | Configuration updated AAA - CMS Site Name missing |
124606 | Red | Top Priority | In Progress | 2016-10-24 | 2016-11-01 | CMS | Consistency Check for T1_UK_RAL |
124487 | Green | Less Urgent | Waiting for Reply | 2016-11-18 | 2016-11-18 | | Jobs submitted via RAL WMS stuck in state READY forever and ever and ever
122827 | Green | Less Urgent | In Progress | 2016-07-12 | 2016-10-11 | SNO+ | Disk area at RAL |
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2016-10-05 | | CASTOR at RAL not publishing GLUE 2 (Updated. There are ongoing discussions with GLUE & WLCG)
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 808); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|
23/11/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 | |
24/11/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 | |
25/11/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 | |
26/11/16 | 100 | 100 | 100 | 98 | 100 | N/A | 98 | Two SRM 'GET' test failures, both user timeout errors |
27/11/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 | |
28/11/16 | 100 | 100 | 100 | 100 | 100 | N/A | 98 | |
29/11/16 | 96.5 | 100 | 100 | 100 | 100 | N/A | 99 | Central monitoring problem affected other sites too. |
Notes from Meeting.
- Some work is needed on the Castor configuration to separate the storage of files from the different Dirac sites (Durham, Leicester, etc.).