RAL Tier1 Operations Report for 9th November 2016
Review of Issues during the week 2nd to 9th November 2016.
- There has been some mopping up of the remaining systems needing patches against CVE-2016-5195 ("Dirty COW").
- On Friday (4th November) there was a problem with the "test" FTS3 service: the disk area hosting its back-end database filled up. Atlas (the only users of this service) were asked to move to the "production" FTS service. A free-space check of the kind sketched after this list could flag such a condition earlier.
- There was a problem with the Atlas Frontier service on Tuesday/Wednesday (8th/9th November). It was resolved by reverting the frontier-squid code to its previous version; an updated package had been picked up during the latest round of security patching.
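As a minimal sketch (assuming a hypothetical path and warning threshold; the real FTS3 configuration is not given here), a periodic free-space check of this kind could alert before a database area fills:

```python
#!/usr/bin/env python3
# Free-space check for the filesystem holding a service's back-end
# database. DB_PATH and WARN_FRACTION are illustrative assumptions,
# not the real RAL FTS3 settings.
import shutil
import sys

DB_PATH = "/var/lib/fts3-db"   # hypothetical database filesystem
WARN_FRACTION = 0.90           # warn once the filesystem is >90% used

usage = shutil.disk_usage(DB_PATH)
used_fraction = usage.used / usage.total

if used_fraction > WARN_FRACTION:
    print("WARNING: %s is %.0f%% full" % (DB_PATH, 100 * used_fraction))
    sys.exit(1)
print("OK: %s is %.0f%% full" % (DB_PATH, 100 * used_fraction))
```

Run from cron or a Nagios-style probe so the alert fires while there is still time to act.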
Resolved Disk Server Issues
- None
Current operational status and issues
- There is a problem seen by LHCb of a low but persistent rate of failures when copying the results of batch jobs to Castor. A further problem sometimes occurs when these (failed) writes are then attempted against storage at other sites. A retry wrapper of the kind sketched after this list is a common mitigation for the transient component of such failures.
- The intermittent, low-level, load-related packet loss seen over external connections is still being tracked. The replacement of the UKLight router appears to have reduced it, but we are allowing more time to pass before drawing any conclusions.
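A minimal sketch of such a retry wrapper, assuming gfal-copy as the transfer tool; the attempt count and backoff are illustrative, not what LHCb jobs actually use:

```python
#!/usr/bin/env python3
# Retry wrapper around gfal-copy for output uploads (illustrative
# sketch; the retry policy is an assumption).
import subprocess
import time

def copy_with_retries(source, destination, attempts=3, backoff=30):
    """Try gfal-copy up to `attempts` times, sleeping between tries."""
    for attempt in range(1, attempts + 1):
        result = subprocess.run(["gfal-copy", source, destination])
        if result.returncode == 0:
            return True
        print("copy attempt %d/%d failed (rc=%d)"
              % (attempt, attempts, result.returncode))
        if attempt < attempts:
            time.sleep(backoff)
    return False
```

Retries of this kind mask genuinely transient errors but do not address a persistent underlying failure rate, which is why the issue above remains tracked.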
Ongoing Disk Server Issues
- None
Notable Changes made since the last meeting.
- Security updates rolled out in response to CVE-2016-5195; a kernel-version audit of the kind sketched after this list can confirm coverage.
- Our CVMFS Stratum0 server has been replaced with new hardware.
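A minimal audit sketch for the CVE-2016-5195 mopping up, assuming an SL6 host; the fixed-kernel version below is an assumption and should be checked against the relevant errata for each platform:

```python
#!/usr/bin/env python3
# Flag hosts whose running kernel predates the CVE-2016-5195 fix.
# MIN_FIXED is assumed to be the SL6 errata kernel (2.6.32-642.6.2);
# substitute the correct value per platform.
import platform
import re

MIN_FIXED = (2, 6, 32, 642, 6, 2)

def version_tuple(release):
    """Turn e.g. '2.6.32-642.6.2.el6.x86_64' into a comparable tuple."""
    return tuple(int(n) for n in re.findall(r"\d+", release)[:6])

running = platform.release()
if version_tuple(running) < MIN_FIXED:
    print("VULNERABLE? running kernel %s predates the assumed fix" % running)
else:
    print("OK: running kernel %s" % running)
```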
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
gridftp.echo.stfc.ac.uk, s3.echo.stfc.ac.uk, xrootd.echo.stfc.ac.uk | SCHEDULED | OUTAGE | 16/11/2016 10:30 | 16/11/2016 14:30 | 4 hours | Upgrading backend network behind Echo Storage service |
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Listing by category:
- Castor:
  - Merge AtlasScratchDisk and LhcbUser into larger disk pools.
  - Update to Castor version 2.1.15. Planning to roll out January 2017. (Proposed dates: 10th Jan: Nameserver; 17th Jan: First stager (LHCb); 24th Jan: Stager (Atlas); 26th Jan: Stager (GEN); 31st Jan: Final stager (CMS).)
  - Update SRMs to new version, including updating to SL6. This will be done after the Castor 2.1.15 update.
  - Migration of LHCb data from T10KC to T10KD tapes. The additional 'D' tape drives have now been installed. Plan to start migration after this week's intervention on the tape libraries.
- Fabric:
  - Firmware updates on older disk servers.
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
All Castor tape | SCHEDULED | WARNING | 02/11/2016 07:00 | 02/11/2016 16:00 | 9 hours | Tape Library not available during work on the mechanics. Tape access for read will stop. Writes will be buffered on disk and flushed to tape after the work has completed. (Delayed one day from previous announcement). |
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
124606 | Green | Urgent | In Progress | 2016-10-24 | 2016-11-01 | CMS | Consistency Check for T1_UK_RAL |
124478 | Green | Urgent | On Hold | 2016-10-17 | 2016-11-01 | | Jobs submitted via RAL WMS stuck in state READY forever and ever and ever
123504 | Yellow | Less Urgent | Waiting for Reply | 2016-08-19 | 2016-09-20 | T2K | proxy expiration |
122827 | Green | Less Urgent | Waiting for Reply | 2016-07-12 | 2016-10-11 | SNO+ | Disk area at RAL |
121687 | Red | Less Urgent | In Progress | 2016-05-20 | 2016-10-26 | | packet loss problems seen on RAL-LCG perfsonar
120350 | Yellow | Less Urgent | On Hold | 2016-03-22 | 2016-08-09 | LSST | Enable LSST at RAL |
117683 | Amber | Less Urgent | On Hold | 2015-11-18 | 2016-10-05 | | CASTOR at RAL not publishing GLUE 2 (Updated. There are ongoing discussions with GLUE & WLCG)
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 729); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|
02/11/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 | |
03/11/16 | 100 | 100 | 100 | 100 | 100 | N/A | N/A | |
04/11/16 | 100 | 100 | 100 | 100 | 96 | N/A | N/A | Single SRM test failure on list (No such file or directory) |
05/11/16 | 100 | 100 | 100 | 100 | 100 | N/A | N/A | |
06/11/16 | 100 | 100 | 100 | 100 | 100 | N/A | N/A |
07/11/16 | 100 | 100 | 100 | 100 | 100 | N/A | 93 | |
08/11/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 |