RAL Tier1 Operations Report for 2nd November 2016
Review of Issues during the week 26th October to 2nd November 2016.
- The main issue this week has been the patching of systems in response to CVE-2016-5195. At the time of the last meeting the batch systems, plus some others that were more exposed, had been stopped since Monday (24th Oct). The batch system and most of the other services were brought back up by the end of Wednesday afternoon (26th Oct) following patching. There was an outage of Castor yesterday (1st Nov) so that all of its systems could be rebooted to pick up the new kernel. (A minimal kernel-version check of the sort that confirms a host has picked up a patched kernel is sketched after this list.)
- (Added after the meeting.) One of the pair of OPN links failed this morning and we have been running for some hours with just one 10Gbit link. Note: this was fixed at around 15:00 today (2nd Nov); the problem, as reported by JANET, was a fibre break. The main RAL connection to JANET had also failed over to the backup.
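For illustration only, a minimal sketch of the kind of check that confirms hosts have booted into a patched kernel. Everything specific in it (the host names, the minimum kernel release and the assumption of SSH key-based access) is a placeholder, not the procedure actually used at RAL.

```python
#!/usr/bin/env python
# Minimal sketch (not RAL's actual tooling): report whether a set of hosts is
# running a kernel at least as new as an assumed minimum patched release for
# CVE-2016-5195. Host names and MIN_KERNEL are illustrative placeholders.
import re
import subprocess

MIN_KERNEL = "2.6.32-642.6.2"             # assumed minimum patched release (placeholder)
HOSTS = ["example-wn01", "example-wn02"]  # hypothetical worker nodes

def release_key(release):
    """Crude comparison key: '2.6.32-642.6.2.el6.x86_64' -> (2, 6, 32, 642, 6, 2)."""
    return tuple(int(x) for x in re.findall(r"\d+", release.split(".el")[0]))

def running_kernel(host):
    """Ask a host for its running kernel over SSH (assumes key-based access)."""
    return subprocess.check_output(["ssh", host, "uname", "-r"]).decode().strip()

if __name__ == "__main__":
    wanted = release_key(MIN_KERNEL)
    for host in HOSTS:
        current = running_kernel(host)
        status = "OK" if release_key(current) >= wanted else "needs patch/reboot"
        print("%-15s %-30s %s" % (host, current, status))
```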
Resolved Disk Server Issues
- GDSS896 (CMSTape - D0T1) was taken out of service on 25th Oct to investigate memory errors. The memory modules were swapped over and on re-test the fault had cleared, so the server was returned to service two days later (27th Oct). However, the system crashed on Friday 28th Oct and was returned to service yesterday (1st Nov). The cause of the crash is unclear, although the server was being used to check a new OS kernel at the time.
Current operational status and issues
- There is a problem, seen by LHCb, of a low but persistent rate of failures when copying the results of batch jobs to Castor. A further problem sometimes occurs when these (failed) writes are then attempted against storage at other sites.
- The intermittent, low-level, load-related packet loss that has been seen over external connections is still being tracked. The replacement of the UKLight router appears to have reduced this, but we are allowing more time to pass before drawing any conclusions. (A simple packet-loss probe is sketched after this list for illustration.)
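The sketch below is a crude ping-based packet-loss probe, included only to illustrate the kind of measurement involved; it is not the perfSONAR-based monitoring referred to elsewhere in this report, and the target host and packet count are arbitrary assumptions.

```python
#!/usr/bin/env python
# Illustration only: a crude ping-based packet-loss probe. Not the site's
# perfSONAR monitoring; TARGET and COUNT are arbitrary placeholders.
import re
import subprocess

TARGET = "remote-endpoint.example.org"   # hypothetical remote endpoint
COUNT = 100                              # packets per probe

def packet_loss_percent(host, count):
    """Run Linux ping and pull the 'X% packet loss' figure from its summary line."""
    result = subprocess.run(
        ["ping", "-c", str(count), "-i", "0.2", host],
        capture_output=True, text=True,
    )
    match = re.search(r"(\d+(?:\.\d+)?)% packet loss", result.stdout)
    return float(match.group(1)) if match else None

if __name__ == "__main__":
    loss = packet_loss_percent(TARGET, COUNT)
    if loss is None:
        print("could not parse ping output for %s" % TARGET)
    else:
        print("packet loss to %s: %.1f%%" % (TARGET, loss))
```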
Ongoing Disk Server Issues
- None
Notable Changes made since the last meeting.
- Security updates in response to CVE-2016-5195.
- Our CVMFS Stratum-0 server has been replaced with new hardware.
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
All Castor tape | SCHEDULED | WARNING | 02/11/2016 07:00 | 02/11/2016 16:00 | 9 hours | Tape Library not available during work on the mechanics. Tape access for read will stop. Writes will be buffered on disk and flushed to tape after the work has completed. (Delayed one day from previous announcement). |
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Listing by category:
- Castor:
- Merge AtlasScratchDisk and LhcbUser into larger disk pools
- Update to Castor version 2.1.15. Planning to roll out January 2017. (Proposed dates: 10th Jan: Nameserver; 17th Jan: First stager (LHCb); 24th Jan: Stager (Atlas); 26th Jan: Stager (GEN); 31st Jan: Final stager (CMS)).
- Update SRMs to new version, including updating to SL6. This will be done after the Castor 2.1.15 update.
- Migration of LHCb data from T10KC to T10KD tapes. The additional 'D' tape drives have now been installed. Plan to start migration after this week's intervention on the tape libraries.
- Fabric
- Firmware updates on older disk servers.
Entries in GOC DB starting since the last report. |
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
All Castor tape | SCHEDULED | WARNING | 02/11/2016 07:00 | 02/11/2016 16:00 | 9 hours | Tape Library not available during work on the mechanics. Tape access for read will stop. Writes will be buffered on disk and flushed to tape after the work has completed. (Delayed one day from previous announcement). |
All Castor | SCHEDULED | OUTAGE | 01/11/2016 10:00 | 01/11/2016 12:47 | 2 hours and 47 minutes | patching storage for CVE-2016-5195 |
ECHO (gridftp.echo.stfc.ac.uk, s3.echo.stfc.ac.uk, s3.echo.stfc.ac.uk, xrootd.echo.stfc.ac.uk) | UNSCHEDULED | OUTAGE | 27/10/2016 14:00 | 27/10/2016 18:00 | 4 hours | Updating and Testing ECHO (Ceph) in response to EGI-SVG-CVE-2016-5195. Ongoing. |
ECHO (gridftp.echo.stfc.ac.uk, s3.echo.stfc.ac.uk, s3.echo.stfc.ac.uk, xrootd.echo.stfc.ac.uk) | UNSCHEDULED | OUTAGE | 26/10/2016 16:30 | 27/10/2016 14:00 | 21 hours and 30 minutes | Updating and Testing in response to EGI-SVG-CVE-2016-5195 |
arc-ce01, arc-ce02, arc-ce03, arc-ce04, lcgvo07, lcgvo08, lcgwms04, lcgwms05 | UNSCHEDULED | OUTAGE | 24/10/2016 15:32 | 26/10/2016 16:33 | 2 days, 1 hour and 1 minute | EGI-SVG-CVE-2016-5195, vulnerability handling in progress |
arc-ce01, gridftp.echo.stfc.ac.uk, ip6tb-ps01, ip6tb-ps01, lcgps01, lcgps02, s3.echo.stfc.ac.uk, vacuum.gridpp.rl.ac.uk, xrootd.echo.stfc.ac.uk | UNSCHEDULED | OUTAGE | 24/10/2016 15:00 | 26/10/2016 16:33 | 2 days, 1 hour and 33 minutes | EGI-SVG-CVE-2016-5195, vulnerability handling in progress |
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
124606 | Green | Urgent | In Progress | 2016-10-24 | 2016-11-01 | CMS | Consistency Check for T1_UK_RAL |
124478 | Green | Urgent | On Hold | 2016-10-17 | 2016-11-01 | | Jobs submitted via RAL WMS stuck in state READY forever and ever and ever |
123504 | Yellow | Less Urgent | Waiting for Reply | 2016-08-19 | 2016-09-20 | T2K | proxy expiration |
122827 | Green | Less Urgent | Waiting for Reply | 2016-07-12 | 2016-10-11 | SNO+ | Disk area at RAL |
121687 | Red | Less Urgent | In Progress | 2016-05-20 | 2016-10-26 | | packet loss problems seen on RAL-LCG perfsonar |
120350 | Yellow | Less Urgent | On Hold | 2016-03-22 | 2016-08-09 | LSST | Enable LSST at RAL |
117683 | Amber | Less Urgent | On Hold | 2015-11-18 | 2016-10-05 | | CASTOR at RAL not publishing GLUE 2 (Updated. There are ongoing discussions with GLUE & WLCG) |
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 729); CMS HC = CMS HammerCloud. Figures are daily availabilities in percent. (A rough cross-check of the 1st Nov figures against the declared Castor outage is sketched below the table.)
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|
26/10/16 | 38.8 | 48 | 48 | 47 | 50 | N/A | N/A | Systems (especially batch) down for CVE-2016-5195 |
27/10/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 | |
28/10/16 | 100 | 100 | 100 | 100 | 100 | N/A | N/A | |
29/10/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 | |
30/10/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 | |
31/10/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 | |
01/11/16 | 94.0 | 100 | 88 | 87 | 87 | N/A | 100 | Castor down for security patching/rebooting for CVE-2016-5195 |
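As a rough cross-check only (the published figures come from the experiments' availability tests rather than a simple downtime fraction): the 1st Nov Castor outage declared in the GOC DB above (10:00 to 12:47, i.e. 167 minutes) is about 11.6% of the day, which would correspond to roughly 88% availability, consistent with the 88/87/87 reported for Atlas, CMS and LHCb on that day.

```python
#!/usr/bin/env python
# Rough cross-check only: approximate 1st Nov availability as the fraction of the
# day outside the declared Castor outage. The published figures come from the
# experiments' availability tests, so exact agreement is not expected.
from datetime import datetime

outage_start = datetime(2016, 11, 1, 10, 0)   # from the GOC DB entry above
outage_end = datetime(2016, 11, 1, 12, 47)

downtime_min = (outage_end - outage_start).total_seconds() / 60.0
availability = 100.0 * (1.0 - downtime_min / (24 * 60))

print("downtime: %.0f minutes" % downtime_min)        # 167 minutes
print("approx availability: %.1f%%" % availability)   # ~88.4%, cf. 88/87/87 in the table
```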