
RAL Tier1 Operations Report for 9th November 2016

Review of Issues during the week 2nd to 9th November 2016.
  • There has been some mopping up of systems for patching against CVE-2016-5195 (a kernel-check sketch follows this list).
  • On Friday (4th November) there was a problem with the "test" FTS3 service: the disk area hosting the back-end database filled up. Atlas (the only user of this service) was asked to move to the "production" FTS service.
  • ATLAS switched to the new site mover (a pilot feature). A configuration error initially broke analysis jobs on Tuesday; this was fixed a couple of hours later. (This point added to report after the meeting).
  • There was a problem with the Atlas Frontier service on Tuesday/Wednesday (8/9 Nov). A new version of the frontier-squid software had been picked up automatically, which led to a configuration problem. (Report corrected on this point after the meeting).
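As an illustration of the kind of check that could support the CVE-2016-5195 mop-up above, the following is a minimal sketch (not the Tier1's actual tooling): it compares a node's running kernel, from uname -r, against a minimum version believed to carry the fix. The threshold shown is an illustrative value for an SL6-era kernel and would need adjusting per OS release.

    # Sketch only: flag whether this node's running kernel is at least a version
    # believed to include the CVE-2016-5195 ("Dirty COW") fix. The threshold is
    # illustrative for an SL6-era kernel, not the site's actual policy.
    import re
    import subprocess

    MIN_PATCHED = (2, 6, 32, 642, 6, 2)   # assumed minimum patched kernel version

    def kernel_tuple(release):
        """Turn e.g. '2.6.32-642.6.2.el6.x86_64' into a comparable tuple of ints."""
        return tuple(int(x) for x in re.findall(r"\d+", release)[:6])

    if __name__ == "__main__":
        running = subprocess.check_output(["uname", "-r"], text=True).strip()
        ok = kernel_tuple(running) >= MIN_PATCHED
        print("%s: %s" % (running, "OK" if ok else "still needs patching/reboot"))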
Resolved Disk Server Issues
  • None
Current operational status and issues
  • There is a problem seen by LHCb: a low but persistent rate of failures when copying the results of batch jobs to Castor. A further problem sometimes occurs when these (failed) writes are then attempted to storage at other sites.
  • The intermittent, low-level, load-related packet loss that has been seen over external connections is still being tracked. The replacement of the UKLight router appears to have reduced it, but we are allowing more time to pass before drawing any conclusions (one way to compare rates is sketched below).
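On the point about not drawing conclusions too early: with low failure or loss rates, the uncertainty on the measured fraction matters as much as the fraction itself. A minimal sketch of one way to compare two periods (e.g. before and after the UKLight router replacement), assuming only counts of failed and total transfers, or lost and total packets, are available; the example numbers are made up.

    # Sketch only: 95% Wilson score interval for a low failure fraction, so two
    # periods can be compared without over-interpreting small counts.
    import math

    def wilson_interval(failures, total, z=1.96):
        if total == 0:
            return (0.0, 1.0)
        p = failures / total
        denom = 1.0 + z * z / total
        centre = (p + z * z / (2.0 * total)) / denom
        half = z * math.sqrt(p * (1.0 - p) / total + z * z / (4.0 * total * total)) / denom
        return (max(0.0, centre - half), min(1.0, centre + half))

    # Hypothetical counts: 12 failures in 5000 copies vs 7 in 5200.
    for label, failed, total in [("before", 12, 5000), ("after", 7, 5200)]:
        low, high = wilson_interval(failed, total)
        print("%s: %.4f%% - %.4f%%" % (label, 100 * low, 100 * high))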
Ongoing Disk Server Issues
  • None
Notable Changes made since the last meeting.
  • Further work in response to CVE-2016-5195.
  • LHCb now writing to the 'D' tapes.
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
gridftp.echo.stfc.ac.uk, s3.echo.stfc.ac.uk, s3.echo.stfc.ac.uk, xrootd.echo.stfc.ac.uk | SCHEDULED | OUTAGE | 16/11/2016 10:30 | 16/11/2016 14:30 | 4 hours | Upgrading backend network behind Echo Storage service
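For reference, declared downtimes such as the one above can also be retrieved programmatically. A minimal sketch, assuming the public GOC DB programmatic interface; the endpoint and parameter names here are from memory and should be verified against the GOC DB documentation before use.

    # Sketch only: list downtimes declared for RAL-LCG2 via the (assumed) public
    # GOC DB programmatic interface. URL and parameters may need verifying.
    import urllib.request
    import xml.etree.ElementTree as ET

    URL = "https://goc.egi.eu/gocdbpi/public/?method=get_downtime&topentity=RAL-LCG2"

    with urllib.request.urlopen(URL) as resp:
        root = ET.fromstring(resp.read())

    # Print each returned record without assuming its exact schema.
    for record in root:
        print({child.tag: (child.text or "").strip() for child in record})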
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Listing by category:

  • Castor:
    • Merge AtlasScratchDisk and LhcbUser into larger disk pools
    • Update to Castor version 2.1.15. Planning to roll out January 2017. (Proposed dates: 10th Jan: Nameserver; 17th Jan: First stager (LHCb); 24th Jan: Stager (Atlas); 26th Jan: Stager (GEN); 31st Jan: Final stager (CMS)).
    • Update SRMs to new version, including updating to SL6. This will be done after the Castor 2.1.15 update.
  • Fabric
    • Firmware updates on older disk servers.
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
All Castor tape | SCHEDULED | WARNING | 02/11/2016 07:00 | 02/11/2016 16:00 | 9 hours | Tape Library not available during work on the mechanics. Tape access for read will stop. Writes will be buffered on disk and flushed to tape after the work has completed. (Delayed one day from previous announcement).
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
124877 | Green | Less Urgent | In Progress | 2016-11-07 | 2016-11-07 | OPS | [Rod Dashboard] Issue detected : org.nordugrid.ARC-CE-SRM-result-ops@arc-ce04.gridpp.rl.ac.uk
124876 | Green | Less Urgent | In Progress | 2016-11-07 | 2016-11-08 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
124785 | Green | Urgent | Waiting for Reply | 2016-11-02 | 2016-11-02 | CMS | Configuration updated AAA - CMS Site Name missing
124606 | Green | Very Urgent | In Progress | 2016-10-24 | 2016-11-01 | CMS | Consistency Check for T1_UK_RAL
124478 | Green | Urgent | On Hold | 2016-10-17 | 2016-11-01 | | Jobs submitted via RAL WMS stuck in state READY forever and ever and ever
123504 | Amber | Less Urgent | Waiting for Reply | 2016-08-19 | 2016-09-20 | T2K | proxy expiration
122827 | Green | Less Urgent | Waiting for Reply | 2016-07-12 | 2016-10-11 | SNO+ | Disk area at RAL
120350 | Yellow | Less Urgent | On Hold | 2016-03-22 | 2016-08-09 | LSST | Enable LSST at RAL
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2016-10-05 | | CASTOR at RAL not publishing GLUE 2 (Updated. There are ongoing discussions with GLUE & WLCG)
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 729); CMS HC = CMS HammerCloud

Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment
02/11/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 |
03/11/16 | 100 | 100 | 100 | 100 | 100 | N/A | N/A |
04/11/16 | 100 | 100 | 100 | 100 | 96 | N/A | N/A | Single SRM test failure on list (No such file or directory)
05/11/16 | 100 | 100 | 100 | 100 | 100 | N/A | N/A |
06/11/16 | 100 | 100 | 100 | 100 | 100 | N/A | N/A |
07/11/16 | 100 | 100 | 100 | 100 | 100 | N/A | 93 |
08/11/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 |