Difference between revisions of "Tier1 Operations Report 2016-09-14"

 
(7 intermediate revisions by one user not shown)
| style="background-color: #b7f1ce; border-bottom: 1px solid silver; text-align: center; font-size: 1em; font-weight: bold; margin-top: 0; margin-bottom: 0; padding-top: 0.1em; padding-bottom: 0.1em;" | Review of Issues during the week 9th to 14th September 2016.
|}
* The intermittent packet loss that was reported across a part of the Tier1 network has not recurred during the last week. The cause was not fully understood, but the replacement of a transceiver on the 24th August correlates with the improvement. (This note updated after the meeting).
* Atlas reported a problem with the batch system last Friday (9th Sep). It turned out that there was a problem on one particular worker node (the json module was missing from its Python installation).
* On Friday evening 24th August there was a problem with the squids that are used for cvmfs. This in turn caused problems for the cvmfs clients on many worker nodes. The problem was cleaned up during the following day.
* There was a problem with xroot traffic into the RAL Tier1 for two hours this morning. One of the changes made during the tightening of access controls on the data path had to be reverted.
 
<!-- ***********End Review of Issues during last week*********** ----->

<!-- *********************************************************** ----->
| style="background-color: #f8d6a9; border-bottom: 1px solid silver; text-align: center; font-size: 1em; font-weight: bold; margin-top: 0; margin-bottom: 0; padding-top: 0.1em; padding-bottom: 0.1em;" | Resolved Disk Server Issues
|}
* GDSS776 (LHCbDst - D1T0) failed with a read-only file system on Thursday 1st September. It was put back in service the following day - initially read-only as a RAID rebuild was still taking place. Sixteen files that were being written when the server crashed were lost.
* GDSS665 (AtlasTape - D0T1) failed with a read-only filesystem on Sunday 4th Sep. All files that were awaiting migration were copied to tape. Following investigation a faulty drive was replaced and the server returned to service yesterday (13th Sep).
* GDSS730 (CMSDisk - D1T0) failed in the early hours of Tuesday morning (13th Sep). Following the replacement of a drive, the server was put back in read-only mode later that day.
 
<!-- ***************************************************** ----->

| style="background-color: #f8d6a9; border-bottom: 1px solid silver; text-align: center; font-size: 1em; font-weight: bold; margin-top: 0; margin-bottom: 0; padding-top: 0.1em; padding-bottom: 0.1em;" | Ongoing Disk Server Issues
|}
* GDSS665 (AtlasTape - D0T1) failed with a read-only filesystem on Sunday 4th Sep. All files that were awaiting migration have now been copied to tape. The server is still under investigation.
* GDSS779 (LHCbDst - D1T0) reported problems earlier this morning (14th Sep). It is currently out of production while the cause is being investigated.
 
<!-- ***************End Ongoing Disk Server Issues**************** ----->

<!-- ************************************************************* ----->
 
| style="background-color: #b7f1ce; border-bottom: 1px solid silver; text-align: center; font-size: 1em; font-weight: bold; margin-top: 0; margin-bottom: 0; padding-top: 0.1em; padding-bottom: 0.1em;" | Notable Changes made since the last meeting.
 
| style="background-color: #b7f1ce; border-bottom: 1px solid silver; text-align: center; font-size: 1em; font-weight: bold; margin-top: 0; margin-bottom: 0; padding-top: 0.1em; padding-bottom: 0.1em;" | Notable Changes made since the last meeting.
 
|}
 
|}
* A number of services have been moved to the Hyper-V 2012 infrastructure.
* Further services have been moved to the Hyper-V 2012 infrastructure.
* Access controls for network traffic coming in via the bypass link have been tightened.
* Oracle carried out preventative maintenance and a firmware update on the tape libraries yesterday (Tuesday 13th Sep).
 
<!-- *************End Notable Changes made this last week************** ----->

<!-- ****************************************************************** ----->
 
! Reason
|-
| arc-ce04.gridpp.rl.ac.uk
| SCHEDULED
| OUTAGE
| 23/09/2016 10:00
| 30/09/2016 18:00
| 7 days, 8 hours
| ARC-CE04 being drained ahead of a reconfiguration and move to run on different infrastructure.
|-
| arc-ce03.gridpp.rl.ac.uk
| SCHEDULED
| OUTAGE
| 15/09/2016 13:00
| 22/09/2016 18:00
| 7 days, 5 hours
| ARC-CE03 being drained ahead of a reconfiguration and move to run on different infrastructure.
 
|}

<!-- **********************End GOC DB Entries************************** ----->
 
! Reason
|-
| All Castor Tape
| SCHEDULED
| WARNING
| 13/09/2016 08:00
| 13/09/2016 17:00
| 9 hours
| Maintenance on Tape Library. Tape access for read will stop. Writes will be buffered on disk and flushed to tape after the maintenance has completed.
|-
| arc-ce01.gridpp.rl.ac.uk
| SCHEDULED
| OUTAGE
| 25/08/2016 10:00
| 02/09/2016 09:59
| 7 days, 23 hours and 59 minutes
| ARC-CE01 being drained ahead of a reconfiguration and move to run on different infrastructure.
|-
| srm-biomed.gridpp.rl.ac.uk
| SCHEDULED
| OUTAGE
| 04/08/2016 14:00
| 05/09/2016 14:00
| 32 days
| Storage for BIOMED is no longer supported since the removal of the GENScratch storage area.
 
|}

<!-- **********************End GOC DB Entries************************** ----->
 
| T2K
| proxy expiration
|-
| 123403
| Green
| Less Urgent
| Waiting Reply
| 2016-08-15
| 2016-08-17
|
| FTS gets a SIGSEGV during a transfer
|-
| 122827
 
| Yellow
| Less Urgent
| On Hold
| 2016-03-22
| 2016-08-09
 
! Day !! OPS !! Alice !! Atlas !! CMS !! LHCb !! Atlas HC !! CMS HC !! Comment
|-
| 24/08/16 || 100 || 100 || 100 || 100 || 100 || 100 || 100 ||
|-
| 25/08/16 || 100 || 100 || 100 || 100 || 100 || 100 || 100 ||
|-
| 26/08/16 || 100 || 100 || 100 || 100 || 100 || N/A || 100 ||
|-
| 27/08/16 || 100 || 100 || 100 || 100 || style="background-color: lightgrey;" | 96 || N/A || 100 || Single SRM error on listing: [SRM_INVALID_PATH] No such file or directory
|-
| 28/08/16 || 100 || 100 || 100 || 100 || 100 || N/A || 100 ||
|-
| 29/08/16 || 100 || 100 || 100 || 100 || 100 || N/A || 100 ||
|-
| 30/08/16 || 100 || 100 || 100 || 100 || 100 || N/A || 100 ||
|-
| 31/08/16 || 100 || 100 || 100 || 100 || 100 || N/A || 100 ||
|-
| 01/09/16 || 100 || 100 || 100 || style="background-color: lightgrey;" | 98 || 100 || N/A || 100 || Single SRM test failure because of a user timeout error
|-
| 02/09/16 || 100 || 100 || 100 || style="background-color: lightgrey;" | 96 || 100 || N/A || 100 || Two SRM test failures because of user timeout errors
|-
| 03/09/16 || 100 || 100 || 100 || style="background-color: lightgrey;" | 94 || 100 || N/A || 100 || Single SRM test failure because of a user timeout error - but the next test did not run for a while.
|-
| 04/09/16 || 100 || 100 || 100 || 100 || 100 || N/A || 100 ||
|-
| 05/09/16 || 100 || 100 || 100 || style="background-color: lightgrey;" | 98 || 100 || N/A || 100 || Single SRM test failure because of a user timeout error
|-
| 07/09/16 || 100 || 100 || 100 || style="background-color: lightgrey;" | 98 || 100 || N/A || 100 || Single SRM test failure: [SRM_FAILURE] Unable to issue PrepareToPut request to Castor
 
|-
| 08/09/16 || 100 || 100 || 100 || 100 || 100 || N/A || 100 ||
|-
| 09/09/16 || 100 || 100 || 100 || 100 || 100 || N/A || 100 ||
|-
| 10/09/16 || 100 || 100 || 100 || 100 || style="background-color: lightgrey;" | 96 || N/A || 100 || Single SRM error on listing: [SRM_INVALID_PATH] No such file or directory
|-
| 11/09/16 || 100 || 100 || 100 || style="background-color: lightgrey;" | 98 || 100 || N/A || 100 || Single SRM test failure because of a user timeout error
|-
| 12/09/16 || 100 || 100 || 100 || style="background-color: lightgrey;" | 98 || 100 || N/A || 100 || Single SRM test failure because of a user timeout error
 
|-
| 13/09/16 || 100 || 100 || 100 || 100 || 100 || N/A || 100 ||

Latest revision as of 11:29, 14 September 2016

RAL Tier1 Operations Report for 14th September 2016

Review of Issues during the week 9th to 14th September 2016.
  • Atlas reported a problem with the batch system last Friday (9th Sep). It turned out that there was a problem on one particular worker node (the json module was missing from its Python installation); a minimal check of this kind is sketched below.
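A diagnostic of the kind that would have caught this is a simple import test. The sketch below is hypothetical (it is not the actual check or fix used on the node) and assumes the worker node's default Python is the one the jobs ran under:

    # Verify that the Python json module is present and usable on a worker node.
    try:
        import json
        print("json OK:", json.dumps({"ok": True}))
    except ImportError:
        print("json module missing from this Python installation")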
Resolved Disk Server Issues
  • GDSS665 (AtlasTape - D0T1) failed with a read-only filesystem on Sunday 4th Sep. All files that were awaiting migration were copied to tape. Following investigation a faulty drive was replaced and the server returned to service yesterday (13th Sep).
  • GDSS730 (CMSDisk - D1T0) failed in the early hours of Tuesday morning (13th Sep). Following the replacement of a drive, the server was put back in read-only mode later that day.
Current operational status and issues
  • LHCb see a low but persistent rate of failures when copying the results of batch jobs to Castor. A further problem sometimes occurs when these (failed) writes are then attempted to storage at other sites.
  • The intermittent, low-level, load-related packet loss seen over external connections is still being tracked.
Ongoing Disk Server Issues
  • GDSS779 (LHCbDst - D1T0) reported problems earlier this morning (14th Sep). It is currently out of production while the cause is being investigated.
Notable Changes made since the last meeting.
  • Further services have been moved to the Hyper-V 2012 infrastructure.
  • Oracle carried out preventative maintenance and a firmware update on the tape libraries yesterday (Tuesday 13th Sep).
Declared in the GOC DB
Service Scheduled? Outage/At Risk Start End Duration Reason
arc-ce04.gridpp.rl.ac.uk SCHEDULED OUTAGE 23/09/2016 10:00 30/09/2016 18:00 7 days, 8 hours ARC-CE04 being drained ahead of a reconfiguration and move to run on different infrastructure.
arc-ce03.gridpp.rl.ac.uk SCHEDULED OUTAGE 15/09/2016 13:00 22/09/2016 18:00 7 days, 5 hours ARC-CE03 being drained ahead of a reconfiguration and move to run on different infrastructure.
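The durations quoted above follow directly from the start and end timestamps. As a quick sanity check, the sketch below (a hypothetical helper, not part of the GOC DB tooling) recomputes the arc-ce04 figure:

    # Recompute a downtime duration from GOC DB style timestamps (DD/MM/YYYY HH:MM).
    from datetime import datetime

    fmt = "%d/%m/%Y %H:%M"
    start = datetime.strptime("23/09/2016 10:00", fmt)
    end = datetime.strptime("30/09/2016 18:00", fmt)
    delta = end - start
    print(delta.days, "days,", delta.seconds // 3600, "hours")  # -> 7 days, 8 hours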
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.


  • Preventative Maintenance on the Tape Libraries. Tuesday 13th September.

Listing by category:

  • Castor:
    • Update SRMs to new version, including updating to SL6. This will be done after the Castor 2.1.15 update.
    • Update to Castor version 2.1.15. This awaits successful resolution and testing of the new version.
    • Migration of LHCb data from T10KC to T10KD tapes.
  • Networking:
    • Replace the UKLight Router. Then upgrade the 'bypass' link to the RAL border routers to 2*40Gbit.
  • Fabric:
    • Firmware updates on older disk servers.
Entries in GOC DB starting since the last report.
Service Scheduled? Outage/At Risk Start End Duration Reason
All Castor Tape SCHEDULED WARNING 13/09/2016 08:00 13/09/2016 17:00 9 hours Maintenance on Tape Library. Tape access for read will stop. Writes will be buffered on disk and flushed to tape after the maintenance has completed.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID Level Urgency State Creation Last Update VO Subject
123504 Green Less Urgent In Progress 2016-08-19 2016-08-23 T2K proxy expiration
122827 Green Less Urgent In Progress 2016-07-12 2016-08-22 SNO+ Disk area at RAL
122364 Green Less Urgent On Hold 2016-06-27 2016-08-23 cvmfs support at RAL-LCG2 for solidexperiment.org
121687 Red Less Urgent On Hold 2016-05-20 2016-05-23 packet loss problems seen on RAL-LCG perfsonar
120350 Yellow Less Urgent On Hold 2016-03-22 2016-08-09 LSST Enable LSST at RAL
117683 Amber Less Urgent On Hold 2015-11-18 2016-04-05 CASTOR at RAL not publishing GLUE 2
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 729); CMS HC = CMS HammerCloud

Day OPS Alice Atlas CMS LHCb Atlas HC CMS HC Comment
07/09/16 100 100 100 98 100 N/A 100 Single SRM test failure: [SRM_FAILURE] Unable to issue PrepareToPut request to Castor
08/09/16 100 100 100 100 100 N/A 100
09/09/16 100 100 100 100 100 N/A 100
10/09/16 100 100 100 100 96 N/A 100 Single SRM error on listing: [SRM_INVALID_PATH] No such file or directory
11/09/16 100 100 100 98 100 N/A 100 Single SRM test failure because of a user timeout error
12/09/16 100 100 100 98 100 N/A 100 Single SRM test failure because of a user timeout error
13/09/16 100 100 100 100 100 N/A 100
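For reference, the figures above are consistent with a time-based availability calculation in which a failed SRM test counts the service as unavailable until the next passing test. The sketch below is only an illustration; the roughly half-hourly test frequency is an assumption and is not stated in this report:

    # Daily availability as the fraction of the day the service counted as available.
    def daily_availability(unavailable_minutes, minutes_per_day=1440):
        return 100.0 * (minutes_per_day - unavailable_minutes) / minutes_per_day

    print(round(daily_availability(30)))   # one failed test, ~30 minute gap -> 98
    print(round(daily_availability(60)))   # two failed tests / longer gap   -> 96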