RAL Tier1 Operations Report for 21st September 2016

Review of Issues during the week 14th to 21st September 2016.
  • A brief power dip affected parts of the RAL site on Wednesday afternoon, 14th Sep. The R89 machine room was OK; however, the dip was seen by equipment in the Atlas building. This had no effect on our services.
  • On Sunday (18th Sep) there was a problem with the CVMFS squids. High access rates were seen on the stratum1 server that correlated with the problems. Some parameters on the stratum1 have been tuned and the version of cvmfs on all worker nodes was reverted to CVMFS 2.1.20 (a version-check sketch follows this list).
  • There is a problem on the OPN link: one of the links appears not to be working outbound. This is being investigated.
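Since the rollback touched every worker node, a quick cross-check of the installed client version is a natural follow-up. The script below is a minimal sketch only, not the Tier1's actual tooling: it assumes passwordless SSH to the nodes, an RPM-managed cvmfs client, and uses placeholder hostnames.

  #!/usr/bin/env python
  # Minimal sketch (illustrative, not RAL tooling): flag worker nodes whose
  # installed cvmfs RPM does not match the version the report says was rolled
  # back to. Assumes passwordless SSH and an RPM-managed cvmfs client;
  # the hostnames below are placeholders.
  import subprocess

  TARGET_VERSION = "2.1.20"                          # version quoted in the report
  WORKER_NODES = ["wn-example-01", "wn-example-02"]  # hypothetical hostnames

  def cvmfs_version(host):
      """Return the installed cvmfs RPM version on a host, or None on error."""
      cmd = ["ssh", host, "rpm", "-q", "--qf", "%{VERSION}", "cvmfs"]
      try:
          return subprocess.check_output(cmd).decode().strip()
      except subprocess.CalledProcessError:
          return None

  for node in WORKER_NODES:
      found = cvmfs_version(node)
      if found != TARGET_VERSION:
          print("%s: expected cvmfs %s, found %s" % (node, TARGET_VERSION, found))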
Resolved Disk Server Issues
  • GDSS779 (LHCbDst - D1T0) reported problems on 14th September. It was returned to service in read-only mode the following day, then back to full service on Monday (19th). Three disks showing media errors were replaced.
Current operational status and issues
  • LHCb are seeing a low but persistent rate of failures when copying the results of batch jobs to Castor. A further problem sometimes occurs when these failed writes are then attempted against storage at other sites.
  • The intermittent, low-level, load-related packet loss seen over external connections is still being tracked.
Ongoing Disk Server Issues
  • None
Notable Changes made since the last meeting.
  • We continue to move services (including arc-ce03) to the Hyper-V 2012 infrastructure.
  • Upgraded Condor to 8.4.8 on some systems (see the version-check sketch after this list).
  • The HPE worker nodes have completed testing and other actions; they are now being used to check their behaviour under SL7.
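Because the 8.4.8 upgrade covers only some systems, it can be useful to see which nodes actually report the new version. This is a minimal sketch under the same assumptions as above (passwordless SSH, placeholder hostnames); it simply parses the first line of condor_version output.

  #!/usr/bin/env python
  # Minimal sketch (illustrative only): list which batch nodes report the
  # upgraded HTCondor version. Assumes passwordless SSH and that condor_version
  # is on the remote PATH; hostnames are placeholders, not real RAL nodes.
  import subprocess

  EXPECTED = "8.4.8"
  NODES = ["lcg-example-01", "lcg-example-02"]  # hypothetical hostnames

  def condor_version(host):
      """Parse the version number from the first line of 'condor_version' output."""
      out = subprocess.check_output(["ssh", host, "condor_version"]).decode()
      # Output starts with a line like: $CondorVersion: 8.4.8 <build date> ... $
      return out.split()[1]

  for node in NODES:
      version = condor_version(node)
      status = "upgraded" if version == EXPECTED else "not yet upgraded"
      print("%-22s %-8s %s" % (node, version, status))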
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
arc-ce04.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 23/09/2016 10:00 | 30/09/2016 18:00 | 7 days, 8 hours | ARC-CE04 being drained ahead of a reconfiguration and move to run on different infrastructure.
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

  • Intervention on Tape Libraries - early November.

Listing by category:

  • Castor:
    • Update SRMs to new version, including updating to SL6. This will be done after the Castor 2.1.15 update.
    • Update to Castor version 2.1.15. This awaits successful resolution and testing of the new version.
    • Migration of LHCb data from T10KC to T10KD tapes.
  • Networking:
    • Replace the UKLight Router. Then upgrade the 'bypass' link to the RAL border routers to 2*40Gbit.
  • Fabric:
    • Firmware updates on older disk servers.
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
arc-ce03.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 15/09/2016 13:00 | 22/09/2016 18:00 | 7 days, 5 hours | ARC-CE03 being drained ahead of a reconfiguration and move to run on different infrastructure.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
123504 | Green | Less Urgent | Waiting for Reply | 2016-08-19 | 2016-09-20 | T2K | proxy expiration
122827 | Green | Less Urgent | Waiting for Reply | 2016-07-12 | 2016-09-14 | SNO+ | Disk area at RAL
122364 | Green | Less Urgent | On Hold | 2016-06-27 | 2016-08-23 | | cvmfs support at RAL-LCG2 for solidexperiment.org
121687 | Red | Less Urgent | On Hold | 2016-05-20 | 2016-05-23 | | packet loss problems seen on RAL-LCG perfsonar
120350 | Yellow | Less Urgent | On Hold | 2016-03-22 | 2016-08-09 | LSST | Enable LSST at RAL
117683 | Amber | Less Urgent | On Hold | 2015-11-18 | 2016-04-05 | | CASTOR at RAL not publishing GLUE 2
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 729); CMS HC = CMS HammerCloud

Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment
14/09/16 | 100 | 99 | 100 | 100 | 100 | N/A | 100 | AliEn-SE test failures. Problem seen at other sites too.
15/09/16 | 100 | 81 | 100 | 92 | 100 | N/A | 100 | ALICE: AliEn-SE test failures, problem seen at other sites too; CMS: several SRM test failures because of a user timeout error.
16/09/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 |
17/09/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 |
18/09/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 |
19/09/16 | 100 | 100 | 100 | 96 | 100 | N/A | 100 | Two SRM test failures (could not open connection to srm-cms.gridpp.rl.ac.uk) plus CE test failures.
20/09/16 | 100 | 100 | 100 | 98 | 100 | N/A | 100 | CMS: single SRM test failure (user timeout) plus CE test failures. (LHCb figure corrected after the meeting; 50% had originally been reported.)
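As a rough illustration of how individual test failures map to the daily percentages above (the actual SAM test frequency is not stated in this report and is assumed here): if the SRM test runs roughly every half hour, about 48 times a day, then one failed test gives 47/48 ≈ 98% and two failures give 46/48 ≈ 96%, which matches the pattern in the comments.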