RAL Tier1 Operations Report for 5th October 2016

Review of Issues during the week 28th September to 5th October 2016.
  • On Sunday morning (2nd Oct) there was a problem with the LHCb Castor instance, with access to LHCb-DISK not working. LHCb raised a GGUS alarm ticket. The problem was resolved during Sunday by the Castor on-call. Although there was space in the disk pool as a whole, some of the individual disks within it had become full, which led to the problems. Some re-balancing of the disks has since been carried out.
  • On Tuesday morning there was a recurrence of the problem seen just over a week ago with all three top-level BDIIs not working. This lasted until early afternoon, when the problem went away (it was not fixed by us). An outage was declared on the Top-BDII alias in the GOC DB. A similar problem affected at least one other site at the same time. Following discussions on the e-mail list, the BDIIs will be upgraded. (A way of spot-checking the individual instances behind the alias is sketched after this list.)
  • Yesterday (Tuesday 4th Oct) morning there was a problem with a very large queue on AtlasScratch in Castor. (This did not affect other areas in Castor.) Initial attempts to flush the queue did not help. The load on AtlasScratch was reduced (the number of Atlas pilot batch jobs was cut back during the afternoon) and, whether because of this or because the particular jobs ended, AtlasScratch in Castor recovered. The limit on the Atlas pilot jobs was removed this morning (Wed 5th Oct).
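As an aside (not part of the report), the following is a minimal sketch of how the individual top-BDII instances behind the lcgbdii.gridpp.rl.ac.uk alias could be spot-checked during an incident like the one above. It assumes the standard BDII LDAP port (2170) and GLUE 1 base DN (o=grid), that the OpenLDAP ldapsearch client is installed, and Python 3; none of these details come from the report itself.

<pre>
import socket
import subprocess

ALIAS = "lcgbdii.gridpp.rl.ac.uk"   # Top-BDII alias, as named in the GOC DB entries below
PORT = 2170                          # Standard BDII LDAP port (assumption, not from the report)


def instances_behind_alias(alias):
    """Return the IP addresses the alias currently resolves to."""
    _name, _aliases, addresses = socket.gethostbyname_ex(alias)
    return addresses


def responds(address):
    """True if the instance answers a trivial base-scope LDAP query under o=grid."""
    cmd = ["ldapsearch", "-x", "-LLL", "-s", "base",
           "-H", "ldap://%s:%d" % (address, PORT),
           "-b", "o=grid", "objectClass"]
    try:
        subprocess.check_output(cmd, stderr=subprocess.STDOUT, timeout=15)
        return True
    except (subprocess.CalledProcessError, subprocess.TimeoutExpired, OSError):
        return False


if __name__ == "__main__":
    for addr in instances_behind_alias(ALIAS):
        print(addr, "OK" if responds(addr) else "NOT RESPONDING")
</pre>

A check like this distinguishes a single unresponsive instance behind the alias from a problem affecting all three, which is what was seen on Tuesday.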
Resolved Disk Server Issues
  • GDSS738 (LHCbDst - D1T0) failed late on Friday evening (30th Sep). A single faulty disk drive was found. It was returned to service, initially read-only, around lunchtime on Sunday (2nd Oct) and to full production on Tuesday (4th Oct).
Current operational status and issues
  • There is a problem, seen by LHCb, of a low but persistent rate of failures when copying the results of batch jobs to Castor. There is also a further problem that sometimes occurs when these (failed) writes are then attempted to storage at other sites.
  • The intermittent, low-level, load-related packet loss seen over external connections is still being tracked (a rough way of spot-checking this is sketched below).
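The sketch below is a rough, stand-alone way of spot-checking packet loss to a far-end host; it is not the perfSONAR-based monitoring referred to above. The target hostname is a placeholder, and the sketch assumes Linux iputils ping and Python 3.7 or later.

<pre>
import re
import subprocess

TARGET = "far-end-host.example.org"   # Placeholder: substitute the endpoint under investigation

# 200 probes at 0.2 s spacing (the shortest interval permitted without root on Linux).
result = subprocess.run(["ping", "-c", "200", "-i", "0.2", TARGET],
                        capture_output=True, text=True)

# iputils ping prints a summary such as "200 packets transmitted, 198 received, 1% packet loss".
match = re.search(r"([\d.]+)% packet loss", result.stdout)
if match:
    print("packet loss to %s: %s%%" % (TARGET, match.group(1)))
else:
    print("could not parse ping output for %s" % TARGET)
</pre>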
Ongoing Disk Server Issues
  • GDSS677 (CMSTape - D0T1) failed on Thursday afternoon, 29th Sep. There are no files awaiting migration to tape on the server.
Notable Changes made since the last meeting.
  • Firmware updates were carried out on those remaining Castor disk servers from the Clustervision '11 batch for which this had not yet been done.
  • Arc-ce04 was re-installed on the Windows Hyper-V 2012 infrastructure with a bigger disk.
  • (Ongoing at the time of the meeting: replacement of the UKLight router.)
Declared in the GOC DB
{| border=1 align=center
|- bgcolor="#7c8aaf"
! Service
! Scheduled?
! Outage/At Risk
! Start
! End
! Duration
! Reason
|-
| All Storage (all SRM endpoints)
| SCHEDULED
| WARNING
| 05/10/2016 09:00
| 05/10/2016 15:00
| 6 hours
| Series of breaks in the external data path to/from our storage while the network router is replaced and tests carried out on the resilience of the links. Internal access to storage unaffected.
|}
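For reference, downtime declarations like the one above can also be pulled programmatically. The sketch below assumes the public GOC DB programmatic interface at goc.egi.eu (method get_downtime with the topentity and ongoing_only parameters) and the Python requests library; since the exact XML field names are not guaranteed here, it simply prints whichever fields each entry carries.

<pre>
import requests
import xml.etree.ElementTree as ET

# Public (unauthenticated) GOC DB programmatic interface; parameter names as we
# understand them, worth double-checking against the GOC DB documentation.
URL = "https://goc.egi.eu/gocdbpi/public/"
PARAMS = {"method": "get_downtime", "topentity": "RAL-LCG2", "ongoing_only": "yes"}

response = requests.get(URL, params=PARAMS, timeout=30)
response.raise_for_status()

root = ET.fromstring(response.content)
for downtime in root.findall("DOWNTIME"):
    # Print whatever child fields the service returns rather than assuming exact tag names.
    fields = {child.tag: (child.text or "").strip() for child in downtime}
    print(fields.get("SEVERITY", "?"), fields.get("START_DATE", "?"),
          fields.get("END_DATE", "?"), "-", fields.get("DESCRIPTION", ""))
</pre>

Run during the warning above, this would list the storage warning alongside any other downtime currently in force for RAL-LCG2.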
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

  • Replace the UKLight Router, including upgrading the 'bypass' link to the RAL border routers to 40Gbit. Scheduled for Wednesday 5th October (in progress at the time of the meeting).
  • Intervention on Tape Libraries - early November.

Listing by category:

  • Castor:
    • Update SRMs to new version, including updating to SL6. This will be done after the Castor 2.1.15 update.
    • Update to Castor version 2.1.15. Planning to roll out January 2017.
    • Migration of LHCb data from T10KC to T10KD tapes.
  • Networking:
    • Replace the UKLight Router. Then upgrade the 'bypass' link to the RAL border routers to 40Gbit.
  • Fabric:
    • Firmware updates on older disk servers.
Entries in GOC DB starting since the last report.
{| border=1 align=center
|- bgcolor="#7c8aaf"
! Service
! Scheduled?
! Outage/At Risk
! Start
! End
! Duration
! Reason
|-
| All Storage (all SRM endpoints)
| SCHEDULED
| WARNING
| 05/10/2016 09:00
| 05/10/2016 15:00
| 6 hours
| Series of breaks in the external data path to/from our storage while the network router is replaced and tests carried out on the resilience of the links. Internal access to storage unaffected.
|-
| lcgbdii.gridpp.rl.ac.uk
| UNSCHEDULED
| OUTAGE
| 04/10/2016 07:00
| 04/10/2016 13:29
| 6 hours and 29 minutes
| We have an ongoing problem affecting all three of our production Top-BDII systems that are behind the alias.
|-
| arc-ce04.gridpp.rl.ac.uk
| SCHEDULED
| OUTAGE
| 23/09/2016 10:00
| 30/09/2016 18:00
| 7 days, 8 hours
| ARC-CE04 being drained ahead of a reconfiguration and move to run on different infrastructure.
|}
Open GGUS Tickets (Snapshot during morning of meeting)
{| border=1 align=center
|- bgcolor="#7c8aaf"
! GGUS ID !! Level !! Urgency !! State !! Creation !! Last Update !! VO !! Subject
|-
| 124188 || Green || Less Urgent || In Progress || 2016-10-03 || 2016-10-03 || Atlas || UK Lpad-RAL-LCG224 : Frontier squid down
|-
| 123504 || Yellow || Less Urgent || Waiting for Reply || 2016-08-19 || 2016-09-20 || T2K || proxy expiration
|-
| 122827 || Green || Less Urgent || Waiting for Reply || 2016-07-12 || 2016-09-14 || SNO+ || Disk area at RAL
|-
| 121687 || Red || Less Urgent || On Hold || 2016-05-20 || 2016-09-30 ||  || packet loss problems seen on RAL-LCG perfsonar
|-
| 120350 || Yellow || Less Urgent || On Hold || 2016-03-22 || 2016-08-09 || LSST || Enable LSST at RAL
|-
| 117683 || Amber || Less Urgent || On Hold || 2015-11-18 || 2016-04-05 ||  || CASTOR at RAL not publishing GLUE 2 (Rob & Jens will discuss & update ticket later today)
|}
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 729); CMS HC = CMS HammerCloud

{| border=1 align=center
|- bgcolor="#7c8aaf"
! Day !! OPS !! Alice !! Atlas !! CMS !! LHCb !! Atlas HC !! CMS HC !! Comment
|-
| 28/09/16 || 100 || 100 || 100 || style="background-color: lightgrey;" | 98 || 100 || N/A || 100 || Single SRM test failure because of a user timeout error
|-
| 29/09/16 || 100 || 100 || 100 || style="background-color: lightgrey;" | 98 || 100 || N/A || 100 || Single SRM test failure because of a user timeout error
|-
| 30/09/16 || 100 || 100 || 100 || 100 || 100 || N/A || 100 ||
|-
| 01/10/16 || 100 || 100 || 100 || 100 || 100 || N/A || 100 ||
|-
| 02/10/16 || 100 || 100 || 100 || 100 || 100 || N/A || 100 ||
|-
| 03/10/16 || 100 || 100 || 100 || 100 || 100 || N/A || N/A ||
|-
| 04/10/16 || 100 || 100 || 100 || 100 || 100 || N/A || 100 ||
|}
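As a quick arithmetic cross-check of the table above, the snippet below averages the daily figures over the seven days. Equal weighting of the days is assumed; the official availability numbers are produced by the WLCG/EGI dashboards, not by this script.

<pre>
# Daily availability figures (%) copied from the table above, 28/09/16 - 04/10/16.
daily = {
    "OPS":   [100, 100, 100, 100, 100, 100, 100],
    "Alice": [100, 100, 100, 100, 100, 100, 100],
    "Atlas": [100, 100, 100, 100, 100, 100, 100],
    "CMS":   [98, 98, 100, 100, 100, 100, 100],
    "LHCb":  [100, 100, 100, 100, 100, 100, 100],
}

# Equally weighted weekly mean per VO; CMS comes out at roughly 99.4%.
for vo, values in daily.items():
    print("%-6s %.1f%%" % (vo, sum(values) / float(len(values))))
</pre>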