

RAL Tier1 Operations Report for 12th April 2017

Review of Issues during the week 5th to 12th April 2017.
  • Problems with the LHCb Castor instance have continued this last week. Under sustained load the service (specifically the SRMs) fails. We are working with LHCb on the problem and are currently running with a reduced rate of LHCb merging jobs, which are a particular cause of load on Castor. A GGUS alarm ticket was received from LHCb about this problem. Note added just after the meeting: as agreed, we will revert the SRM update that was made on 23rd March.
  • We have also seen problems with the Atlas Castor instance (see the availability report below). This instance is running the same Castor SRM version as the LHCb instance.
  • On Saturday 8th April there was a problem with one of the Windows Hyper-V 2012 hypervisors. A small number of machines were left in a bad state, including one of the CEs and an Argus system. This affected batch job submission during the day. The problem was resolved by the on-call team. We received a GGUS alarm ticket from LHCb about this problem. There is also an ongoing problem on the Alice VO box that may be related.
  • There was a problem with the CMS Castor instance that arose on Saturday. It was caused, at least in part, by a memory leak in the Castor Transfer Manager. The problem was missed in the noise caused by the hypervisor failure above and was not resolved until Monday morning.
  • A transceiver was replaced in one of the four links between the Tier1 network and the OPNR router. Errors were seen in the network monitoring for this specific physical link.
Resolved Disk Server Issues
  • None
Current operational status and issues
  • We are still seeing some failures of the CMS SAM tests against the SRM. These affect our CMS availability figures, although the failure rate is lower than it was a few weeks ago.
Ongoing Disk Server Issues
  • gdss673 (LHCb-Tape) was removed from production last Wednesday (5th April) after a double disk failure. Work to repair it is ongoing.
Limits on concurrent batch system jobs.
  • Atlas Pilot (Analysis) 1500
  • CMS Multicore 550
  • LHCb 500 (reduced in response to the ongoing problems with LHCb Castor)
Notable Changes made since the last meeting.
  • Increased the limit on the number of CMS multicore jobs from 460 to 550, reflecting the increased pledge for 2017.
  • Out of Hours cover for the CEPH ECHO service is being piloted.
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
lcgwms04.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 12/04/2017 09:05 | 18/04/2017 12:00 | 6 days, 2 hours and 55 minutes | server migration
Advance warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Pending - but not yet formally announced:

  • Update Castor SRMs - CMS & GEN still to do. This is awaiting a full understanding of the problem seen with LHCb.
  • Chiller replacement - work ongoing. (The first chiller replacement has been completed; the second replacement chiller is now in place.)
  • Merge AtlasScratchDisk into larger Atlas disk pool.

Listing by category:

  • Castor:
    • Update SRMs to new version, including updating to SL6.
    • Bring some newer disk servers ('14 generation) into service, replacing some older ('12 generation) servers.
  • Networking
    • Enable the first services on the production network with IPv6 once the addressing scheme is agreed.
  • Infrastructure:
    • Two of the chillers supplying the air-conditioning for the R89 machine room are being replaced.
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
lcgwms04.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 12/04/2017 09:05 | 18/04/2017 12:00 | 6 days, 2 hours and 55 minutes | server migration
srm-lhcb.gridpp.rl.ac.uk | UNSCHEDULED | OUTAGE | 09/04/2017 12:00 | 10/04/2017 12:00 | 24 hours | Problems with LHCb transfers
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
127617 | Green | Alarm | In Progress | 2017-04-09 | 2017-04-09 | LHCb | RAL srm-s are down for LHCb
127612 | Green | Alarm | In Progress | 2017-04-08 | 2017-04-10 | LHCb | CEs at RAL not responding
127611 | Green | Top Priority | In Progress | 2017-04-08 | 2017-04-11 | Alice | ALICE VOBOX network problem
127598 | Green | Urgent | In Progress | 2017-04-07 | 2017-04-07 | CMS | UK XRootD Redirector
127597 | Green | Urgent | In Progress | 2017-04-07 | 2017-04-10 | CMS | Check networking and xrootd RAL-CERN performance
127388 | Green | Less Urgent | In Progress | 2017-03-29 | 2017-04-03 | LHCb | [FATAL] Connection error for some file
127240 | Green | Urgent | In Progress | 2017-03-21 | 2017-04-05 | CMS | Staging Test at UK_RAL for Run2
126905 | Green | Less Urgent | Waiting Reply | 2017-03-02 | 2017-04-03 | solid | finish commissioning cvmfs server for solidexperiment.org
126184 | Amber | Less Urgent | In Progress | 2017-01-26 | 2017-02-07 | Atlas | Request of inputs for new sites monitoring
124876 | Red | Less Urgent | On Hold | 2016-11-07 | 2017-01-01 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2017-03-02 | | CASTOR at RAL not publishing GLUE 2.
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC ECHO = Atlas ECHO (Template 841); CMS HC = CMS HammerCloud

Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | Atlas HC ECHO | CMS HC | Comment
05/04/17 | 100 | 100 | 100 | 100 | 88 | 100 | 100 | N/A | SRM test failures.
06/04/17 | 100 | 100 | 100 | 99 | 84 | 100 | 100 | 100 | SRM test failures for both CMS and LHCb.
07/04/17 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
08/04/17 | 100 | 96 | 88 | 75 | 94 | 100 | 100 | 100 | A hypervisor failure led to problems for one of the CEs and Argus.
09/04/17 | 100 | 100 | 100 | 45 | 88 | 100 | 100 | 100 | CMS: problem with CMS Castor (transfer manager problems); LHCb: SRM test failures.
10/04/17 | 100 | 100 | 100 | 60 | 100 | 100 | 99 | 99 | CMS: ongoing problem with CMS Castor (above) fixed during the morning.
11/04/17 | 100 | 100 | 58 | 93 | 80 | 98 | 98 | 99 | All three are SRM test failures caused by Castor problems.
Notes from Meeting.
  • None yet