
RAL Tier1 Operations Report for 8th November 2017

Review of Issues during the week 1st to 8th November 2017.
  • There was a failure of the CMS Castor stager headnode early yesterday morning (7th Nov): the processor failed. The physical box was replaced (using one from the Castor 'preprod' system) and the CMS Castor service resumed during the afternoon.
  • There was a short (15-minute) network problem affecting some connectivity to the Tier1 from the RAL core network this morning (8th Nov). It was traced to a failed router, which was taken out of service.
Current operational status and issues
  • There is a fault on the site firewall that is disrupting some specific data flows. Discussions have been taking place with the vendor. This affects data that flows through the firewall (such as to/from worker nodes).
Resolved Disk Server Issues
  • None
Ongoing Disk Server Issues
  • None
Limits on concurrent batch system jobs.
  • CMS Multicore: 550
Notable Changes made since the last meeting.
  • Re-distribution of data in Echo onto the 2015 capacity hardware is ongoing and is expected to complete in a week or two (see the monitoring sketch after this list).
  • A start has been made on updating Castor tape servers to SL7.
  • The "Test" FTS service (used by Atlas) has been enabled for IPv6 (i.e. it is now dual stack).
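
Progress of a rebalance like the Echo re-distribution above is typically judged by watching per-OSD utilisation converge across the old and new hardware. The following is a minimal sketch of such a check, not the team's actual tooling: it assumes a host with the standard ceph CLI and an admin keyring, and the JSON field names ("nodes", "utilization") match recent Ceph releases but may need adjusting on older ones.

import json
import subprocess

def osd_utilisations():
    """Return {osd name: % full} as reported by `ceph osd df --format json`."""
    out = subprocess.check_output(["ceph", "osd", "df", "--format", "json"])
    return {n["name"]: n["utilization"] for n in json.loads(out.decode())["nodes"]}

util = osd_utilisations()
spread = max(util.values()) - min(util.values())
# A shrinking spread suggests data is levelling out across the old and
# new (2015 capacity) hardware as the rebalance progresses.
print("OSDs: %d  min: %.1f%%  max: %.1f%%  spread: %.1f%%"
      % (len(util), min(util.values()), max(util.values()), spread))
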
Entries in GOC DB starting since the last report.
Service                 | Scheduled?  | Outage/At Risk | Start            | End              | Duration | Reason
srm-cms.gridpp.rl.ac.uk | UNSCHEDULED | OUTAGE         | 07/11/2017 06:00 | 07/11/2017 16:00 | 10 hours | Hardware problems with Stager server - CMS Castor instance
Declared in the GOC DB
  • None
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Ongoing or Pending - but not yet formally announced:

  • Re-distribute the data in Echo onto the 2015 capacity hardware. (Ongoing)
  • Update the LHCb Castor SRMs so that timeouts can be configured.
  • The Production FTS service will be enabled for IPv6 (i.e. dual stack); a verification sketch follows this list.
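
Once a service host is dual stack, the change can be sanity-checked by confirming that its name resolves to both A (IPv4) and AAAA (IPv6) records. A minimal sketch; the hostname below is a placeholder, not the real FTS alias, and the lookup raises socket.gaierror if the name does not resolve at all.

import socket

HOST = "fts.example.gridpp.rl.ac.uk"  # placeholder; substitute the real FTS alias

# getaddrinfo returns one entry per (family, socktype, ...) combination;
# collect the address families seen.
families = {info[0] for info in socket.getaddrinfo(HOST, None)}
print("IPv4 (A) record found:   ", socket.AF_INET in families)
print("IPv6 (AAAA) record found:", socket.AF_INET6 in families)
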

Listing by category:

  • Castor:
    • Update systems (initially tape servers) to use SL7 and be configured by Quattor/Aquilon.
    • Move to generic Castor headnodes.
  • Echo:
    • Re-distribute the data in Echo onto the remaining 2015 capacity hardware.
    • Update to the next Ceph version ("Luminous").
  • Networking
    • Extend the number of services on the production network with IPv6 dual stack. (Done for perfSONAR, all squids and the CVMFS Stratum-1 servers.)
  • Services
    • The Production and "Test" (Atlas) FTS3 services will be merged and will make use of a resilient distributed database.
  • Internal
    • DNS servers will be rolled out within the Tier1 network.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency     | State             | Creation   | Last Update | VO   | Subject
131688  | Green | Urgent      | Assigned          | 2017-11-07 | 2017-11-08  | CMS  | T1_UK_RAL has SAM3 CE critical > 2 hours
131299  | Green | Urgent      | Assigned          | 2017-10-24 | 2017-10-24  | CMS  | T1_UK_RAL HammerCloud failures
131213  | Green | Urgent      | In Progress       | 2017-10-19 | 2017-11-06  | CMS  | Issues with fallback requests
130949  | Amber | Urgent      | Waiting for Reply | 2017-10-06 | 2017-10-25  | CMS  | Transfers failing to T1_UK_RAL_Disk
130207  | Red   | Urgent      | In Progress       | 2017-08-24 | 2017-10-25  | MICE | Timeouts when copying MICE reco data to CASTOR
127597  | Red   | Urgent      | On Hold           | 2017-04-07 | 2017-10-05  | OPS  | Check networking and xrootd RAL-CERN performance
124876  | Red   | Less Urgent | On Hold           | 2016-11-07 | 2017-01-01  | CMS  | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
124876  | Red   | Less Urgent | On Hold           | 2015-11-18 | 2017-11-06  | NONE | CASTOR at RAL not publishing GLUE 2
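
The long-open hr.srce.GridFTP-Transfer-ops ticket above concerns the monitoring probe that round-trips a small file through the Echo GridFTP gateway. A rough manual equivalent of that transfer is sketched below, assuming a valid grid proxy and the standard globus-url-copy client; the destination path is hypothetical and should be a directory the ops VO can write to.

import subprocess

SRC = "file:///tmp/gridftp-probe-test"
DST = "gsiftp://gridftp.echo.stfc.ac.uk/ops/test/probe-test"  # path is a guess

# Create a small local test file, push it through the gateway, then pull it back.
with open("/tmp/gridftp-probe-test", "w") as f:
    f.write("gridftp probe test\n")
subprocess.check_call(["globus-url-copy", SRC, DST])
subprocess.check_call(["globus-url-copy", DST, SRC + ".back"])
print("Round trip through the Echo gateway succeeded.")
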
Availability Report
Day      | OPS | Alice | Atlas | CMS | LHCb | Atlas Echo | Comment
01/11/17 | 100 | 100   | 100   | N/A | 100  | 100        | Problem with monitoring system.
02/11/17 | 100 | 100   | 100   | N/A | 100  | 100        | Problem with monitoring system.
03/11/17 | 100 | 100   | 100   | N/A | 100  | 100        | Problem with monitoring system.
04/11/17 | 100 | 100   | 100   | N/A | 100  | 100        | Problem with monitoring system.
05/11/17 | 100 | 100   | 100   | N/A | 100  | 100        | Problem with monitoring system.
06/11/17 | 100 | 100   | 100   | N/A | 100  | 100        | Problem with monitoring system; CMS Castor down for a while owing to headnode failure.
07/11/17 | 100 | 100   | 100   | N/A | 100  | 100        | Problem with monitoring system; CMS CE test failures caused by xroot data access problems to Echo.
Hammercloud Test Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC Echo = Atlas Echo (Template 841); CMS HC = CMS HammerCloud

Day      | Atlas HC | Atlas HC Echo | CMS HC | Comment
01/11/17 | 98       | 97            | 99     |
02/11/17 | 95       | 99            | 98     |
03/11/17 | 100      | 95            | 100    |
04/11/17 | 100      | 98            | 100    |
05/11/17 | 100      | 95            | 100    |
06/11/17 | 100      | 96            | 98     |
07/11/17 | 100      | 0             | 39     |
Notes from Meeting.
  • Following the merger of AtlasScratchDisk with AtlasDataDisk in Castor some months ago, all data on the older disk servers that made up AtlasScratchDisk has now been moved (or removed). These servers will be decommissioned.