RAL Tier1 Operations Report for 16th August 2017

Review of Issues during the week 9th to 16th August 2017.
  • There was a problem with Atlas Castor during the afternoon and early evening of Thursday 10th August. Atlas Castor was restarted and the problem cleared during the evening; however, the cause is not understood. We have previously had some problems with the Atlas Castor SRMs, but the symptoms of this failure appeared different to those.
Resolved Disk Server Issues
  • GDSS753 (AtlasDataDisk - D1T0) failed in the early hours of Thursday 10th August. It was returned to production early on Friday afternoon (11th). Following a disk drive failure, the RAID card attempted a rebuild, which also failed (either the spare drive was bad, or the failed drive came back and initially appeared good).
Current operational status and issues
  • We are still seeing failures of the CMS SAM tests against the SRM. These are affecting our (CMS) availabilities. CMS are also looking at file access performance and have turned off "lazy-download". The CMS SRM SAM test success rate has improved since the Castor 2.1.16 upgrade on the 25th May, although it is still not 100%. Our investigations are ongoing.
  • There is a problem on the site firewall that is affecting some specific data flows. Discussions have been taking place with the vendor. This affects data that flows through the firewall (such as to/from worker nodes).
Ongoing Disk Server Issues
  • None
Limits on concurrent batch system jobs.
  • CMS Multicore 550
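The figure above is a batch-system configuration cap. As a purely illustrative sketch of what the limit means (not the mechanism the RAL batch system actually uses to enforce it), a dispatcher would only start a further CMS multicore job while the number already running is below the cap:

<pre>
# Illustrative only: the semantics of a "CMS Multicore 550" concurrency cap.
# The real enforcement is done by the batch system itself; this is not its code.

CMS_MULTICORE_LIMIT = 550  # cap quoted in the report

def can_start_cms_multicore(running: int, limit: int = CMS_MULTICORE_LIMIT) -> bool:
    """Return True if another CMS multicore job may be started under the cap."""
    return running < limit

print(can_start_cms_multicore(549))  # True
print(can_start_cms_multicore(550))  # False
</pre>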
Notable Changes made since the last meeting.
  • On Tuesday (15th August) a new version of XRootD was installed on the Echo gateways (v20170724-06470a6).
  • The CVMFS Stratum-1 service (a 2-node High-Availability cluster) is now dual stack with IPv6.
  • This morning (16th Aug) one of the three links that make up the OPN connection was moved to a new circuit that uses a different route, improving the resilience of the overall OPN link.
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
srm-superb.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 20/07/2017 16:00 | 30/08/2017 13:00 | 40 days, 21 hours | SuperB no longer supported on Castor storage. Retiring endpoint.
srm-hone.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 20/07/2017 16:00 | 30/08/2017 13:00 | 40 days, 21 hours | H1 no longer supported on Castor storage. Retiring endpoint.
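For reference, the Duration column above is simply the interval between the Start and End times. A minimal check of the 20/07/2017 16:00 to 30/08/2017 13:00 entries:

<pre>
# Check that the quoted duration matches End - Start for the entries above.
from datetime import datetime

start = datetime(2017, 7, 20, 16, 0)
end = datetime(2017, 8, 30, 13, 0)
print(end - start)  # 40 days, 21:00:00 -- i.e. "40 days, 21 hours"
</pre>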
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Pending - but not yet formally announced:

  • Upgrade the FTS3 service to a version that will no longer support the SOAP interface. The SOAP interface was disabled on Monday (17th July).
  • Re-distribute the data in Echo onto the 2015 capacity hardware. (Ongoing)

Listing by category:

  • Castor:
    • Move to generic Castor headnodes.
  • Echo:
    • Re-distribute the data in Echo onto the remaining 2015 capacity hardware.
  • Networking
    • Extend IPv6 dual stack to further services on the production network. (Done for Perfsonar, all squids and the CVMFS Stratum-1 servers).
  • Services
    • The production FTS will be updated now that the requirement to support the deprecated SOAP interface has been removed.
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
srm-hone.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 20/07/2017 16:00 | 30/08/2017 13:00 | 40 days, 21 hours | H1 no longer supported on Castor storage. Retiring endpoint.
srm-superb.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 20/07/2017 16:00 | 30/08/2017 13:00 | 40 days, 21 hours | SuperB no longer supported on Castor storage. Retiring endpoint.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
130093 | Green | Less Urgent | In Progress | 2017-08-16 | 2017-08-16 | CMS | IPv6 address for CMS UK xrootd redirector at RAL?
129998 | Green | Less Urgent | In Progress | 2017-08-09 | 2017-08-15 | Atlas | High job failure rate at RAL-LCG2-ECHO_MCORE caused by lost heartbeats
128991 | Green | Less Urgent | On Hold | 2017-06-16 | 2017-07-20 | Solid | solidexperiment.org CASTOR tape support
127597 | Red | Urgent | On Hold | 2017-04-07 | 2017-06-14 | CMS | Check networking and xrootd RAL-CERN performance
124876 | Red | Less Urgent | On Hold | 2016-11-07 | 2017-01-01 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2017-07-06 | | CASTOR at RAL not publishing GLUE 2.
Availability Report
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas Echo | Comment
09/08/17 | 100 | 100 | 100 | 92 | 100 | 100 | Sporadic SRM test failures with "user timeout".
10/08/17 | 100 | 100 | 77 | 76 | 100 | 100 | Atlas Castor problems during the evening; CMS: sporadic SRM test failures with "user timeout".
11/08/17 | 100 | 100 | 100 | 83 | 100 | 100 | Sporadic SRM test failures with "user timeout".
12/08/17 | 100 | 100 | 100 | 81 | 100 | 100 | Sporadic SRM test failures with "user timeout".
13/08/17 | 100 | 100 | 100 | 86 | 100 | 100 | Sporadic SRM test failures with "user timeout".
14/08/17 | 100 | 100 | 100 | 84 | 100 | 100 | A lot of SRM test failures with "user timeout".
15/08/17 | 96.2 | 100 | 100 | 97 | 100 | 100 | CMS: a small number of SRM test failures with "user timeout".
Hammercloud Test Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC Echo = Atlas Echo (Template 841); CMS HC = CMS HammerCloud

Day | Atlas HC | Atlas HC Echo | CMS HC | Comment
09/08/17 | 95 | 96 | 100 |
10/08/17 | 71 | 88 | 100 |
11/08/17 | 96 | 100 | 100 |
12/08/17 | 96 | 100 | 100 |
13/08/17 | 100 | 100 | 100 |
14/08/17 | 100 | 100 | 100 |
15/08/17 | 100 | 100 | 100 |
Notes from Meeting.
  • It was noted that all the 2015 capacity storage is now in Echo. As a result Echo now has around 13.4 PB of raw space, which gives approximately 10 PB of usable space. Work is ongoing for Echo (CEPH) to re-balance its existing data across all the storage; this will take around three weeks in total. Once that is complete, the additional capacity can be used by the VOs.
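As a rough illustration of the raw-to-usable figures quoted above, the sketch below derives usable capacity from raw capacity for an erasure-coded Ceph pool. The 8+3 profile used here is an assumption for illustration only; the report does not state Echo's actual pool configuration.

<pre>
# Rough illustration of raw vs. usable capacity for an erasure-coded Ceph pool.
# The 8+3 erasure-coding profile is assumed for illustration; the report does
# not state Echo's actual pool configuration.

def usable_capacity_pb(raw_pb: float, k: int = 8, m: int = 3) -> float:
    """Usable space of a k+m erasure-coded pool, ignoring other overheads."""
    return raw_pb * k / (k + m)

print(f"~{usable_capacity_pb(13.4):.1f} PB usable")  # ~9.7 PB, close to the ~10 PB quoted
</pre>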