RAL Tier1 Operations Report for 9th August 2017

Review of Issues during the fortnight 26th July to 9th August 2017.
  • The problem with file transfers initiated by the CERN FTS3 service to/from our Castor storage was ongoing at the time of the last meeting (26th July). This was traced to an update to the CERN FTS3 services. CERN reverted the change and the problem was resolved.
  • There was a problem with the Atlas Frontier service on Thursday 27th July. We saw high load on the back-end database systems.
  • There was a problem with the test FTS3 service on Friday 28th July. The system hit a hard limit after 2 billion file transfers, the ceiling of a signed 32-bit counter (see the sketch after this list). An emergency update was applied.
  • There was a network break during the morning of Wednesday 2nd August, unfortunately coinciding with staff being at a divisional meeting. There had been a problem with one of the RAL core network stacks on the 25th July, and we had set our router pair (the Extreme X670s) not to flip back to using the link to this failing stack. However, during work to resolve the problem on the failed core stack, our second link to another core stack went down - it appears our routers thought there was a network loop. This caused the Extreme X670 router pair to try switching back to the other connection. The upshot was a complete break in Tier1 connectivity to the core for around an hour. All network systems have since been fully restored and the fail-over configuration returned to its normal state. There was some delay in re-establishing IPv6 connectivity.
  • There was a problem with the Atlas Castor SRMs in the early hours of Saturday 5th August. For reasons not understood, the query rate to the SRMs from Atlas work increased and overwhelmed them. After some work by both the database and Castor on-call staff, an outage was declared for Atlas in the GOC DB. Once the load had reduced the SRMs were able to recover, and the service has run normally since. It is possible the problem was related to the small number of (old) disk servers in the AtlasScratch pool causing poor performance for Castor. The merger of this pool into the larger AtlasDataDisk pool may reduce the chance of this problem recurring.
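The 2-billion figure above is the classic ceiling of a signed 32-bit counter, which cannot count past 2^31 - 1 = 2,147,483,647. A minimal Python sketch of the failure mode (illustrative only; where FTS3 actually keeps its transfer counter is internal to that service):

```python
import ctypes

INT32_MAX = 2**31 - 1          # 2,147,483,647 -- the "2 billion" ceiling

transfers = ctypes.c_int32(INT32_MAX)
print(transfers.value)         # 2147483647: the last transfer ID that still fits

transfers.value += 1           # one more transfer...
print(transfers.value)         # -2147483648: the counter wraps negative
```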
Resolved Disk Server Issues
  • None.
Current operational status and issues
  • We are still seeing failures of the CMS SAM tests against the SRM. These are affecting our (CMS) availabilities. CMS are also looking at file access performance and have turned off "lazy-download". The CMS SRM SAM test success rate has improved since the Castor 2.1.16 upgrade on the 25th May, although it is still not 100%. It is still planned to re-visit this issue now that Castor has been upgraded.
  • There is a problem on the site firewall that is affecting some specific data flows. Discussions have been taking place with the vendor. This is having an effect on our data that flows through the firewall (such as to/from the worker nodes).
Ongoing Disk Server Issues
  • None.
Limits on concurrent batch system jobs.
  • CMS Multicore 550
Notable Changes made since the last meeting.
  • On Monday (7th August) the AtlasScratchDisk was merged into the larger AtlasDataDisk pool in Castor.
  • The planned increases in the number of placement groups in the Echo CEPH Atlas pool have been completed (see the sketch after this list). The remaining third of the 2015 storage purchases has been placed into Echo, and the process of re-distributing data onto this hardware has started.
  • All squid nodes are now IPv4/IPv6 dual stack.
  • "Test" FTS3 instance (used by Atlas) updated to 3.6.10 (emergency update - as this was the first server to reach 2 billion transfers. Due to an internal 32-bit integer being used it completely stopped working at this point.)
  • Power work was carried out in building R26 (the Atlas building) over the weekend of 29/30 July. This had no impact on our operational services.
  • There was a successful UPS/Generator load test this morning (9th Aug). These are done quarterly and this was the first regular test since the building UPS was replaced. (It had been tested shortly after installation).
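For context on the placement-group item above: on Ceph releases of this era, growing a pool's PG count is a two-step change (pg_num, then pgp_num), usually applied in increments so the data movement each step triggers stays manageable. A sketch using the standard ceph CLI; the pool name, target, and step size are illustrative assumptions, not the actual Echo parameters:

```python
import subprocess

POOL = "atlas"        # hypothetical pool name; the real Echo pool name may differ
TARGET_PG_NUM = 4096  # illustrative target, not the actual Echo figure
STEP = 256            # small increments limit the backfill each change triggers

def get_pg_num(pool: str) -> int:
    """Read the pool's current pg_num via the ceph CLI (output: "pg_num: 2048")."""
    out = subprocess.check_output(
        ["ceph", "osd", "pool", "get", pool, "pg_num"], text=True)
    return int(out.split()[-1])

pg_num = get_pg_num(POOL)
while pg_num < TARGET_PG_NUM:
    pg_num = min(pg_num + STEP, TARGET_PG_NUM)
    # Raise pg_num first, then pgp_num so the new groups actually rebalance;
    # in practice one waits for the cluster to return to HEALTH_OK in between.
    subprocess.check_call(["ceph", "osd", "pool", "set", POOL, "pg_num", str(pg_num)])
    subprocess.check_call(["ceph", "osd", "pool", "set", POOL, "pgp_num", str(pg_num)])
```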
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
srm-superb.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 20/07/2017 16:00 | 30/08/2017 13:00 | 40 days, 21 hours | SuperB no longer supported on Castor storage. Retiring endpoint.
srm-hone.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 20/07/2017 16:00 | 30/08/2017 13:00 | 40 days, 21 hours | H1 no longer supported on Castor storage. Retiring endpoint.
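The Duration column follows directly from the Start and End timestamps; for example, the quoted 40 days, 21 hours can be checked in Python:

```python
from datetime import datetime

start = datetime(2017, 7, 20, 16, 0)  # 20/07/2017 16:00
end = datetime(2017, 8, 30, 13, 0)    # 30/08/2017 13:00

delta = end - start
print(f"{delta.days} days, {delta.seconds // 3600} hours")  # 40 days, 21 hours
```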
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Pending - but not yet formally announced:

  • Upgrade the FTS3 service to a version that will no longer support the SOAP interface. The SOAP interface was disabled on Monday (17th July).
  • Re-distribute the data in Echo onto the 2016 capacity hardware. (Ongoing)

Listing by category:

  • Castor:
    • Move to generic Castor headnodes.
  • Echo:
    • Re-distribute the data in Echo onto the remaining 2015 capacity hardware.
  • Networking
    • Enable first services on production network with IPv6 now that the addressing scheme has been agreed. (Perfsonar and all squids now working over IPv6).
  • Services
    • The production FTS will be updated now that the requirement to support the deprecated SOAP interface has been removed.
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
srm-atlas.gridpp.rl.ac.uk | UNSCHEDULED | OUTAGE | 05/08/2017 07:30 | 05/08/2017 12:00 | 4 hours and 30 minutes | Ongoing problems with Atlas SRMs.
Whole site | UNSCHEDULED | WARNING | 25/07/2017 10:57 | 26/07/2017 12:00 | 1 day, 1 hour and 3 minutes | Warning after network problems and Castor reboot.
srm-hone.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 20/07/2017 16:00 | 30/08/2017 13:00 | 40 days, 21 hours | H1 no longer supported on Castor storage. Retiring endpoint.
srm-superb.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 20/07/2017 16:00 | 30/08/2017 13:00 | 40 days, 21 hours | SuperB no longer supported on Castor storage. Retiring endpoint.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
129883 | Green | Urgent | In Progress | 2017-08-01 | 2017-08-03 | CMS | Low HC xrootd success rates at T1_UK_RAL
128991 | Green | Less Urgent | On Hold | 2017-06-16 | 2017-07-20 | Solid | solidexperiment.org CASTOR tape support
127597 | Red | Urgent | On Hold | 2017-04-07 | 2017-06-14 | CMS | Check networking and xrootd RAL-CERN performance
124876 | Red | Less Urgent | On Hold | 2016-11-07 | 2017-01-01 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2017-07-06 | | CASTOR at RAL not publishing GLUE 2.
Availability Report


Day | OPS | Alice | Atlas | CMS | LHCb | Atlas Echo | Comment
26/07/17 | 100 | 100 | 100 | 100 | 100 | 100 |
27/07/17 | 100 | 100 | 100 | 100 | 100 | 100 |
28/07/17 | 100 | 100 | 100 | 100 | 100 | 100 |
29/07/17 | 100 | 100 | 100 | 100 | 100 | 100 |
30/07/17 | 100 | 100 | 100 | 100 | 100 | 100 |
31/07/17 | 100 | 100 | 100 | 100 | 100 | 100 |
01/08/17 | 100 | 100 | 100 | 91 | 100 | 100 | SRM test failures. Mainly user timeouts.
02/08/17 | 95.5 | 100 | 100 | 86 | 94 | 100 | Attributing all to the network break.
03/08/17 | 100 | 100 | 100 | 79 | 100 | 100 | SRM test failures. Mainly user timeouts.
04/08/17 | 100 | 100 | 100 | 93 | 100 | 100 | Some "User timeout over" errors on the SRM test. One or two failed CE tests.
05/08/17 | 100 | 100 | 100 | 96 | 100 | 100 | At least two "Unable to issue PrepareToPut request to Castor" failures on the SRM tests. A few scattered CE test failures.
06/08/17 | 100 | 100 | 100 | 99 | 100 | 100 | There were two "Unable to issue PrepareToPut request to Castor" failures on the SRM tests.
07/08/17 | 100 | 100 | 100 | 94 | 100 | 100 | SRM test failures. Mainly user timeouts.
08/08/17 | 100 | 100 | 100 | 98 | 100 | 100 | Two SRM test failures on GET, both "User Timeout" errors.
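As a rough cross-check, a day's availability is essentially the fraction of the 24 hours for which tests passed, so the network break of around an hour on 2nd August lines up with the 95.5% OPS figure for that day. A small worked sketch (the 65-minute downtime is an assumption; the report only says "around an hour"):

```python
MINUTES_PER_DAY = 24 * 60

def availability_pct(downtime_minutes: float) -> float:
    """Percentage of the day a service was up, given total downtime."""
    return 100 * (MINUTES_PER_DAY - downtime_minutes) / MINUTES_PER_DAY

print(round(availability_pct(65), 1))  # ~65-minute break -> 95.5 (per cent)
```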
Hammercloud Test Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC Echo = Atlas Echo (Template 841); CMS HC = CMS HammerCloud

Day | Atlas HC | Atlas HC Echo | CMS HC | Comment
26/07/17 | 99 | 100 | 100 |
27/07/17 | 97 | 96 | 100 |
28/07/17 | 85 | 91 | 100 |
29/07/17 | 100 | 96 | 100 |
30/07/17 | 100 | 100 | 100 |
31/07/17 | 100 | 100 | 100 |
01/08/17 | 100 | 98 | 100 |
02/08/17 | 97 | 96 | 100 |
03/08/17 | 99 | 100 | 100 |
04/08/17 | 100 | 100 | 100 |
05/08/17 | 57 | 100 | 94 |
06/08/17 | 100 | 100 | 38 |
07/08/17 | 80 | 98 | 67 |
08/08/17 | 68 | 96 | 100 |
Notes from Meeting.
  • None yet