RAL Tier1 Operations Report for 9th August 2017
Review of Issues during the fortnight 26th July to 9th August 2017.
- The problem with file transfers initiated by the CERN FTS3 service to/from our Castor storage was ongoing at the time of the last meeting (26th July). This was traced to an update to the CERN FTS3 services. CERN reverted the change and the problem was resolved.
- There was a problem with the Atlas Frontier service on Thursday 27th July. We saw high load on the back-end database systems.
- There was a problem with the test FTS3 service on Friday 28th July: the system hit an internal limit after 2 billion file transfers. An emergency update was applied.
- There was a network break during the morning of Wednesday 2nd August, which unfortunately coincided with staff being at a divisional meeting. There had been a problem with one of the RAL core network stacks on the 25th July, and we had set our router pair (the Extreme X670s) not to flip back to the link to this failing stack. However, during work to resolve the problem on the failed core stack, our second link to another core stack went down - it appears our routers thought there was a network loop. This caused the Extreme X670 router pair to try switching back to the other connection. The upshot was a complete break in Tier1 connectivity to the core for around an hour. All network systems have since been fully restored and the fail-over configuration returned to its normal state. There was some delay in re-establishing IPv6 connectivity.
- There was a problem with the Atlas Castor SRMs in the early hours of Saturday 5th August. For reasons not understood there was an increase in the query rate to the SRMs from Atlas work, which overwhelmed them. After some work by both the database and Castor on-call staff, an outage was declared for Atlas in the GOC DB. Once the load had reduced the SRMs were able to recover, and the service has run normally since. It is possible the problem was related to the small number of (old) disk servers in the AtlasScratch pool causing poor performance for Castor. The merger of this pool into the larger AtlasDataDisk pool may reduce the chance of this problem recurring.
Resolved Disk Server Issues
- None.
Current operational status and issues
- We are still seeing a rate of failures of the CMS SAM tests against the SRM. These are affecting our (CMS) availabilities. CMS are also looking at file access performance and have turned off "lazy-download". The CMS SRM SAM test success rate has improved since the Castor 2.1.16 upgrade on the 25th May, although it is still not 100%. It is still planned to re-visit this issue now that Castor has been upgraded.
- There is a problem on the site firewall that is affecting some specific data flows, including data that passes through the firewall (such as to/from worker nodes). Discussions have been taking place with the vendor.
Ongoing Disk Server Issues
- None
Limits on concurrent batch system jobs.
- CMS Multicore 550
Notable Changes made since the last meeting.
- On Monday (7th August) the AtlasScratchDisk was merged into the larger AtlasDataDisk pool in Castor.
- The planned increase in the number of placement groups in the Echo CEPH Atlas pool has been completed. The remaining third of the 2015 storage purchases has been placed into Echo, and the process of re-distributing data onto this hardware has started (see the placement-group sizing sketch after this list).
- All squid nodes are now IPv4/IPv6 dual stack.
- "Test" FTS3 instance (used by Atlas) updated to 3.6.10 (emergency update - as this was the first server to reach 2 billion transfers. Due to an internal 32-bit integer being used it completely stopped working at this point.)
- Power work was carried out in building R26 (the Atlas building) over the weekend of 29/30 July. This had no impact on our operational services.
- There was a successful UPS/Generator load test this morning (9th Aug). These are done quarterly and this was the first regular test since the building UPS was replaced. (It had been tested shortly after installation).
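The placement-group change noted above is routine Ceph capacity management: when a pool grows onto new hardware, its PG count is raised so data stays evenly spread across OSDs. As a rough illustration only, the commonly cited sizing rule of thumb is about 100 PGs per OSD, divided by the pool's data multiplier (replica count, or k+m for erasure-coded pools) and rounded up to a power of two. The OSD count and pool parameters below are hypothetical, not the actual Echo values:

```python
# Minimal sketch of the usual Ceph placement-group sizing rule of thumb.
# The numbers here are illustrative; they are NOT the actual Echo values.

def suggested_pg_count(num_osds: int, pgs_per_osd: int = 100,
                       data_chunks: int = 3) -> int:
    """Return a power-of-two PG count for one pool.

    data_chunks is the pool's data multiplier: the replica count for a
    replicated pool, or k+m for an erasure-coded pool.
    """
    raw = num_osds * pgs_per_osd / data_chunks
    # Round up to the next power of two, as the Ceph docs recommend.
    power = 1
    while power < raw:
        power *= 2
    return power

if __name__ == "__main__":
    # Hypothetical cluster: 600 OSDs, erasure-coded pool with k+m = 18 chunks.
    print(suggested_pg_count(600, data_chunks=18))  # -> 4096
```

On a live cluster the chosen count would then be applied to the pool's `pg_num` (and `pgp_num`) settings.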
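The "Test" FTS3 failure above is the classic signed 32-bit overflow: such a counter tops out at 2,147,483,647 (2^31 - 1) and wraps negative on the next increment. A minimal sketch of the failure mode only, not of FTS3's actual internals:

```python
# Illustration of why a signed 32-bit transfer counter fails near 2 billion:
# it tops out at 2**31 - 1 = 2,147,483,647 and wraps negative on the next
# increment. ctypes performs no overflow checking, so the wrap is silent.
import ctypes

counter = ctypes.c_int32(2**31 - 1)   # highest representable transfer count
print(counter.value)                  # 2147483647
counter.value += 1                    # one more transfer...
print(counter.value)                  # -2147483648: the counter has wrapped
```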
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
srm-superb.gridpp.rl.ac.uk, | SCHEDULED | OUTAGE | 20/07/2017 16:00 | 30/08/2017 13:00 | 40 days, 21 hours | SuperB no longer supported on Castor storage. Retiring endpoint. |
srm-hone.gridpp.rl.ac.uk, | SCHEDULED | OUTAGE | 20/07/2017 16:00 | 30/08/2017 13:00 | 40 days, 21 hours | H1 no longer supported on Castor storage. Retiring endpoint. |
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Pending - but not yet formally announced:
- Upgrade the FTS3 service to a version that will no longer support the SOAP interface. The SOAP interface was disabled on Monday (17th July).
- Re-distribute the data in Echo onto the 2016 capacity hardware. (Ongoing)
Listing by category:
- Castor:
  - Move to generic Castor headnodes.
- Echo:
  - Re-distribute the data in Echo onto the remaining 2015 capacity hardware.
- Networking:
  - Enable first services on production network with IPv6 now that the addressing scheme has been agreed. (Perfsonar and all squids now working over IPv6; see the dual-stack check after this list.)
- Services:
  - The production FTS will be updated now that the requirement to support the deprecated SOAP interface has been removed.
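For the IPv6 items above, a simple way to verify that a service host is genuinely dual stack is to check that it resolves in both address families. A minimal sketch; the hostname and port are placeholders, not actual Tier1 endpoints:

```python
# Quick dual-stack check: confirm a host resolves over both IPv4 and IPv6.
import socket

def resolves(host: str, family: int) -> bool:
    """True if the host has an address record for the given family."""
    try:
        socket.getaddrinfo(host, 3128, family, socket.SOCK_STREAM)
        return True
    except socket.gaierror:
        return False

host = "squid.example.org"  # hypothetical squid node
print("IPv4:", resolves(host, socket.AF_INET))
print("IPv6:", resolves(host, socket.AF_INET6))
```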
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
srm-atlas.gridpp.rl.ac.uk, | UNSCHEDULED | OUTAGE | 05/08/2017 07:30 | 05/08/2017 12:00 | 4 hours and 30 minutes | Ongoing problems with Atlas SRMs. |
Whole site. | UNSCHEDULED | WARNING | 25/07/2017 10:57 | 26/07/2017 12:00 | 1 day, 1 hour and 3 minutes | Warning after network problems and Castor reboot. |
srm-hone.gridpp.rl.ac.uk, | SCHEDULED | OUTAGE | 20/07/2017 16:00 | 30/08/2017 13:00 | 40 days, 21 hours | H1 no longer supported on Castor storage. Retiring endpoint. |
srm-superb.gridpp.rl.ac.uk, | SCHEDULED | OUTAGE | 20/07/2017 16:00 | 30/08/2017 13:00 | 40 days, 21 hours | SuperB no longer supported on Castor storage. Retiring endpoint. |
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
129883 | Green | Urgent | In Progress | 2017-08-01 | 2017-08-03 | CMS | Low HC xrootd success rates at T1_UK_RAL |
128991 | Green | Less Urgent | On Hold | 2017-06-16 | 2017-07-20 | Solid | solidexperiment.org CASTOR tape support |
127597 | Red | Urgent | On Hold | 2017-04-07 | 2017-06-14 | CMS | Check networking and xrootd RAL-CERN performance |
124876 | Red | Less Urgent | On Hold | 2016-11-07 | 2017-01-01 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk |
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2017-07-06 | | CASTOR at RAL not publishing GLUE 2. |
Availability Report
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas Echo | Comment |
---|---|---|---|---|---|---|---|
26/07/17 | 100 | 100 | 100 | 100 | 100 | 100 | |
27/07/17 | 100 | 100 | 100 | 100 | 100 | 100 | |
28/07/17 | 100 | 100 | 100 | 100 | 100 | 100 | |
29/07/17 | 100 | 100 | 100 | 100 | 100 | 100 | |
30/07/17 | 100 | 100 | 100 | 100 | 100 | 100 | |
31/07/17 | 100 | 100 | 100 | 100 | 100 | 100 | |
01/08/17 | 100 | 100 | 100 | 91 | 100 | 100 | SRM test failures. Mainly user timeouts. |
02/08/17 | 95.5 | 100 | 100 | 86 | 94 | 100 | Attributing all to network break. |
03/08/17 | 100 | 100 | 100 | 79 | 100 | 100 | SRM test failures. Mainly user timeouts. |
04/08/17 | 100 | 100 | 100 | 93 | 100 | 100 | Some ‘User timeout over’ errors on the SRM test. One or two failed CE tests. |
05/08/17 | 100 | 100 | 100 | 96 | 100 | 100 | At least two ‘Unable to issue PrepareToPut request to Castor’ failures on the SRM tests. A few scattered CE test failures. |
06/08/17 | 100 | 100 | 100 | 99 | 100 | 100 | There were two ‘Unable to issue PrepareToPut request to Castor’ failures on the SRM tests. |
07/08/17 | 100 | 100 | 100 | 94 | 100 | 100 | SRM test failures. Mainly user timeouts. |
08/08/17 | 100 | 100 | 100 | 98 | 100 | 100 | Two SRM test failures on GET; both “User Timeout” errors. |
Hammercloud Test Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC Echo = Atlas Echo (Template 841); CMS HC = CMS HammerCloud
Day | Atlas HC | Atlas HC Echo | CMS HC | Comment |
---|---|---|---|---|
26/07/17 | 99 | 100 | 100 | |
27/07/17 | 97 | 96 | 100 | |
28/07/17 | 85 | 91 | 100 | |
29/07/17 | 100 | 96 | 100 | |
30/07/17 | 100 | 100 | 100 | |
31/07/17 | 100 | 100 | 100 | |
01/08/17 | 100 | 98 | 100 | |
02/08/17 | 97 | 96 | 100 | |
03/08/17 | 99 | 100 | 100 | |
04/08/17 | 100 | 100 | 100 | |
05/08/17 | 57 | 100 | 94 | |
06/08/17 | 100 | 100 | 38 | |
07/08/17 | 80 | 98 | 67 | |
08/08/17 | 68 | 96 | 100 |
Notes from Meeting.
- None yet