Difference between revisions of "Tier1 Operations Report 2017-06-28"
From GridPP Wiki
Latest revision as of 09:48, 5 July 2017
RAL Tier1 Operations Report for 28th June 2017
Review of Issues during the week 21st to 28th June 2017. |
- There were severe problems with the Atlas SRMs at the end of last week. On Thursday afternoon one of the SRM back-end daemon processes started crashing on each of the Atlas SRMs, and a greatly increased number of SRM requests was also seen. Work continued through the remainder of Thursday and Friday but failed to resolve the problem. On Sunday a correction was applied to the Atlas SRMs to filter out double slashes ("//") in the incoming requests, re-instating a fix that had been applied to the old SRMs back in 2014. Since then the Atlas SRMs have worked OK. Work is going on to confirm this really is the solution before applying the fix to the SRMs for the other Castor instances. The high SRM request rate seen is probably the response of the Atlas software as it tried to query the status of files and transfers during the problem. Atlas Castor was declared down in the GOC DB from Friday afternoon to Sunday morning, when the fix was applied.
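The double-slash filter applied to incoming requests can be pictured as a simple path-normalisation step. This is an illustrative sketch only — the real fix lives inside the Castor SRM daemons, and the function name and exact behaviour here are assumptions:

```python
import re

def normalize_surl_path(path: str) -> str:
    """Collapse runs of slashes in a storage path to a single slash.

    Hypothetical sketch of the kind of double-slash filtering applied
    to incoming SRM requests; not the actual Castor/SRM code.
    """
    # Replace any run of two or more slashes with a single slash.
    return re.sub(r"/{2,}", "/", path)

# A request path containing double slashes:
print(normalize_surl_path("/castor/ads.rl.ac.uk//prod/atlas//file.root"))
# → /castor/ads.rl.ac.uk/prod/atlas/file.root
```

Requests that already use single slashes pass through unchanged, so a filter of this shape is safe to apply to all incoming traffic.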
Resolved Disk Server Issues |
- (This point was added retrospectively as it was missed:) GDSS764 (AtlasDataDisk - D1T0) was taken out of service in the early hours of Thursday 22nd June. One disk was replaced and the server was returned to service later that day.
Current operational status and issues |
- We are still seeing failures of the CMS SAM tests against the SRM. These are affecting our (CMS) availabilities. CMS are also looking at file access performance and have turned off "lazy-download". The CMS SRM SAM test success rate has improved since the Castor 2.1.16 upgrade on the 25th May, although it is still not 100%. It is still planned to re-visit this issue now that Castor has been upgraded.
- There is a problem on the site firewall which is causing problems for some specific data flows. This was being investigated in connection with videoconferencing problems. It is expected that this is affecting data flows that pass through the firewall (such as those to/from the worker nodes).
Ongoing Disk Server Issues |
- None
Limits on concurrent batch system jobs. |
- CMS Multicore 550
Notable Changes made since the last meeting. |
- Yesterday (Tuesday 27th June) the paired link to R26 was switched from 2*10Gb/s to 2*40Gb/s.
- This morning the OPN link to CERN was increased from 2*10Gb/s to 3*10Gb/s.
Declared in the GOC DB |
Advanced warning for other interventions |
The following items are being discussed and are still to be formally scheduled and announced. |
Pending - but not yet formally announced:
- Firmware updates in OCF 14 disk servers.
- Upgrade the FTS3 service to a version that will no longer support the SOAP interface.
- Increase the number of placement groups in the Atlas Echo CEPH pool.
Listing by category:
- Castor:
- Move to generic Castor headnodes.
- Merge AtlasScratchDisk into larger Atlas disk pool.
- Echo:
- Increase the number of placement groups in the Atlas Echo CEPH pool.
- Networking
- Enable first services on production network with IPv6 now that the addressing scheme has been agreed. (Perfsonar already working over IPv6).
- Services
- The production FTS needs updating. This will no longer support the SOAP interface. (The "test" FTS, used by Atlas, has already been upgraded.)
Entries in GOC DB starting since the last report. |
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
srm-atlas.gridpp.rl.ac.uk, | UNSCHEDULED | OUTAGE | 23/06/2017 16:00 | 25/06/2017 11:59 | 1 day, 19 hours and 59 minutes | Ongoing problems with Atlas SRM nodes - GGUS 129098 |
Open GGUS Tickets (Snapshot during morning of meeting) |
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
129098 | Green | Urgent | In Progress | 2017-06-22 | 2017-06-27 | Atlas | RAL-LCG2: source / destination file transfer errors ("Connection timed out") |
129072 | Green | Less Urgent | In Progress | 2017-06-20 | 2017-06-20 | | Please remove vo.londongrid.ac.uk from RAL-LCG2 resources |
129059 | Green | Very Urgent | In Progress | 2017-06-20 | 2017-06-27 | LHCb | Timeouts on RAL Storage |
128991 | Green | Less Urgent | In Progress | 2017-06-16 | 2017-06-16 | Solid | solidexperiment.org CASTOR tape support |
127612 | Red | Alarm | In Progress | 2017-04-08 | 2017-06-27 | LHCb | CEs at RAL not responding |
127597 | Red | Urgent | On Hold | 2017-04-07 | 2017-06-14 | CMS | Check networking and xrootd RAL-CERN performance |
124876 | Red | Less Urgent | On Hold | 2016-11-07 | 2017-01-01 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk |
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2017-05-10 | | CASTOR at RAL not publishing GLUE 2. |
Availability Report |
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC Echo = Atlas Echo (Template 841); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas Echo | Atlas HC | Atlas HC Echo | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|---|---|
21/06/17 | 100 | 100 | 100 | 99 | 100 | 100 | 97 | 100 | N/A | Single SRM test failure: Unable to issue PrepareToPut request to Castor. |
22/06/17 | 100 | 100 | 88 | 95 | 100 | 100 | 14 | 98 | N/A | Atlas: SRM problems; CMS: A few ‘User timeout over’ errors on the SRM SAM tests, and a few ‘held job’ errors on the ARC-CEs. |
23/06/17 | 100 | 100 | 67 | 89 | 100 | 100 | 28 | 100 | N/A | Atlas: SRM problems; CMS: Problems with a full cmsDisk. |
24/06/17 | 100 | 100 | 100 | 61 | 100 | 100 | 0 | 100 | N/A | CMS: Problems with a full cmsDisk. |
25/06/17 | 100 | 100 | 100 | 71 | 100 | 100 | 100 | 100 | N/A | CMS: Problems with a full cmsDisk. |
26/06/17 | 100 | 100 | 100 | 97 | 100 | 100 | 100 | 100 | 100 | SRM test failures. (User timeout). |
27/06/17 | 100 | 100 | 100 | 98 | 100 | 100 | 100 | 100 | 100 | Three SRM test failures: two were timeouts; one was “Error while searching for end of reply”. |
Notes from Meeting. |
- EGI have announced withdrawal of support for the WMS at the end of 2017.
- The capacity storage nodes (to go into Echo) have now had two weeks of acceptance testing.
- Data is now shipping from the Leicester Dirac site.