RAL Tier1 Operations Report for 26th July 2017
Review of Issues during the week 19th to 26th July 2017.
- There have been problems with the Atlas Castor instance that appear to be within the SRM. AtlasScratch is showing high load, which may be a combination of the actual request rate and the small number of (old) disk servers in the pool. (A simple SRM probe is sketched after this list.)
- In the early hours of Monday morning (24th July) there was a problem with one of the site BDII nodes. This was fixed by the on-call team.
- There was a site networking problem in the early hours of Tuesday morning when one of the site core network stacks stopped working correctly. The Tier1 core network connects via two routers into two of the site core network stacks to give resilience via failover. Overnight the connection flipped between the two core stacks several times; however, the failing stack was in a bad state and, even when nominally up, was not working correctly. This failing stack was stopped in the morning, which restored network connectivity, and the Tier1 router pair was later set to run only through the good second stack. The central networking team await input from the vendors before intervening further on the problematic switch/router stack.
- Since yesterday morning there has been a problem with Castor file transfers initiated by the CERN FTS3 service. This is not yet understood. GGUS tickets have been received from both CMS and LHCb about this problem, which is ongoing at the time of the meeting.
- At the end of last week CMSDisk became full. This affected the SAM SRM tests and hence CMS availability.
- There is a power outage scheduled for the Atlas Building (R26) over the weekend of 29/30 July. Preparations are being made to remove any impact this may have on operational services.
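As a purely illustrative aid for the SRM issue above, the sketch below shows one way to probe whether an SRM endpoint responds to a simple listing within a reasonable time. This is not the team's actual diagnostic procedure: the endpoint name and namespace path are assumptions, and a valid grid proxy plus the gfal2 client tools are taken as given.

```python
#!/usr/bin/env python3
# Minimal SRM health probe (illustrative only). Assumes a valid grid proxy
# and the gfal2 client tools are installed. The endpoint and path below are
# placeholders, not confirmed production values.
import subprocess

SRM_ENDPOINT = "srm://srm-atlas.gridpp.rl.ac.uk"   # assumed Atlas Castor SRM host
TEST_PATH = "/castor/ads.rl.ac.uk/prod/atlas/"     # hypothetical namespace path

def probe_srm(endpoint, path, timeout=60):
    """Run a single gfal-ls against the SRM and report whether it responds in time."""
    cmd = ["gfal-ls", endpoint + path]
    try:
        subprocess.check_output(cmd, stderr=subprocess.STDOUT, timeout=timeout)
        return True
    except subprocess.CalledProcessError as err:
        print("SRM listing failed:\n%s" % err.output.decode(errors="replace"))
    except subprocess.TimeoutExpired:
        print("SRM did not respond within %ds - possible SRM overload" % timeout)
    return False

if __name__ == "__main__":
    ok = probe_srm(SRM_ENDPOINT, TEST_PATH)
    print("SRM responsive" if ok else "SRM check failed")
```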
Resolved Disk Server Issues
- GDSS731 (LHCbUser - D1T0) failed in the early hours of Monday 24th July. The server was put back in service later that day, initially read-only, after a disk drive was replaced.
Current operational status and issues
- We are still seeing failures of the CMS SAM tests against the SRM. These are affecting our (CMS) availabilities. CMS are also looking at file access performance and have turned off "lazy-download". The CMS SRM SAM test success rate has improved since the Castor 2.1.16 upgrade on the 25th May, although it is still not 100%. It is still planned to revisit this issue now that Castor has been upgraded.
- There is a problem on the site firewall which is causing difficulties for some specific data flows. Discussions have been taking place with the vendor. This is expected to affect data that flows through the firewall (such as traffic to/from the worker nodes).
Ongoing Disk Server Issues
- None
Limits on concurrent batch system jobs.
- CMS Multicore 550
Notable Changes made since the last meeting.
- The number of placement groups in the Echo CEPH Atlas pool continues to be increased, in preparation for the increase in storage capacity when new hardware is added. (The general pattern is sketched after this list.)
- Security patching is being carried out across systems.
- Updating of RAID card firmware in one batch of disk servers (OCF '14) was completed on Thursday (20th July).
- 6 additional disk servers have been deployed into AtlasTape.
- An xrootd gateway & proxy have been added to another batch of worker nodes.
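For context on the placement-group item above, the sketch below shows the usual pattern of raising pg_num/pgp_num on a Ceph pool in small steps so that the resulting backfilling stays manageable. The pool name, target and step size are placeholders and do not reflect the real Echo configuration.

```python
#!/usr/bin/env python3
# Illustrative sketch of stepping up placement groups on a Ceph pool.
# The pool name, target and step size are placeholders, not real Echo values.
import subprocess

POOL = "atlas"        # hypothetical pool name
TARGET_PG = 4096      # hypothetical target placement-group count
STEP = 256            # raise pg_num gradually to limit rebalancing load

def ceph(*args):
    """Run a ceph CLI command and return its stdout as text."""
    return subprocess.check_output(("ceph",) + args).decode().strip()

def current_pg_num(pool):
    # 'ceph osd pool get <pool> pg_num' prints e.g. 'pg_num: 2048'
    return int(ceph("osd", "pool", "get", pool, "pg_num").split(":")[1])

def raise_pg_num(pool, target, step):
    pg = current_pg_num(pool)
    while pg < target:
        pg = min(pg + step, target)
        ceph("osd", "pool", "set", pool, "pg_num", str(pg))
        ceph("osd", "pool", "set", pool, "pgp_num", str(pg))
        print("pg_num/pgp_num raised to %d; wait for backfilling to settle" % pg)
        break  # in practice, re-run once the cluster is healthy again

if __name__ == "__main__":
    raise_pg_num(POOL, TARGET_PG, STEP)
```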
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
Whole site | UNSCHEDULED | WARNING | 25/07/2017 10:57 | 26/07/2017 12:00 | 1 day, 1 hour and 3 minutes | Warning after network problems and castor reboot |
srm-superb.gridpp.rl.ac.uk, | SCHEDULED | OUTAGE | 20/07/2017 16:00 | 30/08/2017 13:00 | 40 days, 21 hours | SuperB no longer supported on Castor storage. Retiring endpoint. |
srm-hone.gridpp.rl.ac.uk, | SCHEDULED | OUTAGE | 20/07/2017 16:00 | 30/08/2017 13:00 | 40 days, 21 hours | H1 no longer supported on Castor storage. Retiring endpoint. |
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Pending - but not yet formally announced:
- Upgrade the FTS3 service to a version that will no longer support the SOAP interface. The SOAP interface was disabled on Monday (17th July).
- Increase the number of placement groups in the Atlas Echo CEPH pool. (Ongoing)
Listing by category:
- Castor:
  - Move to generic Castor headnodes.
  - Merge AtlasScratchDisk into the larger Atlas disk pool.
- Echo:
  - Increase the number of placement groups in the Atlas Echo CEPH pool.
- Networking:
  - Enable first services on the production network with IPv6 now that the addressing scheme has been agreed. (perfSONAR is already working over IPv6.) A basic IPv6 reachability check is sketched after this list.
- Services:
  - The production FTS needs updating. This will no longer support the SOAP interface. (The "test" FTS, used by Atlas, has already been upgraded.)
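For the IPv6 item above, the following standard-library sketch illustrates a basic check that a service host publishes an AAAA record and accepts a TCP connection over IPv6. The hostname and port are placeholders, not actual production services.

```python
#!/usr/bin/env python3
# Minimal IPv6 reachability check using only the standard library.
# Host and port below are placeholders for whichever service is being dual-stacked.
import socket

HOST = "example-service.gridpp.rl.ac.uk"   # hypothetical service hostname
PORT = 443

def ipv6_reachable(host, port, timeout=5):
    """Return True if the host has an AAAA record and accepts a TCP connection over IPv6."""
    try:
        infos = socket.getaddrinfo(host, port, socket.AF_INET6, socket.SOCK_STREAM)
    except socket.gaierror:
        print("No AAAA record (or name lookup failed) for %s" % host)
        return False
    for family, socktype, proto, _canon, sockaddr in infos:
        try:
            with socket.socket(family, socktype, proto) as s:
                s.settimeout(timeout)
                s.connect(sockaddr)
                print("Connected over IPv6 to %s port %d" % (sockaddr[0], port))
                return True
        except OSError as err:
            print("IPv6 connect to %s failed: %s" % (sockaddr[0], err))
    return False

if __name__ == "__main__":
    ipv6_reachable(HOST, PORT)
```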
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
Whole site | UNSCHEDULED | WARNING | 25/07/2017 10:57 | 26/07/2017 12:00 | 1 day, 1 hour and 3 minutes | Warning after network problems and castor reboot |
All Castor | SCHEDULED | OUTAGE | 25/07/2017 09:30 | 25/07/2017 11:54 | 2 hours and 24 minutes | Castor Storage Unavailable during OS patching. |
Whole site | UNSCHEDULED | OUTAGE | 25/07/2017 06:00 | 25/07/2017 11:54 | 5 hours and 54 minutes | Network problem at RAL - under investigation |
srm-superb.gridpp.rl.ac.uk, | SCHEDULED | OUTAGE | 20/07/2017 16:00 | 30/08/2017 13:00 | 40 days, 21 hours | SuperB no longer supported on Castor storage. Retiring endpoint. |
srm-hone.gridpp.rl.ac.uk, | SCHEDULED | OUTAGE | 20/07/2017 16:00 | 30/08/2017 13:00 | 40 days, 21 hours | H1 no longer supported on Castor storage. Retiring endpoint. |
srm-lhcb.gridpp.rl.ac.uk, | SCHEDULED | WARNING | 19/07/2017 13:00 | 19/07/2017 16:30 | 3 hours and 30 minutes | Rebooting some disk server to update firmware, causing some interruptions in service |
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
129777 | Green | Urgent | In Progress | 2017-07-26 | 2017-07-26 | CMS | Transfer failing from T1_UK_RAL |
129769 | Green | Less Urgent | In Progress | 2017-07-26 | 2017-07-26 | LHCb | FTS failure at RAL |
129748 | Green | Less Urgent | Waiting Reply | 2017-07-25 | 2017-07-26 | Atlas | RAL-LCG2: deletion errors |
129573 | Green | Urgent | In Progress | 2017-07-16 | 2017-07-21 | Atlas | RAL-LCG2: DDM transfer failure with Connection to gridpp.rl.ac.uk refused |
129342 | Green | Urgent | In Progress | 2017-07-04 | 2017-07-19 | | [Rod Dashboard] Issue detected : org.sam.SRM-Put-ops@srm-mice.gridpp.rl.ac.uk |
128991 | Green | Less Urgent | In Progress | 2017-06-16 | 2017-07-20 | Solid | solidexperiment.org CASTOR tape support |
127597 | Red | Urgent | On Hold | 2017-04-07 | 2017-06-14 | CMS | Check networking and xrootd RAL-CERN performance |
124876 | Red | Less Urgent | On Hold | 2016-11-07 | 2017-01-01 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk |
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2017-07-06 | | CASTOR at RAL not publishing GLUE 2. |
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC Echo = Atlas Echo (Template 841); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas Echo | Atlas HC | Atlas HC Echo | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|---|---|
19/07/17 | 100 | 100 | 100 | 94 | 100 | 100 | 100 | 100 | 97 | SRM test failures. |
20/07/17 | 100 | 100 | 100 | 84 | 100 | 100 | 100 | 100 | 96 | Castor problems - likely to be caused by CMSDisk filling. |
21/07/17 | 100 | 100 | 100 | 42 | 100 | 100 | 100 | 100 | 89 | CMSDisk Full. Tests failing because of this. |
22/07/17 | 100 | 100 | 100 | 73 | 100 | 100 | 99 | 100 | 100 | CMSDisk Full. Tests failing because of this. |
23/07/17 | 100 | 100 | 90 | 85 | 100 | 100 | 100 | 100 | 100 | CMSDisk Full. Tests failing because of this. Atlas Castor problems. |
24/07/17 | 100 | 100 | 26 | 92 | 96 | 100 | 22 | 99 | 100 | Ongoing problems with Atlas Castor. |
25/07/17 | 73.6 | 88 | 68 | 66 | 81 | 95 | 80 | 98 | 94 | Major network problem in the morning. |
Notes from Meeting.
- There was a discussion about the problem with file transfers in/out of Castor initiated by the CERN FTS3 service.
- Catalin has started work on enabling one of the CVMFS Squid systems for IPv6.
- The ECHO team are working with both CMS and LHCb on testing data transfers in/out of Echo. (A simple round-trip transfer test is sketched below.)
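As a simple illustration of the kind of transfer testing mentioned above, the sketch below performs a round-trip copy against a storage endpoint using the standard gfal-copy/gfal-rm clients. The gridftp URL and remote path are placeholders and a valid VO proxy is assumed; this is not the actual test harness used by the Echo team.

```python
#!/usr/bin/env python3
# Illustrative round-trip transfer test against a storage endpoint using gfal-copy.
# The endpoint URL and remote path are placeholders; a valid VO proxy is assumed.
import os
import subprocess
import tempfile

REMOTE = "gsiftp://gridftp.echo.stfc.ac.uk/some/vo/path/transfer-test.dat"  # hypothetical path

def run(cmd, timeout=300):
    """Run a gfal command, returning True on success and printing any error output."""
    try:
        subprocess.check_output(cmd, stderr=subprocess.STDOUT, timeout=timeout)
        return True
    except subprocess.CalledProcessError as err:
        print("%s failed:\n%s" % (cmd[0], err.output.decode(errors="replace")))
    except subprocess.TimeoutExpired:
        print("%s timed out" % cmd[0])
    return False

def round_trip_test(remote_url):
    """Upload a small local file, copy it back, then clean up the remote copy."""
    with tempfile.NamedTemporaryFile(delete=False) as src:
        src.write(os.urandom(1024 * 1024))          # 1 MiB of test data
        local_src = src.name
    local_back = local_src + ".back"
    ok = (run(["gfal-copy", "-f", "file://" + local_src, remote_url]) and
          run(["gfal-copy", "-f", remote_url, "file://" + local_back]))
    run(["gfal-rm", remote_url])                     # best-effort cleanup of the remote copy
    for path in (local_src, local_back):
        if os.path.exists(path):
            os.unlink(path)
    print("Round trip OK" if ok else "Round trip FAILED")
    return ok

if __name__ == "__main__":
    round_trip_test(REMOTE)
```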