Latest revision as of 13:13, 7 February 2018
RAL Tier1 Operations Report for 31st January 2018
Review of Issues during the week 24th to 31st January 2018
- Last week we reported on an ongoing problem with the Atlas Castor instance. Overnight Tuesday to Wednesday last week (23/24 Jan) the Atlas Castor instance was declared down and the problematic database table (the "diskcopy" table and its associated indexes) was rebuilt. During Wednesday the Atlas Castor instance returned to normal operation. There is a large amount of effectively dark data (around 1 PB) to be deleted off the disk servers; this will take some time as a steady background operation.
- There was a problem with the system that runs the tape library control software overnight Wed/Thu (24/25 Jan). Staff were called late Wednesday evening but were unable to restore the system that night. Overnight we were unable to mount tapes, effectively blocking tape access (although writes to the disk buffers in front of tape, plus reads of any data already in those buffers, carried on). The fault on the server was resolved on Thursday morning and normal tape service resumed.
- There was a problem with LHCb data access over the weekend. LHCb submitted a GGUS alarm ticket early on Sunday morning (28th Jan) and the on-call team responded. The cause was routine maintenance work to re-balance disk space in the LHCb disk-only data pool; this was stopped and the problem cleared during Sunday morning.
- We await further updates on an ongoing, intermittent fault with one of the Building Management Systems (BMS) in the R89 machine room.
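Deleting around 1 PB of dark data without disturbing production traffic amounts to a rate-limited background sweep. A minimal sketch of that pattern in Python, with entirely hypothetical file paths and limits (the actual Castor cleanup tooling is not shown here):

```python
import time

def throttled_delete(paths, delete_fn, files_per_batch=100, pause_seconds=1.0):
    """Delete files in small batches, pausing between batches so the
    cleanup runs as a steady background task rather than an I/O spike."""
    deleted = 0
    for start in range(0, len(paths), files_per_batch):
        for path in paths[start:start + files_per_batch]:
            delete_fn(path)  # e.g. os.remove, or a storage-system API call
            deleted += 1
        if start + files_per_batch < len(paths):
            time.sleep(pause_seconds)  # throttle between batches
    return deleted

# Demo with a stand-in delete function (no real I/O).
removed = []
count = throttled_delete([f"/pool/file{i}" for i in range(250)],
                         removed.append, files_per_batch=100, pause_seconds=0.0)
print(count)  # → 250
```

Tuning `files_per_batch` and `pause_seconds` trades cleanup duration against load on the disk servers.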
Current operational status and issues
- The problem of restricted data flows through the site firewall is still present.
Resolved Castor Disk Server Issues
- gdss736 (lhcbDst - D1T0) - Back in production in read-only (RO) mode
- gdss737 (lhcbDst - D1T0) - Back in production in read-only (RO) mode
Ongoing Castor Disk Server Issues
- gdss762 (atlasStripInput - D0T1) - Removed again from production for re-installation
- gdss761 (lhcbDst - D1T0) - Crashed at lunchtime yesterday (Tuesday 30th Jan). Still under investigation.
Limits on concurrent batch system jobs.
- CMS Multicore 550
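The cap works like a simple admission limit: a job in the class only starts if the count of running jobs is below the configured maximum. A toy Python sketch of that accounting (the real farm uses its batch system's own limit mechanism; the class and names here are illustrative):

```python
class ConcurrencyLimit:
    """Toy admission counter: allow a job class at most `cap` running jobs."""
    def __init__(self, cap):
        self.cap = cap
        self.running = 0

    def try_start(self):
        """Admit the job if under the cap; otherwise it stays queued."""
        if self.running < self.cap:
            self.running += 1
            return True
        return False

    def finish(self):
        self.running -= 1

cms_multicore = ConcurrencyLimit(550)
started = sum(cms_multicore.try_start() for _ in range(600))
print(started)  # → 550 (the remaining 50 attempts stay queued)
```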
Notable Changes made since the last meeting.
- The upgrade of Echo to the latest Ceph release ("Luminous") was carried out successfully during the second half of last week. This was done as a rolling update, with the service available throughout.
- ALICE VOBOXes (lcgvo07 and lcgvo09) and LHCb VOBOX (lcgvo10) are now dual stack IPv4/6.
- The Hyper-K VO has been enabled on the batch farm.
- Atlas have moved their FTS transfers from our "test" FTS service to the production one. This change, made at our request, effectively consolidates us on a single production FTS service.
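A quick way to confirm a host is genuinely dual stack is to check that its name resolves to both IPv4 and IPv6 addresses. A small Python sketch of that check (the demo uses "localhost" rather than the VOBOX hostnames above; this is an illustration, not the verification the Tier1 team actually runs):

```python
import socket

def resolved_families(hostname):
    """Return the set of address families (AF_INET / AF_INET6) that
    the hostname resolves to - a dual-stack host yields both."""
    infos = socket.getaddrinfo(hostname, None)
    return {info[0] for info in infos}

families = resolved_families("localhost")
print(socket.AF_INET in families)   # IPv4 record present?
print(socket.AF_INET6 in families)  # IPv6 record present?
```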
Entries in GOC DB starting since the last report.
- None
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
srm-alice.gridpp.rl.ac.uk, srm-atlas.gridpp.rl.ac.uk, srm-cms.gridpp.rl.ac.uk, srm-lhcb.gridpp.rl.ac.uk, srm-mice.gridpp.rl.ac.uk, | UNSCHEDULED | WARNING | 24/01/2018 22:00 | 25/01/2018 10:03 | 12 hours and 3 minutes | We have a problem with our tape library controller and cannot mount tapes. Writes to tape will succeed in that they will be stored on the disk buffers in front of the tape system. Recalls from tape will fail (unless the file is already on the disk buffer). Access to disk-only storage is unaffected. |
srm-atlas.gridpp.rl.ac.uk, | UNSCHEDULED | OUTAGE | 23/01/2018 15:45 | 24/01/2018 14:00 | 22 hours and 15 minutes | emergency downtime of Castor Atlas while rebuilding some database tables |
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Ongoing or Pending - but not yet formally announced:
- Move IPv6 connectivity - currently on 10Gbit links - to share the higher bandwidth (40Gbit) links used by IPv4.
Listing by category:
- Castor:
  - Update systems to use SL7, configured by Quattor/Aquilon. (Tape servers done.)
- Move to generic Castor headnodes.
- Networking
- Extend the number of services on the production network with IPv6 dual stack. (Done for Perfsonar, FTS3, all squids and the CVMFS Stratum-1 servers).
- Move IPv6 connectivity - currently on 10Gbit links - to share the higher bandwidth (40Gbit) links used by IPv4.
- Replacement (upgrade) of RAL firewall.
- Internal
- DNS servers will be rolled out within the Tier1 network.
- Infrastructure
  - Testing of power distribution boards in the R89 machine room is being scheduled for late July / early August. The effect of this on our services is being discussed.
Open GGUS Tickets (Snapshot during morning of meeting)
Request id | Affected vo | Status | Priority | Date of creation | Last update | Type of problem | Subject | Scope |
---|---|---|---|---|---|---|---|---|
117683 | none | on hold | less urgent | 18/11/2015 | 03/01/2018 | Information System | CASTOR at RAL not publishing GLUE 2 | EGI |
124876 | ops | on hold | less urgent | 07/11/2016 | 13/11/2017 | Operations | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk | EGI |
127597 | cms | on hold | urgent | 07/04/2017 | 29/01/2018 | File Transfer | Check networking and xrootd RAL-CERN performance | EGI |
132589 | lhcb | in progress | very urgent | 21/12/2017 | 30/01/2018 | Local Batch System | Killed pilots at RAL | WLCG |
133139 | ops | in progress | less urgent | 30/01/2018 | 30/01/2018 | Operations | [Rod Dashboard] Issue detected : org.nordugrid.ARC-CE-sw-csh-ops@arc-ce01.gridpp.rl.ac.uk | EGI |
133154 | ops | in progress | less urgent | 31/01/2018 | 31/01/2018 | Operations | [Rod Dashboard] Issue detected : org.nordugrid.ARC-CE-sw-csh-ops@arc-ce02.gridpp.rl.ac.uk | EGI |
GGUS Tickets Closed Last Week
Request id | Affected vo | Status | Priority | Date of creation | Last update | Type of problem | Subject |
---|---|---|---|---|---|---|---|
131815 | t2k.org | verified | less urgent | 13/11/2017 | 29/01/2018 | Storage Systems | Extremely long download times for T2K files on tape at RAL |
132712 | other | solved | less urgent | 04/01/2018 | 30/01/2018 | Other | support for the hyperk VO (RAL-LCG2) |
132802 | cms | solved | urgent | 11/01/2018 | 26/01/2018 | CMS_AAA WAN Access | Low HC xrootd success rates at T1_UK_RAL |
132844 | atlas | verified | urgent | 14/01/2018 | 25/01/2018 | Storage Systems | UK RAL-LCG2 DATADISK transfer errors "DESTINATION OVERWRITE srm-ifce err:" |
132935 | atlas | solved | less urgent | 18/01/2018 | 24/01/2018 | Storage Systems | UK RAL-LCG2: deletion errors |
132963 | atlas | verified | top priority | 21/01/2018 | 24/01/2018 | Middleware | RAL arc-ce03 flakey |
133046 | ops | verified | less urgent | 25/01/2018 | 29/01/2018 | Operations | [Rod Dashboard] Issue detected : org.nordugrid.ARC-CE-sw-csh-ops@arc-ce02.gridpp.rl.ac.uk |
133066 | atlas | solved | top priority | 26/01/2018 | 30/01/2018 | File Transfer | Unable to contact fts3-test.gridpp.rl.ac.uk |
133082 | lhcb | verified | top priority | 27/01/2018 | 28/01/2018 | File Transfer | Data transfer problem at RAL |
133092 | atlas | solved | top priority | 29/01/2018 | 29/01/2018 | Other | RAL FTS server in troubles |
Availability Report
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas Echo | Comment |
---|---|---|---|---|---|---|---|
24/01/18 | 100 | 100 | 52 | 100 | 100 | 100 | Atlas: likely the Castor Atlas outage (see GOC DB entry above) |
25/01/18 | 100 | 100 | 100 | 100 | 100 | 100 | |
26/01/18 | 100 | 100 | 100 | 100 | 100 | 100 | |
27/01/18 | 100 | 100 | 100 | 100 | 100 | 100 | |
28/01/18 | 100 | 100 | 100 | 100 | 100 | 100 | |
29/01/18 | 100 | 100 | 100 | 100 | 100 | 100 | |
30/01/18 | 100 | 100 | 100 | 100 | 100 | 100 |
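As a worked check on the table above, the week's mean Atlas availability follows from the daily figures (one day at 52%, six at 100%):

```python
# Daily Atlas availability (%) from the table above, 24-30 Jan.
atlas_daily = [52, 100, 100, 100, 100, 100, 100]

weekly_mean = sum(atlas_daily) / len(atlas_daily)
print(round(weekly_mean, 1))  # → 93.1
```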
Hammercloud Test Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_UCORE, Template 841); CMS HC = CMS HammerCloud
Day | Atlas HC | CMS HC | Comment |
---|---|---|---|
24/01/18 | 100 | 100 | |
25/01/18 | 100 | 100 | |
26/01/18 | 100 | 100 | |
27/01/18 | 0 | 100 | Atlas HC - no tests run in time bin |
28/01/18 | 100 | 99 | |
29/01/18 | 100 | 100 | |
30/01/18 | 100 | 100 |
Notes from Meeting.
- The Echo Ceph update to the "Luminous" version does not include a fix for the 'backfill' bug that affected Echo a couple of months ago after new hardware was added. The fix is still expected in a forthcoming minor version update.