RAL Tier1 Operations Report for 18th June 2018
Review of Issues during the week 11th June to the 18th June 2018.
- 12/6/18: there was an internal network problem which took around three hours to resolve. It seemed mostly to affect Facilities rather than Tier-1 services. The concern is not that a piece of hardware failed, but that traffic does not always switch seamlessly to the backup links (especially for IPv6).
- 19/6/18 09:00-13:00: IPv6 was not available. Fabric attempted a route switch, which failed; traffic should have dropped back to IPv4 but did not. This needs raising with Networking/Fabric to resolve once and for all; the expected fallback behaviour is sketched below.
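The expected client-side behaviour, which did not happen here, is to fall back from a dead IPv6 route to IPv4. A minimal sketch of that fallback pattern in Python; the endpoint in the usage comment is hypothetical, for illustration only:

```python
import socket

def connect_dual_stack(host, port, timeout=5.0):
    """Try each resolved address in turn, preferring whatever the
    resolver returns first (normally IPv6 on a dual-stack host) and
    falling back to the next address if a connection fails."""
    last_error = None
    for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        try:
            sock = socket.socket(family, socktype, proto)
        except OSError as err:
            last_error = err  # address family unsupported; try the next one
            continue
        sock.settimeout(timeout)
        try:
            sock.connect(sockaddr)
            return sock  # first address that works wins
        except OSError as err:
            sock.close()  # e.g. a dead IPv6 route; try the next address
            last_error = err
    raise last_error or OSError("no addresses resolved for %s" % host)

# Hypothetical endpoint, for illustration only:
# sock = connect_dual_stack("se.example.ac.uk", 8446)
```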
Current operational status and issues
- None.
Resolved Castor Disk Server Issues |
- gdss687 - LHCb - LHCbDst - d1t0. Currently back in Production.
- gdss738 - LHCb - LHCbDst - d1t0. Currently back in Production, read-only.
Ongoing Castor Disk Server Issues
- gdss746 - ATLAS - atlasStripInput - d1t0. Currently in intervention.
- gdss685 - ATLAS - atlasTape - d1t0. Currently in intervention.
Limits on concurrent batch system jobs.
- CMS Multicore: 550
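For context, the Tier-1 batch farm runs HTCondor, where a cap like this is typically enforced with a concurrency limit. A minimal sketch; the limit name and value below are illustrative, not the actual production configuration:

```
# condor_config fragment (negotiator) -- illustrative, not the production setup
CMS_MULTICORE_LIMIT = 550

# Each affected job then claims one unit of the limit in its submit file:
#   concurrency_limits = cms_multicore
```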
Notable Changes made since the last meeting.
- None.
Entries in GOC DB starting since the last report.
Declared in the GOC DB
- None
Advance warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Listing by category:
- Castor:
- Update systems to SL7, configured by Quattor/Aquilon (tape servers done).
- Move to generic Castor headnodes.
- Networking
- Extend the number of services on the production network with IPv6 dual stack. (Done for Perfsonar, FTS3, all squids and the CVMFS Stratum-1 servers).
- Internal
- DNS servers will be rolled out within the Tier1 network.
- Infrastructure
- Testing of power distribution boards in the R89 machine room is being scheduled for some time in late July / early August. The effect of this on our services is being discussed.
Open GGUS Tickets (snapshot taken during the morning of the meeting).
Request id | Affected vo | Status | Priority | Date of creation | Last update | Type of problem | Subject | Scope |
---|---|---|---|---|---|---|---|---|
135711 | cms | waiting for reply | urgent | 18/06/2018 | 19/06/2018 | CMS_Central Workflows | T1_UK_RAL production jobs failing | WLCG |
135455 | cms | in progress | less urgent | 31/05/2018 | 04/06/2018 | File Transfer | Checksum verification at RAL | EGI |
135293 | ops | on hold | less urgent | 23/05/2018 | 04/06/2018 | Operations | [Rod Dashboard] Issues detected at RAL-LCG2 | EGI |
134685 | dteam | in progress | less urgent | 23/04/2018 | 11/06/2018 | Middleware | please upgrade perfsonar host(s) at RAL-LCG2 to CentOS7 | EGI |
124876 | ops | on hold | less urgent | 07/11/2016 | 13/11/2017 | Operations | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk | EGI |
GGUS Tickets Closed Last Week
Request id | Affected vo | Status | Priority | Date of creation | Last update | Type of problem | Subject | Scope |
---|---|---|---|---|---|---|---|---|
135661 | atlas | solved | less urgent | 14/06/2018 | 14/06/2018 | Databases | RAL-LCG2: ATLAS RAL Frontier server down | WLCG |
135367 | snoplus.snolab.ca | solved | less urgent | 28/05/2018 | 13/06/2018 | Other | Lost access to srm-snoplus.gridpp.rl.ac.uk | EGI |
135308 | mice | solved | top priority | 24/05/2018 | 13/06/2018 | Information System | Can't send data to RAL Castor | EGI |
134468 | cms | closed | top priority | 09/04/2018 | 15/06/2018 | CMS_AAA WAN Access | Xrootd redirector not seeing some files in ECHO | WLCG |
117683 | none | closed | less urgent | 18/11/2015 | 15/06/2018 | Information System | CASTOR at RAL not publishing GLUE 2 | EGI |
Availability Report
Target availability for each site is 97.0%. Key: red <90%, orange <97%.
Day | Atlas | Atlas-Echo | CMS | LHCB | Alice | OPS | Comments |
---|---|---|---|---|---|---|---|
2018-06-11 | 100 | 100 | 78 | 100 | 100 | 100 | |
2018-06-12 | 100 | 100 | 93 | 100 | 100 | 100 | |
2018-06-13 | 100 | 100 | 100 | 100 | 100 | 100 | |
2018-06-14 | 100 | 100 | 100 | 100 | 100 | 100 | |
2018-06-15 | 100 | 100 | 92 | 100 | 100 | 100 | |
2018-06-16 | 100 | 100 | 100 | 100 | 100 | 100 | |
2018-06-17 | 100 | 100 | 100 | 100 | 100 | 100 | |
2018-06-18 | 100 | 100 | 100 | 100 | 100 | 100 |
HammerCloud Test Report
Target availability for each site is 97.0%. Key: red <90%, orange <97%.
Day | Atlas HC | CMS HC | Comment |
---|---|---|---|
2018/06/11 | 97 | 72 | |
2018/06/12 | 97 | 100 | |
2018/06/13 | 95 | 77 | |
2018/06/14 | 92 | 45 | |
2018/06/15 | 85 | 73 | |
2018/06/16 | 95 | 100 | |
2018/06/17 | 97 | 66 | 100 |
2018/06/18 | - | - |
Key: Atlas HC = Atlas HammerCloud (Queue RAL-LCG2_UCORE, Template 841); CMS HC = CMS HammerCloud
Notes from Meeting.
- Notes from the meeting of 20th June:
- LHCb: a number of drive failures have resulted in 8 files lost; 4 are missing from the name server and 4 are simply lost.
- There is an issue where files appear to yield different checksums depending on where they are read, which is very strange. It was thought to be a RAID issue, but investigations suggest this is not the case; a verification sketch follows below.
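CASTOR records adler32 checksums in its name server, so one way to pin down a location-dependent mismatch is to recompute the checksum directly on each replica and compare the results with the catalogue value. A minimal sketch, assuming direct filesystem access to the copies; the replica paths in the usage comments are hypothetical:

```python
import zlib

def adler32_of(path, chunk_size=1 << 20):
    """Stream a file and return its adler32 as zero-padded hex,
    the form CASTOR stores in the name server."""
    checksum = 1  # adler32 starting value
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            checksum = zlib.adler32(chunk, checksum)
    return "%08x" % (checksum & 0xFFFFFFFF)

# Hypothetical replica paths, for illustration only:
# print(adler32_of("/export/replica_on_server_a/file.data"))
# print(adler32_of("/export/replica_on_server_b/file.data"))
```

If the two calls disagree, the data really does differ on disk; if they agree with each other but not with the catalogue, the recorded checksum is the suspect.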