RAL Tier1 Operations Report for 5th February 2019
Review of Issues during the week 28th January 2019 to the 5th February 2019.
- CPU efficiencies have improved for CMS (>80%), although they are still fluctuating a lot. ATLAS is still at 60-70% efficiency; the ATLAS liaison has investigated, and the fluctuations look to be mostly the result of a different mix of job types, in particular failed jobs and group production, which have lower efficiency. The overall efficiency is similar to, perhaps slightly better than, this time last year.
Current operational status and issues
- The system drive in a disk server for LHCb failed on Thursday afternoon. This is a generation-14 machine (dual purpose for Ceph) whose operating system sits on an SSD attached to the underside of the motherboard, leaving the other disks free for capacity. The Fabric team will perform open-heart surgery on it today to install a replacement drive.
- The disk buffer in front of our new Castor tape instance almost filled up. We do not yet know the exact cause, but on 25th January (after several months of working perfectly) the garbage-collection daemon stopped keeping up, clearing only a few files an hour. We have been manually deleting files from the tape buffer to keep space free while we investigate; a rough sketch of this kind of manual clean-up is shown after this list.
- While investigating the full buffer we found that NA62 has been writing files to the “disk” endpoint on wlcgTape. This endpoint does not get written to tape and was designed for a small number of functional test files (e.g. SAM tests, which are copied in and immediately deleted). There are ~197k files using 11 TB of space which, as things stand, will never be migrated to tape (and will eventually be deleted if unused); most were written in the last few weeks. The Tier-1 manager has started an urgent conversation with NA62 to establish how important these files are.
- ARC-CE04 has stopped working again. We are not sure whether this is related to the number of LHCb jobs submitted to this CE. We have rolled out an updated version of the software to arc-ce05 for testing; this should fix the problem, and at the very least it will mean the ARC developers need to look at the error, but unfortunately it is likely to break backward compatibility with some VOs. It would be desirable for LHCb to spread their jobs more evenly across our CEs (the current ratio across ARC-CE0[1-4] is 0:25:25:50 respectively).
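Since the garbage collector is currently clearing only a few files an hour, space on the tape buffer is being freed by hand. The sketch below illustrates that kind of manual clean-up (delete the oldest files until a free-space target is met). It is a generic illustration only, not the CASTOR garbage-collection or admin tooling actually in use, and the buffer path and threshold are placeholders.

```python
#!/usr/bin/env python3
"""Sketch of a manual buffer clean-up: remove the oldest files under a
directory until a free-space target is met.  Generic illustration only;
this is not the CASTOR garbage-collection or admin tooling used at RAL,
and the path and threshold below are placeholders."""
import os
import shutil

BUFFER_PATH = "/path/to/tape/buffer"   # placeholder, not the real mount point
TARGET_FREE_BYTES = 10 * 1024**4       # placeholder target: keep 10 TiB free

def files_oldest_first(root):
    """Return (mtime, path) pairs for regular files under root, oldest first."""
    entries = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                entries.append((os.path.getmtime(path), path))
            except OSError:
                pass  # file disappeared or is unreadable; skip it
    return sorted(entries)

if __name__ == "__main__":
    for _mtime, path in files_oldest_first(BUFFER_PATH):
        if shutil.disk_usage(BUFFER_PATH).free >= TARGET_FREE_BYTES:
            break  # enough space recovered
        os.remove(path)
        print("removed", path)
```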
Resolved Castor Disk Server Issues
Machine | VO | DiskPool | dxtx | Comments |
---|---|---|---|---|
- | - | - | - | - |
Ongoing Castor Disk Server Issues
Machine | VO | DiskPool | dxtx | Comments |
---|---|---|---|---|
gdss811 | LHCb | lhcbDst | d1t0 | - |
Limits on concurrent batch system jobs.
- ALICE - 1000
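As an illustration of what such a cap means in practice, the sketch below applies a per-VO concurrent-job limit before allowing another job to start. Only the ALICE figure of 1000 comes from the report; the names and dispatch logic are hypothetical, and this is not the Tier1 batch-system configuration.

```python
# Illustrative per-VO concurrent-job cap.  Only the ALICE limit of 1000
# comes from the report; the names and logic here are hypothetical and
# this is not the Tier1 batch-system configuration.
CONCURRENT_JOB_LIMITS = {"alice": 1000}

def can_start_job(vo: str, running_jobs_for_vo: int) -> bool:
    """Return True if another job for this VO may start without exceeding
    its cap; VOs without a configured cap are unrestricted."""
    limit = CONCURRENT_JOB_LIMITS.get(vo)
    return limit is None or running_jobs_for_vo < limit

# Examples: ALICE at its cap is held back, below the cap may proceed.
print(can_start_job("alice", 1000))  # False
print(can_start_job("alice", 999))   # True
print(can_start_job("cms", 5000))    # True (no cap configured in this sketch)
```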
Notable Changes made since the last meeting.
- NTR
Entries in GOC DB starting since the last report.
Service | ID | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|---|
- | - | - | - | - | - | - | - |
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
- | - | - | - | - | - | - |
- No ongoing downtime
Advance warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Listing by category:
- DNS servers will be rolled out within the Tier1 network.
Open GGUS Tickets (snapshot taken during the morning of the meeting).
Request id | Affected vo | Status | Priority | Date of creation | Last update | Type of problem | Subject | Scope |
---|---|---|---|---|---|---|---|---|
139477 | ops | in progress | less urgent | 01/02/2019 | 04/02/2019 | Operations | [Rod Dashboard] Issue detected : org.nordugrid.ARC-CE-submit-ops@arc-ce04.gridpp.rl.ac.uk | EGI |
139476 | mice | in progress | less urgent | 01/02/2019 | 04/02/2019 | Other | LFC dump | EGI |
139306 | dteam | in progress | less urgent | 24/01/2019 | 29/01/2019 | Monitoring | perfsonar hosts need updating | EGI |
138891 | ops | on hold | less urgent | 17/12/2018 | 05/02/2019 | Operations | [Rod Dashboard] Issue detected : egi.eu.lowAvailability-/RAL-LCG2@RAL-LCG2_Availability | EGI |
138665 | mice | on hold | urgent | 04/12/2018 | 30/01/2019 | Middleware | Problem accessing LFC at RAL | EGI |
138500 | cms | on hold | urgent | 26/11/2018 | 30/01/2019 | CMS_Data Transfers | Transfers failing from T2_PL_Swierk to RAL | WLCG |
138361 | t2k.org | in progress | less urgent | 19/11/2018 | 31/01/2019 | Other | RAL-LCG2: t2k.org LFC to DFC transition | EGI |
138033 | atlas | in progress | urgent | 01/11/2018 | 31/01/2019 | Other | singularity jobs failing at RAL | EGI |
137897 | enmr.eu | on hold | urgent | 23/10/2018 | 31/01/2019 | Workload Management | enmr.eu accounting at RAL | EGI |
GGUS Tickets Closed Last Week
Request id | Affected vo | Status | Priority | Date of creation | Last update | Type of problem | Subject | Scope |
---|---|---|---|---|---|---|---|---|
139538 | cms | solved | urgent | 05/02/2019 | 05/02/2019 | CMS_Data Transfers | Some transfers failing to RAL - SRM_AUTHORIZATION_FAILURE | WLCG |
139414 | lhcb | verified | very urgent | 30/01/2019 | 05/02/2019 | Other | Jobs Failed with Segmentation fault at RAL-LCG2 | WLCG |
139405 | ops | verified | less urgent | 30/01/2019 | 05/02/2019 | Operations | [Rod Dashboard] Issue detected : org.bdii.GLUE2-Validate@site-bdii.gridpp.rl.ac.uk | EGI |
139404 | none | verified | top priority | 30/01/2019 | 01/02/2019 | Other | This TEST ALARM has been raised for testing GGUS alarm work flow after a new GGUS release. | WLCG |
139380 | cms | solved | urgent | 29/01/2019 | 31/01/2019 | CMS_Facilities | T1_UK_RAL failing SAM tests inside Singularity | WLCG |
139375 | atlas | solved | urgent | 29/01/2019 | 04/02/2019 | Other | RAL-LCG2 transfers fail with "the server responded with an error 500" | WLCG |
139328 | cms | solved | urgent | 25/01/2019 | 29/01/2019 | CMS_Facilities | T1_UK_RAL SRM tests failing | WLCG |
139312 | cms | solved | urgent | 25/01/2019 | 29/01/2019 | CMS_Data Transfers | Corrupted files at RAL_Buffer? | WLCG |
139245 | cms | solved | urgent | 21/01/2019 | 04/02/2019 | CMS_Data Transfers | Transfers failing from CNAF_Disk to RAL_Buffer | WLCG |
Availability Report
Day | Atlas | Atlas-Echo | CMS | LHCB | Alice | OPS | Comments |
---|---|---|---|---|---|---|---|
2019-01-29 | 100 | 100 | 97 | 100 | 100 | -1 | |
2019-01-30 | 100 | 100 | 98 | 100 | 100 | 100 | |
2019-01-31 | 100 | 100 | 100 | 100 | 100 | 100 | |
2019-02-01 | 100 | 100 | 100 | 100 | 100 | 100 | |
2019-02-02 | 100 | 100 | 98 | 100 | 100 | 100 | |
2019-02-03 | 100 | 100 | 100 | 100 | 100 | 100 | |
2019-02-04 | 100 | 100 | 100 | 100 | 100 | 100 | |
2019-02-05 | 100 | 100 | 100 | 100 | 100 | 100 | |
2019-02-06 | 100 | 100 | 100 | 100 | 100 | 100 |
Hammercloud Test Report
Target Availability for each site is 97.0% (Red < 90%, Orange < 97%).
Day | Atlas HC | CMS HC | Comment |
---|---|---|---|
2019-01-23 | 100 | 98 | |
2019-01-24 | 100 | 98 | |
2019-01-25 | 100 | 98 | |
2019-01-26 | 100 | 91 | |
2019-01-27 | 100 | 97 | |
2019-01-28 | 100 | 93 | |
2019-01-29 | 100 | 98 |
Key: Atlas HC = Atlas HammerCloud (Queue RAL-LCG2_UCORE, Template 841); CMS HC = CMS HammerCloud
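For reference, the sketch below applies the colour banding stated above the Hammercloud table (target 97.0%, orange below 97%, red below 90%) to a single availability value. It is an illustration of the thresholds only, not the code that produces the report; treating values at or above the target as "green" is an assumption.

```python
def hc_band(availability: float) -> str:
    """Classify a HammerCloud availability percentage against the
    thresholds in the report key (target 97.0%).  Illustrative only;
    this is not the code that generates the report colouring."""
    if availability < 90.0:
        return "red"     # below 90%
    if availability < 97.0:
        return "orange"  # below the 97% target
    return "green"       # meets or exceeds the target (assumed band name)

# Example: the CMS HC value of 91 on 2019-01-26 falls in the orange band.
print(hc_band(91.0))   # orange
print(hc_band(100.0))  # green
```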
Notes from Meeting.