Latest revision as of 12:14, 17 April 2019
RAL Tier1 Operations Report for 15th April 2019
Review of Issues during the week 8th April 2019 to the 15th April 2019.
- We are seeing high outbound packet loss over IPv6. Investigations are on hold, as central networking's expert (Philip Garrad) is not available until after Easter.
- High CMS job failure rates continue, caused by ongoing issues with meta-data spread across large files. CMS job slots have been temporarily limited.
- On Friday 5th April, gdss700 (LHCb) had a double drive failure and needed to be removed from production. Further problems were found; while we were able to return the server to production briefly, we were unable to copy all the files off, and 1482 were lost.
- On Wednesday 10th April, gdss811 (LHCb) had a failure of the disk running the operating system. This generation of hardware has OS disks that are very inconveniently located (glued to the underside of the motherboard!). The server had not yet returned to production as of the morning of the 15th.
- On Thursday 11th April, an unknown issue caused a significant fraction of docker containers (running jobs) to restart.
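Packet loss of the kind reported above is typically quantified from transmit/receive counters (e.g. from a `ping` summary); a minimal sketch of the calculation — the counts below are hypothetical illustrations, not measurements from the RAL-LCG2 IPv6 investigation:

```python
# Sketch: quantifying packet loss from transmit/receive counters.
# The sample counts are hypothetical, not data from this incident.
def packet_loss_pct(transmitted: int, received: int) -> float:
    """Return the percentage of packets lost on a link."""
    if transmitted == 0:
        raise ValueError("no packets transmitted")
    return 100.0 * (transmitted - received) / transmitted

print(packet_loss_pct(1000, 940))  # hypothetical sample: 6.0 (% loss)
```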
Current operational status and issues
Resolved Castor Disk Server Issues
Machine | VO | DiskPool | dxtx | Comments
---|---|---|---|---
gdss799 | LHCb | lhcb | d1t0 | Machine crashed. It is with fabric at the moment.
Ongoing Castor Disk Server Issues
Machine | VO | DiskPool | dxtx | Comments
---|---|---|---|---
gdss811 | LHCb | lhcb | d1t0 | Machine crashed. It is with fabric at the moment.
Limits on concurrent batch system jobs.
- ALICE - 1000
- CMS - 100
Notable Changes made since the last meeting.
- NTR
Entries in GOC DB starting since the last report.
Service | ID | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
---|---|---|---|---|---|---|---
- | - | - | - | - | - | - | -
Declared in the GOC DB
Service | ID | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
---|---|---|---|---|---|---|---
- | - | - | - | - | - | - | -
- No ongoing downtime
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Listing by category:
- DNS servers will be rolled out within the Tier1 network.
Open GGUS Tickets (Snapshot taken during morning of the meeting)
Request id | Affected VO | Status | Priority | Date of creation | Last update | Type of problem | Subject | Scope | Solution
---|---|---|---|---|---|---|---|---|---
140660 | cms | in progress | urgent | 09/04/2019 | 15/04/2019 | CMS_Central Workflows | File read issues for Workflows where data is located at T1_UK_RAL | WLCG |
140447 | dteam | on hold | less urgent | 27/03/2019 | 16/04/2019 | Network problem | packet loss outbound from RAL-LCG2 over IPv6 | EGI |
140220 | mice | in progress | less urgent | 15/03/2019 | 08/04/2019 | Other | mice LFC to DFC transition | EGI |
139672 | other | in progress | urgent | 13/02/2019 | 08/04/2019 | Middleware | No LIGO pilots running at RAL | EGI |
138033 | atlas | waiting for reply | urgent | 01/11/2018 | 12/04/2019 | Other | singularity jobs failing at RAL | EGI |
GGUS Tickets Closed Last Week
Request id | Affected VO | Status | Priority | Date of creation | Last update | Type of problem | Subject | Scope | Solution
---|---|---|---|---|---|---|---|---|---
140758 | lhcb | solved | urgent | 17/04/2019 | 17/04/2019 | File Access | lhcbUser svcClass not working as it should? | WLCG | Should be fixed now.
140725 | cms | solved | urgent | 15/04/2019 | 16/04/2019 | CMS_Facilities | T1_UK_RAL intermittent xrootd relative failures | WLCG | Reason is clear; additional hardware is on the way.
140683 | lhcb | solved | top priority | 10/04/2019 | 12/04/2019 | Local Batch System | Pilots failing at RAL across all CEs | WLCG | Problem was resolved and checks put in place to prevent recurrence.
140599 | lhcb | solved | very urgent | 05/04/2019 | 15/04/2019 | File Access | Data access problem at RAL-LCG2 | WLCG | Files have been transferred out of this diskserver into ECHO.
140589 | lhcb | verified | very urgent | 04/04/2019 | 15/04/2019 | Local Batch System | Pilots killed at RAL-LCG2 | WLCG | As per Raja's comments, the original issue has now been resolved, so the ticket is being closed.
140577 | lhcb | solved | less urgent | 04/04/2019 | 11/04/2019 | File Access | LHCb disk only files requested with the wrong service class | EGI | No solution found so far; LHCb is close to migrating from the old CASTOR instance.
140511 | cms | closed | urgent | 01/04/2019 | 16/04/2019 | CMS_Facilities | T1_UK_RAL SAM job run out of date | WLCG | Issue is related to the SAM dashboard.
140493 | atlas | closed | less urgent | 29/03/2019 | 15/04/2019 | File Transfer | UK RAL-LCG2 MCTAPE: transfer error with "Connection timed out" | WLCG | Hi xin wang, this looks more like a problem with the source site, LRZ-LMU_DATADISK. The error message refers to the source URL at httpg://lcg-lrz-srm.grid.lrz.de:8443/srm/managerv2. Also, a download from the source path gets stuck: `gfal-copy srm://lcg-lrz-srm.grid.lrz.de:8443/srm/managerv2?SFN=/pnfs/lrz-muenchen.de/data/atlas/dq2/atlasdatadisk/rucio/mc16_13TeV/35/cb/HITS.17137527._002530.pool.root.1 .` Can you assign a new ticket for LRZ-LMU? I hope it's OK for me to mark this ticket "solved"; please reopen if I was mistaken. Thanks, Tim.
140467 | cms | closed | urgent | 28/03/2019 | 15/04/2019 | CMS_Data Transfers | Stuck file at RAL | WLCG | Stuck file had missing stripes and a zeroth stripe with zero size. This was deleted by hand and the errors stopped appearing.
140385 | cms | closed | less urgent | 25/03/2019 | 12/04/2019 | CMS_Data Transfers | Data Transfer problems at T1_UK_RAL | WLCG | The datasets listed each had one file with a problem. Two had zero size and were deleted from tape. Another seven were deleted from tape and invalidated by the data transfer team. The list of stuck routing files is now empty, apart from a file from a T2 site currently experiencing problems.
139723 | atlas | closed | less urgent | 15/02/2019 | 10/04/2019 | Data Management - generic | permissions on scratchdisk | EGI | This appears to have become a somewhat convoluted ticket which, as far as I can tell, has simply become a conversation thread. Assuming that the original issue has been resolved (further workarounds/enhancements notwithstanding), I'm marking this as solved. Cheers, Confused of Tier-1! (Darren)
Availability Report
Day | Atlas | Atlas-Echo | CMS | LHCB | Alice | OPS | Comments
---|---|---|---|---|---|---|---
2019-04-08 | 100 | 100 | 100 | 100 | 100 | 100 |
2019-04-09 | 100 | 100 | 100 | 100 | 100 | 100 |
2019-04-10 | 69 | 69 | 100 | 100 | 100 | 100 |
2019-04-11 | 61 | 61 | 100 | 100 | 100 | 100 |
2019-04-12 | 100 | 100 | 100 | 100 | 100 | 100 |
2019-04-13 | 100 | 100 | 100 | 100 | 100 | 100 |
2019-04-14 | 100 | 100 | 100 | 100 | 100 | 100 |
2019-04-15 | 100 | 100 | 100 | 100 | 100 | 100 |
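For reference, the weekly averages implied by the daily figures above can be computed directly; the values below are transcribed from the table rows (8th to 15th April):

```python
# Weekly average availability per experiment, using the daily
# percentages transcribed from the Availability Report table above.
daily = {
    "Atlas":      [100, 100, 69, 61, 100, 100, 100, 100],
    "Atlas-Echo": [100, 100, 69, 61, 100, 100, 100, 100],
    "CMS":        [100] * 8,
    "LHCB":       [100] * 8,
    "Alice":      [100] * 8,
    "OPS":        [100] * 8,
}
averages = {name: sum(v) / len(v) for name, v in daily.items()}
print(averages["Atlas"])  # 91.25 — pulled down by the 10th and 11th
```

The dips on 2019-04-10 and 2019-04-11 (Atlas and Atlas-Echo) account for the only sub-100 averages.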
Hammercloud Test Report
Target Availability for each site is 97.0% (Red: <90%; Orange: <97%).
Day | Atlas HC | CMS HC | Comment
---|---|---|---
2019-04-08 | 100 | 100 |
2019-04-09 | 100 | 100 |
2019-04-10 | 73 | 71 |
2019-04-11 | 87 | n/a |
2019-04-12 | 100 | n/a |
2019-04-13 | 100 | 100 |
2019-04-14 | 100 | 100 |
Key: Atlas HC = Atlas HammerCloud (Queue RAL-LCG2_UCORE, Template 841); CMS HC = CMS HammerCloud
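The colour coding in the Hammercloud table follows from the threshold key above (target 97.0%; red below 90, orange below 97); a small sketch of the classification rule, applied to the sub-100 Atlas HC scores from the table:

```python
# Classify a Hammercloud score against the report's stated thresholds:
# red below 90, orange below 97, otherwise meeting the 97.0% target.
def classify(score: float) -> str:
    if score < 90:
        return "red"
    if score < 97:
        return "orange"
    return "ok"

# Sub-100 Atlas HC scores from the table above; "n/a" entries are skipped.
atlas_hc = {"2019-04-10": 73, "2019-04-11": 87}
flags = {day: classify(s) for day, s in atlas_hc.items()}
print(flags)  # both days fall below 90, i.e. "red"
```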
Notes from Meeting.