
RAL Tier1 Operations Report for 15th April 2019

Review of Issues during the week 8th April 2019 to the 15th April 2019.
  • We are seeing high outbound packet loss over IPv6. Investigations are on hold as central networking does not have the relevant expertise (Philip Garrad) available until after Easter; a minimal loss-measurement sketch follows this list.
  • High CMS job failure rates: ongoing issues with metadata spread across large files. CMS job slots have been temporarily limited.
  • On Friday 5th April, gdss700 (LHCb) had a double drive failure and needed to be removed from production. Further problems were found; while we were able to return the disk server to production briefly, we were unable to copy all the files off and 1482 were lost.
  • On Wednesday 10th April, gdss811 (LHCb) had a failure of the disk running the operating system. This generation of hardware has OS disks that are very inconveniently located (glued to the underside of the motherboard!). Not yet returned to production as of the morning of the 15th.
  • On Thursday 11th April, an unknown issue caused a significant fraction of Docker containers (running jobs) to restart.
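For context, a minimal sketch of how the outbound IPv6 loss mentioned above could be sampled from an affected host; the target name is only a hypothetical placeholder and the hosts actually used in the investigation are not stated in this report.
  # Sample outbound IPv6 packet loss (sketch; TARGET is a hypothetical placeholder endpoint)
  TARGET=ipv6.example.org
  ping -6 -c 200 -i 0.2 "$TARGET" | tail -n 2   # the summary line reports the % packet loss
  # Repeat from several hosts/subnets to localise where the loss is introduced.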
Current operational status and issues
Resolved Castor Disk Server Issues
Machine VO DiskPool dxtx Comments
gdss799 LHCb lhcb d1t0 Machine crashed. It is with fabric at the moment.
Ongoing Castor Disk Server Issues
Machine VO DiskPool dxtx Comments
gdss811 LHCb lhcb d1t0 Machine crashed. It is with fabric at the moment.
Limits on concurrent batch system jobs (a per-VO usage check sketch follows this list).
  • ALICE - 1000
  • CMS - 100
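Assuming the batch system is HTCondor and that job ads carry the standard x509UserProxyVOName attribute (assumptions, not stated in this report), current usage can be compared against these limits with a quick check such as:
  # Count running jobs per VO on the local schedd (sketch; assumes HTCondor with x509 proxy VO attributes)
  condor_q -allusers -constraint 'JobStatus == 2' -af x509UserProxyVOName | sort | uniq -c | sort -rn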
Notable Changes made since the last meeting.
  • NTR
Entries in GOC DB starting since the last report.
Service ID Scheduled? Outage/At Risk Start End Duration Reason
- - - - - - - -
Declared in the GOC DB
Service ID Scheduled? Outage/At Risk Start End Duration Reason
- - - - - - - -
  • No ongoing downtime
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Listing by category:

  • DNS servers will be rolled out within the Tier1 network.
Open GGUS Tickets (snapshot taken during the morning of the meeting).

Request id Affected vo Status Priority Date of creation Last update Type of problem Subject Scope Solution
140660 cms in progress urgent 09/04/2019 15/04/2019 CMS_Central Workflows File read issues for Workflows where data is located at T1_UK_RAL WLCG
140447 dteam on hold less urgent 27/03/2019 16/04/2019 Network problem packet loss outbound from RAL-LCG2 over IPv6 EGI
140220 mice in progress less urgent 15/03/2019 08/04/2019 Other mice LFC to DFC transition EGI
139672 other in progress urgent 13/02/2019 08/04/2019 Middleware No LIGO pilots running at RAL EGI
138033 atlas waiting for reply urgent 01/11/2018 12/04/2019 Other singularity jobs failing at RAL EGI
GGUS Tickets Closed Last Week
Request id Affected vo Status Priority Date of creation Last update Type of problem Subject Scope Solution
140758 lhcb solved urgent 17/04/2019 17/04/2019 File Access lhcbUser svcClass not working as it should ? WLCG Should be fixed now.
140725 cms solved urgent 15/04/2019 16/04/2019 CMS_Facilities T1_UK_RAL intermittent xrootd relative failures WLCG The reason is clear; additional hardware is on the way.
140683 lhcb solved top priority 10/04/2019 12/04/2019 Local Batch System Pilots failing at RAL across all CEs WLCG Problem was resolved and checks put in place to prevent recurrence.
140599 lhcb solved very urgent 05/04/2019 15/04/2019 File Access Data access problem at RAL-LCG2 WLCG Files have been transferred out of this diskserver into ECHO
140589 lhcb verified very urgent 04/04/2019 15/04/2019 Local Batch System Pilots killed at RAL-LCG2 WLCG As per Raja's comments, the original issue has now been resolved, so the ticket is being closed.
140577 lhcb solved less urgent 04/04/2019 11/04/2019 File Access LHCb disk only files requested with the wrong service class EGI No solution found so far. LHCb is close to migrating from the old CASTOR instance.
140511 cms closed urgent 01/04/2019 16/04/2019 CMS_Facilities T1_UK_RAL SAM job run out of date WLCG Issue is related to SAM dashboard.
140493 atlas closed less urgent 29/03/2019 15/04/2019 File Transfer UK RAL-LCG2 MCTAPE: transfer error with "Connection timed out" WLCG Hi Xin Wang,

This looks more like a problem with the source site, LRZ-LMU_DATADISK. The error message refers to the source URL at httpg://lcg-lrz-srm.grid.lrz.de:8443/srm/managerv2. Also, if I try to download from the source path, it gets stuck:

gfal-copy srm://lcg-lrz-srm.grid.lrz.de:8443/srm/managerv2?SFN=/pnfs/lrz-muenchen.de/data/atlas/dq2/atlasdatadisk/rucio/mc16_13TeV/35/cb/HITS.17137527._002530.pool.root.1 .

Can you assign a new ticket for LRZ-LMU?

I hope it's OK for me to mark this ticket "solved". Please reopen if I was mistaken.

Thanks, Tim.

140467 cms closed urgent 28/03/2019 15/04/2019 CMS_Data Transfers Stuck file at RAL WLCG Stuck file had missing stripes and a zeroth stripe with zero size. This was deleted by hand and the errors stopped appearing.
140385 cms closed less urgent 25/03/2019 12/04/2019 CMS_Data Transfers Data Transfer problems at T1_UK_RAL WLCG Each of the datasets listed had one file with a problem. Two had zero size and were deleted from tape. Another seven were deleted from tape and invalidated by the data transfer team. The list of stuck routing files is now empty apart from a file from a T2 site currently experiencing problems.
139723 atlas closed less urgent 15/02/2019 10/04/2019 Data Management - generic permissions on scratchdisk EGI Hi Folks,

This appears to have become a somewhat convoluted ticket which, as far as I can tell, has simply turned into a conversation thread.

Assuming that the original issue has been resolved (further workarounds/enhancements notwithstanding), I'm marking this as solved.

Cheers

Confused of Tier-1! (Darren)

Availability Report

Day Atlas Atlas-Echo CMS LHCB Alice OPS Comments
2019-04-08 100 100 100 100 100 100
2019-04-09 100 100 100 100 100 100
2019-04-10 69 69 100 100 100 100
2019-04-11 61 61 100 100 100 100
2019-04-12 100 100 100 100 100 100
2019-04-13 100 100 100 100 100 100
2019-04-14 100 100 100 100 100 100
2019-04-15 100 100 100 100 100 100
Hammercloud Test Report
Target availability for each site is 97.0% (Red: <90%, Orange: <97%)
Day Atlas HC CMS HC Comment
2019-04-08 100 100
2019-04-09 100 100
2019-04-10 73 71
2019-04-11 87 n/a
2019-04-12 100 n/a
2019-04-13 100 100
2019-04-14 100 100

Key: Atlas HC = Atlas HammerCloud (Queue RAL-LCG2_UCORE, Template 841); CMS HC = CMS HammerCloud

Notes from Meeting.