Tier1 Operations Report 2019-04-01

From GridPP Wiki

Revision as of 08:23, 9 April 2019

RAL Tier1 Operations Report for 8th April 2019

Review of Issues during the week 1st April 2019 to the 8th April 2019.
Current operational status and issues
Resolved Castor Disk Server Issues
Machine VO DiskPool dxtx Comments
gdss733 LHCb lhcbDst d1t0 -
Ongoing Castor Disk Server Issues
Machine VO DiskPool dxtx Comments
-
Limits on concurrent batch system jobs.
  • ALICE - 1000
Notable Changes made since the last meeting.
  • NTR
Entries in GOC DB starting since the last report.
Service ID Scheduled? Outage/At Risk Start End Duration Reason
- - - - - - - -
Declared in the GOC DB
Service ID Scheduled? Outage/At Risk Start End Duration Reason
- - - - - - - -
  • No ongoing downtime
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Listing by category:

  • DNS servers will be rolled out within the Tier1 network.
Open

GGUS Tickets (Snapshot taken during morning of the meeting).

Request id Affected vo Status Priority Date of creation Last update Type of problem Subject Scope Solution
140521 ops in progress less urgent 01/04/2019 01/04/2019 Operations [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer@gridftp.echo.stfc.ac.uk EGI
140447 dteam in progress less urgent 27/03/2019 02/04/2019 Network problem packet loss outbound from RAL-LCG2 over IPv6 EGI
140220 mice in progress less urgent 15/03/2019 25/03/2019 Other mice LFC to DFC transition EGI
139672 other in progress urgent 13/02/2019 18/03/2019 Middleware No LIGO pilots running at RAL EGI
138665 mice on hold urgent 04/12/2018 30/01/2019 Middleware Problem accessing LFC at RAL EGI
GGUS Tickets Closed Last week
Request id Affected vo Status Priority Date of creation Last update Type of problem Subject Scope Solution
140511 cms solved urgent 01/04/2019 02/04/2019 CMS_Facilities T1_UK_RAL SAM job run out of date WLCG Issue is related to SAM dashboard.
140493 atlas solved less urgent 29/03/2019 29/03/2019 File Transfer UK RAL-LCG2 MCTAPE: transfer error with "Connection timed out" WLCG Hi Xin Wang,

This looks more like a problem with the source site, LRZ-LMU_DATADISK. The error message refers to the source URL at httpg://lcg-lrz-srm.grid.lrz.de:8443/srm/managerv2. Also, if I try to download from the source path, it gets stuck:

gfal-copy srm://lcg-lrz-srm.grid.lrz.de:8443/srm/managerv2?SFN=/pnfs/lrz-muenchen.de/data/atlas/dq2/atlasdatadisk/rucio/mc16_13TeV/35/cb/HITS.17137527._002530.pool.root.1 .

Can you assign a new ticket for LRZ-LMU?

I hope it's OK for me to mark this ticket "solved". Please reopen if I was mistaken.

Thanks, Tim.

140467 cms solved urgent 28/03/2019 01/04/2019 CMS_Data Transfers Stuck file at RAL WLCG Stuck file had missing stripes and a zeroth stripe with zero size. This was deleted by hand and the errors stopped appearing.
140443 none verified top priority 27/03/2019 01/04/2019 Other This TEST ALARM has been raised for testing GGUS alarm work flow after a new GGUS release. WLCG Alarms raised (and acknowledged) internally at RAL

Closing this ticket.

140400 ops verified less urgent 26/03/2019 29/03/2019 Operations [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer@gridftp.echo.stfc.ac.uk EGI Tests are now being run with the correct certificate.
140385 cms solved less urgent 25/03/2019 29/03/2019 CMS_Data Transfers Data Transfer problems at T1_UK_RAL WLCG Each of the listed datasets had one file with a problem. Two had zero size and were deleted from tape. Another seven were deleted from tape and invalidated by the data transfer team. The list of stuck routing files is now empty apart from a file from a T2 site currently experiencing problems.
140210 atlas closed top priority 14/03/2019 02/04/2019 File Transfer Cannot access some files WLCG These files were lost from RAL-LCG2-ECHO_DATADISK, probably due to an FTS bug which should now be fixed. They were identified here because they were needed by the production team. The ticket submitter has now marked them lost in Rucio.

Independently, we have been analysing the results of the Rucio consistency checker. It identified 163 files lost from this disk. The 5 files listed in this ticket are among the 163. We will mark the remaining files lost in Rucio.
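A consistency check like the one described above amounts to a set difference between the catalogue's expected replica list and a dump of the storage namespace. The following is a minimal conceptual sketch with hypothetical input lists, not the actual Rucio consistency-checker tooling:

```python
def find_lost_and_dark(catalogue_replicas, storage_dump):
    """Compare a catalogue replica list with a storage-namespace dump.

    Files the catalogue expects but storage lacks are 'lost' (what the
    checker reported here); files present on storage but unknown to the
    catalogue are 'dark' data. Inputs are iterables of file names/paths.
    """
    expected = set(catalogue_replicas)
    on_disk = set(storage_dump)
    lost = sorted(expected - on_disk)   # in catalogue, missing from storage
    dark = sorted(on_disk - expected)   # on storage, unknown to catalogue
    return lost, dark
```

In practice the marking of files as lost is then done through the Rucio client rather than by hand, but the detection step reduces to this comparison.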

140177 cms closed urgent 13/03/2019 01/04/2019 CMS_Data Transfers RAL FTS - Transfers failing from T1_US_FNAL_Disk to some sites WLCG IPv6 was registered at FNAL, although not fully enabled there. RAL had IPv6 fully enabled, so transfers were attempted over IPv6; however, they timed out and did not fall back to IPv4. FNAL removed the IPv6 address, and since then the link has been green.
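The failure mode in the ticket above is a dual-stack client that tries the IPv6 address and stalls instead of retrying over IPv4. The fallback behaviour that was missing can be sketched as follows (a minimal illustration in the spirit of Happy Eyeballs, not the actual transfer-tool code):

```python
import socket

def connect_with_fallback(host, port, timeout=5.0):
    """Try each resolved address in turn, IPv6 first, falling back on error.

    A client that gives up after the first (IPv6) address stalls, as in the
    FNAL/RAL case; iterating over all addrinfo results recovers via IPv4.
    """
    infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    # Sort AF_INET6 entries first to mimic a dual-stack IPv6-preferring client.
    infos.sort(key=lambda ai: ai[0] != socket.AF_INET6)
    last_err = None
    for family, socktype, proto, _, sockaddr in infos:
        try:
            return socket.create_connection(sockaddr[:2], timeout=timeout)
        except OSError as err:
            last_err = err  # try the next address instead of giving up
    raise last_err if last_err else OSError("no addresses for %s" % host)
```

Removing the broken IPv6 record, as FNAL did, has the same effect for every client at once: only reachable addresses remain in the iteration.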
139723 atlas solved less urgent 15/02/2019 27/03/2019 Data Management - generic permissions on scratchdisk EGI Hi Folks,

This appears to have become a somewhat convoluted ticket which, as far as I can tell, has simply become a conversation thread.

Assuming that the original issue has been resolved (further workarounds/enhancements notwithstanding), I'm marking this as solved.

Cheers

Confused of Tier-1! (Darren)

138033 atlas solved urgent 01/11/2018 26/03/2019 Other singularity jobs failing at RAL EGI As the new batch containers have been fully rolled out and appear to have been working since 18/03/19, I'm going to call this one solved.

Availability Report

Day Atlas Atlas-Echo CMS LHCB Alice OPS Comments
2019-03-26 100 100 100 100 100 100
2019-03-27 100 100 100 100 100 100
2019-03-28 100 100 100 100 100 100
2019-03-29 100 100 100 100 100 100
2019-03-30 100 100 100 100 100 100
2019-03-31 100 100 100 100 100 100
2019-04-01 100 100 100 100 100 100
Hammercloud Test Report
Target Availability for each site is 97.0% (Red: <90%, Orange: <97%)
Day Atlas HC CMS HC Comment
2019-03-26 100 99
2019-03-27 100 96
2019-03-28 100 100
2019-03-29 100 100
2019-03-30 100 99
2019-03-31 100 99
2019-04-01 100 100
2019-04-02 100 100

Key: Atlas HC = Atlas HammerCloud (Queue RAL-LCG2_UCORE, Template 841); CMS HC = CMS HammerCloud
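The colour-coding rule quoted above (97.0% target, red below 90%, orange below 97%) can be sketched as a simple classifier:

```python
def availability_colour(percent):
    """Classify a daily availability figure against the report's thresholds:
    red below 90%, orange below the 97% target, green at or above it."""
    if percent < 90.0:
        return "red"
    if percent < 97.0:
        return "orange"
    return "green"
```

By this rule every Atlas HC figure in the table is green, and the CMS HC dip to 96 on 2019-03-27 is the only orange cell.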

Notes from Meeting.