Latest revision as of 08:38, 9 April 2019

RAL Tier1 Operations Report for 1st April 2019

Review of Issues during the week 26th March 2019 to the 1st April 2019.
  • This year's CPU capacity entered production on 27th March. The V11 CPU has been retired.
  • gdss733 (LHCb) had a double disk failure in Castor and was out of production from 27th to 29th March.
  • CMS CPU efficiency was poor throughout last week (25%-40%); the efficiency started dropping around 20th March.
  • We fixed a problem that prevented LSST jobs from running (HTCondor had not been updated with the new VOMS server).
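For context on the efficiency figure above, batch-job CPU efficiency is conventionally the fraction of the allocated core time a job actually spends on CPU. A minimal sketch of that calculation; the function name and the sample numbers are illustrative assumptions, not values taken from the report:

```python
def cpu_efficiency(cpu_seconds: float, wall_seconds: float, cores: int) -> float:
    """Fraction of the allocated core time the job actually used on CPU."""
    return cpu_seconds / (cores * wall_seconds)

# A hypothetical 8-core job running 10 wall-clock hours that accumulates
# only 24 core-hours of CPU time is 30% efficient -- inside the 25%-40%
# band reported for CMS last week.
eff = cpu_efficiency(cpu_seconds=24 * 3600, wall_seconds=10 * 3600, cores=8)
print(f"{eff:.0%}")  # → 30%
```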
Current operational status and issues
Resolved Castor Disk Server Issues
Machine VO DiskPool dxtx Comments
gdss733 LHCb lhcbDst d1t0 -
Ongoing Castor Disk Server Issues
Machine VO DiskPool dxtx Comments
-
Limits on concurrent batch system jobs.
  • ALICE - 1000
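The report does not say how the ALICE cap is enforced; since the farm runs HTCondor (see the VOMS fix in the review above), one common way to express such a cap is an HTCondor concurrency limit. The knob name below is an assumption about this site's configuration, not taken from the report:

```
# Pool-wide cap in the negotiator configuration (hypothetical knob name):
ALICE_LIMIT = 1000

# Each ALICE pilot's submit file would then declare:
#   concurrency_limits = ALICE
# so the negotiator never matches more than 1000 such jobs at once.
```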
Notable Changes made since the last meeting.
  • NTR
Entries in GOC DB starting since the last report.
Service ID Scheduled? Outage/At Risk Start End Duration Reason
- - - - - - - -
Declared in the GOC DB
Service ID Scheduled? Outage/At Risk Start End Duration Reason
- - - - - - - -
  • No ongoing downtime
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Listing by category:

  • DNS servers will be rolled out within the Tier1 network.
Open

GGUS Tickets (Snapshot taken during morning of the meeting).

Request id Affected vo Status Priority Date of creation Last update Type of problem Subject Scope Solution
140521 ops in progress less urgent 01/04/2019 01/04/2019 Operations [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer@gridftp.echo.stfc.ac.uk EGI
140447 dteam in progress less urgent 27/03/2019 02/04/2019 Network problem packet loss outbound from RAL-LCG2 over IPv6 EGI
140220 mice in progress less urgent 15/03/2019 25/03/2019 Other mice LFC to DFC transition EGI
139672 other in progress urgent 13/02/2019 18/03/2019 Middleware No LIGO pilots running at RAL EGI
138665 mice on hold urgent 04/12/2018 30/01/2019 Middleware Problem accessing LFC at RAL EGI
GGUS Tickets Closed Last week
Request id Affected vo Status Priority Date of creation Last update Type of problem Subject Scope Solution
140521 ops verified less urgent 01/04/2019 04/04/2019 Operations [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer@gridftp.echo.stfc.ac.uk EGI The Ceph team have added the problem DN to the local override for the grid-map files.

The test has cleared.

140511 cms solved urgent 01/04/2019 02/04/2019 CMS_Facilities T1_UK_RAL SAM job run out of date WLCG Issue is related to SAM dashboard.
140283 atlas closed less urgent 19/03/2019 03/04/2019 File Transfer UK RAL-LCG2: DESTINATION SRM_PUT_TURL error WLCG Corrected ownership of the CASTOR dir to atlas001:atlas
140278 cms closed urgent 19/03/2019 05/04/2019 CMS_Data Transfers Transfers failing from FNAL_Buffer to RAL_Disk WLCG We had a problem with ECHO mistakenly returning the correct size and checksum for files whose transfers had failed and which were corrupt; as a result they were marked as good by PhEDEx and lay latent until they were accessed or transferred out.

Transfers recovered.

140210 atlas closed top priority 14/03/2019 02/04/2019 File Transfer Cannot access some files WLCG These files were lost from RAL-LCG2-ECHO_DATADISK, probably due to an FTS bug which should now be fixed. They were identified here because they were needed by the production team. The ticket submitter has now marked them lost in Rucio.

Independently, we have been analysing the results of the Rucio consistency checker. It identified 163 files lost from this disk. The 5 files listed in this ticket are among the 163. We will mark the remaining files lost in Rucio.

139990 cms closed urgent 01/03/2019 03/04/2019 CMS_AAA WAN Access T1_UK_RAL xrootd segfaulted WLCG The issue is intermittent and occasional. This segfault is most likely a known problem, which will be addressed in xrootd version 4.9, along with other fixes. RAL will install this when available.
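The fix for ticket 140521 above added the failing DN to a local grid-mapfile override. For reference, a grid-mapfile maps a certificate DN to a local account, one quoted DN per line; the DN and account below are placeholders, since the report does not give the actual values:

```
# Hypothetical local grid-mapfile override entry -- the DN and the
# mapped account are illustrative, not the real ones from the ticket.
"/DC=EU/DC=EXAMPLE/O=Robots/CN=Robot: monitoring" opssgm
```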

Availability Report

Day Atlas Atlas-Echo CMS LHCB Alice OPS Comments
2019-03-26 100 100 100 100 100 100
2019-03-27 100 100 100 100 100 100
2019-03-28 100 100 100 100 100 100
2019-03-29 100 100 100 100 100 100
2019-03-30 100 100 100 100 100 100
2019-03-31 100 100 100 100 100 100
2019-04-01 100 100 100 100 100 100
Hammercloud Test Report
Target Availability for each site is 97.0%. Red: <90%; Orange: <97%.
Day Atlas HC CMS HC Comment
2019-03-26 100 99
2019-03-27 100 96
2019-03-28 100 100
2019-03-29 100 100
2019-03-30 100 99
2019-03-31 100 99
2019-04-01 100 100
2019-04-02 100 100

Key: Atlas HC = Atlas HammerCloud (Queue RAL-LCG2_UCORE, Template 841); CMS HC = CMS HammerCloud
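The colour coding used in the availability and HammerCloud tables follows the thresholds stated above (target 97.0%, red below 90%, orange below the target). A minimal sketch of that rule, with an assumed function name:

```python
def availability_colour(pct: float, target: float = 97.0) -> str:
    """Map a daily availability percentage to the report's cell colour."""
    if pct < 90.0:
        return "red"     # well below target
    if pct < target:
        return "orange"  # below the 97% target, but not critically
    return "green"       # at or above target

print(availability_colour(81))   # → red
print(availability_colour(96))   # → orange
print(availability_colour(100))  # → green
```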

Notes from Meeting.