RAL Tier1 Operations Report for 3rd June 2019

Review of Issues during the week 27th May 2019 to 3rd June 2019.
  • Ongoing: we are seeing high outbound packet loss over IPv6. Central networking performed a firmware update on the border routers, but this did not resolve the issue. The plan is to move our connections to the new border routers in mid-June, before attempting any further debugging (see the measurement sketch after this list).
  • The old LHCb Castor instance lost three disk servers over the weekend. We do not intend to spend much effort recovering them, as the instance will be decommissioned (no files will be recoverable) on Friday 7th June.
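The IPv6 loss above is observed rather than yet explained, so purely for illustration, here is a minimal sketch of how outbound loss to a remote endpoint can be quantified from one of our hosts. It drives the standard iputils ping in IPv6 mode and parses its summary line; the target host name is a placeholder, not an endpoint from this report.

```python
#!/usr/bin/env python3
"""Minimal sketch: estimate outbound IPv6 packet loss with iputils ping.

Assumes a Linux host where `ping -6` is available (iputils); the
target below is a hypothetical endpoint, not one from this report.
"""
import re
import subprocess

TARGET = "ipv6-test.example.org"  # placeholder remote IPv6 endpoint
COUNT = 100                       # probes per measurement

def ipv6_loss(target: str, count: int = COUNT) -> float:
    """Return the percentage packet loss reported by ping over IPv6."""
    result = subprocess.run(
        ["ping", "-6", "-q", "-c", str(count), target],
        capture_output=True, text=True,
    )
    # iputils summary line looks like:
    # "100 packets transmitted, 93 received, 7% packet loss, time 99123ms"
    match = re.search(r"([\d.]+)% packet loss", result.stdout)
    if match is None:
        raise RuntimeError(f"could not parse ping output:\n{result.stdout}")
    return float(match.group(1))

if __name__ == "__main__":
    loss = ipv6_loss(TARGET)
    print(f"outbound IPv6 loss to {TARGET}: {loss:.1f}%")
```

Run periodically against a handful of remote peers, a script like this gives the per-path loss figures needed to tell whether the border-router move in mid-June actually clears the problem.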
Current operational status and issues
Resolved Castor Disk Server Issues
Machine VO DiskPool dxtx Comments
gdss813 LHCb lhcb d1t0
gdss815 LHCb lhcb d1t0
gdss778 LHCb lhcb d1t0
Ongoing Castor Disk Server Issues
Machine VO DiskPool dxtx Comments
- - - - -
Limits on concurrent batch system jobs.
Notable Changes made since the last meeting.
  • NTR
Entries in GOC DB starting since the last report.
Service ID Scheduled? Outage/At Risk Start End Duration Reason
- - - - - - - -
Declared in the GOC DB
Service ID Scheduled? Outage/At Risk Start End Duration Reason
- - - - - - - -
  • No ongoing downtime
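As an aside, the downtime information summarised in the two GOC DB tables above can also be retrieved programmatically. The sketch below uses what I understand to be the public GOCDB programmatic interface (method=get_downtime with a topentity parameter, returning XML); treat the URL, parameter, and element names as assumptions to be checked against the current GOCDB documentation.

```python
#!/usr/bin/env python3
"""Minimal sketch: list downtimes declared for RAL-LCG2 in the GOC DB.

Assumes the public GOCDB programmatic interface supports
method=get_downtime with a topentity parameter and returns XML with
DOWNTIME elements; verify against current GOCDB documentation.
"""
import urllib.request
import xml.etree.ElementTree as ET

URL = ("https://goc.egi.eu/gocdbpi/public/"
       "?method=get_downtime&topentity=RAL-LCG2&ongoing_only=yes")

def ongoing_downtimes(url: str = URL):
    """Yield (host, severity, description) for each declared downtime."""
    with urllib.request.urlopen(url) as resp:
        root = ET.fromstring(resp.read())
    for dt in root.findall("DOWNTIME"):
        yield (
            dt.findtext("HOSTNAME", default="?"),
            dt.findtext("SEVERITY", default="?"),
            dt.findtext("DESCRIPTION", default="").strip(),
        )

if __name__ == "__main__":
    found = False
    for host, severity, desc in ongoing_downtimes():
        found = True
        print(f"{host}: {severity}: {desc}")
    if not found:
        print("No ongoing downtime")  # matches this week's report
```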
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Listing by category:

  • DNS servers will be rolled out within the Tier1 network.
Open GGUS Tickets (snapshot taken during the morning of the meeting).

Ticket-ID Type VO Site Priority Responsible Unit Status Last Update Subject Scope
141549 TEAM atlas RAL-LCG2 less urgent NGI_UK in progress 2019-06-03 08:08:00 ATLAS-RAL-Frontier and some of Lpad-RAL-LCG2 squid degraded WLCG
141537 TEAM lhcb RAL-LCG2 very urgent NGI_UK in progress 2019-05-31 19:28:00 Pilots Failed at RAL-LCG2 WLCG
141462 TEAM lhcb RAL-LCG2 top priority NGI_UK in progress 2019-06-02 05:45:00 Error: Connection limit exceeded WLCG
141262 TEAM lhcb RAL-LCG2 very urgent NGI_UK in progress 2019-05-31 09:23:00 Users are getting [FATAL] Auth failed WLCG
140870 USER t2k.org RAL-LCG2 less urgent NGI_UK in progress 2019-05-14 13:19:00 Files vanished from RAL tape? EGI
140447 USER dteam RAL-LCG2 less urgent NGI_UK on hold 2019-05-22 14:20:00 packet loss outbound from RAL-LCG2 over IPv6 EGI
140220 USER mice RAL-LCG2 less urgent NGI_UK in progress 2019-05-15 11:07:00 mice LFC to DFC transition EGI
139672 USER other RAL-LCG2 urgent NGI_UK waiting for reply 2019-06-03 09:23:00 No LIGO pilots running at RAL EGI
GGUS Tickets Closed Last week
Ticket-ID Type VO Site Priority Responsible Unit Status Last Update Subject Scope
141359 USER ops RAL-LCG2 less urgent NGI_UK verified 2019-05-31 08:04:00 [Rod Dashboard] Issue detected : org.sam.SRM-Put@srm-lhcb.gridpp.rl.ac.uk EGI
141333 ALARM none RAL-LCG2 top priority NGI_UK verified 2019-05-28 10:54:00 This TEST ALARM has been raised for testing GGUS alarm work flow after a new GGUS release. WLCG

Availability Report

Day Atlas Atlas-Echo CMS LHCb Alice OPS Comments
2019-05-27 100 100 100 100 100 100
2019-05-28 100 100 90 100 98 100
2019-05-29 100 100 85 100 100 100
2019-05-30 100 100 90 100 93 100
2019-05-31 100 100 100 100 89 100
2019-06-01 100 100 100 95 89 100
2019-06-02 100 100 100 100 84 100
HammerCloud Test Report
Target availability for each site is 97.0% (red: <90%, orange: <97%).
Day Atlas HC CMS HC Comment
2019-05-27 100 98
2019-05-28 100 98
2019-05-29 100 99
2019-05-30 100 100
2019-05-31 100 100
2019-06-01 100 99
2019-06-02 100 100

Key: Atlas HC = Atlas HammerCloud (Queue RAL-LCG2_UCORE, Template 841); CMS HC = CMS HammerCloud
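To make the colour key above concrete, here is a minimal sketch of the banding rule it describes. The threshold values come from the key; the function name and sample data are mine, with the samples taken from the CMS column of this week's availability table.

```python
#!/usr/bin/env python3
"""Minimal sketch: classify daily availability against the 97.0% target.

Thresholds follow the key above (red < 90%, orange < 97%, green
otherwise); the function and sample data are illustrative only.
"""

def availability_band(percent: float) -> str:
    """Return the colour band for a daily availability figure."""
    if percent < 90.0:
        return "red"
    if percent < 97.0:
        return "orange"
    return "green"

if __name__ == "__main__":
    # Sample figures from the CMS column of this week's availability report.
    for day, value in [("2019-05-28", 90), ("2019-05-29", 85),
                       ("2019-05-31", 100)]:
        print(f"{day}: {value}% -> {availability_band(value)}")
```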

Notes from Meeting.