Difference between revisions of "Tier1 Operations Report 2018-06-18"

* Notes from meeting of 13th June (previous revision of this page):
* SKA:
** It was noted that the SKA batch jobs are limited to 24 cores by the Dirac system (at Imperial) that they are using.
* Dune/ProtoDune:
** There is now a regular Monday meeting for Dune/ProtoDune. Raja, as the Dune-Tier1 liaison, will attend regularly. Darren will attend in his place if Raja cannot be there.
** Access to the Tier1 batch farm is being set up. The Tier1's aim is to get this set up for initial testing by next Monday's meeting (18th June).
** Alastair has set up Dune access to Echo (a line for access via WebDAV has been provided to Raja; see the illustrative sketch after these notes).
* Euclid:
** We are in contact with Euclid to get the public key for access to the Euclid CVMFS repository.
* Documentation to get VOs started accessing Tier1 services: a start has been made on this. See: https://www.gridpp.ac.uk/wiki/RAL_Tier1_Echo
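For illustration only: reading a file over WebDAV is essentially an authenticated HTTPS GET, so a minimal client can look like the sketch below. The endpoint URL, remote path and proxy location are hypothetical placeholders (they are not the actual Echo configuration), and this is not the access method distributed to the VO; it only shows the general shape of WebDAV reads with an X.509 proxy.

<pre>
#!/usr/bin/env python
# Illustrative sketch of a WebDAV read using an X.509 grid proxy.
# All paths and URLs below are placeholders, not real Echo settings.
# Requires the third-party "requests" library.
import requests

ENDPOINT = "https://webdav-endpoint.example.ac.uk:1094"   # hypothetical endpoint
REMOTE_PATH = "/dune/test/hello.txt"                      # hypothetical file
PROXY = "/tmp/x509up_u1000"            # grid proxy (cert and key in one file)
CA_PATH = "/etc/grid-security/certificates"               # trusted CA directory

def fetch(local_path):
    """Download REMOTE_PATH to local_path; WebDAV reads are plain HTTPS GETs."""
    response = requests.get(ENDPOINT + REMOTE_PATH,
                            cert=PROXY, verify=CA_PATH, stream=True)
    response.raise_for_status()
    with open(local_path, "wb") as out:
        for chunk in response.iter_content(chunk_size=1 << 20):
            out.write(chunk)

if __name__ == "__main__":
    fetch("hello.txt")
</pre>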

Latest revision as of 13:37, 20 June 2018

RAL Tier1 Operations Report for 18th June 2018

Review of Issues during the week 11th June to the 18th June 2018.
  • 12/6/18 There was an internal network problem, which took around three hours to resolve. This seemed mostly to affect facilities rather than Tier-1 services. The concern is not that a piece of hardware failed, but that the network does not always switch seamlessly to the backup links (especially for IPv6).
  • 19/6/18 09:00 - 13:00 IPv6 was not available. Fabric attempted a route switch, which failed; traffic should have dropped back to IPv4 but did not. This needs to be raised with Networking/Fabric so that it can be resolved once and for all (see the illustrative check below).
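The missing fallback is the sort of thing a simple dual-stack reachability check can expose. Below is a minimal sketch (the hostname and port are hypothetical placeholders, and this is not the Tier1's actual monitoring): it attempts a TCP connection first over IPv6 and then over IPv4; during an outage like the one above, a dual-stacked service should still succeed on the IPv4 leg.

<pre>
#!/usr/bin/env python
# Minimal dual-stack reachability probe (illustrative sketch only).
# Substitute a real dual-stacked service endpoint for the placeholders.
import socket

HOST = "dual-stacked-service.example.org"   # hypothetical endpoint
PORT = 443                                  # hypothetical service port

def probe(family, label):
    """Try a TCP connect over one address family and report the outcome."""
    try:
        infos = socket.getaddrinfo(HOST, PORT, family, socket.SOCK_STREAM)
    except socket.gaierror as exc:
        print("%s: no address (%s)" % (label, exc))
        return False
    for af, socktype, proto, _cname, sockaddr in infos:
        sock = socket.socket(af, socktype, proto)
        sock.settimeout(5)
        try:
            sock.connect(sockaddr)
            print("%s: OK via %s" % (label, sockaddr[0]))
            return True
        except OSError as exc:
            print("%s: connect failed via %s (%s)" % (label, sockaddr[0], exc))
        finally:
            sock.close()
    return False

if __name__ == "__main__":
    v6 = probe(socket.AF_INET6, "IPv6")
    v4 = probe(socket.AF_INET, "IPv4")
    # A healthy dual-stack client should still succeed over IPv4
    # when the IPv6 path is down.
    if not v6 and not v4:
        raise SystemExit("Neither address family is reachable")
</pre>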
Current operational status and issues
  • None.
Resolved Castor Disk Server Issues
  • gdss687 - LHCb - LHCbDst - d1t0. Currently back in production.
  • gdss738 - LHCb - LHCbDst - d1t0. Currently back in production (read-only).
Ongoing Castor Disk Server Issues
  • gdss746 - ATLAS - atlasStripInput - d1t0. Currently in intervention.
  • gdss685 - ATLAS - atlasTape - d1t0. Currently in intervention.
Limits on concurrent batch system jobs.
  • CMS Multicore 550
Notable Changes made since the last meeting.
  • None.
Entries in GOC DB starting since the last report.
  • None
Declared in the GOC DB
  • None
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Listing by category:

  • Castor:
    • Update systems to use SL7, configured by Quattor/Aquilon (tape servers done).
    • Move to generic Castor headnodes.
  • Networking
    • Extend the number of services on the production network with IPv6 dual stack. (Done for Perfsonar, FTS3, all squids and the CVMFS Stratum-1 servers).
  • Internal
    • DNS servers will be rolled out within the Tier1 network.
  • Infrastructure
    • Testing of power distribution boards in the R89 machine room is being scheduled for some time in late July / early August. The effect of this on our services is being discussed.
Open GGUS Tickets (Snapshot taken during the morning of the meeting).

{|
! Request id !! Affected vo !! Status !! Priority !! Date of creation !! Last update !! Type of problem !! Subject !! Scope
|-
| style="background-color: green;" | 135711 || cms || waiting for reply || urgent || 18/06/2018 || 19/06/2018 || CMS_Central Workflows || T1_UK_RAL production jobs failing || WLCG
|-
| style="background-color: green;" | 135455 || cms || in progress || less urgent || 31/05/2018 || 04/06/2018 || File Transfer || Checksum verification at RAL || EGI
|-
| style="background-color: green;" | 135293 || ops || on hold || less urgent || 23/05/2018 || 04/06/2018 || Operations || [Rod Dashboard] Issues detected at RAL-LCG2 || EGI
|-
| style="background-color: green;" | 134685 || dteam || in progress || less urgent || 23/04/2018 || 11/06/2018 || Middleware || please upgrade perfsonar host(s) at RAL-LCG2 to CentOS7 || EGI
|-
| style="background-color: red;" | 124876 || ops || on hold || less urgent || 07/11/2016 || 13/11/2017 || Operations || [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk || EGI
|}
GGUS Tickets Closed Last week
{|
! Request id !! Affected vo !! Status !! Priority !! Date of creation !! Last update !! Type of problem !! Subject !! Scope
|-
| 135661 || atlas || solved || less urgent || 14/06/2018 || 14/06/2018 || Databases || RAL-LCG2: ATLAS RAL Frontier server down || WLCG
|-
| 135367 || snoplus.snolab.ca || solved || less urgent || 28/05/2018 || 13/06/2018 || Other || Lost access to srm-snoplus.gridpp.rl.ac.uk || EGI
|-
| 135308 || mice || solved || top priority || 24/05/2018 || 13/06/2018 || Information System || Can't send data to RAL Castor || EGI
|-
| 134468 || cms || closed || top priority || 09/04/2018 || 15/06/2018 || CMS_AAA WAN Access || Xrootd redirector not seeing some files in ECHO || WLCG
|-
| 117683 || none || closed || less urgent || 18/11/2015 || 15/06/2018 || Information System || CASTOR at RAL not publishing GLUE 2 || EGI
|}

Availability Report

Target Availability for each site is 97.0%. Red: <90%; Orange: <97%. (The banding rule is illustrated after the table.)
{|
! Day !! Atlas !! Atlas-Echo !! CMS !! LHCB !! Alice !! OPS !! Comments
|-
| 2018-06-11 || 100 || 100 || style="background-color: red;" | 78 || 100 || 100 || 100 ||
|-
| 2018-06-12 || 100 || 100 || style="background-color: orange;" | 93 || 100 || 100 || 100 ||
|-
| 2018-06-13 || 100 || 100 || 100 || 100 || 100 || 100 ||
|-
| 2018-06-14 || 100 || 100 || 100 || 100 || 100 || 100 ||
|-
| 2018-06-15 || 100 || 100 || style="background-color: orange;" | 92 || 100 || 100 || 100 ||
|-
| 2018-06-16 || 100 || 100 || 100 || 100 || 100 || 100 ||
|-
| 2018-06-17 || 100 || 100 || 100 || 100 || 100 || 100 ||
|-
| 2018-06-18 || 100 || 100 || 100 || 100 || 100 || 100 ||
|}
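For reference, the banding rule stated above can be written as a small helper. This is only an illustration of the thresholds as given in the key, not the code used to produce the table.

<pre>
# The report's availability banding: target 97.0%, red below 90%,
# orange below 97% (illustrative helper only).
def availability_band(percent):
    """Return the highlight colour the report would apply, or None."""
    if percent < 90.0:
        return "red"
    if percent < 97.0:
        return "orange"
    return None  # meets the 97.0% target, no highlight

# e.g. the CMS figures above: 78 on 11 June is red, 92 on 15 June is orange
assert availability_band(78) == "red"
assert availability_band(92) == "orange"
assert availability_band(100) is None
</pre>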
Hammercloud Test Report
Target Availability for each site is 97.0%. Red: <90%; Orange: <97%.
{|
! Day !! Atlas HC !! CMS HC !! Comment
|-
| 2018/06/11 || 97 || style="background-color: red;" | 72 ||
|-
| 2018/06/12 || 97 || 100 ||
|-
| 2018/06/13 || style="background-color: orange;" | 95 || style="background-color: red;" | 77 ||
|-
| 2018/06/14 || style="background-color: orange;" | 92 || style="background-color: red;" | 45 ||
|-
| 2018/06/15 || style="background-color: orange;" | 85 || style="background-color: red;" | 73 ||
|-
| 2018/06/16 || style="background-color: orange;" | 95 || 100 ||
|-
| 2018/06/17 || 97 || style="background-color: red;" | 66 || 100
|-
| 2018/06/18 || - || - ||
|}

Key: Atlas HC = Atlas HammerCloud (Queue RAL-LCG2_UCORE, Template 841); CMS HC = CMS HammerCloud

Notes from Meeting.
  • Notes from meeting of 20th June:
  • LHCb - A number of drive failures have resulted in 8 files lost: 4 are not in the name server and 4 are simply lost.
  • Issue with files appearing to return a different checksum depending on location, which is distinctly odd. This was thought to be a RAID issue, but investigations have suggested this is not the case.
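As an illustration of how such a mismatch can be chased down, the sketch below computes the ADLER32 checksum of a locally staged copy so the same value can be compared across locations. It assumes the ADLER32 checksums commonly used for grid files (the checksum type involved here is not stated in the notes), and the file path is a placeholder.

<pre>
#!/usr/bin/env python
# Minimal checksum cross-check (illustrative sketch only).
# Compute ADLER32 of a locally staged copy and compare the value with
# what each storage location reports.  The file path is a placeholder.
import sys
import zlib

def adler32_of(path, chunk_size=1024 * 1024):
    """Return the ADLER32 checksum of a file as an 8-digit hex string."""
    value = 1  # ADLER32 seed value
    with open(path, "rb") as handle:
        while True:
            chunk = handle.read(chunk_size)
            if not chunk:
                break
            value = zlib.adler32(chunk, value)
    return "%08x" % (value & 0xFFFFFFFF)

if __name__ == "__main__":
    for path in sys.argv[1:] or ["./locally_staged_copy.dat"]:  # placeholder
        print(path, adler32_of(path))
</pre>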