
RAL Tier1 Operations Report for 2nd July 2018

Review of Issues during the week 25th June to 2nd July 2018.
  • 12/6/18 there was an internal network problem, which took around 3 hours to resolve. This seemed mostly to affect facilities rather than Tier-1 services. The concern is not that a piece of hardware failed, but that the network does not always switch seamlessly to the backup links (especially for IPv6).
  • 19/6/18 09:00 - 13:00 IPv6 was not available. Fabric attempted a route switch, which failed. Traffic should have dropped back to IPv4 but did not. This issue needs to be raised with Networking/Fabric to resolve it once and for all.
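The IPv4 fall-back that failed on 19/6 is, at the client end, ordinary dual-stack behaviour: try the IPv6 addresses of a service first and drop to IPv4 if they are unreachable. The Python sketch below is a minimal illustration of that behaviour, useful for spot-checking a dual-stacked endpoint; the host name it connects to is a placeholder, not one of our production services.

#!/usr/bin/env python3
"""Minimal dual-stack connection check: prefer IPv6, fall back to IPv4."""
import socket

def connect_dual_stack(host, port, timeout=5.0):
    """Return a connected socket, trying IPv6 addresses before IPv4 ones."""
    last_error = None
    infos = socket.getaddrinfo(host, port, socket.AF_UNSPEC, socket.SOCK_STREAM)
    # Sort so AF_INET6 results are attempted first.
    infos.sort(key=lambda info: 0 if info[0] == socket.AF_INET6 else 1)
    for family, socktype, proto, _canon, sockaddr in infos:
        sock = socket.socket(family, socktype, proto)
        sock.settimeout(timeout)
        try:
            sock.connect(sockaddr)
            return sock
        except OSError as exc:  # e.g. broken IPv6 route -> try the next address
            last_error = exc
            sock.close()
    raise last_error if last_error else OSError("no usable address for " + host)

if __name__ == "__main__":
    # Placeholder endpoint; point this at a real dual-stacked service to test.
    s = connect_dual_stack("example.org", 80)
    print("Connected over", "IPv6" if s.family == socket.AF_INET6 else "IPv4")
    s.close()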
Current operational status and issues
  • None.
Resolved Castor Disk Server Issues
  • gdss746 - ATLAS - atlasStripInput - d1t0. Back in full production RW.
  • gdss685 - ATLAS - atlasTape - d1t0. Back in full production RW.
  • gdss776 - LHCb - LHCb_FAILOVER, LHCb-Disk - d1t0. Currently back in production RO.
Ongoing Castor Disk Server Issues
  • None.
Limits on concurrent batch system jobs.
  • CMS Multicore 550
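For context on how a cap like this can be monitored, the sketch below counts the CMS multicore jobs currently running and compares the count with the limit. It assumes the HTCondor batch farm and that CMS multicore jobs can be selected with the constraint shown; the AccountingGroup pattern is a placeholder for however CMS jobs are tagged locally, so treat the whole thing as illustrative rather than the real monitoring.

#!/usr/bin/env python3
"""Rough check of running CMS multicore jobs against the agreed limit."""
import subprocess

LIMIT = 550  # concurrent CMS multicore jobs allowed (from this report)

def running_cms_multicore():
    # JobStatus == 2 means 'running'; RequestCpus > 1 selects multicore jobs.
    # The AccountingGroup regexp is only a guess at how CMS jobs are tagged.
    constraint = 'JobStatus == 2 && RequestCpus > 1 && regexp("cms", AccountingGroup)'
    out = subprocess.run(
        ["condor_q", "-allusers", "-constraint", constraint, "-af", "ClusterId"],
        capture_output=True, text=True, check=True,
    ).stdout
    return len(out.split())

if __name__ == "__main__":
    n = running_cms_multicore()
    status = "(over the limit!)" if n > LIMIT else ""
    print(n, "/", LIMIT, "CMS multicore jobs running", status)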
Notable Changes made since the last meeting.
  • None.
Entries in GOC DB starting since the last report.
  • None
Declared in the GOC DB
  • No downtime scheduled in the GOC DB for the next 2 weeks.
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Listing by category:

  • Castor:
    • Update systems to use SL7, configured by Quattor/Aquilon. (Tape servers done.)
    • Move to generic Castor headnodes.
  • Networking
    • Extend the number of services on the production network with IPv6 dual stack. (Done for Perfsonar, FTS3, all squids and the CVMFS Stratum-1 servers).
  • Internal
    • DNS servers will be rolled out within the Tier1 network.
  • Infrastructure
    • Testing of power distribution boards in the R89 machine room is being scheduled for 24th – 26th July. The effect of this on our services is anticipated to be minimal.
Open GGUS Tickets (Snapshot taken during morning of the meeting).

Request id | Affected vo | Status | Priority | Date of creation | Last update | Type of problem | Subject | Scope
135822 | cms | in progress | very urgent | 26/06/2018 | 29/06/2018 | CMS_Central Workflows | File Read Problems for Production at T1_UK_RAL | WLCG
134685 | dteam | in progress | less urgent | 23/04/2018 | 11/06/2018 | Middleware | please upgrade perfsonar host(s) at RAL-LCG2 to CentOS7 | EGI
124876 | ops | reopened | less urgent | 07/11/2016 | 28/06/2018 | Operations | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk | EGI
GGUS Tickets Closed Last week
Request id | Affected vo | Status | Priority | Date of creation | Last update | Type of problem | Subject | Scope
135723 | lhcb | solved | top priority | 19/06/2018 | 28/06/2018 | File Transfer | lcgfts3 FTS server fails all transfers | WLCG
135661 | atlas | closed | less urgent | 14/06/2018 | 28/06/2018 | Databases | RAL-LCG2: ATLAS RAL Frontier server down | WLCG
135455 | cms | solved | less urgent | 31/05/2018 | 25/06/2018 | File Transfer | Checksum verification at RAL | EGI
135367 | snoplus.snolab.ca | closed | less urgent | 28/05/2018 | 27/06/2018 | Other | Lost access to srm-snoplus.gridpp.rl.ac.uk | EGI
135308 | mice | closed | top priority | 24/05/2018 | 27/06/2018 | Information System | Can't send data to RAL Castor | EGI
135293 | ops | solved | less urgent | 23/05/2018 | 28/06/2018 | Operations | [Rod Dashboard] Issues detected at RAL-LCG2 | EGI

Availability Report

Target Availability for each site is 97.0% (Red: <90%, Orange: <97%); the banding is restated in the short sketch after the table.
Day Atlas Atlas-Echo CMS LHCB Alice OPS Comments
2018-06-25 98 98 99 100 100 100
2018-06-26 100 100 98 100 100 100
2018-06-27 100 100 100 100 100 100
2018-06-28 100 100 100 100 100 100
2018-06-29 100 100 99 100 100 100
2018-06-30 100 100 100 100 100 100
2018-07-01 100 100 100 100 100 100
2018-07-02 - - - 100 100 100
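For reference, the banding quoted above the table (target 97.0%, orange below 97%, red below 90%) is restated as a small helper below; it is just a restatement of the key, not taken from any dashboard tooling.

#!/usr/bin/env python3
"""Restate the availability colour banding used in the tables."""

def band(availability_percent):
    # >= 97% meets the target; 90-97% is flagged orange; below 90% is red.
    if availability_percent >= 97.0:
        return "green"
    if availability_percent >= 90.0:
        return "orange"
    return "red"

if __name__ == "__main__":
    for value in (100, 98, 93, 85):  # sample values from the tables
        print(value, "->", band(value))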
Hammercloud Test Report
Target Availability for each site is 97.0% (Red: <90%, Orange: <97%)
Day Atlas HC CMS HC Comment
2018/06/19 89 99
2018/06/20 100 83
2018/06/21 92 97
2018/06/22 85 100
2018/06/23 98 100
2018/06/24 93 100
2018/06/25 - -

Key: Atlas HC = Atlas HammerCloud (Queue RAL-LCG2_UCORE, Template 841); CMS HC = CMS HammerCloud

Notes from Meeting.
  • xrootd issue: a GGUS ticket came in this morning - the failing request was hitting the external gateway, which it shouldn't. There are lots of transfers with similar failures. All requests hitting the xrootd servers are from CMS - none from ATLAS. There is not enough resource to handle the requests.

1. If the clients talk to the internal gateway they will be redirected to the external one - though this is not confirmed.
2. Mapping within the node/Docker container is failing. There is a failing config somewhere, but TB doesn't think it's our fault, as ATLAS is running OK with the same config on our side.
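As a follow-up aid for the xrootd point above, the sketch below asks an XRootD endpoint where it would place a client for a given path, which makes it easy to see whether a request sent to the internal gateway ends up on the external one. The host names and test path are placeholders, not the real Echo gateway endpoints.

#!/usr/bin/env python3
"""Ask an XRootD endpoint where it locates a path, via 'xrdfs ... locate'."""
import subprocess

INTERNAL_GATEWAY = "xrootd-internal.example.invalid"  # placeholder host name
EXTERNAL_GATEWAY = "xrootd-external.example.invalid"  # placeholder host name
TEST_PATH = "/store/test/some_file"                   # placeholder path

def locate(host, path):
    """Run 'xrdfs <host> locate <path>' and return its combined output."""
    result = subprocess.run(
        ["xrdfs", host, "locate", path],
        capture_output=True, text=True, timeout=30,
    )
    return result.stdout + result.stderr

if __name__ == "__main__":
    report = locate(INTERNAL_GATEWAY, TEST_PATH)
    if EXTERNAL_GATEWAY in report:
        print("Request to the internal gateway was redirected externally:")
    print(report)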