RAL Tier1 Operations Report for 16th July 2018

Review of Issues during the week 9th July to the 16th July 2018.
  • Tier-1 has been very much business as usual this week. The only incident of real note was the loss of an Atlas drive (gdss747 - d1t0). The drive was found to be not recoverable and a data loss was subsequently declared. Under normal circumstances this would be considered a serious incident; however, Atlas are all but complete in migrating from CASTOR to ECHO, so no primary data was lost and 89% of the lost files were log files.
Current operational status and issues
  • None.
Resolved Castor Disk Server Issues
  • None
Ongoing Castor Disk Server Issues
  • None.
Limits on concurrent batch system jobs.
  • CMS Multicore 550
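As an illustration only, here is a minimal Python sketch of how usage against such a limit could be checked from the submit side. It assumes the HTCondor Python bindings are available and that CMS multicore jobs can be picked out by the RequestCpus and x509UserProxyVOName job attributes; the constraint actually used at RAL may differ.

  # Hypothetical check of CMS multicore usage against the 550-job limit above.
  # JobStatus == 2 selects running jobs; the VO/multicore constraint is an
  # assumption, not the site's actual configuration.
  import htcondor

  CMS_MULTICORE_LIMIT = 550  # limit quoted in this report

  schedd = htcondor.Schedd()
  running = schedd.query('JobStatus == 2 && RequestCpus > 1 && x509UserProxyVOName == "cms"')

  print("CMS multicore jobs running: %d / %d" % (len(running), CMS_MULTICORE_LIMIT))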
Notable Changes made since the last meeting.
  • Worker nodes and CONDOR. We are now running new Docker containers that run Singularity; further changes include an update to use UMD4. This new container is currently being tested and is running on arc-ce04.
  • The new Docker container images (stfc/grid-workernode-c6:2018-07-09.2 and stfc/grid-workernode-c7:2018-07-09.2) have been rolled out across the entire batch farm.
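A minimal sketch (not part of the rollout tooling) of how one might confirm that a worker node is running one of the image tags named above; it assumes the docker CLI is available on the node and that the account can query it.

  # Illustrative check: list the images of running containers and compare them
  # against the two image tags named in this report.
  import subprocess

  EXPECTED_IMAGES = {
      "stfc/grid-workernode-c6:2018-07-09.2",
      "stfc/grid-workernode-c7:2018-07-09.2",
  }

  out = subprocess.check_output(["docker", "ps", "--format", "{{.Image}}"])
  running = set(out.decode().split())

  print("running container images:", sorted(running))
  unexpected = running - EXPECTED_IMAGES
  if unexpected:
      print("images not matching the 2018-07-09.2 rollout:", sorted(unexpected))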
Entries in GOC DB starting since the last report.
  • None
Declared in the GOC DB
  • No downtime scheduled in the GOC DB for the next 2 weeks.
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Listing by category:

  • Castor:
    • Update systems to use SL7, configured by Quattor/Aquilon (tape servers done).
    • Move to generic Castor headnodes.
  • Internal
    • DNS servers will be rolled out within the Tier1 network.
  • Infrastructure
    • There is a scheduled three-day period (July 24th - July 26th) during which RAL Tier-1 will be undertaking server room circuit breaker testing. Although these tests are not expected to affect our services, given their nature all Tier-1 production services should be considered "At Risk" during this time. All VOs are asked to consider the possibility of an unexpected outage when planning any high priority/critical jobs they may wish to run during this period.
Open GGUS Tickets (Snapshot taken during morning of the meeting).

{| class="wikitable"
|-
! Request id !! Affected vo !! Status !! Priority !! Date of creation !! Last update !! Type of problem !! Subject !! Scope
|-
| 136110 || atlas || in progress || urgent || 13/07/2018 || 16/07/2018 || File Transfer || RAL-LCG2: Transfer errors as source with "SRM_FILE_UNAVAILABLE" || WLCG
|-
| 136104 || ops || in progress || less urgent || 13/07/2018 || 13/07/2018 || Operations || [Rod Dashboard] Issues detected at RAL-LCG2 || EGI
|-
| 136097 || other || waiting for reply || urgent || 13/07/2018 || 16/07/2018 || Operations || Please restart frontier-squid on RAL cvmfs stratum 1 || EGI
|-
| 136028 || cms || in progress || urgent || 10/07/2018 || 12/07/2018 || CMS_AAA WAN Access || Issues reading files at T1_UK_RAL_Disk || WLCG
|-
| 134685 || dteam || in progress || less urgent || 23/04/2018 || 09/07/2018 || Middleware || please upgrade perfsonar host(s) at RAL-LCG2 to CentOS7 || EGI
|-
| 124876 || ops || reopened || less urgent || 07/11/2016 || 28/06/2018 || Operations || [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk || EGI
|}
GGUS Tickets Closed Last week
{| class="wikitable"
|-
! Request id !! Affected vo !! Status !! Priority !! Date of creation !! Last update !! Type of problem !! Subject !! Scope
|-
| 136045 || lhcb || verified || very urgent || 11/07/2018 || 13/07/2018 || File Transfer || Connection issue from RAL FTS? || WLCG
|-
| 136002 || cms || solved || urgent || 09/07/2018 || 09/07/2018 || CMS_Facilities || T1_UK_RAL SE Xrootd read failure || WLCG
|-
| 135723 || lhcb || closed || top priority || 19/06/2018 || 12/07/2018 || File Transfer || lcgfts3 FTS server fails all transfers || WLCG
|-
| 135455 || cms || closed || less urgent || 31/05/2018 || 09/07/2018 || File Transfer || Checksum verification at RAL || EGI
|}

Availability Report

Target Availability for each site is 97.0% (Red: <90%, Orange: <97%)
{| class="wikitable"
|-
! Day !! Atlas !! Atlas-Echo !! CMS !! LHCB !! Alice !! OPS !! Comments
|-
| 2018-07-09 || 100 || 100 || 100 || 100 || 100 || 100 ||
|-
| 2018-07-10 || 100 || 100 || 97 || 100 || 100 || 100 ||
|-
| 2018-07-11 || 100 || 100 || 100 || 100 || 100 || 100 ||
|-
| 2018-07-12 || 100 || 100 || 98 || 100 || 100 || 100 ||
|-
| 2018-07-13 || 100 || 100 || 99 || 100 || 100 || 100 ||
|-
| 2018-07-14 || 100 || 100 || 100 || 100 || 100 || 100 ||
|-
| 2018-07-15 || 100 || 100 || 100 || 92 || 100 || 100 ||
|}
Hammercloud Test Report
Target Availability for each site is 97.0% (Red: <90%, Orange: <97%)
{| class="wikitable"
|-
! Day !! Atlas HC !! CMS HC !! Comment
|-
| 2018-07-10 || 100 || 99 ||
|-
| 2018-07-11 || 98 || 99 ||
|-
| 2018-07-12 || 97 || 99 ||
|-
| 2018-07-13 || 96 || 99 ||
|-
| 2018-07-14 || 100 || 91 ||
|-
| 2018-07-15 || 100 || 91 ||
|-
| 2018-07-16 || - || - ||
|}

Key: Atlas HC = Atlas HammerCloud (Queue RAL-LCG2_UCORE, Template 841); CMS HC = CMS HammerCloud
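For reference, a minimal Python sketch of the colour coding used in the two tables above (target 97.0%, Orange below 97%, Red below 90%). The "green" label for values at or above target is just a placeholder; the report leaves those cells unmarked.

  # Classify a daily availability figure using the thresholds stated above.
  def availability_band(value_percent):
      if value_percent < 90.0:
          return "red"      # below 90%
      if value_percent < 97.0:
          return "orange"   # below the 97.0% target
      return "green"        # at or above target (unmarked in the report)

  # Example: the LHCb figure for 2018-07-15 above.
  print(availability_band(92))  # -> orange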

Notes from Meeting.
  • GGUS ticket this morning regarding xrootd: failing requests were hitting the external gateway, which they shouldn't.
  • Lots of transfers show similar failures. All failing requests hitting the xrootd servers are from CMS; none are from Atlas.
  • There is not enough resource to handle the requests.
  • 1. If they talk to the internal gateway they may be redirected to the external one, though this is not confirmed.
  • 2. Mapping within the node/Docker container is failing: a config problem somewhere, but TB doesn't think it's our fault as Atlas is running OK with the same config.