Tier1 Operations Report 2018-01-31


RAL Tier1 Operations Report for 31st January 2018

Review of Issues during the week 24th to 31st January 2018
  • Last week we reported an ongoing problem with the Atlas Castor instance. Overnight Tuesday to Wednesday last week (23/24 Jan) the Atlas Castor instance was declared down and the affected database table (the “diskcopy” table and its associated indexes) was rebuilt. During Wednesday the Atlas Castor instance returned to normal operation. A large amount of what is effectively dark data (around 1 PByte) remains to be deleted from the disk servers; this will take some time and will run as a steady background operation.
  • There was a problem with the system that runs the tape library control software overnight Wed/Thu (24/25 Jan). Staff were called late Wednesday evening but were unable to restore the system that night. Overnight we were unable to mount tapes, effectively blocking tape access (although writes to the disk buffers in front of the tape system, and reads of any data already in those buffers, carried on). The fault on the server was resolved on Thursday morning and normal tape service resumed.
  • There was a problem with LHCb data access over the weekend. LHCb submitted a GGUS alarm ticket early Sunday morning (28th Jan) and the on-call team responded. The cause was routine maintenance work to re-balance disk space in the LHCb disk-only data pool; this was stopped and the problem cleared during Sunday morning.
  • We await further updates regarding an ongoing problem with one of the BMS (Building Management Systems) in the R89 machine room. This has an intermittent fault.
Current operational status and issues
  • The problem of restricted data flows through the site firewall is still present.
Resolved Castor Disk Server Issues
  • gdss736 (lhcbDst - D1T0) - Back in production RO
  • gdss737 (lhcbDst - D1T0) - Back in production RO
Ongoing Castor Disk Server Issues
  • gdss762 (atlasStripInput - D0T1) – Removed again from prod for re-installation
  • gdss761 (lhcbDst - D1T0) – Crashed yesterday lunchtime (Tuesday 30th Jan). Still under investigation.
Limits on concurrent batch system jobs.
  • CMS Multicore 550
Notable Changes made since the last meeting.
  • The Upgrade of Echo to the latest CEPH release (“Luminous”) was carried out successfully during the second half of last week. This was done as a rolling update with the service available throughout.
  • ALICE VOBOXes (lcgvo07 and lcgvo09) and LHCb VOBOX (lcgvo10) are now dual stack IPv4/6.
  • The Hyper-K VO has been enabled on the batch farm.
  • Atlas have moved their FTS transfers from our "test" FTS service to the production one. This change was made at our request and effectively consolidates us onto a single production FTS service.
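The dual-stack status noted for the VOBOXes above can be verified with a short resolver check. This is a generic sketch, not part of any RAL procedure; the hostname shown in the comment is one of the VOBOXes mentioned above, used purely as an example:

```python
import socket

def address_families(host):
    """Return the set of IP families ('IPv4', 'IPv6') a hostname resolves to."""
    names = {socket.AF_INET: "IPv4", socket.AF_INET6: "IPv6"}
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        # Name does not resolve at all.
        return set()
    return {names[info[0]] for info in infos if info[0] in names}

# A dual-stack host such as lcgvo07.gridpp.rl.ac.uk should report both
# families once its AAAA record is published alongside the A record.
```

The same check applies to any of the services being migrated to dual stack in the "Listing by category" section below.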
Entries in GOC DB starting since the last report.
  • None
Declared in the GOC DB
  • srm-alice.gridpp.rl.ac.uk, srm-atlas.gridpp.rl.ac.uk, srm-cms.gridpp.rl.ac.uk, srm-lhcb.gridpp.rl.ac.uk, srm-mice.gridpp.rl.ac.uk – UNSCHEDULED WARNING, 24/01/2018 22:00 to 25/01/2018 10:03 (12 hours and 3 minutes). Reason: We have a problem with our tape library controller and cannot mount tapes. Writes to tape will succeed in that they will be stored on the disk buffers in front of the tape system. Recalls from tape will fail (unless the file is already on the disk buffer). Access to disk-only storage is unaffected.
  • srm-atlas.gridpp.rl.ac.uk – UNSCHEDULED OUTAGE, 23/01/2018 15:45 to 24/01/2018 14:00 (22 hours and 15 minutes). Reason: Emergency downtime of Castor Atlas while rebuilding some database tables.
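The durations quoted in the GOC DB entries follow directly from the start and end timestamps; for example, for the tape-library warning:

```python
from datetime import datetime

# Timestamps from the unscheduled WARNING entry above (dd/mm/yyyy hh:mm).
fmt = "%d/%m/%Y %H:%M"
start = datetime.strptime("24/01/2018 22:00", fmt)
end = datetime.strptime("25/01/2018 10:03", fmt)

# Whole minutes between the two timestamps, split into hours and minutes.
hours, minutes = divmod(int((end - start).total_seconds()) // 60, 60)
print(f"{hours} hours and {minutes} minutes")  # 12 hours and 3 minutes
```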
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Ongoing or Pending - but not yet formally announced:

  • Move IPv6 connectivity - currently on 10Gbit links - to share the higher bandwidth (40Gbit) links used by IPv4.

Listing by category:

  • Castor:
    • Update systems to use SL7, configured by Quattor/Aquilon. (Tape servers done.)
    • Move to generic Castor headnodes.
  • Networking
    • Extend the number of services on the production network with IPv6 dual stack. (Done for Perfsonar, FTS3, all squids and the CVMFS Stratum-1 servers).
    • Move IPv6 connectivity - currently on 10Gbit links - to share the higher bandwidth (40Gbit) links used by IPv4.
    • Replacement (upgrade) of RAL firewall.
  • Internal
    • DNS servers will be rolled out within the Tier1 network.
  • Infrastructure
    • Testing of power distribution boards in the R89 machine room is being scheduled for late July / early August. The effect of this on our services is being discussed.
Open GGUS Tickets (Snapshot during morning of meeting)
Request id Affected vo Status Priority Date of creation Last update Type of problem Subject Scope
117683 none on hold less urgent 18/11/2015 03/01/2018 Information System CASTOR at RAL not publishing GLUE 2 EGI
124876 ops on hold less urgent 07/11/2016 13/11/2017 Operations [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk EGI
127597 cms on hold urgent 07/04/2017 29/01/2018 File Transfer Check networking and xrootd RAL-CERN performance EGI
132589 lhcb in progress very urgent 21/12/2017 30/01/2018 Local Batch System Killed pilots at RAL WLCG
133139 ops in progress less urgent 30/01/2018 30/01/2018 Operations [Rod Dashboard] Issue detected : org.nordugrid.ARC-CE-sw-csh-ops@arc-ce01.gridpp.rl.ac.uk EGI
133154 ops in progress less urgent 31/01/2018 31/01/2018 Operations [Rod Dashboard] Issue detected : org.nordugrid.ARC-CE-sw-csh-ops@arc-ce02.gridpp.rl.ac.uk EGI
GGUS Tickets Closed Last week
Request id Affected vo Status Priority Date of creation Last update Type of problem Subject
131815 t2k.org verified less urgent 13/11/2017 29/01/2018 Storage Systems Extremely long download times for T2K files on tape at RAL
132712 other solved less urgent 04/01/2018 30/01/2018 Other support for the hyperk VO (RAL-LCG2)
132802 cms solved urgent 11/01/2018 26/01/2018 CMS_AAA WAN Access Low HC xrootd success rates at T1_UK_RAL
132844 atlas verified urgent 14/01/2018 25/01/2018 Storage Systems UK RAL-LCG2 DATADISK transfer errors "DESTINATION OVERWRITE srm-ifce err:"
132935 atlas solved less urgent 18/01/2018 24/01/2018 Storage Systems UK RAL-LCG2: deletion errors
132963 atlas verified top priority 21/01/2018 24/01/2018 Middleware RAL arc-ce03 flakey
133046 ops verified less urgent 25/01/2018 29/01/2018 Operations [Rod Dashboard] Issue detected : org.nordugrid.ARC-CE-sw-csh-ops@arc-ce02.gridpp.rl.ac.uk
133066 atlas solved top priority 26/01/2018 30/01/2018 File Transfer Unable to contact fts3-test.gridpp.rl.ac.uk
133082 lhcb verified top priority 27/01/2018 28/01/2018 File Transfer Data transfer problem at RAL
133092 atlas solved top priority 29/01/2018 29/01/2018 Other RAL FTS server in troubles
Availability Report
Day OPS Alice Atlas CMS LHCb Atlas Echo Comment
24/01/18 100 100 52 100 100 100
25/01/18 100 100 100 100 100 100
26/01/18 100 100 100 100 100 100
27/01/18 100 100 100 100 100 100
28/01/18 100 100 100 100 100 100
29/01/18 100 100 100 100 100 100
30/01/18 100 100 100 100 100 100
Hammercloud Test Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_UCORE, Template 841); CMS HC = CMS HammerCloud

Day Atlas HC CMS HC Comment
24/01/18 100 100
25/01/18 100 100
26/01/18 100 100
27/01/18 0 100 Atlas HC - no tests run in time bin
28/01/18 100 99
29/01/18 100 100
30/01/18 100 100
Notes from Meeting.
  • The Echo CEPH update to the "Luminous" version does not include a fix for the 'backfill' bug that affected Echo a couple of months ago when new hardware was added. The fix is still expected in a forthcoming minor version update.