Tier1 Operations Report 2018-02-20


RAL Tier1 Operations Report for 21st February 2018

Review of Issues during the week 14th February 2018 to 21st February 2018
  • On Wednesday (21st) the IPv6 connections, both to the RAL core network and via the ‘bypass’ route, will be moved to share the links used by the IPv4 connections. We have been running with separate 10Gbit physical links for IPv6, as this aided the investigation of problems. However, the time has come to move IPv6 onto the 40Gbit connections. This is being done ahead of a change to enable IPv6 (dual stack) on the Echo gateways, i.e. providing IPv6 access to Echo (a simple dual-stack reachability check is sketched below).
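Once dual stack is enabled on a gateway, it should publish both an A and an AAAA record and accept connections over both address families. The sketch below only illustrates that check and is not part of the change itself; the hostname and port shown are placeholders, not the actual Echo gateway endpoints.
 # Minimal dual-stack reachability check (illustrative sketch only).
 # The hostname and port are placeholders, not the real Echo gateway endpoint.
 import socket
 
 HOST = "echo-gateway.example.rl.ac.uk"   # hypothetical gateway alias
 PORT = 1094                              # assumed data-access port
 
 def check(family, label):
     # Resolve the name for this address family and attempt a TCP connection.
     try:
         infos = socket.getaddrinfo(HOST, PORT, family, socket.SOCK_STREAM)
     except socket.gaierror as err:
         print("%s: no address published (%s)" % (label, err))
         return
     addr = infos[0][4]
     s = socket.socket(family, socket.SOCK_STREAM)
     s.settimeout(5)
     try:
         s.connect(addr)
         print("%s: reachable at %s" % (label, addr[0]))
     except socket.error as err:
         print("%s: %s published but not reachable (%s)" % (label, addr[0], err))
     finally:
         s.close()
 
 check(socket.AF_INET, "IPv4")
 check(socket.AF_INET6, "IPv6")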
Current operational status and issues
  • The problem of data flows through the site firewall being restricted is still present. New firewall hardware has been purchased and delivered and is awaiting deployment.
Resolved Castor Disk Server Issues
  • None
Ongoing Castor Disk Server Issues
  • None
Limits on concurrent batch system jobs.
  • CMS Multicore 550
Notable Changes made since the last meeting.
  • Move the IPv6 connectivity - currently on 10Gbit links - to share the higher bandwidth (40Gbit) links used by IPv4. This was approved at the CAB meeting (13/2/18) and will be implemented on 21/2/2018.
Entries in GOC DB starting since the last report.
  • None
Declared in the GOC DB
  • No downtime scheduled in the GOCDB between 2018-02-14 and 2018-02-21
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Listing by category:

  • Echo
    • Enable IPv6 (dual stack IPv4/6) on Echo gateways
  • Castor:
    • Update systems to use SL7, configured by Quattor/Aquilon. (Tape servers done.)
    • Move to generic Castor headnodes.
  • Networking
    • Extend IPv6 dual stack to more services on the production network. (Done for Perfsonar, FTS3, all squids and the CVMFS Stratum-1 servers.)
    • Replacement (upgrade) of RAL firewall.
  • Internal
    • DNS servers will be rolled out within the Tier1 network.
  • Infrastructure
    • Testing of power distribution boards in the R89 machine room is being scheduled for some time late July / early August. The effect of this on our services is being discussed.
Open GGUS Tickets (Snapshot during morning of meeting)
Request id | Affected VO | Status | Priority | Date of creation | Last update | Type of problem | Subject | Scope
117683 | none | on hold | less urgent | 18/11/2015 | 03/01/2018 | Information System | CASTOR at RAL not publishing GLUE 2 | EGI
124876 | ops | on hold | less urgent | 07/11/2016 | 13/11/2017 | Operations | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk | EGI
127597 | cms | on hold | urgent | 07/04/2017 | 29/01/2018 | File Transfer | Check networking and xrootd RAL-CERN performance | EGI
132589 | lhcb | in progress | very urgent | 21/12/2017 | 31/01/2018 | Local Batch System | Killed pilots at RAL | WLCG
GGUS Tickets Closed Last week
Request id | Affected VO | Status | Priority | Date of creation | Last update | Type of problem | Subject | Scope
133265 | atlas | closed | top priority | 04/02/2018 | 19/02/2018 | File Transfer | Transfers to all UK, NL, and ND sites are failing with "No space left on device" | WLCG
133399 | atlas | solved | less urgent | 09/02/2018 | 17/02/2018 | File Transfer | Transfers to RAL-LCG2-ECHO fail with "Address already in use" | WLCG
133421 | snoplus.snolab.ca | solved | urgent | 12/02/2018 | 20/02/2018 | File Transfer | Failed Transfers since Friday 10:00 | EGI
133475 | none | solved | urgent | 14/02/2018 | 15/02/2018 | CMS_Data Transfers | setup of /store/test/rucio | WLCG
Availability Report
2018-02-15 100 100 100 100 100 100
2018-02-16 100 100 100 100 100 100
2018-02-17 100 100 100 100 100 100
2018-02-18 100 100 100 100 100 100
2018-02-19 100 100 100 100 100 100
2018-02-20 100 100 100 100 100 100
2018-02-21 100 100 100 100 100 100
Hammercloud Test Report

Key: Atlas HC = Atlas HammerCloud (Queue RAL-LCG2_UCORE, Template 841); CMS HC = CMS HammerCloud

Day | Atlas HC | CMS HC | Comment
2018/02/14 | 91 | 100 |
2018/02/15 | 98 | 97 |
2018/02/16 | 92 | 99 |
2018/02/17 | 100 | 99 |
2018/02/18 | 94 | 98 |
2018/02/19 | 100 | 99 |
2018/02/20 | 100 | 100 |
  • CMS is having issues with debug transfers (and only debug transfers); Catalin to investigate?
Notes from Meeting.