Tier1 Operations Report 2018-03-07

From GridPP Wiki

RAL Tier1 Operations Report for 28th February 2018

Review of Issues during the week 28th February 2018 to 7th March 2018
  • The Castor intervention scheduled for last Thursday (1st March) was cancelled owing to staff availability during the poor weather. (It was to apply Oracle patches to the database systems behind Castor.) It is being re-scheduled, possibly for towards the end of March.
  • As planned, the Echo gateways were dual stacked (so providing IPv6 access) last Tuesday (27th Feb). The change itself went well; however, it revealed problems with transfers to/from Echo via our FTS service. One issue was identified: the load balancers in front of the FTS system were not dual stacked. This was fixed quickly, but the problem persists. Some VOs are using other FTS servers as a workaround. (LHCb seems to be the only VO making significant use of our FTS server at the moment.)
  • Following last week's problem with two of the three connections that make up the OPN link, errors and some packet loss were seen on one of those links. The problem was at the far end of the link and was fixed by Janet towards the end of yesterday (Tuesday).
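The FTS issue above came down to a service alias that published only IPv4 addresses after the backends went dual stack. A quick way to spot that class of misconfiguration is to check whether a hostname resolves to both A (IPv4) and AAAA (IPv6) records. A minimal sketch in Python; the hostname in the usage comment is illustrative, not the actual RAL FTS alias:

```python
import socket

def dual_stack_check(host, port=443):
    """Classify a host as dual-stack, IPv4-only or IPv6-only.

    A service behind a load balancer is only reachable over IPv6 if the
    balancer's public name publishes an AAAA record as well as an A record.
    """
    families = set()
    try:
        for family, *_ in socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP):
            families.add(family)
    except socket.gaierror:
        return "unresolvable"
    has4 = socket.AF_INET in families
    has6 = socket.AF_INET6 in families
    if has4 and has6:
        return "dual-stack"
    return "IPv6-only" if has6 else "IPv4-only"

# Hypothetical alias for illustration only:
# print(dual_stack_check("fts3.example.org"))
```

Note that this only tests name resolution; a name can publish an AAAA record while the service behind it still fails to answer over IPv6, so an end-to-end transfer test is still needed.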
Current operational status and issues
  • The problem of restricted data flows through the site firewall is still present. New firewall equipment has been purchased and delivered and is awaiting deployment.
  • Following the Echo gateway dual-stack upgrade, the RAL FTS service is still broken. The CERN FTS service is currently being used as a stop-gap.
Resolved Castor Disk Server Issues
  • gdss618 (GenTape d0t1) failed late Monday morning. A faulty disk drive was replaced.
Ongoing Castor Disk Server Issues
  • None
Limits on concurrent batch system jobs.
  • CMS Multicore 550
Notable Changes made since the last meeting.
  • Nine disk servers from the Dell 2015 batch were taken out of production for short periods yesterday (Tuesday 6th March) for BIOS update and to alter fan speeds to improve cooling. These servers were located in D0T1 areas (tape buffers).
Entries in GOC DB starting since the last report.
  • None
Declared in the GOC DB
  • No ongoing downtime
  • No downtime scheduled in the GOCDB for next 2 weeks
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Listing by category:

  • Echo
    • Apply a minor Ceph update to fix the "backfill" bug; increase the number of placement groups and add more capacity.
  • Castor:
    • Update systems to use SL7, configured by Quattor/Aquilon. (Tape servers done.)
    • Move to generic Castor headnodes.
  • Networking
    • Extend the number of services on the production network with IPv6 dual stack. (Done for Perfsonar, FTS3, all squids and the CVMFS Stratum-1 servers).
    • Replacement (upgrade) of RAL firewall.
  • Internal
    • DNS servers will be rolled out within the Tier1 network.
  • Infrastructure
    • Testing of the power distribution boards in the R89 machine room is being scheduled for late July / early August. The effect of this on our services is being discussed.
Open GGUS Tickets (Snapshot during morning of meeting)
Request id | Affected VO | Status | Priority | Date of creation | Last update | Type of problem | Subject | Scope
117683 | none | on hold | less urgent | 18/11/2015 | 03/01/2018 | Information System | CASTOR at RAL not publishing GLUE 2 | EGI
124876 | ops | on hold | less urgent | 07/11/2016 | 13/11/2017 | Operations | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk | EGI
127597 | cms | on hold | urgent | 07/04/2017 | 29/01/2018 | File Transfer | Check networking and xrootd RAL-CERN performance | EGI
132589 | lhcb | in progress | very urgent | 21/12/2017 | 23/02/2018 | Local Batch System | Killed pilots at RAL | WLCG
133619 | cms | waiting for reply | top priority | 21/02/2018 | 05/03/2018 | CMS_Central Workflows | T1_UK_RAL Unmerged files missing | WLCG
133717 | cms | in progress | very urgent | 27/02/2018 | 27/02/2018 | CMS_Data Transfers | RAL FTS3 Service: Significant Drop in Transfer Efficiency | WLCG
133719 | atlas | in progress | urgent | 27/02/2018 | 27/02/2018 | File Transfer | Transfers to RAL-LCG2-ECHO failing | WLCG
133752 | atlas | in progress | very urgent | 01/03/2018 | 06/03/2018 | File Transfer | RAL FTS service appears broken | WLCG
133764 | snoplus.snolab.ca | in progress | very urgent | 01/03/2018 | 06/03/2018 | Information System | BDII missing SFU information | EGI
133842 | snoplus.snolab.ca | waiting for reply | less urgent | 05/03/2018 | 06/03/2018 | Other | File Stuck on RAL | EGI
GGUS Tickets Closed Last week
Request id | Affected VO | Status | Priority | Date of creation | Last update | Type of problem | Subject | Scope
133399 | atlas | closed | less urgent | 09/02/2018 | 05/03/2018 | File Transfer | Transfers to RAL-LCG2-ECHO fail with "Address already in use" | WLCG
133475 | none | closed | urgent | 14/02/2018 | 01/03/2018 | CMS_Data Transfers | setup of /store/test/rucio | WLCG
Availability Report
2018-02-28 100 100 100 100 100 100
2018-03-01 100 100 100 100 100 100
2018-03-02 100 100 100 100 100 100
2018-03-03 100 100 100 100 100 100
2018-03-04 100 100 100 100 100 100
2018-03-05 100 100 100 100 100 100
2018-03-06 100 100 100 99 100 100
Hammercloud Test Report

Key: Atlas HC = Atlas HammerCloud (Queue RAL-LCG2_UCORE, Template 841); CMS HC = CMS HammerCloud

Day | Atlas HC | CMS HC | Comment
2018/02/28 | 96 | 100 |
2018/03/01 | 90 | 100 |
2018/03/02 | 96 | 98 |
2018/03/03 | 96 | 100 |
2018/03/04 | 86 | 100 |
2018/03/05 | 86 | 99 |
2018/03/06 | 98 | 99 |
  • CMS increased their limit on Echo to 1.5 PBytes on Monday (5th March). No change was needed at the Tier1 end.
Notes from Meeting.