Tier1 Operations Report 2018-04-16


RAL Tier1 Operations Report for 16th April 2018

Review of Issues during the period 4th April to the 16th April 2018.
  • This is an extended report covering the period from the 4th to the 11th April, when the normal meeting attendees were at GridPP40. The report is now also produced on a Monday, reflecting the new format of the Tier 1 Liaison meetings.


  • The delayed intervention to patch the Oracle databases took place successfully on 27th March.
  • At the start of April four newer disk servers were added to the Castor GenTape instance, and the older ones were set read-only ahead of withdrawal. A throughput problem last week (during the GridPP meeting) led to the older servers being temporarily reinstated.


  • The dual stacking of Echo took place on 27th February. Some problems followed; these were traced to a race condition in our system configuration utility (Quattor) that caused the IPv6 configuration not to be applied correctly in some cases. This has now been fixed and we believe the problem affecting the FTS service is resolved. While the problem was ongoing, some VOs stopped using our (RAL) FTS and managed their file transfers using other FTS servers. Some other dual-stacked services still need to be checked out; a minimal connectivity check is sketched below.
  • On 4th April a minor CEPH update was applied to Echo to fix the 'backfill' bug. This will make adding more hardware to Echo easier.
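
A minimal sketch (in Python) of how the remaining dual-stacked services could be checked: it attempts a TCP connection to a host over both IPv4 and IPv6 and reports which address families answer. The hostname and port below are placeholders, not real RAL endpoints.

  import socket

  def probe(host, port, family):
      """Try a TCP connect over one address family; True on success."""
      try:
          infos = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)
      except socket.gaierror:
          return False  # no A / AAAA record for this family
      for fam, socktype, proto, _canon, addr in infos:
          try:
              with socket.socket(fam, socktype, proto) as s:
                  s.settimeout(5)
                  s.connect(addr)
              return True
          except OSError:
              continue  # try the next address, if any
      return False

  if __name__ == "__main__":
      host, port = "fts.example.org", 8446  # hypothetical dual-stacked endpoint
      for label, fam in (("IPv4", socket.AF_INET), ("IPv6", socket.AF_INET6)):
          print(label, "OK" if probe(host, port, fam) else "FAILED")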


  • There was a problem mid-March when a disproportionately large number of GGUS tickets were raised, all with xrootd as the common denominator. After extensive investigation it was found that a firewall rule had been lost and consequently packets were being dropped. Once identified, the rule was quickly reinstated. (A simple probe for this failure mode is sketched below.)
  • The upgrade (replacement) of the RAL firewall is scheduled to take place on the morning of the 25th April. Hopefully this will fix problems we have seen with data flows to/from our worker nodes.
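
For reference, packets silently dropped by a missing firewall rule look different from an ordinary closed port: the connection times out instead of being refused. A minimal Python probe along those lines, assuming the standard xrootd port 1094 (the hostname is a placeholder):

  import socket

  def classify(host, port=1094, timeout=5.0):
      """Describe how a TCP connection attempt to host:port behaves."""
      try:
          with socket.create_connection((host, port), timeout=timeout):
              return "open"
      except socket.timeout:
          return "filtered: timed out, packets likely being dropped"
      except ConnectionRefusedError:
          return "closed: actively refused, so the host itself is reachable"
      except OSError as exc:
          return f"error: {exc}"

  if __name__ == "__main__":
      print(classify("xrootd-gateway.example.org"))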
Current operational status and issues
  • The problem of restricted data flows through the site firewall is still present. New firewall equipment has been delivered and its deployment is now scheduled for 25th April 2018.
Resolved Castor Disk Server Issues
  • None.
Ongoing Castor Disk Server Issues
  • gdss782 (atlasStripInput - D1T0) - Currently out of production awaiting investigation by the CASTOR and Fabric teams.
Limits on concurrent batch system jobs.
  • CMS Multicore 550
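
For illustration, the sketch below checks current usage against such a cap, assuming the batch farm runs HTCondor and the htcondor Python bindings are available; the ClassAd constraint used to pick out CMS multicore jobs is a guess, not RAL's actual configuration.

  import htcondor

  LIMIT = 550  # the CMS multicore cap quoted above

  schedd = htcondor.Schedd()
  # JobStatus == 2 means "running"; matching on AcctGroup is a
  # hypothetical way of tagging CMS multicore jobs
  running = schedd.query(
      constraint='JobStatus == 2 && RequestCpus > 1 && regexp("cms", AcctGroup)',
      projection=["ClusterId", "ProcId"],
  )
  used = len(running)
  print(f"CMS multicore: {used}/{LIMIT} running ({LIMIT - used} headroom)")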
Notable Changes made since the last meeting.
  • Four disk servers have been deployed into Castor GenTape and the older ones withdrawn from service.
  • New Docker images rolled out across entire batch farm. (These fix a security issue with Singularity).
  • Rolled out CMS xrootd re-direction on the Echo external gateways, matching the xrootd re-directors on the Worker Nodes.
  • At the time of the meeting an update is being applied to Echo which patches the 'backfill' bug.
Entries in GOC DB starting since the last report.
  • None
Declared in the GOC DB
Downtime id Hosts Start Time End Time Severity Description
25175 All RAL 18/11/2015 28/03/2018 SCHEDULED WARNING Upgrade (Replacement) of Site Firewall. Within this time window there is expected to be a short break in connectivity while connections are moved across to the new firewall.
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
  • Replacement (upgrade) of the RAL firewall, scheduled for a few weeks' time.

Listing by category:

  • Echo
    • Apply a minor CEPH update to fix the "backfill" bug, then increase the number of placement groups and add more capacity (see the sketch after this list).
  • Castor:
    • Update systems to use SL7, configured by Quattor/Aquilon. (Tape servers done.)
    • Move to generic Castor headnodes.
  • Networking
    • Extend the number of services on the production network with IPv6 dual stack. (Done for Perfsonar, FTS3, all squids and the CVMFS Stratum-1 servers).
    • Replacement (upgrade) of RAL firewall.
  • Internal
    • DNS servers will be rolled out within the Tier1 network.
  • Infrastructure
    • Testing of power distribution boards in the R89 machine room is being scheduled for late July / early August. The effect of this on our services is being discussed.
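
On the placement-group point, the usual Ceph rule of thumb (pg_num ≈ OSDs × 100 / replicas, rounded up to a power of two) can be sketched in Python as below; the OSD count and replication factor are illustrative, not Echo's actual figures.

  def suggested_pg_num(num_osds, replicas, pgs_per_osd=100):
      """Ceph rule of thumb: OSDs * 100 / replicas, next power of two."""
      raw = num_osds * pgs_per_osd / replicas
      pg = 1
      while pg < raw:
          pg *= 2
      return pg

  print(suggested_pg_num(num_osds=1000, replicas=3))  # -> 65536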
Open GGUS Tickets (Snapshot during morning of meeting)
Request id Affected vo Status Priority Date of creation Last update Type of problem Subject
117683 none on hold less urgent 18/11/2015 28/03/2018 Information System CASTOR at RAL not publishing GLUE 2
134494 Atlas waiting for reply top priority 18/11/2015 11/04/2018 Storage Systems json space reporting not updated
134488 snoplus.snolab.ca waiting for reply urgent 10/04/2018 12/04/2018 File Transfer FTS failures to RAL
134468 CMS in progress urgent 09/04/2018 12/04/2018 CMS_AAA WAN Access Xrootd redirector not seeing some files in ECHO
134342 CMS in progress urgent 29/03/2018 05/04/2018 CMS_Data Transfers CERN->MSS rate test
133992 Atlas in progress less urgent 12/03/2018 15/03/2018 File Transfer RAL-LCG2-ECHO: No such file or directory
132589 lhcb in progress very urgent 21/12/2017 10/04/2018 Local Batch System Killed pilots at RAL
127597 cms in progress urgent 07/04/2017 30/03/2018 File Transfer Check networking and xrootd RAL-CERN performance
124876 ops in progress less urgent 07/11/2017 13/11/2017 Operations [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
GGUS Tickets Closed Last week
Request id Affected vo Status Priority Date of creation Last update Type of problem Subject Scope
134438 snoplus.snolab.ca closed urgent 06/04/2018 06/04/2018 File Transfer FTS - transfer rate success drop WLCG
134426 none closed very urgent 05/04/2018 10/04/2018 VO Specific Software /cvmfs/larsoft.opensciencegrid.org not mounting WLCG
Availability Report
Target Availability for each site is 97.0%; Red < 90%, Orange < 97% (this rule is sketched after the table).
2018-04-04 100 100 100 100 100 100
2018-04-05 100 100 100 98 100 100
2018-04-06 100 100 100 100 100 100
2018-04-07 100 98 100 100 100 100
2018-04-08 100 100 100 100 100 100
2018-04-09 100 100 100 100 100
2018-04-10 100 100 100 98 100
2018-04-11 100 100 100 100 100
2018-04-12 100 100 100 99 100 100
2018-04-13 100 100 100 100 100 100
2018-04-14 100 100 100 100 100 100
2018-04-15 100 100 100 94 100 100
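
The colour-coding rule quoted above, stated compactly as a minimal Python sketch:

  def colour(availability, target=97.0):
      """Red below 90%, Orange below the 97% target, otherwise on target."""
      if availability < 90.0:
          return "Red"
      if availability < target:
          return "Orange"
      return "OK"

  print(colour(94))    # -> Orange
  print(colour(89.5))  # -> Red
  print(colour(100))   # -> OK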
Hammercloud Test Report
Target Availability for each site is 97.0%; Red < 90%, Orange < 97%.

Key: Atlas HC = Atlas HammerCloud (Queue RAL-LCG2_UCORE, Template 841); CMS HC = CMS HammerCloud

Day Atlas HC CMS HC Comment
2018/04/05 80 99
2018/04/06 100 100
2018/04/07 100 100
2018/04/08 100 99
2018/04/09 97 100
2018/04/10 100 99
2018/04/11 100 100
2018/04/12 100 99
2018/04/13 100 100
2018/04/14 100 100
2018/04/15 100 100
Notes from Meeting.
  • None yet