Tier1 Operations Report 2018-03-14

RAL Tier1 Operations Report for 14th March 2018

Review of Issues during the week 7th to 14th March 2018
  • The Castor intervention originally scheduled for 1st March, which was cancelled owing to staff availability during the poor weather, has been re-scheduled for Tuesday 27th March.
  • As reported previously, there have been problems with transfers to/from Echo via our FTS service since the Echo gateways were enabled for IPv6. At the start of this week a race condition in our system configuration utility (Quattor) was found to be preventing the IPv6 configuration from being applied correctly in some cases. This has now been fixed and we believe the problem affecting the FTS service is resolved. While the problem persisted some VOs stopped using our (RAL) FTS and managed their file transfers via other FTS servers. Some other dual-stacked services still need to be checked; see the reachability sketch after this list.
  • We are investigating transitory problems on worker nodes that prevent Docker containers from starting.
  • The aliceDisk pool in Castor has been almost full for a couple of weeks now. The ALICE VO has been informed but there are still only a few hundred GB free per disk server.
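To check out the remaining dual-stacked services, something along the lines of the sketch below could be used. This is only an illustration: the host names and ports are placeholders rather than the actual Tier1 service list, and the test merely confirms that a host resolves and accepts a TCP connection over both IPv4 and IPv6.

 #!/usr/bin/env python3
 """Minimal dual-stack reachability check; hostnames and ports below are placeholders."""
 import socket

 # Hypothetical services to verify; replace with the real dual-stacked endpoints.
 SERVICES = [
     ("lcgfts3.gridpp.rl.ac.uk", 8446),    # example FTS3 endpoint
     ("gridftp.echo.stfc.ac.uk", 2811),    # example Echo gateway
 ]

 def reachable(host, port, family):
     """True if the host resolves in this address family and accepts a TCP connection."""
     try:
         infos = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)
     except socket.gaierror:
         return False                      # no A / AAAA record, or resolution failed
     for af, stype, proto, _, addr in infos:
         try:
             with socket.socket(af, stype, proto) as s:
                 s.settimeout(5)
                 s.connect(addr)
             return True                   # connected over this address family
         except OSError:
             continue
     return False

 if __name__ == "__main__":
     for host, port in SERVICES:
         v4 = reachable(host, port, socket.AF_INET)
         v6 = reachable(host, port, socket.AF_INET6)
         print("%-30s IPv4: %-5s IPv6: %s" % (host, v4, v6))

A host reporting "IPv4: True IPv6: False" would typically be showing the pattern seen when the IPv6 configuration has not been applied.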
Current operational status and issues
  • The problem of data flows through the site firewall being restricted is still present. New firewall equipment has been delivered and installation is scheduled for next week (21st March).
Resolved Castor Disk Server Issues
  • None
Ongoing Castor Disk Server Issues
  • None
Limits on concurrent batch system jobs.
  • CMS Multicore 550
Notable Changes made since the last meeting.
  • Updated HTCondor to version 8.6.9 on all worker nodes.
  • New Docker containers rolled out via arc-ce02. These contain the latest singularity and singularity-runtime packages (requested by WLCG); a simple version check is sketched after this list.
  • IPv6 configuration problem on some Tier1 systems resolved.
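To confirm a worker node is running the updated runtime, a check such as the one below could be run. It is a sketch only: it asks the singularity binary for its version and compares it against a minimum that is a placeholder here, not the version actually requested by WLCG.

 #!/usr/bin/env python3
 """Sketch: report the singularity version installed on a worker node."""
 import re
 import subprocess

 MINIMUM = (2, 4, 2)   # placeholder minimum version, not the WLCG-requested one

 def singularity_version():
     """Return the installed singularity version as a tuple of ints, or None if absent."""
     try:
         out = subprocess.check_output(["singularity", "--version"],
                                       universal_newlines=True)
     except (OSError, subprocess.CalledProcessError):
         return None
     m = re.search(r"(\d+)\.(\d+)\.(\d+)", out)
     return tuple(int(x) for x in m.groups()) if m else None

 if __name__ == "__main__":
     version = singularity_version()
     if version is None:
         print("singularity not found")
     elif version < MINIMUM:
         print("singularity %s is older than %s" % (version, MINIMUM))
     else:
         print("singularity %s OK" % (version,))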
Entries in GOC DB starting since the last report.
  • None
Declared in the GOC DB
Service Scheduled? Outage/At Risk Start End Duration Reason
All Castor SCHEDULED OUTAGE 27/03/2018 10:00 27/03/2018 16:00 6 hours Outage to Castor Storage System while back-end databases are patched. During the intervention both Castor disk and tape will be unavailable.
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
  • Replacement (upgrade) of RAL firewall scheduled for Wednesday 21st March.

Listing by category:

  • Echo
    • Apply a minor Ceph update to fix the "backfill" bug. Increase the number of placement groups and add more capacity (the placement-group step is sketched after this list).
  • Castor:
    • Update systems to use SL7, configured by Quattor/Aquilon. (Tape servers done.)
    • Move to generic Castor headnodes.
  • Networking
    • Extend IPv6 dual stack to more services on the production network. (Done for Perfsonar, FTS3, all squids and the CVMFS Stratum-1 servers.)
    • Replacement (upgrade) of RAL firewall.
  • Internal
    • DNS servers will be rolled out within the Tier1 network.
  • Infrastructure
    • Testing of the power distribution boards in the R89 machine room is being scheduled for late July / early August. The effect of this on our services is being discussed.
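For the Echo placement-group item above, the change would take roughly the shape below. This is a sketch only: the pool name and target count are placeholders rather than the values planned for Echo, and in practice the increase would be applied in stages with the cluster allowed to rebalance between steps.

 #!/usr/bin/env python3
 """Sketch of raising the placement-group count for a Ceph pool; names and values are placeholders."""
 import subprocess

 POOL = "example.rgw.buckets.data"   # placeholder pool name, not an actual Echo pool
 TARGET_PGS = 4096                   # placeholder target placement-group count

 def set_pg_count(pool, pgs):
     """Raise pg_num and pgp_num for the pool using the standard ceph CLI."""
     for key in ("pg_num", "pgp_num"):
         subprocess.run(["ceph", "osd", "pool", "set", pool, key, str(pgs)],
                        check=True)

 if __name__ == "__main__":
     set_pg_count(POOL, TARGET_PGS)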
Open GGUS Tickets (Snapshot during morning of meeting)
Request id Affected vo Status Priority Date of creation Last update Type of problem Subject Scope
117683 none on hold less urgent 18/11/2015 09/03/2018 Information System CASTOR at RAL not publishing GLUE 2 EGI
124876 ops on hold less urgent 07/11/2016 13/11/2017 Operations [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk EGI
127597 cms on hold urgent 07/04/2017 29/01/2018 File Transfer Check networking and xrootd RAL-CERN performance EGI
132589 lhcb in progress very urgent 21/12/2017 12/03/2018 Local Batch System Killed pilots at RAL WLCG
133619 cms waiting for reply top priority 21/02/2018 12/03/2018 CMS_Central Workflows T1_UK_RAL Unmerged files missing WLCG
133717 cms in progress very urgent 27/02/2018 07/03/2018 CMS_Data Transfers RAL FTS3 Service: Significant Drop in Transfer Efficiency WLCG
133764 snoplus.snolab.ca in progress very urgent 01/03/2018 08/03/2018 Information System BDII missing SFU information EGI
133992 atlas in progress less urgent 12/03/2018 14/03/2018 File Transfer RAL-LCG2-ECHO: No such file or directory EGI
GGUS Tickets Closed Last week
Request id Affected vo Status Priority Date of creation Last update Type of problem Subject Scope
133719 atlas solved urgent 27/02/2018 14/03/2018 File Transfer Transfers to RAL-LCG2-ECHO failing WLCG
133752 atlas solved very urgent 01/03/2018 14/03/2018 File Transfer RAL FTS service appears broken WLCG
133842 snoplus.snolab.ca solved less urgent 05/03/2018 08/03/2018 Other File Stuck on RAL EGI
133997 lhcb verified urgent 13/03/2018 14/03/2018 File Access Bad data was encountered WLCG
Availability Report
Day ALICE ATLAS ATLAS-ECHO CMS LHCb OPS Comment
2018-03-07 100 100 100 99 100 100
2018-03-08 100 100 100 98 100 100
2018-03-09 100 100 100 98 100 100
2018-03-10 100 100 100 97 100 100
2018-03-11 100 100 100 100 100 100
2018-03-12 100 100 100 100 100 100
2018-03-13 100 98 100 100 100 100
Hammercloud Test Report

Key: Atlas HC = Atlas HammerCloud (Queue RAL-LCG2_UCORE, Template 841); CMS HC = CMS HammerCloud

Day Atlas HC CMS HC Comment
2018/03/07 100 100
2018/03/08 94 100
2018/03/09 90 99
2018/03/10 69 100
2018/03/11 82 99
2018/03/12 91 100
2018/03/13 94 100
Notes from Meeting.
  • LHCb reported that the recent reprocessing campaign went well. There was no recurrence of the problems accessing Castor files experienced in previous such campaigns.