Tier1 Operations Report 2017-05-24


RAL Tier1 Operations Report for 24th May 2017

Review of Issues during the week 17th to 24th May 2017.
  • There is a specific problem with Castor affecting LHCb: a TURL returned by the SRM does not always work when used for xroot access, owing to an incorrect hostname in the TURL. This is now largely understood although not yet fixed. (A sketch of one possible check is given after this list.)
  • A problem of ARP poisoning on the Tier1 network is affecting some monitoring used by CEPH ECHO. Work is ongoing to identify its source and cause.
  • There is a problem on the site firewall that is affecting some specific data flows. It came to light while investigating videoconferencing problems, and it is not yet clear whether it is having any effect on our services.
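The LHCb TURL issue above can be illustrated with a small check. The following is a minimal sketch of one possible sanity check (does the hostname in the returned TURL resolve?), not the diagnosis tooling actually in use at RAL. It assumes the gfal2 Python bindings are available and that the SRM plugin exposes replica TURLs via the "user.replicas" extended attribute; the SURL is a placeholder.

```python
# Minimal sketch: ask the SRM for a TURL and verify that the hostname it contains
# resolves, which is where the LHCb xroot access failures arise.
# Assumes the gfal2 Python bindings; the SURL below is a placeholder.
import socket
from urllib.parse import urlparse

import gfal2

SURL = "srm://srm-lhcb.gridpp.rl.ac.uk/path/to/file"  # placeholder SURL

ctx = gfal2.creat_context()
# The gfal2 SRM plugin exposes replica TURLs through the "user.replicas" attribute;
# take the first replica returned.
turl = ctx.getxattr(SURL, "user.replicas").splitlines()[0]
host = urlparse(turl).hostname

try:
    socket.getaddrinfo(host, None)
    print("TURL hostname resolves:", host)
except socket.gaierror:
    print("TURL contains a hostname that does not resolve:", host)
```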
Resolved Disk Server Issues
  • GDSS724 (AtlasDataDisk - D1T0) crashed on Wednesday evening (17th May). It was returned to service, initially read-only, on the 19th. Three zero-sized files were lost at the time of the crash.
Current operational status and issues
  • We are still seeing failures of the CMS SAM tests against the SRM. These are affecting our CMS availability. CMS are also looking at file access performance and have turned off "lazy-download". This will be re-addressed once we have upgraded to Castor 2.1.16.
  • LHCb Castor performance has been OK since the 2.1.16 update, although this has not been under any high load. A load test (mimicking the stripping/merging campaign) is being carried out with LHCb today (24th May).
Ongoing Disk Server Issues
  • GDSS773 (LHCbDst - D1T0) crashed on Sunday (21st May). Investigations are ongoing.
  • GDSS804 (LHCbDst - D1T0) was taken out of production yesterday as it was showing memory errors. These are still being checked out.
Limits on concurrent batch system jobs.
  • CMS Multicore 550
Notable Changes made since the last meeting.
  • Atlas Castor instance updated to Castor version 2.1.16-13. (The Atlas SRMs were already at version 2.1.16).
  • CEs are being migrated to use the load balancers in front of the argus service.
  • Atlas now have 2 PBytes of data in ECHO, which is their current allocation there. This means Atlas' use of ECHO will now need to include deleting data to make room for more. (Note: ECHO has more capacity than this in total.)
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
srm-cms-disk.gridpp.rl.ac.uk, srm-cms.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 25/05/2017 10:00 | 25/05/2017 16:00 | 6 hours | Upgrade of CMS Castor instance to version 2.1.16.
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Pending - but not yet formally announced:

  • Update Castor (including SRMs) to version 2.1.16. The central nameserver and the LHCb and Atlas stagers have been done. Current plan: CMS stager and SRMs on Thursday 25th May; GEN to follow.
  • Update Castor SRMs - CMS & GEN still to do. These are being done at the same time as the Castor 2.1.16 updates.

Listing by category:

  • Castor:
    • Update SRMs to new version, including updating to SL6.
    • Update Castor to version 2.1.16 (ongoing)
    • Merge AtlasScratchDisk into larger Atlas disk pool.
  • Networking
    • Increase OPN link to CERN from 2*10Gbit to 3*10Gbit links.
    • Enable first services on production network with IPv6 now that the addressing scheme has been agreed. (Perfsonar already working over IPv6).
  • Services
    • Put the argus systems behind load balancers to improve resilience.
    • The production FTS needs updating. This will no longer support the SOAP interface (a sketch of REST-based submission follows this list). (The "test" FTS, used by Atlas, has already been upgraded.)
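Since the upgraded FTS drops the SOAP interface, transfers would be submitted through the REST API instead. Below is a minimal sketch using the fts-rest Python "easy" bindings, assuming they behave as documented upstream; the endpoint and SURLs are placeholders rather than RAL production values.

```python
# Minimal sketch: submit a single transfer via the FTS3 REST interface using the
# fts-rest "easy" Python bindings (the SOAP interface is being retired).
# The endpoint and SURLs are placeholders, not RAL production values.
import fts3.rest.client.easy as fts3

endpoint = "https://fts3-test.example.org:8446"  # placeholder FTS3 REST endpoint

context = fts3.Context(endpoint)  # authenticates with the local X.509 proxy by default
transfer = fts3.new_transfer(
    "srm://source.example.org/path/to/file",       # source SURL (placeholder)
    "srm://destination.example.org/path/to/file",  # destination SURL (placeholder)
)
job = fts3.new_job([transfer], verify_checksum=True)
job_id = fts3.submit(context, job)
print("Submitted FTS job:", job_id)
```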
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
srm-atlas.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 23/05/2017 10:00 | 23/05/2017 11:34 | 1 hour and 34 minutes | Upgrade of Atlas Castor instance to version 2.1.16.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
128398 | Green | Top Priority | Waiting for Reply | 2017-05-18 | 2017-05-24 | LHCb | File cannot be opened using xroot at RAL
128308 | Green | Urgent | In Progress | 2017-05-14 | 2017-05-15 | CMS | Description: T1_UK_RAL in error for about 6 hours
127967 | Green | Less Urgent | On Hold | 2017-04-27 | 2017-04-28 | MICE | Enabling pilot role for mice VO at RAL-LCG2
127612 | Yellow | Alarm | In Progress | 2017-04-08 | 2017-05-19 | LHCb | CEs at RAL not responding
127597 | Yellow | Urgent | Waiting for Reply | 2017-04-07 | 2017-05-16 | CMS | Check networking and xrootd RAL-CERN performance
127240 | Red | Urgent | In Progress | 2017-03-21 | 2017-05-15 | CMS | Staging Test at UK_RAL for Run2
124876 | Red | Less Urgent | On Hold | 2016-11-07 | 2017-01-01 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2017-05-10 | | CASTOR at RAL not publishing GLUE 2.
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC ECHO = Atlas ECHO (Template 841); CMS HC = CMS HammerCloud

Day | OPS | Alice | Atlas | CMS | LHCb | Atlas ECHO | Atlas HC | Atlas HC ECHO | CMS HC | Comment
17/05/17 | 100 | 100 | 100 | 91 | 100 | 100 | 96 | 98 | 100 | Intermittent SRM test failures. (User timeout)
18/05/17 | 100 | 100 | 98 | 79 | 100 | 100 | 94 | 100 | 100 | Atlas: One SRM test failure; CMS: Intermittent SRM test failures. (timeout)
19/05/17 | 100 | 100 | 100 | 78 | 100 | 100 | 100 | 100 | 100 | Intermittent SRM test failures. (User)
20/05/17 | 100 | 100 | 100 | 83 | 100 | 100 | 95 | 100 | 100 | Intermittent SRM test failures. (User)
21/05/17 | 100 | 100 | 100 | 80 | 100 | 100 | 100 | 99 | 100 | Intermittent SRM test failures. (User)
22/05/17 | 100 | 100 | 100 | 83 | 100 | 100 | 100 | 100 | 100 | Intermittent SRM test failures. (User)
23/05/17 | 100 | 100 | 92 | 96 | 100 | 100 | 100 | 100 | 100 | Atlas Castor 2.1.16 update; CMS: Intermittent SRM test failures. (timeout)
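To first order, the daily figures above reflect the fraction of test intervals in which the relevant SAM probes passed, which is why intermittent SRM test failures pull the CMS number down by a few percent per failed hour. A rough illustration of that arithmetic (not the actual EGI/ARGO availability computation, which also handles UNKNOWN results and scheduled downtime):

```python
# Rough illustration of how intermittent test failures map to a daily availability
# figure: the percentage of test intervals that passed. This is not the real
# EGI/ARGO algorithm, which also accounts for UNKNOWN results and downtimes.
def daily_availability(results):
    """results: one boolean per test interval (True = test passed)."""
    return 100.0 * sum(results) / len(results)

# e.g. SRM tests failing in 5 of 24 hourly intervals gives roughly 79% availability
print(round(daily_availability([False] * 5 + [True] * 19)))  # -> 79
```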
Notes from Meeting.
  • The problem (reported above) with the site firewall was discussed. This is expected to affect Tier1 traffic that passes through the firewall (such as data access by the worker nodes).
  • Work is ongoing to enable use of the Tier1 by LIGO (batch, Echo storage and cvmfs) and CCP4 (batch).
  • The proxy servers added into the ECHO gateways (reported previously) have fixed the load problem that was seen.
  • CMS have successfully tested AAA access to ECHO over xrootd.