Tier1 Operations Report 2017-08-09

RAL Tier1 Operations Report for 9th August 2017

Review of Issues during the fortnight 26th July to 9th August 2017.
  • The problem with file transfers initiated by the CERN FTS3 service to/from our Castor storage was ongoing at the time of the last meeting (26th July). It was traced to an update to the CERN FTS3 service; CERN reverted the change and the problem was resolved.
  • There was a problem with the Atlas Frontier service on Thursday 27th July, with high load seen on the back-end database systems.
  • There was a problem with the test FTS3 service on Friday 28th July: the system stopped working after reaching a limit of 2 billion file transfers. An emergency update was applied.
  • There was a network break during the morning of Wednesday 2nd August, unfortunately coinciding with staff being at a divisional meeting. There had been a problem with one of the RAL core network stacks on 25th July, and we had set our router pair (the Extreme X670s) not to flip back to using the link to that failing stack. However, during work to resolve the problem on the failed core stack, our second link to another core stack went down; it appears our routers concluded there was a network loop, which caused the Extreme X670 router pair to try switching back to the other connection. The upshot was a complete break in Tier1 connectivity to the core for around an hour. All network systems have since been fully restored and the failover configuration returned to its normal state, although there was some delay in re-establishing IPv6 connectivity.
Resolved Disk Server Issues
  • None.
Current operational status and issues
  • We are still seeing failures of the CMS SAM tests against the SRM. These are affecting our CMS availability figures. CMS are also looking at file access performance and have turned off "lazy-download". The CMS SRM SAM test success rate has improved since the Castor 2.1.16 upgrade on 25th May, although it is still not 100%. We still plan to revisit this issue now that Castor has been upgraded.
  • A fault on the site firewall is affecting some specific data flows. Discussions have been taking place with the vendor. This is expected to affect our data that flows through the firewall (such as transfers to/from worker nodes).
Ongoing Disk Server Issues
  • None
Limits on concurrent batch system jobs.
  • CMS Multicore 550
Notable Changes made since the last meeting.
  • On Monday (7th August) the AtlasScratchDisk was merged into the larger AtlasDataDisk pool in Castor.
  • The planned increase in the number of placement groups in the Echo Ceph Atlas pool has been completed. The 2016 storage purchases have been placed into Echo and the process of moving data onto this hardware has started.
  • All squid nodes are now IPv4/IPv6 dual stack.
  • "Test" FTS3 instance (used by Atlas) updated to 3.6.10 (emergency update - as this was the first server to reach 2 billion transfers. Due to an internal 32-bit integer being used it completely stopped working at this point.)
  • Power work was carried out in building R26 (the Atlas building) over the weekend of 29/30 July. This had no impact on our operational services.
  • There was a successful UPS/Generator load test this morning (9th August). These tests are done quarterly; this was the first regular test since the building UPS was replaced (the new UPS had been tested shortly after installation).
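
The "2 billion" figure above matches the largest value a signed 32-bit integer can hold: INT32_MAX = 2,147,483,647. Below is a minimal C sketch of this failure mode, assuming a signed 32-bit transfer counter; the variable name and wrap-around are illustrative, not FTS3's actual code.

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* Hypothetical counter; FTS3's real internals are not shown here. */
        int32_t transfer_count = INT32_MAX;   /* 2,147,483,647 - the "2 billion" limit */
        printf("last representable count: %d\n", transfer_count);

        /* Incrementing a signed int past INT32_MAX is undefined behaviour in C;
           the cast through uint32_t reproduces the wrap-around typically seen
           on two's-complement hardware. */
        transfer_count = (int32_t)((uint32_t)transfer_count + 1);
        printf("count after overflow:    %d\n", transfer_count);  /* -2,147,483,648 */
        return 0;
    }

Once such a counter wraps to a negative value, any logic assuming a monotonically increasing positive ID breaks, which would be consistent with the service halting at that point.
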
Declared in the GOC DB
Service Scheduled? Outage/At Risk Start End Duration Reason
Whole site UNSCHEDULED WARNING 25/07/2017 10:57 26/07/2017 12:00 1 day, 1 hour and 3 minutes Warning after network problems and castor reboot
srm-superb.gridpp.rl.ac.uk, SCHEDULED OUTAGE 20/07/2017 16:00 30/08/2017 13:00 40 days, 21 hours SuperB no longer supported on Castor storage. Retiring endpoint.
srm-hone.gridpp.rl.ac.uk, SCHEDULED OUTAGE 20/07/2017 16:00 30/08/2017 13:00 40 days, 21 hours H1 no longer supported on Castor storage. Retiring endpoint.
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Pending - but not yet formally announced:

  • Upgrade the FTS3 service to a version that will no longer support the SOAP interface. The SOAP interface was disabled on Monday (17th July).
  • Re-distribute the data in Echo onto the 2016 capacity hardware. (Ongoing)

Listing by category:

  • Castor:
    • Move to generic Castor headnodes.
  • Echo:
    • Re-distribute the data in Echo onto the 2016 capacity hardware.
  • Networking:
    • Enable the first services on the production network with IPv6, now that the addressing scheme has been agreed. (perfSONAR is already working over IPv6.)
  • Services:
    • The production FTS will be updated now that the requirement to support the deprecated SOAP interface has been removed.
Entries in GOC DB starting since the last report.
Service Scheduled? Outage/At Risk Start End Duration Reason
Whole site UNSCHEDULED WARNING 25/07/2017 10:57 26/07/2017 12:00 1 day, 1 hour and 3 minutes Warning after network problems and castor reboot
All Castor SCHEDULED OUTAGE 25/07/2017 09:30 25/07/2017 11:54 2 hours and 24 minutes Castor Storage Unavailable during OS patching.
Whole site UNSCHEDULED OUTAGE 25/07/2017 06:00 25/07/2017 11:54 5 hours and 54 minutes Network problem at RAL - under investigation
srm-superb.gridpp.rl.ac.uk, SCHEDULED OUTAGE 20/07/2017 16:00 30/08/2017 13:00 40 days, 21 hours SuperB no longer supported on Castor storage. Retiring endpoint.
srm-hone.gridpp.rl.ac.uk, SCHEDULED OUTAGE 20/07/2017 16:00 30/08/2017 13:00 40 days, 21 hours H1 no longer supported on Castor storage. Retiring endpoint.
srm-lhcb.gridpp.rl.ac.uk, SCHEDULED WARNING 19/07/2017 13:00 19/07/2017 16:30 3 hours and 30 minutes Rebooting some disk servers to update firmware, causing some interruptions in service
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID Level Urgency State Creation Last Update VO Subject
129777 Green Urgent In Progress 2017-07-26 2017-07-26 CMS Transfer failing from T1_UK_RAL
129769 Green Less Urgent In Progress 2017-07-26 2017-07-26 LHCb FTS failure at RAL
129748 Green Less Urgent Waiting Reply 2017-07-25 2017-07-26 Atlas RAL-LCG2: deletion errors
129573 Green Urgent In Progress 2017-07-16 2017-07-21 Atlas RAL-LCG2: DDM transfer failure with Connection to gridpp.rl.ac.uk refused
129342 Green Urgent In Progress 2017-07-04 2017-07-19 [Rod Dashboard] Issue detected : org.sam.SRM-Put-ops@srm-mice.gridpp.rl.ac.uk
128991 Green Less Urgent In Progress 2017-06-16 2017-07-20 Solid solidexperiment.org CASTOR tape support
127597 Red Urgent On Hold 2017-04-07 2017-06-14 CMS Check networking and xrootd RAL-CERN performance
124876 Red Less Urgent On Hold 2016-11-07 2017-01-01 OPS [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
117683 Red Less Urgent On Hold 2015-11-18 2017-07-06 CASTOR at RAL not publishing GLUE 2.
Availability Report

Day OPS Alice Atlas CMS LHCb Atlas Echo Comment
26/07/17 100 100 100 100 100 100
27/07/17 100 100 100 100 100 100
28/07/17 100 100 100 100 100 100
29/07/17 100 100 100 100 100 100
30/07/17 100 100 100 100 100 100
31/07/17 100 100 100 100 100 100
01/08/17 100 100 100 91 100 100 SRM test failures. Mainly user timeouts.
02/08/17 95.5 100 100 86 94 100 Attributing all failures to the network break.
03/08/17 100 100 100 79 100 100 SRM test failures. Mainly user timeouts.
04/08/17 100 100 100 93 100 100 Some ‘User timeout over’ errors on the SRM test. One or two failed CE tests.
05/08/17 100 100 100 96 100 100 At least two ‘Unable to issue PrepareToPut request to Castor’ failures on the SRM tests. A few scattered CE test failures.
06/08/17 100 100 100 99 100 100 There were two ‘Unable to issue PrepareToPut request to Castor’ failures on the SRM tests.
07/08/17 100 100 100 94 100 100 SRM test failures. Mainly user timeouts.
08/08/17 100 100 100 98 100 100 Two SRM test failures on GET, both "User Timeout" errors.
Hammercloud Test Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC Echo = Atlas Echo (Template 841); CMS HC = CMS HammerCloud

Day Atlas HC Atlas HC Echo CMS HC Comment
26/07/17 99 100 100
27/07/17 97 96 100
28/07/17 85 91 100
29/07/17 100 96 100
30/07/17 100 100 100
31/07/17 100 100 100
01/08/17 100 98 100
02/08/17 97 96 100
03/08/17 99 100 100
04/08/17 100 100 100
05/08/17 57 100 94
06/08/17 100 100 38
07/08/17 80 98 67
08/08/17 68 96 100
Notes from Meeting.
  • None yet