
RAL Tier1 Operations Report for 19th July 2017

Review of Issues during the week 12th to 19th July 2017.
  • There was a problem with the Atlas Castor instance for a few hours during the afternoon of Friday 14th July. A problem on the SRMs correlated with a spike in the SRM request rate.
  • On Sunday (16th July) there was a problem with the CMS Castor instance. On-call staff responded. The back-end database reported locking sessions, and hot-spotting of files was seen. The problem also affected batch access to files. Since then there have still been some indications of CMS Castor problems ('unable to issue PrepareToPut request to Castor' errors), but at a much reduced rate.
  • There is a power outage scheduled for the Atlas Building (R26) over the weekend of 29/30 July. Preparations are being made to remove any impact this may have on operational services.
Resolved Disk Server Issues
  • GDSS650 (LHCbUser - D1T0) failed on Sunday 16th July. The server was being drained. A disk was replaced and the server returned to service this morning (19th July).
Current operational status and issues
  • We are still seeing failures of the CMS SAM tests against the SRM. These are affecting our (CMS) availabilities. CMS are also looking at file access performance and have turned off "lazy-download". The CMS SRM SAM test success rate has improved since the Castor 2.1.16 upgrade on the 25th May, although it is still not 100%. It is still planned to re-visit this issue now that Castor has been upgraded.
  • There is a problem on the site firewall which is affecting some specific data flows. Discussions have been taking place with the vendor. This is expected to be affecting data that flows through the firewall (such as transfers to/from worker nodes).
Ongoing Disk Server Issues
  • None
Limits on concurrent batch system jobs.
  • CMS Multicore 550
Notable Changes made since the last meeting.
  • The number of placement groups in the Echo CEPH Atlas pool continues to be increased. This is in preparation for the increase in storage capacity when new hardware is added. (An illustrative sketch of this type of stepped increase follows this list.)
  • Security patching is being carried out across systems.
  • Updating of RAID card firmware in one batch of disk servers (OCF '14) is taking place.
  • On Thursday 13th July the WAN tuning parameters were updated on CASTOR disk servers. This change had already been applied to some of the servers some months ago; this update completed the roll-out and standardized these parameters across the Castor disk servers.
  • On Monday 17th July access to FTS3 via the SOAP interface was stopped by removing the fts3-prod-soap proxy from the load balancers.
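
As background to the placement-group item above, an increase of this kind is normally applied in small increments so the cluster can rebalance between steps. The Python sketch below illustrates that pattern only; the pool name, target and step size are assumptions for illustration, not the values used on Echo.

 #!/usr/bin/env python
 # Illustrative sketch only: raise a Ceph pool's placement-group count in steps,
 # waiting for the cluster to return to HEALTH_OK between increments.
 # The pool name, target and step size below are assumptions, not Echo's real values.
 import subprocess
 import time
 
 POOL = "atlas"        # hypothetical pool name
 TARGET_PGS = 4096     # hypothetical final pg_num
 STEP = 256            # small increments limit the rebalancing load
 
 def ceph(*args):
     """Run a ceph CLI command and return its stdout as text."""
     return subprocess.check_output(("ceph",) + args).decode().strip()
 
 current = int(ceph("osd", "pool", "get", POOL, "pg_num").split()[-1])
 while current < TARGET_PGS:
     current = min(current + STEP, TARGET_PGS)
     ceph("osd", "pool", "set", POOL, "pg_num", str(current))
     ceph("osd", "pool", "set", POOL, "pgp_num", str(current))
     # Wait for backfill/recovery to settle before the next increment.
     while "HEALTH_OK" not in ceph("health"):
         time.sleep(60)

Raising pg_num and pgp_num together in bounded steps keeps the resulting data movement gradual rather than triggering one large rebalance.
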
Declared in the GOC DB
  • None
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Pending - but not yet formally announced:

  • Firmware updates in OCF 14 disk servers. These are being carried out this week (17-21 July).
  • Upgrade the FTS3 service to a version that no longer supports the SOAP interface. The SOAP interface was disabled on the 17th July.
  • Increase the number of placement groups in the Atlas Echo CEPH pool. (Ongoing)

Listing by category:

  • Castor:
    • Move to generic Castor headnodes.
    • Merge AtlasScratchDisk into larger Atlas disk pool.
  • Echo:
    • Increase the number of placement groups in the Atlas Echo CEPH pool.
  • Networking:
    • Enable first services on production network with IPv6 now that the addressing scheme has been agreed. (Perfsonar already working over IPv6).
  • Services:
    • The production FTS needs updating. The new version will no longer support the SOAP interface. (The "test" FTS, used by Atlas, has already been upgraded.)
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
srm-lhcb.gridpp.rl.ac.uk | SCHEDULED | WARNING | 19/07/2017 13:00 | 19/07/2017 16:30 | 3 hours and 30 minutes | Rebooting some disk servers to update firmware, causing some interruptions in service
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
129342 | Green | Urgent | In Progress | 2017-07-04 | 2017-07-10 | | [Rod Dashboard] Issue detected : org.sam.SRM-Put-ops@srm-mice.gridpp.rl.ac.uk
129059 | Yellow | Very Urgent | Waiting for Reply | 2017-06-20 | 2017-06-28 | LHCb | Timeouts on RAL Storage
128991 | Green | Less Urgent | In Progress | 2017-06-16 | 2017-07-05 | Solid | solidexperiment.org CASTOR tape support
127597 | Red | Urgent | On Hold | 2017-04-07 | 2017-06-14 | CMS | Check networking and xrootd RAL-CERN performance
124876 | Red | Less Urgent | On Hold | 2016-11-07 | 2017-01-01 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2017-07-06 | | CASTOR at RAL not publishing GLUE 2.
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC Echo = Atlas Echo (Template 841); CMS HC = CMS HammerCloud

Day | OPS | Alice | Atlas | CMS | LHCb | Atlas Echo | Atlas HC | Atlas HC Echo | CMS HC | Comment
12/07/17 | 100 | 100 | 100 | 92 | 100 | 100 | 100 | 100 | 100 | SRM test failures (Unable to issue PrepareToPut request to Castor).
13/07/17 | 100 | 100 | 100 | 90 | 100 | 100 | 99 | 100 | 99 | SRM test failures (mix of error messages, some ‘User timeout over’ and some ‘Unable to issue PrepareToPut request to Castor’).
14/07/17 | 100 | 100 | 87 | 98 | 100 | 100 | 91 | 100 | 79 | Atlas: SRM problems; CMS: single SRM error: ‘Unable to issue PrepareToPut request to Castor’.
15/07/17 | 100 | 100 | 100 | 96 | 100 | 100 | 100 | 100 | 99 | Failures for all tests. The CEs couldn’t open Castor files and the SRMs had errors like ‘Unable to issue PrepareToPut request to Castor’.
16/07/17 | 100 | 100 | 100 | 46 | 100 | 100 | 100 | 100 | 98 | CMS Castor problems.
17/07/17 | 100 | 100 | 100 | 94 | 100 | 100 | 100 | 98 | 100 | SRM test failures: 'Unable to issue PrepareToPut request'.
18/07/17 | 100 | 100 | 100 | 94 | 100 | 100 | 100 | 100 | 100 | SRM test failures: combination of the normal ‘User timeout over’ errors and ‘Unable to issue PrepareToPut request'.
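
For context, the daily percentages above are, in essence, the fraction of monitoring test intervals in which the critical tests passed. The short sketch below shows only that arithmetic; the sample figures (22 of 24 hourly intervals passing) are invented for illustration and are not the actual SAM data behind these numbers.

 # Illustrative arithmetic only: availability as the percentage of test intervals passed.
 # The sample data below is invented, not real SAM results for these days.
 def availability(results):
     """results: one boolean per test interval (True = critical tests passed)."""
     return 100.0 * sum(results) / len(results)
 
 # e.g. 22 of 24 hourly intervals passing gives ~92%, similar to the CMS figure on 12/07/17
 sample = [True] * 22 + [False] * 2
 print("%.0f%%" % availability(sample))    # -> 92%
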
Notes from Meeting.
  • None yet