RAL Tier1 Operations Report for 19th July 2017

Review of Issues during the week 12th to 19th July 2017.
  • There was a problem with the Atlas Castor instance for a few hours during the afternoon of Friday 14th July. A problem on the SRMs correlated with a spike in the SRM request rate.
  • On Sunday (16th July) there was a problem with the CMS Castor instance. On-call staff responded. The back-end database reported locking sessions, and hot-spotting of files was seen. The problem also affected batch access to files. Since then there have still been some indications of CMS Castor problems (with 'unable to issue PrepareToPut request to Castor' errors), but at a much reduced rate.
  • There is a power outage scheduled for the Atlas Building (R26) over the weekend of 29/30 July. Preparations are being made to mitigate any impact this may have on operational services.
Resolved Disk Server Issues
  • GDSS650 (LHCbUser - D1T0) failed on Sunday 16th July. The server was being drained. A disk was replaced and the server returned to service this morning (19th July).
Current operational status and issues
  • We are still seeing a rate of failures of the CMS SAM tests against the SRM. These are affecting our CMS availability figures. CMS are also looking at file access performance and have turned off "lazy-download". The CMS SRM SAM test success rate has improved since the Castor 2.1.16 upgrade on the 25th May, although it is still not 100%. It is still planned to revisit this issue now that Castor has been upgraded.
  • There is a problem on the site firewall that is affecting some specific data flows. Discussions have been taking place with the vendor. The issue is expected to affect data that flows through the firewall (such as traffic to/from the worker nodes).
Ongoing Disk Server Issues
  • None
Limits on concurrent batch system jobs.
  • CMS Multicore 550
Notable Changes made since the last meeting.
  • The number of placement groups in the Echo CEPH Atlas pool continues to be increased. This is in preparation for the increase in storage capacity when new hardware is added. (A sketch of the stepping procedure follows this list.)
  • Security patching is being carried out across systems.
  • Updating of RAID card firmware in one batch of disk servers (OCF '14) is taking place.
  • On Thursday 13th July the WAN tuning parameters were updated on the CASTOR disk servers. The change had been applied to some of the servers some months ago; this completed the roll-out and standardized these parameters across the Castor disk servers. (An illustrative example follows this list.)
  • On Monday 17th July access to FTS3 via the SOAP interface was stopped by removing the fts3-prod-soap proxy from the load balancers.
  • A test gateway to Echo (ceph-test-gw691.gridpp.rl.ac.uk) has been made dual stack and test transfers have been shown to work over IPv6 to/from CERN. (A minimal connectivity check is sketched below.)
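
Below is a minimal sketch of how a placement-group increase of this kind is typically carried out. The pool name, target and step size are illustrative assumptions, not the actual Echo settings; raising pg_num in small steps, and letting the cluster return to HEALTH_OK between increments, limits the rebalancing load on a production cluster:

  import json
  import subprocess
  import time

  POOL = "atlas"     # hypothetical pool name
  TARGET = 2048      # illustrative target, not the real Echo figure
  STEP = 128         # small increments limit rebalancing load

  def ceph(*args):
      """Run a ceph CLI command and return its stdout as text."""
      return subprocess.check_output(("ceph",) + args).decode()

  def current_pg_num():
      out = json.loads(ceph("osd", "pool", "get", POOL, "pg_num", "-f", "json"))
      return out["pg_num"]

  pg = current_pg_num()
  while pg < TARGET:
      pg = min(pg + STEP, TARGET)
      ceph("osd", "pool", "set", POOL, "pg_num", str(pg))   # placement groups
      ceph("osd", "pool", "set", POOL, "pgp_num", str(pg))  # actual placement
      while "HEALTH_OK" not in ceph("health"):              # wait for rebalance
          time.sleep(60)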
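The report does not list which WAN tuning parameters were changed. As a hedged illustration only, WAN tuning on storage nodes usually means TCP buffer sysctls such as those below; the values shown are examples, not the settings deployed on the CASTOR servers:

  # Illustrative WAN TCP-tuning sysctls; values are examples only.
  SETTINGS = {
      "net.core.rmem_max": "67108864",             # max socket receive buffer
      "net.core.wmem_max": "67108864",             # max socket send buffer
      "net.ipv4.tcp_rmem": "4096 87380 67108864",  # min/default/max TCP rcv
      "net.ipv4.tcp_wmem": "4096 65536 67108864",  # min/default/max TCP snd
  }

  def proc_path(key):
      return "/proc/sys/" + key.replace(".", "/")

  for key, value in SETTINGS.items():
      with open(proc_path(key), "w") as f:   # requires root
          f.write(value)
      with open(proc_path(key)) as f:        # read back to confirm it took
          assert f.read().split() == value.split(), key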
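A minimal sketch of a dual-stack sanity check that could precede such test transfers. The gateway host name is taken from the report; port 2811 (the standard GridFTP control port) is an assumption:

  import socket

  HOST = "ceph-test-gw691.gridpp.rl.ac.uk"
  PORT = 2811  # GridFTP control port; an assumption, not from the report

  for family, label in ((socket.AF_INET, "IPv4"), (socket.AF_INET6, "IPv6")):
      # Resolve the name in this address family, then open a TCP connection.
      addr = socket.getaddrinfo(HOST, PORT, family, socket.SOCK_STREAM)[0][4]
      with socket.socket(family, socket.SOCK_STREAM) as s:
          s.settimeout(10)
          s.connect(addr)
      print(label, "reachable at", addr[0])
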
Declared in the GOC DB
  • None
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Pending - but not yet formally announced:

  • Firmware updates in OCF '14 disk servers (ongoing this week).
  • Upgrade the FTS3 service to a version that will no longer support the SOAP interface. The SOAP interface was disabled on Monday (17th July).
  • Increase the number of placement groups in the Atlas Echo CEPH pool. (Ongoing)

Listing by category:

  • Castor:
    • Move to generic Castor headnodes.
    • Merge AtlasScratchDisk into larger Atlas disk pool.
  • Echo:
    • Increase the number of placement groups in the Atlas Echo CEPH pool.
  • Networking:
    • Enable first services on the production network with IPv6 now that the addressing scheme has been agreed. (Perfsonar is already working over IPv6.)
  • Services:
    • The production FTS needs updating. This will no longer support the SOAP interface. (The "test" FTS, used by Atlas, has already been upgraded.)
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
srm-lhcb.gridpp.rl.ac.uk | SCHEDULED | WARNING | 19/07/2017 13:00 | 19/07/2017 16:30 | 3 hours and 30 minutes | Rebooting some disk servers to update firmware, causing some interruptions in service
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
129626 | Green | Less Urgent | In Progress | 2017-07-19 | 2017-07-19 | | [Rod Dashboard] Issue detected : org.nordugrid.ARC-CE-sw-csh-ops@arc-ce01.gridpp.rl.ac.uk
129573 | Green | Urgent | In Progress | 2017-07-16 | 2017-07-17 | Atlas | RAL-LCG2: DDM transfer failure with Connection to gridpp.rl.ac.uk refused
129562 | Green | Very Urgent | In Progress | 2017-07-14 | 2017-07-17 | CMS | Unable to open trivial file catalog /etc/cms/PhEDEx/storage.xml
129552 | Green | Less Urgent | Waiting for Reply | 2017-07-14 | 2017-07-19 | MICE | Problem reappeared: RAL castor: not able to list directories and copy to
129342 | Green | Urgent | In Progress | 2017-07-04 | 2017-07-19 | | [Rod Dashboard] Issue detected : org.sam.SRM-Put-ops@srm-mice.gridpp.rl.ac.uk
128991 | Green | Less Urgent | In Progress | 2017-06-16 | 2017-07-05 | Solid | solidexperiment.org CASTOR tape support
127597 | Red | Urgent | On Hold | 2017-04-07 | 2017-06-14 | CMS | Check networking and xrootd RAL-CERN performance
124876 | Red | Less Urgent | On Hold | 2016-11-07 | 2017-01-01 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2017-07-06 | | CASTOR at RAL not publishing GLUE 2.
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC Echo = Atlas Echo (Template 841); CMS HC = CMS HammerCloud

Day | OPS | Alice | Atlas | CMS | LHCb | Atlas Echo | Atlas HC | Atlas HC Echo | CMS HC | Comment
12/07/17 | 100 | 100 | 100 | 92 | 100 | 100 | 100 | 100 | 100 | SRM test failures (Unable to issue PrepareToPut request to Castor).
13/07/17 | 100 | 100 | 100 | 90 | 100 | 100 | 99 | 100 | 99 | SRM test failures (mix of error messages, some 'User timeout over' and some 'Unable to issue PrepareToPut request to Castor').
14/07/17 | 100 | 100 | 87 | 98 | 100 | 100 | 91 | 100 | 79 | Atlas: SRM problems; CMS: single SRM error: 'Unable to issue PrepareToPut request to Castor'.
15/07/17 | 100 | 100 | 100 | 96 | 100 | 100 | 100 | 100 | 99 | CMS: failures for all tests. The CEs couldn't open Castor files and the SRMs had errors like 'Unable to issue PrepareToPut request to Castor'.
16/07/17 | 100 | 100 | 100 | 46 | 100 | 100 | 100 | 100 | 98 | CMS Castor problems.
17/07/17 | 100 | 100 | 100 | 94 | 100 | 100 | 100 | 98 | 100 | SRM test failures: 'Unable to issue PrepareToPut request'.
18/07/17 | 100 | 100 | 100 | 94 | 100 | 100 | 100 | 100 | 100 | SRM test failures: combination of the normal 'User timeout over' errors and 'Unable to issue PrepareToPut request'.
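
As a hedged illustration of how a daily figure like the CMS 46% on 16/07/17 can arise: if availability is taken as the fraction of the day that a periodic SAM probe is in an OK state, then an outage covering 13 of 24 hourly probes gives roughly 46%. The probe schedule below is an assumption for illustration:

  results = [True] * 11 + [False] * 13   # illustrative hourly probe outcomes
  availability = 100.0 * sum(results) / len(results)
  print(round(availability))             # -> 46
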
Notes from Meeting.
  • There will be a downtime for Castor on Tuesday 25th July for OS patching.
  • Some data has been added to Echo to enable LHCb to start testing.
  • Data is flowing from the Dirac Leicester site, reaching a peak transfer rate of around 1 Gbit/s.