Latest revision as of 15:00, 12 July 2017

RAL Tier1 Operations Report for 12th July 2017

Review of Issues during the week 5th to 12th July 2017.
  • There is a power outage scheduled for the Atlas Building (R26) over the weekend of 29/30 July. Preparations are being made to minimise any impact this may have on operational services.
  • There were problems with the Atlas SRMs both on Saturday 8th July and again during the evening of Monday 10th July. Some of the SRM systems were found to be unresponsive. These problems were resolved by the out-of-hours on-call team.
  • One of the CEPH gateway systems had a hardware fault and was taken out of use (and out of the DNS alias) between the 6th and 11th July. There were elevated levels of GridFTP transfer failures while the machine was out of service, with errors indicating that the remaining gateways were unable to serve more traffic under the current connection limits.
  • Two of the OCF '14 disk servers (one in each of AtlasDataDisk and CMSDisk) were removed from production for a couple of hours on Monday morning (10th July) for RAID card firmware updates. These were done ahead of the updates to the remaining systems in this batch, planned for next week.
Resolved Disk Server Issues
  • None
Current operational status and issues
  • We are still seeing failures of the CMS SAM tests against the SRM. These are affecting our (CMS) availabilities. CMS are also looking at file access performance and have turned off "lazy-download". The CMS SRM SAM test success rate has improved since the Castor 2.1.16 upgrade on the 25th May, although it is still not 100%. It is still planned to revisit this issue now that Castor has been upgraded.
  • There is a problem on the site firewall which is causing problems for some specific data flows. Discussions have been taking place with the vendor. This is expected to affect data that flows through the firewall (such as to/from worker nodes).
Ongoing Disk Server Issues
  • None
Limits on concurrent batch system jobs.
  • CMS Multicore 550
Notable Changes made since the last meeting.
  • The number of placement groups in the Echo CEPH Atlas pool continues to be increased, in preparation for the increase in storage capacity when new hardware is added (an illustrative sketch of this kind of procedure follows this list).
  • CVMFS Stratum 0 and 1 nodes updated to cvmfs-server version v2.3.5-1.
  • Security patching being carried out across systems.
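For context, a placement-group increase on a Ceph pool is normally applied in small steps so that the resulting peering and backfill do not overload the cluster. The sketch below (Python, driving the standard ceph CLI) illustrates such a stepped increase; the pool name, target and step size are assumptions for illustration, not values taken from this report.

    import subprocess
    import time

    POOL = "atlas"        # illustrative pool name (assumption)
    TARGET_PG_NUM = 4096  # illustrative target (assumption)
    STEP = 256            # raise pg_num in small steps to limit backfill load

    def get_pool_value(key):
        # "ceph osd pool get <pool> <key>" prints e.g. "pg_num: 2048"
        out = subprocess.check_output(["ceph", "osd", "pool", "get", POOL, key])
        return int(out.decode().split()[-1])

    pg_num = get_pool_value("pg_num")
    while pg_num < TARGET_PG_NUM:
        pg_num = min(pg_num + STEP, TARGET_PG_NUM)
        subprocess.check_call(["ceph", "osd", "pool", "set", POOL, "pg_num", str(pg_num)])
        # pgp_num has to follow pg_num before data is rebalanced onto the new PGs
        subprocess.check_call(["ceph", "osd", "pool", "set", POOL, "pgp_num", str(pg_num)])
        time.sleep(600)  # let peering/backfill settle before the next step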
Declared in the GOC DB
  • None
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Pending - but not yet formally announced:

  • Firmware updates in OCF 14 disk servers. These will be done next week (17-21 July).
  • Upgrade the FTS3 service to a version that will no longer support the SOAP interface. The SOAP interface will be disabled on the 17th July.
  • Increase the number of placement groups in the Atlas Echo CEPH pool. (Ongoing)

Listing by category:

  • Castor:
    • Move to generic Castor headnodes.
    • Merge AtlasScratchDisk into larger Atlas disk pool.
  • Echo:
    • Increase the number of placement groups in the Atlas Echo CEPH pool.
  • Networking
    • Enable first services on production network with IPv6 now that the addressing scheme has been agreed. (Perfsonar already working over IPv6).
  • Services
    • The production FTS needs updating. The new version will no longer support the SOAP interface. (The "test" FTS, used by Atlas, has already been upgraded.)
Entries in GOC DB starting since the last report.
  • None
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
129342 | Green | Urgent | In Progress | 2017-07-04 | 2017-07-10 | | [Rod Dashboard] Issue detected : org.sam.SRM-Put-ops@srm-mice.gridpp.rl.ac.uk
129059 | Yellow | Very Urgent | Waiting for Reply | 2017-06-20 | 2017-06-28 | LHCb | Timeouts on RAL Storage
128991 | Green | Less Urgent | In Progress | 2017-06-16 | 2017-07-05 | Solid | solidexperiment.org CASTOR tape support
127597 | Red | Urgent | On Hold | 2017-04-07 | 2017-06-14 | CMS | Check networking and xrootd RAL-CERN performance
124876 | Red | Less Urgent | On Hold | 2016-11-07 | 2017-01-01 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2017-07-06 | | CASTOR at RAL not publishing GLUE 2.
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC Echo = Atlas Echo (Template 841); CMS HC = CMS HammerCloud

Day | OPS | Alice | Atlas | CMS | LHCb | Atlas Echo | Atlas HC | Atlas HC Echo | CMS HC | Comment
05/07/17 | 100 | 100 | 100 | 98 | 100 | 100 | 100 | 100 | 100 | Few SRM test failures (mostly User timeout)
06/07/17 | 100 | 100 | 100 | 97 | 100 | 100 | 97 | 100 | 100 | Few SRM test failures (User timeout)
07/07/17 | 100 | 100 | 100 | 99 | 100 | 100 | 100 | 99 | 100 | Single SRM test failure (User timeout)
08/07/17 | 100 | 100 | 87 | 89 | 100 | 100 | 91 | 100 | 99 | Atlas: Problem on two of the four SRMs. On-call team intervened. CMS: mix of SRM test failures.
09/07/17 | 100 | 100 | 100 | 92 | 100 | 100 | 100 | 100 | 99 | SRM test failures
10/07/17 | 100 | 100 | 83 | 95 | 92 | 100 | 75 | 100 | 99 | Atlas: Failures of the SRM SAM test during the evening. On-call team intervened. CMS: Block of failed tests for the CEs and sporadic SRM test failures. LHCb: Some SRM test failures ('Communication error' and 'SRM_FILE_BUSY')
11/07/17 | 100 | 100 | 100 | 97 | 96 | 100 | 100 | 100 | 100 | CMS: SRM test failures. LHCb: Single SRM test failure: could not open connection to srm-lhcb.gridpp.rl.ac.uk:8443
Notes from Meeting.
  • The Castor team are preparing some '12 generation disk servers to go into tape-backed pools. These were removed from disk-only Castor pools.
  • Tuned parameters for Wide Area Network settings are being rolled out on the Castor disk servers. These had already been applied to some of the servers many months ago; the remainder are now being done. (An illustrative example of this kind of TCP tuning is sketched at the end of these notes.)
  • The number of batch jobs being run by LHCb this last week is around 3000. This is significantly below their fairshare. There is no cap in place. Raja (LHCb) will try submitting more pilots.
  • Catalin reported that he is working on enabling IPv6 on the CVMFS Stratum 1 servers.
  • Andrew Lahiff has enabled a test CentOS 7 queue on arc-ce01.
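As background on the Wide Area Network tuning mentioned above: such tuning typically means raising the kernel's TCP buffer limits so that individual transfers can fill long, high-latency paths. The sketch below (Python, applying example sysctl values) is purely illustrative; the actual parameters being rolled out on the Castor disk servers are not listed in this report.

    import subprocess

    # Illustrative WAN TCP tuning values (assumptions, not the values used at RAL):
    # larger socket buffer ceilings improve throughput on long, high-latency paths.
    EXAMPLE_SETTINGS = {
        "net.core.rmem_max": "67108864",
        "net.core.wmem_max": "67108864",
        "net.ipv4.tcp_rmem": "4096 87380 67108864",
        "net.ipv4.tcp_wmem": "4096 65536 67108864",
    }

    for key, value in EXAMPLE_SETTINGS.items():
        # "sysctl -w key=value" applies the setting to the running kernel; making it
        # persistent would also need an entry in /etc/sysctl.conf or /etc/sysctl.d/.
        subprocess.check_call(["sysctl", "-w", "{0}={1}".format(key, value)])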