RAL Tier1 Operations Report for 21st June 2017

Review of Issues during the week 14th to 21st June 2017.
  • Problems with the Echo gateways were reported last week; these coincided with a large increase in requests. In response, the XRootD gateway was stopped for a few days and a concurrent connection limit was applied to the GridFTP gateways. High memory usage was also observed and steps are being taken to rectify this. (A sketch of the connection-limiting idea is given after this list.)
  • Last week we reported a high rate of disk problems on one batch (the OCF '14) of disk servers. In some of these cases the vendor finds no fault in the drives that have been removed. We plan to update the RAID card firmware in these systems once testing of the latest version is complete.
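
The report does not say how the connection limit on the GridFTP gateways was implemented, so the following is only a minimal Python sketch of the general idea, with a hypothetical limit, port, and handler: a semaphore caps the number of connections served at once, so a surge in requests queues up instead of exhausting gateway memory.

 import asyncio
 
 MAX_CONNECTIONS = 50  # hypothetical cap; the real gateway limit is not given in the report
 slots = asyncio.Semaphore(MAX_CONNECTIONS)
 
 async def handle(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
     # Every connection must take a slot before it is served; connection
     # number MAX_CONNECTIONS + 1 waits here rather than consuming memory.
     async with slots:
         data = await reader.read(4096)  # a trivial echo stands in for a real transfer
         writer.write(data)
         await writer.drain()
     writer.close()
     await writer.wait_closed()
 
 async def main() -> None:
     server = await asyncio.start_server(handle, "0.0.0.0", 2811)  # 2811: GridFTP control port
     async with server:
         await server.serve_forever()
 
 if __name__ == "__main__":
     asyncio.run(main())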
Resolved Disk Server Issues
  • GDSS732 (LHCbDst - D1T0) failed in the early afternoon of Wednesday 14th June. It was returned to service the following day. One disk drive was replaced and one file was declared lost as a result of the failure.
  • GDSS819 (LHCbDst - D1T0) failed in the early hours of Saturday morning (17th June). It was returned to service, initially read-only, later that day.
Current operational status and issues
  • We are still seeing failures of the CMS SAM tests against the SRM, and these affect our CMS availability figures. CMS are also looking at file access performance and have turned off "lazy-download". The CMS SRM SAM test success rate has improved since the Castor 2.1.16 upgrade on the 25th May, although it is still not 100%. We still plan to revisit this issue now that Castor has been upgraded.
  • A problem on the site firewall is causing difficulties for some specific data flows. It was first investigated in connection with videoconferencing problems, and is expected to affect our data that flows through the firewall (such as to/from the worker nodes).
Ongoing Disk Server Issues
  • None
Limits on concurrent batch system jobs.
  • CMS Multicore: 550
Notable Changes made since the last meeting.
  • An internal (transparent) change to the Echo CEPH CMS pool was made to increase the number of placement groups. This improves the distribution of data and was done ahead of adding more hardware capacity. (A rule-of-thumb sizing sketch is given below.)
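
Placement-group counts are commonly sized with a rule of thumb of roughly 100 PGs per OSD, weighted by the pool's share of the data and rounded up to a power of two; in practice the change is applied with the ceph CLI (ceph osd pool set <pool> pg_num <N>, followed by the matching pgp_num). The sketch below illustrates the arithmetic only; the OSD count, pool share, and replica count are hypothetical, not Echo's actual configuration.

 def target_pg_num(num_osds: int, pool_share: float, replicas: int,
                   pgs_per_osd: int = 100) -> int:
     """Rule-of-thumb PG count: aim at ~pgs_per_osd PGs per OSD overall,
     weight by this pool's share of the data, round up to a power of two."""
     raw = num_osds * pgs_per_osd * pool_share / replicas
     pg_num = 1
     while pg_num < raw:
         pg_num *= 2
     return pg_num
 
 # Hypothetical figures for illustration only (not Echo's real numbers):
 # 800 OSDs, a pool holding ~30% of the data, 3 copies -> 8192 PGs.
 print(target_pg_num(num_osds=800, pool_share=0.3, replicas=3))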
Declared in the GOC DB
  • None
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Pending - but not yet formally announced:

  • Increase OPN link to CERN from 2*10Gbit to 3*10Gbit links (delayed until 28th June).
  • Firmware updates in OCF 14 disk servers.
  • Upgrade the FTS3 service to a version that will no longer support the SOAP interface.
  • Increase the number of placement groups in the Atlas Echo CEPH pool.

Listing by category:

  • Castor:
    • Move to generic Castor headnodes.
    • Merge AtlasScratchDisk into larger Atlas disk pool.
  • Echo:
    • Increase the number of placement groups in the Atlas Echo CEPH pool.
  • Networking:
    • Increase OPN link to CERN from 2*10Gbit to 3*10Gbit links.
    • Enable first services on production network with IPv6 now that the addressing scheme has been agreed. (Perfsonar already working over IPv6).
  • Services
    • The production FTS needs updating; the new version will no longer support the SOAP interface. (The "test" FTS, used by Atlas, has already been upgraded.) A sketch of a REST-style submission is given after this list.
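
For clients moving off SOAP, submissions go through the FTS3 REST interface. Below is a minimal sketch using the fts-rest Python ("easy") bindings; the endpoint and SURLs are placeholders rather than RAL's actual service names, and a valid grid proxy is assumed.

 import fts3.rest.client.easy as fts3
 
 # Placeholder endpoint; point this at the site's FTS3 REST service.
 context = fts3.Context("https://fts3.example.org:8446")
 
 transfer = fts3.new_transfer(
     "gsiftp://source.example.org/data/file1",  # source SURL (placeholder)
     "gsiftp://dest.example.org/data/file1",    # destination SURL (placeholder)
 )
 job = fts3.new_job([transfer], retry=3)
 
 job_id = fts3.submit(context, job)
 print("Submitted FTS3 job:", job_id)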
Entries in GOC DB starting since the last report.
  • None
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
129072 | Green | Less Urgent | In Progress | 2017-06-20 | 2017-06-20 | - | Please remove vo.londongrid.ac.uk from RAL-LCG2 resources
129059 | Green | Very Urgent | In Progress | 2017-06-20 | 2017-06-20 | LHCb | Timeouts on RAL Storage
128991 | Green | Less Urgent | In Progress | 2017-06-16 | 2017-06-16 | Solid | solidexperiment.org CASTOR tape support
128954 | Green | Less Urgent | In Progress | 2017-06-14 | 2017-06-20 | SNO+ | Tape storage failure
128830 | Yellow | Less Urgent | Waiting For Reply | 2017-06-07 | 2017-06-07 | Pheno | Jobs failing at RAL due errors with gfal2
127612 | Red | Alarm | In Progress | 2017-04-08 | 2017-05-19 | LHCb | CEs at RAL not responding
127597 | Red | Urgent | On Hold | 2017-04-07 | 2017-06-14 | CMS | Check networking and xrootd RAL-CERN performance
124876 | Red | Less Urgent | On Hold | 2016-11-07 | 2017-01-01 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2017-05-10 | - | CASTOR at RAL not publishing GLUE 2.
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC Echo = Atlas Echo (Template 841); CMS HC = CMS HammerCloud

Day | OPS | Alice | Atlas | CMS | LHCb | Atlas Echo | Atlas HC | Atlas HC Echo | CMS HC | Comment
14/06/17 | 100 | 100 | 100 | 99 | 96 | 100 | 100 | 81 | 100 | SRM test failures. CMS: user timeouts. LHCb: no data for the SRM tests; the other tests seem fine.
15/06/17 | 100 | 100 | 100 | 98 | 100 | 100 | 98 | 100 | 100 | SRM test failures (user timeouts).
16/06/17 | 100 | 100 | 100 | 98 | 100 | 100 | 100 | 100 | N/A | SRM test failures (user timeouts).
17/06/17 | 100 | 100 | 98 | 99 | 100 | 100 | 100 | 100 | 100 | SRM test failures (user timeouts).
18/06/17 | 100 | 100 | 100 | 94 | 95 | 100 | 100 | 100 | 100 | CMS: could not open connection to srm-cms.gridpp.rl.ac.uk; LHCb: could not open connection to srm-lhcb.gridpp.rl.ac.uk. Both sets of test failures occurred at similar times (17:00 CET).
19/06/17 | 100 | 100 | 100 | 98 | 100 | 100 | 100 | 98 | 100 | SRM test failures (user timeouts).
20/06/17 | 100 | 100 | 100 | 99 | 100 | 100 | 100 | 100 | 98 | Single SRM test failure (user timeout).
Notes from Meeting.
  • None yet. Note that this is a "virtual" meeting: the report is produced and comments are invited.