Latest revision as of 14:43, 24 May 2017

RAL Tier1 Operations Report for 24th May 2017

Review of Issues during the week 17th to 24th May 2017.
  • There is a specific problem with Castor affecting LHCb: a TURL returned by the SRM does not always work when used for xroot access, owing to an incorrect hostname. This is now largely understood although not yet fixed. (A minimal check sketch follows this list.)
  • A problem of ARP poisoning on the Tier1 network is affecting some monitoring used by CEPH ECHO. Attempts to understand the source and cause of this are taking place.
  • There is a problem on the site firewall that is disrupting some specific data flows. It was found while investigating videoconferencing problems. It is not yet clear whether it is having any effect on our services.
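To illustrate the LHCb TURL item above: a minimal sketch, in Python, of the kind of check that catches a bad hostname in an SRM-returned TURL before it is handed to xroot. The TURL, hostname and namespace path below are hypothetical placeholders, not the actual RAL Castor configuration.

    # Sketch only: the hostname suffix and example TURL are assumptions for illustration.
    import socket
    from urllib.parse import urlparse

    def turl_hostname_ok(turl: str, expected_suffix: str = ".gridpp.rl.ac.uk") -> bool:
        """Return True if the TURL's host looks usable for xroot access."""
        host = urlparse(turl).hostname
        if host is None or not host.endswith(expected_suffix):
            return False                      # wrong or missing hostname: an xroot open would fail
        try:
            socket.gethostbyname(host)        # does the name actually resolve?
        except socket.gaierror:
            return False
        return True

    # Hypothetical example TURL of the sort returned by the SRM:
    print(turl_hostname_ok("root://gdss999.gridpp.rl.ac.uk//castor/example/path/file"))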
Resolved Disk Server Issues
  • GDSS724 (AtlasDataDisk - D1T0) crashed on Wednesday evening (17th May). It was returned to service, initially read-only, on the 19th. Three zero-sized files were lost at the time of the crash.
Current operational status and issues
  • We are still seeing failures of the CMS SAM tests against the SRM. These are affecting our (CMS) availabilities. CMS are also looking at file access performance and have turned off "lazy-download". This will be re-addressed once we have upgraded to Castor 2.1.16.
  • LHCb Castor performance has been OK since the 2.1.16 update, although this has not been under any high load. A load test (mimicking the stripping/merging campaign) is being carried out with LHCb today (24th May); see the sketch after this list.
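As a rough illustration of that kind of load test, a minimal sketch that issues many concurrent xroot reads. The endpoint and file names are hypothetical placeholders and the sketch assumes the xrdcp client is installed; it is not the actual test LHCb are running.

    # Sketch only: issue concurrent xroot reads and report how many succeeded.
    # The endpoint and file names are placeholders, not the real LHCb data set.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    FILES = [f"root://srm-example.gridpp.example//lhcb/data/file{i:03d}.dst" for i in range(100)]

    def fetch(turl: str) -> bool:
        # Copy to /dev/null: we only care whether the read succeeds, not about the data.
        result = subprocess.run(["xrdcp", "-f", turl, "/dev/null"], capture_output=True)
        return result.returncode == 0

    with ThreadPoolExecutor(max_workers=20) as pool:
        results = list(pool.map(fetch, FILES))

    print(f"{results.count(True)}/{len(results)} reads succeeded")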
Ongoing Disk Server Issues
  • GDSS773 (LHCbDst - D1T0) crashed on Sunday (21st May). Investigations are ongoing.
  • GDSS804 (LHCbDst - D1T0) was taken out of production yesterday as it was showing memory errors. These are still being checked out.
Limits on concurrent batch system jobs.
  • CMS Multicore 550
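This limit is applied by the batch system itself; purely as an illustration of the idea of a concurrency cap, a generic sketch (not the actual farm configuration) in which at most 550 multicore jobs run at once:

    # Generic illustration of a concurrency cap: at most 550 "jobs" run at a time.
    import threading
    from concurrent.futures import ThreadPoolExecutor

    CMS_MULTICORE_LIMIT = 550          # the limit quoted above
    slots = threading.BoundedSemaphore(CMS_MULTICORE_LIMIT)

    def run_job(job_id: int) -> None:
        with slots:                    # blocks while 550 jobs are already running
            pass                       # placeholder for the multicore payload

    with ThreadPoolExecutor(max_workers=600) as pool:
        pool.map(run_job, range(2000))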
Notable Changes made since the last meeting.
  • Atlas Castor instance updated to Castor version 2.1.16-13. (The Atlas SRMs were already at version 2.1.16).
  • CEs are being migrated to use the load balancers in front of the Argus service (a connectivity-check sketch follows this list).
  • Atlas now have 2 PBytes of data in ECHO, which is their current allocation there. This means Atlas' use of ECHO will now need to include deleting data to make room for more. (Note: ECHO has more capacity than this in total.)
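Relating to the CE/Argus item above, a minimal sketch of the sort of reachability probe a load balancer (or an admin) might run against the Argus back ends. The hostnames are placeholders and the port (8154, commonly used by the Argus PEP daemon) is an assumption, not the confirmed RAL setup.

    # Sketch only: simple TCP reachability check across assumed Argus back ends.
    import socket

    ARGUS_BACKENDS = ["argus01.example.org", "argus02.example.org"]   # placeholders
    PEPD_PORT = 8154                                                   # assumed PEP daemon port

    def is_up(host: str, port: int, timeout: float = 3.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for host in ARGUS_BACKENDS:
        print(host, "UP" if is_up(host, PEPD_PORT) else "DOWN")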
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
srm-cms-disk.gridpp.rl.ac.uk, srm-cms.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 25/05/2017 10:00 | 25/05/2017 16:00 | 6 hours | Upgrade of CMS Castor instance to version 2.1.16.
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Pending - but not yet formally announced:

  • Update Castor (including SRMs) to version 2.1.16. Central nameserver and LHCb and Atlas stagers done. Current plan: CMS stager and SRMs on Thursday 25th May. GEN to follow.
  • Update Castor SRMs - CMS & GEN still to do. These are being done at the same time as the Castor 2.1.16 updates.

Listing by category:

  • Castor:
    • Update SRMs to new version, including updating to SL6.
    • Update Castor to version 2.1.16 (ongoing)
    • Merge AtlasScratchDisk into larger Atlas disk pool.
  • Networking
    • Increase OPN link to CERN from 2*10Gbit to 3*10Gbit links.
    • Enable first services on production network with IPv6 now that the addressing scheme has been agreed. (Perfsonar already working over IPv6).
  • Services
    • Put Argus systems behind load balancers to improve resilience.
    • The production FTS needs updating. This will no longer support the SOAP interface. (The "test" FTS, used by Atlas, has already been upgraded.)
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
srm-atlas.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 23/05/2017 10:00 | 23/05/2017 11:34 | 1 hour and 34 minutes | Upgrade of Atlas Castor instance to version 2.1.16.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
128398 | Green | Top Priority | Waiting for Reply | 2017-05-18 | 2017-05-24 | LHCb | File cannot be opened using xroot at RAL
128308 | Green | Urgent | In Progress | 2017-05-14 | 2017-05-15 | CMS | Description: T1_UK_RAL in error for about 6 hours
127967 | Green | Less Urgent | On Hold | 2017-04-27 | 2017-04-28 | MICE | Enabling pilot role for mice VO at RAL-LCG2
127612 | Yellow | Alarm | In Progress | 2017-04-08 | 2017-05-19 | LHCb | CEs at RAL not responding
127597 | Yellow | Urgent | Waiting for Reply | 2017-04-07 | 2017-05-16 | CMS | Check networking and xrootd RAL-CERN performance
127240 | Red | Urgent | In Progress | 2017-03-21 | 2017-05-15 | CMS | Staging Test at UK_RAL for Run2
124876 | Red | Less Urgent | On Hold | 2016-11-07 | 2017-01-01 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2017-05-10 |  | CASTOR at RAL not publishing GLUE 2.
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC ECHO = Atlas ECHO (Template 841); CMS HC = CMS HammerCloud

Day | OPS | Alice | Atlas | CMS | LHCb | Atlas ECHO | Atlas HC | Atlas HC ECHO | CMS HC | Comment
17/05/17 | 100 | 100 | 100 | 91 | 100 | 100 | 96 | 98 | 100 | Intermittent SRM test failures. (User timeout)
18/05/17 | 100 | 100 | 98 | 79 | 100 | 100 | 94 | 100 | 100 | Atlas: One SRM test failure; CMS: Intermittent SRM test failures. (timeout)
19/05/17 | 100 | 100 | 100 | 78 | 100 | 100 | 100 | 100 | 100 | Intermittent SRM test failures. (User)
20/05/17 | 100 | 100 | 100 | 83 | 100 | 100 | 95 | 100 | 100 | Intermittent SRM test failures. (User)
21/05/17 | 100 | 100 | 100 | 80 | 100 | 100 | 100 | 99 | 100 | Intermittent SRM test failures. (User)
22/05/17 | 100 | 100 | 100 | 83 | 100 | 100 | 100 | 100 | 100 | Intermittent SRM test failures. (User)
23/05/17 | 100 | 100 | 92 | 96 | 100 | 100 | 100 | 100 | 100 | Atlas Castor 2.1.16 update; CMS: Intermittent SRM test failures. (timeout)
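As a rough worked example of how a daily figure such as the 91 above can arise, a minimal sketch assuming, for illustration only, that availability is simply the fraction of test samples in an OK state (the production algorithm is more involved):

    # Toy availability calculation: percentage of test samples that passed in a day.
    # The sample counts are illustrative, not the actual SAM data.
    def availability(ok_samples: int, total_samples: int) -> float:
        return 100.0 * ok_samples / total_samples

    # e.g. 22 of 24 hourly SRM tests passing gives about 91.7, close to the 91 above
    print(round(availability(22, 24), 1))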
Notes from Meeting.
  • The problem (reported above) with the site firewall was discussed. This is expected to affect Tier1 traffic that passes through the firewall (such as data access by the worker nodes).
  • Work is going on to enable use of the Tier1 by LIGO (batch, Echo storage and cvmfs) and CCP4 (for batch).
  • The proxy servers added into the ECHO gateways (reported previously) have fixed the load problem that was seen.
  • CMS have successfully tested AAA access to ECHO over xrootd.