RAL Tier1 Operations Report for 10th May 2017

Review of Issues during the week 3rd to 10th May 2017.
  • There was a problem with one of the argus servers that was ongoing at the time of last week's meeting. This was resolved later that day, although the problem led to a significant availability loss for CMS (it affected their CE tests).
  • Following the upgrade of the Castor central services to version 2.1.16 yesterday (Tuesday 9th May) there was a problem bringing the CMS instance back, which was resolved by the evening. However, there was also an entirely separate problem with xroot redirection for CMS, which caused the CMS CE SAM tests to fail. This problem was not picked up until this morning (Wednesday), when a restart of the appropriate services fixed it.
Resolved Disk Server Issues
  • GDSS793 (AtlasDataDisk - D1T0) was taken out of service on Friday 5th May with a double disk failure. It was returned to service on Monday (8th) after the first disk replacement had completed rebuilding.
  • GDSS818 (LHCbDst - D1T0) failed twice in this last week. The first time was in the early hours of 5th May. During that day its RAID card was swapped and it was returned to service that afternoon (read-only). It failed again during the following night. Following an intervention over the weekend the server was put back read-only on Monday (8th May) and will be drained out pending further investigation.
Current operational status and issues
  • On Friday 28th April there was a problem with the UPS for building R89. The UPS switched itself into "bypass" mode, which effectively means we have no UPS protection. We have run in this way since then. The cause was overheating of internal capacitors, which failed. Costs are currently being gathered for a way forward. In the current situation the diesel generator cannot be used either.
  • We are still seeing failures of the CMS SAM tests against the SRM. These are affecting our (CMS) availabilities. CMS are also looking at file access performance and have turned off "lazy-download". This will be re-addressed once we have upgraded to Castor 2.1.16.
  • LHCb Castor performance remains an issue. This was a significant problem during the LHCb stripping/merging campaign in April. As with CMS Castor, the next step is to upgrade to Castor 2.1.16.
Ongoing Disk Server Issues
  • None
Limits on concurrent batch system jobs.
  • CMS Multicore: 550
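A minimal sketch of how this limit could be monitored, assuming an HTCondor batch system with the htcondor Python bindings available; the ClassAd attributes used in the constraint (RequestCpus, AcctGroup) and the "cms" accounting-group match are illustrative assumptions, not the actual pool configuration.

 # Count running multicore CMS jobs and compare against the quoted limit.
 # Sketch only: the constraint attributes are assumptions about the local setup.
 import htcondor
 
 CMS_MULTICORE_LIMIT = 550  # limit quoted in this report
 
 schedd = htcondor.Schedd()
 running = schedd.query(
     'JobStatus == 2 && RequestCpus > 1 && regexp("cms", AcctGroup, "i")',
     ["ClusterId", "ProcId", "RequestCpus"],
 )
 cores = sum(job["RequestCpus"] for job in running)
 print("Running CMS multicore jobs: %d (limit %d)" % (len(running), CMS_MULTICORE_LIMIT))
 print("Cores in use by those jobs: %d" % cores)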
Notable Changes made since the last meeting.
  • FTS3 "test" instance upgraded to 3.6.8
  • Castor central services updated to Castor version 2.1.16-13.
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
srm-lhcb.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 11/05/2017 10:00 | 11/05/2017 16:00 | 6 hours | Downtime while upgrading LHCb Castor instance to 2.1.16
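The downtime information above can also be pulled programmatically. Below is a hedged sketch against the public GOC DB programmatic interface; the endpoint, method name and XML tag names follow the standard GOCDB PI but are assumptions here and should be checked against the current GOCDB documentation.

 # List downtimes declared in the GOC DB for the RAL-LCG2 site.
 # Endpoint, method and XML tag names are assumptions based on the public GOCDB PI.
 import xml.etree.ElementTree as ET
 import requests
 
 GOCDB_PI = "https://goc.egi.eu/gocdbpi/public/"
 params = {"method": "get_downtime", "topentity": "RAL-LCG2", "ongoing_only": "no"}
 
 response = requests.get(GOCDB_PI, params=params, timeout=30)
 response.raise_for_status()
 
 for dt in ET.fromstring(response.text).findall("DOWNTIME"):
     print(" | ".join(str(dt.findtext(tag)) for tag in
                      ("HOSTNAME", "SEVERITY", "FORMATED_START_DATE",
                       "FORMATED_END_DATE", "DESCRIPTION")))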
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Pending - but not yet formally announced:

  • Update Castor (including SRMs) to version 2.1.16. Central nameserver done. Current plan: LHCb stager on Thursday 11th May. Others to follow.
  • Update Castor SRMs - CMS & GEN still to do. Problems seen with the SRM update mean these will wait until Castor 2.1.16 is rolled out.

Listing by category:

  • Castor:
    • Update SRMs to new version, including updating to SL6.
    • Update Castor to version 2.1.16 (ongoing)
    • Merge AtlasScratchDisk into larger Atlas disk pool.
  • Networking
    • Enable first services on production network with IPv6 now that the addressing scheme has been agreed. (Perfsonar is already working over IPv6.) A connectivity-check sketch follows this list.
  • Services
    • Put argus systems behind a load balancer to improve resilience.
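As a simple readiness check for the IPv6 item above, the following sketch verifies that a hostname publishes an IPv6 address and accepts a TCP connection over it. The hostname and port are placeholders, not a statement of which service will be enabled first.

 # Check that a host resolves over IPv6 (AAAA record) and accepts a TCP connection.
 import socket
 
 def check_ipv6(host, port, timeout=10):
     try:
         infos = socket.getaddrinfo(host, port, socket.AF_INET6, socket.SOCK_STREAM)
     except socket.gaierror:
         return False  # no IPv6 address published for this host
     for family, socktype, proto, _canon, sockaddr in infos:
         try:
             with socket.socket(family, socktype, proto) as sock:
                 sock.settimeout(timeout)
                 sock.connect(sockaddr)
                 return True
         except OSError:
             continue
     return False
 
 print(check_ipv6("perfsonar.example.ac.uk", 443))  # placeholder hostname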
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
srm-lhcb.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 09/05/2017 10:00 | 09/05/2017 12:22 | 2 hours and 22 minutes | Update central Castor services (including the nameserver component) and LHCb stager to version 2.1.16
All Castor (except LHCb) | SCHEDULED | OUTAGE | 09/05/2017 10:00 | 09/05/2017 12:21 | 2 hours and 21 minutes | Update central Castor services (including the nameserver component) to version 2.1.16
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
128180 | Green | Urgent | In Progress | 2017-05-05 | 2017-05-08 | | WLCG-IPv6 Tier-1 readiness
127968 | Green | Less Urgent | In Progress | 2017-04-27 | 2017-04-27 | MICE | RAL castor: not able to list directories and copy to
127967 | Green | Less Urgent | On Hold | 2017-04-27 | 2017-04-28 | MICE | Enabling pilot role for mice VO at RAL-LCG2
127612 | Red | Alarm | In Progress | 2017-04-08 | 2017-05-09 | LHCb | CEs at RAL not responding
127598 | Green | Urgent | In Progress | 2017-04-07 | 2017-04-07 | CMS | UK XRootD Redirector
127597 | Yellow | Urgent | Waiting for Reply | 2017-04-07 | 2017-05-04 | CMS | Check networking and xrootd RAL-CERN performance
127240 | Amber | Urgent | In Progress | 2017-03-21 | 2017-05-08 | CMS | Staging Test at UK_RAL for Run2
124876 | Red | Less Urgent | On Hold | 2016-11-07 | 2017-01-01 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2017-03-02 | | CASTOR at RAL not publishing GLUE 2.
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC ECHO = Atlas ECHO (Template 841); CMS HC = CMS HammerCloud

Day | OPS | Alice | Atlas | CMS | LHCb | Atlas ECHO | Atlas HC | Atlas HC ECHO | CMS HC | Comment
03/05/17 | 100 | 100 | 87 | 36 | 96 | 100 | 98 | 100 | 85 | CMS: glexec test failures; Others SRM test failures.
04/05/17 | 100 | 100 | 94 | 100 | 96 | 100 | 98 | 99 | 93 | SRM test failures.
05/05/17 | 100 | 100 | 96 | 100 | 100 | 100 | 100 | 100 | 100 | SRM test failures on GET ("unknown error")
06/05/17 | 100 | 100 | 81 | 100 | 100 | 100 | 100 | 100 | N/A | SRM test failures on GET ("unknown error")
07/05/17 | 100 | 100 | 85 | 96 | 100 | 100 | 100 | 100 | 100 | SRM test failures: Atlas - on GET ("unknown error"); CMS - Unable to issue PrepareToPut/Get request
08/05/17 | 100 | 100 | 98 | 100 | 100 | 100 | 100 | 100 | N/A | Single SRM test failure on GET ("unknown error")
09/05/17 | 100 | 100 | 77 | 55 | 87 | 100 | 90 | 98 | 100 | All: Castor planned outage for a couple of hours. Atlas: Plus sporadic SRM test failures (unknown error); CMS CE tests failed owing to xroot redirection failing.
Notes from Meeting.
  • The new version of FTS3, now installed on the "test" FTS instance (used by Atlas), no longer supports the SOAP API. There is still some access to our "production" FTS3 service via this API (mainly SNO+ and MICE). We will want to upgrade the production FTS3 service, and need the VOs to move to the REST API provided with the newer versions of FTS; a submission sketch using the REST bindings is given after these notes. We would like to set a deadline for the upgrade of the "production" instance, after which the SOAP API will no longer be supported.
  • The recent chiller replacements look like they will give a return on investment (in electricity costs) in around 5 years.
  • The delay in updating the LHCb stager and SRMs was caused by a problem with the repository mirrors, which blocked the re-installation of machines needed to upgrade the SRMs. This is now fixed, and the LHCb Castor stager and SRM updates are scheduled for tomorrow (11th May).
  • ECHO: Cluster healthy. Some issues with xrootd gateways are under investigation.
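Regarding the FTS3 note above: a hedged sketch of a transfer submission through the REST interface (rather than the deprecated SOAP API), using the fts-rest "easy" Python bindings. The endpoint and file URLs are placeholders, and the module and function names should be verified against the fts-rest version actually deployed; this is a sketch, not a prescription for how each VO must migrate.

 # Submit a single transfer via the FTS3 REST "easy" Python bindings (not SOAP).
 # Endpoint and URLs are placeholders; a valid grid proxy is assumed.
 import fts3.rest.client.easy as fts3
 
 ENDPOINT = "https://fts3-service.example.ac.uk:8446"        # placeholder REST endpoint
 source = "gsiftp://source-se.example.org/path/to/file"      # placeholder source URL
 destination = "gsiftp://dest-se.example.org/path/to/file"   # placeholder destination URL
 
 context = fts3.Context(ENDPOINT)            # picks up the user's grid proxy
 transfer = fts3.new_transfer(source, destination)
 job = fts3.new_job([transfer], verify_checksum=True)
 job_id = fts3.submit(context, job)
 print("Submitted FTS3 job:", job_id)
 print(fts3.get_job_status(context, job_id))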