Latest revision as of 15:53, 22 March 2017

RAL Tier1 Operations Report for 22nd March 2017

Review of Issues during the week 15th to 22nd March 2017.
  • There was a problem with the Atlas Castor instance on the evening of Wednesday 15th Mar. The on-call was contacted and Castor services were restarted to resolve the problem. The cause was a known bug that exhausts a particular database resource.
  • A crash of one of the five hypervisors in the Microsoft Hyper-V high-availability cluster caused a number of VMs to reboot overnight Thursday to Friday (16-17 Mar). A knock-on effect was that one of the Argus servers did not start cleanly, which affected the CMS glexec tests.
  • We have an ongoing problem with the SRM SAM tests for Atlas, which fail a lot of the time. We have confirmed this is not affecting Atlas operationally; it is only the tests that fail. We still have a GGUS ticket open with Atlas, as the test itself appears to be problematic.
  • There has been a large backlog of Atlas transfers for RAL queued in the FTS. The number is high because Atlas are doing reprocessing and pulling data back from tape. This high request rate is seen at other Tier1s, but we are coping less well than other sites (a sketch of how the backlog can be inspected follows this list).
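As a hedged illustration of how the FTS backlog can be inspected: the FTS3 REST interface provides a /jobs listing that can be filtered by destination storage element and job state. The host name, storage endpoint, proxy path and filter values below are illustrative assumptions, not details taken from this report.

# Sketch only: count queued/active FTS transfer jobs towards a destination SE.
# FTS host, SE name and proxy path are assumed values for illustration.
import requests

FTS_REST = "https://lcgfts3.gridpp.rl.ac.uk:8446"   # assumed FTS3 REST endpoint
DEST_SE = "srm://srm-atlas.gridpp.rl.ac.uk"         # assumed Atlas Castor SE
PROXY = "/tmp/x509up_u0"                            # assumed VOMS proxy file

resp = requests.get(
    FTS_REST + "/jobs",
    params={"dest_se": DEST_SE, "state_in": "SUBMITTED,ACTIVE"},
    cert=(PROXY, PROXY),                 # the proxy serves as both cert and key
    verify="/etc/grid-security/certificates",
)
resp.raise_for_status()
jobs = resp.json()
print("Queued/active FTS jobs towards %s: %d" % (DEST_SE, len(jobs)))

Run with a valid VOMS proxy; the returned job list can be broken down further (e.g. by VO or activity) if the backlog needs more diagnosis.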
Resolved Disk Server Issues
  • None
Current operational status and issues
  • We are still seeing failures of the CMS SAM tests against the SRM. These affect our CMS availability figures, although the failure rate is lower than it was a few weeks ago (a manual probe sketch follows).
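For context, the SAM SRM tests essentially perform simple metadata and transfer operations against the SRM endpoint. Below is a minimal, hedged sketch of a comparable manual probe using the gfal2 Python bindings; the endpoint and path are illustrative assumptions and a valid VOMS proxy is assumed.

# Sketch only: a manual SRM metadata probe, loosely mimicking what the SAM SRM
# tests exercise. Requires the gfal2 Python bindings and a valid VOMS proxy.
# The SURL below is illustrative and not taken from this report.
import gfal2

SURL = "srm://srm-cms.gridpp.rl.ac.uk/castor/ads.rl.ac.uk/prod/cms/store/"

ctx = gfal2.creat_context()     # gfal2 spells this 'creat_context'
try:
    st = ctx.stat(SURL)         # metadata lookup, similar to the SAM ls probe
    print("OK: mode=%o size=%d" % (st.st_mode, st.st_size))
except gfal2.GError as exc:
    print("SRM probe failed: %s" % exc)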
Ongoing Disk Server Issues
  • None
Limits on concurrent batch system jobs.
  • Atlas Pilot (Analysis) 1500
  • CMS Multicore 460
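The report does not say how these caps are enforced. As a hedged aside, assuming an HTCondor batch system, the current per-group running-job counts could be tallied as below and compared against the caps.

# Sketch only: tally running jobs per accounting group so the counts can be
# compared with the caps listed above. Assumes an HTCondor batch system and
# that condor_q is available on the machine where this runs.
import subprocess
from collections import Counter

out = subprocess.run(
    ["condor_q", "-allusers", "-constraint", "JobStatus == 2",
     "-af", "AccountingGroup"],
    capture_output=True, text=True, check=True,
).stdout

counts = Counter(line.strip() for line in out.splitlines() if line.strip())
for group, running in counts.most_common():
    print("%-40s %d" % (group, running))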
Notable Changes made since the last meeting.
  • Last week nine of the '14 generation disk servers (100 TB each) were deployed into AtlasDataDisk. (These are from the batch that was used as CEPH test servers.)
  • Work is ongoing to replace two of the chillers.
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
srm-lhcb.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 23/03/2017 11:00 | 23/03/2017 17:00 | 6 hours | Upgrade of Castor SRMs for LHCb to version 2.1.16-10
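As a hedged aside, downtimes declared in the GOC DB can also be retrieved programmatically via its public interface; the sketch below assumes the standard goc.egi.eu endpoint, the get_downtime method and the RAL-LCG2 site name.

# Sketch only: list downtimes declared in the GOC DB for a site via the public
# programmatic interface. Endpoint, method and field names follow the standard
# EGI GOC DB PI as recalled here; verify against the current PI documentation.
import requests
import xml.etree.ElementTree as ET

URL = "https://goc.egi.eu/gocdbpi/public/"
params = {"method": "get_downtime", "topentity": "RAL-LCG2", "ongoing_only": "no"}

resp = requests.get(URL, params=params, timeout=30)
resp.raise_for_status()

for dt in ET.fromstring(resp.text).findall("DOWNTIME"):
    print(dt.findtext("HOSTNAME"), dt.findtext("SEVERITY"),
          dt.findtext("START_DATE"), dt.findtext("END_DATE"),
          dt.findtext("DESCRIPTION"))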
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Pending - but not yet formally announced:

  • Update Castor SRMs starting with LHCb. (Announced for 23rd March.)
  • Chiller replacement - work ongoing.
  • Merge AtlasScratchDisk into larger Atlas disk pool.

Listing by category:

  • Castor:
    • Update SRMs to new version, including updating to SL6.
    • Bring some newer disk servers ('14 generation) into service, replacing some older ('12 generation) servers.
  • Databases
    • Removal of "asmlib" layer on Oracle database nodes. (Ongoing)
  • Networking
    • Enable the first services on the production network with IPv6 once the addressing scheme is agreed (see the IPv6 check sketch after this list).
  • Infrastructure:
    • Two of the chillers supplying the air-conditioning for the R89 machine room will be replaced.
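Relating to the IPv6 item above, here is a small, hedged sketch of the kind of readiness check that is useful once services are given AAAA records: confirm the record exists and that the service accepts connections over IPv6. The hostname and port are illustrative placeholders.

# Sketch only: check that a host publishes an AAAA record and accepts IPv6 TCP
# connections. Hostname and port are illustrative placeholders.
import socket

HOST, PORT = "fts3-test.gridpp.rl.ac.uk", 8446   # assumed host and port

try:
    infos = socket.getaddrinfo(HOST, PORT, socket.AF_INET6, socket.SOCK_STREAM)
except socket.gaierror:
    print("%s: no AAAA record published" % HOST)
else:
    addr = infos[0][4]
    with socket.socket(socket.AF_INET6, socket.SOCK_STREAM) as sock:
        sock.settimeout(10)
        try:
            sock.connect(addr)
            print("%s: reachable over IPv6 at %s port %d" % (HOST, addr[0], addr[1]))
        except OSError as exc:
            print("%s: AAAA record present but connection failed: %s" % (HOST, exc))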
Entries in GOC DB starting since the last report.

None

Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
127240 | Green | Urgent | In Progress | 2017-03-21 | 2017-03-21 | CMS | Staging Test at UK_RAL for Run2
127185 | Green | Urgent | In Progress | 2017-03-17 | 2017-03-17 | | WLCG-IPv6 readiness
126905 | Green | Less Urgent | Waiting Reply | 2017-03-02 | 2017-03-21 | solid | finish commissioning cvmfs server for solidexperiment.org
126184 | Yellow | Less Urgent | In Progress | 2017-01-26 | 2017-02-07 | Atlas | Request of inputs for new sites monitoring
124876 | Red | Less Urgent | On Hold | 2016-11-07 | 2017-01-01 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2017-03-02 | | CASTOR at RAL not publishing GLUE 2.
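In relation to ticket 117683 (GLUE 2 publishing for CASTOR), the sketch below shows one hedged way to check which GLUE 2 storage objects a site BDII publishes. It assumes the ldap3 Python library, the conventional BDII port 2170 and 'o=glue' base DN, and an illustrative hostname.

# Sketch only: query a site BDII for GLUE 2 storage-service objects (the
# subject of ticket 117683). The hostname is illustrative; port 2170 and the
# 'o=glue' base DN are the conventional GLUE 2 settings.
from ldap3 import ALL, Connection, Server

BDII_HOST = "site-bdii.gridpp.rl.ac.uk"    # assumed site BDII hostname

server = Server(BDII_HOST, port=2170, get_info=ALL)
conn = Connection(server, auto_bind=True)  # anonymous bind

conn.search(
    search_base="o=glue",
    search_filter="(objectClass=GLUE2StorageService)",
    attributes=["GLUE2ServiceID", "GLUE2ServiceType"],
)
if not conn.entries:
    print("No GLUE2StorageService objects published")
for entry in conn.entries:
    print(entry.entry_dn)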
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC ECHO = Atlas ECHO (Template 842); CMS HC = CMS HammerCloud

Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | Atlas HC ECHO | CMS HC | Comment
15/03/17 | 100 | 100 | 100 | 99 | 100 | 99 | 98 | 99 | Single SRM test failure (timeout)
16/03/17 | 100 | 99 | 100 | 99 | 100 | 79 | 72 | 99 | ALICE: Test failed with 'no compatible resources found in BDII'. CMS: Single SRM test failure
17/03/17 | 100 | 100 | 100 | 85 | 100 | 100 | 100 | 97 | Problem with glexec tests from CMS.
18/03/17 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
19/03/17 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
20/03/17 | 100 | 100 | 100 | 98 | 100 | 100 | 100 | 100 | SRM test failures on GET (User timeout).
21/03/17 | 100 | 100 | 100 | 98 | 100 | 100 | 100 | 99 | SRM test failures on GET (User timeout).
Notes from Meeting.
  • There has been a long-running problem whereby LHCb batch jobs writing to storage (Castor) saw a higher rate of failures than at other Tier1s. This problem is now regarded as solved: error rates seen at RAL are now lower and commensurate with those seen at other Tier1s.
  • The 2016 capacity hardware purchase (CPU and storage) has all been delivered and is in racks awaiting cabling.
  • Work to prepare ECHO for production is underway, covering operational procedures, the start of out-of-hours cover, etc.