Latest revision as of 09:30, 5 April 2017

RAL Tier1 Operations Report for 5th April 2017

Review of Issues during the week 29th March to 5th April 2017.
  • The LHCb Castor instance has had problems throughout the last week. Initially the new SRM version appeared to be causing a bottleneck; after that was fixed, the stager also appeared to be struggling. Work to resolve this is ongoing.
  • Some batch job submission errors have been seen by CMS and LHCb. These are not yet understood; investigation is ongoing.
  • Over the weekend there were problems with the Atlas Frontier systems. The Lyon site was also affected.
  • On Monday problems were reported on one of the ARC CEs (ARC-CE4) and its services were restarted.
Resolved Disk Server Issues
  • GDSS780 (LHCbDst - D1T0) had failed on Sunday (26th March) and was put back in service the following day. However, it crashed again on 29th March. Its disks were then swapped into a new chassis, which was brought into service (as GDSS780) on Monday 3rd April.
  • GDSS733 (LHCbDst - D1T0) was taken out of service on Thursday 30th March after appearing to behave badly in Castor. It was checked over by the Fabric Team, although nothing was found, and it was returned to service the following afternoon.
Current operational status and issues
  • We are still seeing failures of the CMS SAM tests against the SRM. These affect our CMS availability figures, although the failure rate is lower than it was a few weeks ago.
Ongoing Disk Server Issues
  • gdss673 (LHCb-Tape) was removed from production this morning (05/04/2017) due to a double disk failure.
Limits on concurrent batch system jobs.
  • Atlas Pilot (Analysis) 1500
  • CMS Multicore 460
  • LHCb 1000
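The caps above limit how many jobs each group may run concurrently. As a hypothetical sketch only (the real limits are enforced in the batch system configuration, and the group names here are invented for illustration), the check amounts to:

```python
# Hypothetical sketch of the per-group concurrency caps listed above.
# The group names are illustrative, not the batch system's real identifiers.
LIMITS = {"atlas_pilot_analysis": 1500, "cms_multicore": 460, "lhcb": 1000}

def can_start(group: str, running: int) -> bool:
    """True if starting one more job for `group` stays within its cap.
    Groups without an entry are treated as uncapped."""
    return running < LIMITS.get(group, float("inf"))

print(can_start("cms_multicore", 459))  # True: one slot still free under the 460 cap
print(can_start("cms_multicore", 460))  # False: cap reached
```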
Notable Changes made since the last meeting.
  • Six disk servers from the 2014 batch have been installed in CMSDisk.
  • Perfsonar nodes are now working over IPv6 on the production Tier1 network.
  • The second chiller to be replaced was craned out last Thursday and the replacement lifted into place.
  • Out of Hours cover for the CEPH ECHO service is being piloted.
Declared in the GOC DB
  • None
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Pending - but not yet formally announced:

  • Update Castor SRMs - CMS & GEN still to do. This is awaiting a full understanding of the problem seen with LHCb.
  • Chiller replacement - work ongoing.
  • Merge AtlasScratchDisk into larger Atlas disk pool.

Listing by category:

  • Castor:
    • Update SRMs to new version, including updating to SL6.
    • Bring some newer disk servers ('14 generation) into service, replacing some older ('12 generation) servers.
  • Networking
    • Enable first services on the production network with IPv6 once the addressing scheme is agreed.
  • Infrastructure:
    • Two of the chillers supplying the air-conditioning for the R89 machine room are being replaced.
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
srm-lhcb.gridpp.rl.ac.uk | UNSCHEDULED | OUTAGE | 31/03/2017 16:00 | 01/04/2017 10:43 | 18 hours and 43 minutes | There are problems with the LHCb Castor instance.
srm-atlas.gridpp.rl.ac.uk | UNSCHEDULED | WARNING | 29/03/2017 09:00 | 29/03/2017 14:12 | 5 hours and 12 minutes | upgrading Atlas srm software to 2.1.16
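The Duration column is simply the elapsed time between the Start and End stamps. As a quick sanity check on the srm-lhcb entry (illustrative Python, not part of the report tooling):

```python
from datetime import datetime

# Recompute the srm-lhcb outage duration from its GOC DB timestamps.
FMT = "%d/%m/%Y %H:%M"
start = datetime.strptime("31/03/2017 16:00", FMT)
end = datetime.strptime("01/04/2017 10:43", FMT)

total_minutes = int((end - start).total_seconds()) // 60
hours, minutes = divmod(total_minutes, 60)
print(f"{hours} hours and {minutes} minutes")  # 18 hours and 43 minutes
```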
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
127388 | Green | Less urgent | In Progress | 2017-03-29 | 2017-04-03 | LHCb | [FATAL] Connection error for some file
127240 | Green | Urgent | In Progress | 2017-03-21 | 2017-03-27 | CMS | Staging Test at UK_RAL for Run2
126905 | Green | Less Urgent | Waiting Reply | 2017-03-02 | 2017-04-03 | solid | finish commissioning cvmfs server for solidexperiment.org
126184 | Amber | Less Urgent | In Progress | 2017-01-26 | 2017-02-07 | Atlas | Request of inputs for new sites monitoring
124876 | Red | Less Urgent | On Hold | 2016-11-07 | 2017-01-01 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2017-03-02 | | CASTOR at RAL not publishing GLUE 2.
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC ECHO = Atlas ECHO (Template 841); CMS HC = CMS HammerCloud

Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | Atlas HC ECHO | CMS HC | Comment
29/03/17 | 100 | 100 | 100 | 97 | 83 | 100 | 99 | 100 | SRM test failures
30/03/17 | 100 | 100 | 100 | 100 | 88 | 100 | 100 | 100 | SRM test failures
31/03/17 | 100 | 100 | 100 | 98 | 50 | 96 | 100 | 100 | SRM test failures
01/04/17 | 100 | 100 | 67 | 100 | 79 | 100 | 100 | 100 | Atlas: Missing data; LHCb: SRM test failures
02/04/17 | 100 | 100 | 100 | 98 | 100 | 100 | 89 | 100 | SRM test failures
03/04/17 | 100 | 100 | 100 | 97 | 82 | 98 | 94 | 99 | SRM test failures
04/04/17 | 100 | 100 | 100 | 97 | 96 | 99 | 100 | 100 | SRM test failures
Notes from Meeting.
  • None yet