Latest revision as of 12:48, 21 December 2016

RAL Tier1 Operations Report for 14th December 2016

Review of Issues during the week 7th to 14th December 2016.
  • There was a problem with the Atlas Frontier service on Wednesday (7th), caused by excess load from a particular Atlas user. The services on the squid systems needed several restarts through the day and evening.
  • Since yesterday (Tuesday 13th Dec) we have been seeing high load on CMSTape in Castor and are failing SAM tests as a result.
Resolved Disk Server Issues
  • GDSS657 (lhcbRawRdst - D0T1) failed on Saturday morning, 10th Dec. It was put back into service read-only later that day and the eight files awaiting migration to tape were flushed off. The server was then taken down on Monday (12th) for further investigation, which was transparent to the VO. No faults were found and the server was returned to service yesterday (13th Dec).
Current operational status and issues
  • LHCb are seeing a low but persistent rate of failures when copying the results of batch jobs to Castor. A further problem sometimes occurs when these (failed) writes are then attempted to storage at other sites.
  • We had been reporting a problem with intermittent, low-level, load-related packet loss over external connections. We have been tracking this, particularly since the replacement of the UKLight router. The rate of packet loss has reduced and we now conclude there is no longer a significant problem; loss rates seen are comparable with those at other sites.
Ongoing Disk Server Issues
  • None
Notable Changes made since the last meeting.
  • On Monday (5th Dec) the bulk of the HPE Worker Nodes (the second tranche of last year's procurement) were reinstalled with SL6 and put into production. This follows a period of testing with another configuration in readiness for CEPH deployment. (This item was held over from last week's report.)
  • On Thursday (8th Dec) the small LhcbUser disk pool was merged into the larger LhcbDst pool.
  • FTS database adjustments were made as requested by ATLAS.
  • (Ongoing at the time of the meeting: firmware updates of the RAID cards on the Clustervision '13 batch of disk servers.)
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
Castor CMS instance | SCHEDULED | OUTAGE | 31/01/2017 10:00 | 31/01/2017 16:00 | 6 hours | Castor 2.1.15 Upgrade. Only affecting CMS instance. (CMS stager component being upgraded).
Castor GEN instance | SCHEDULED | OUTAGE | 26/01/2017 10:00 | 26/01/2017 16:00 | 6 hours | Castor 2.1.15 Upgrade. Only affecting GEN instance. (GEN stager component being upgraded).
Castor Atlas instance | SCHEDULED | OUTAGE | 24/01/2017 10:00 | 24/01/2017 16:00 | 6 hours | Castor 2.1.15 Upgrade. Only affecting Atlas instance. (Atlas stager component being upgraded).
Castor LHCb instance | SCHEDULED | OUTAGE | 17/01/2017 10:00 | 17/01/2017 16:00 | 6 hours | Castor 2.1.15 Upgrade. Only affecting LHCb instance. (LHCb stager component being upgraded).
All Castor (all SRM endpoints) | SCHEDULED | OUTAGE | 10/01/2017 10:00 | 10/01/2017 16:00 | 6 hours | Castor 2.1.15 Upgrade. Upgrade of Nameserver component. All instances affected.
gridftp.echo.stfc.ac.uk, s3.echo.stfc.ac.uk, xrootd.echo.stfc.ac.uk | SCHEDULED | OUTAGE | 15/12/2016 09:00 | 15/12/2016 17:00 | 8 hours | Re-install of Echo Cluster
srm-atlas.gridpp.rl.ac.uk, srm-cms-disk.gridpp.rl.ac.uk, srm-lhcb.gridpp.rl.ac.uk | SCHEDULED | WARNING | 14/12/2016 10:00 | 14/12/2016 17:00 | 7 hours | Warning while some disk servers have rolling firmware updates and are rebooted. Temporary loss of access to files.
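
The table above is maintained in the GOC DB. For anyone wanting to pull the same information programmatically, the sketch below (Python) queries the public GOCDB programmatic interface for downtimes declared against the site. This is a minimal sketch under assumptions: the endpoint URL, the get_downtime method name and the topentity parameter should be checked against the GOCDB PI documentation, and no particular XML element names are relied upon.

  # Minimal sketch, assuming the public GOCDB programmatic interface exposes a
  # "get_downtime" method filtered by site name ("topentity"); verify these
  # parameter names and the endpoint against the GOCDB PI documentation.
  import requests
  import xml.etree.ElementTree as ET

  GOCDB_PI = "https://goc.egi.eu/gocdbpi/public/"   # assumed public endpoint

  def list_downtimes(site="RAL-LCG2"):
      """Print one line per downtime declared against the given site."""
      resp = requests.get(GOCDB_PI,
                          params={"method": "get_downtime", "topentity": site},
                          timeout=30)
      resp.raise_for_status()
      root = ET.fromstring(resp.content)
      for downtime in root:   # one child element per declared downtime
          # Print whatever fields the PI returns rather than assuming tag names.
          fields = {c.tag: (c.text or "").strip() for c in downtime}
          print("; ".join(f"{k}={v}" for k, v in sorted(fields.items())))

  if __name__ == "__main__":
      list_downtimes()
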
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Pending - but not yet formally announced:

  • Merge AtlasScratchDisk into larger Atlas disk pool.

Listing by category:

  • Castor:
    • Update to Castor version 2.1.15. Dates announced via GOC DB for early 2017.
    • Update SRMs to new version, including updating to SL6. This will be done after the Castor 2.1.15 update.
  • Fabric
    • Firmware updates on older disk servers.
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
srm-atlas.gridpp.rl.ac.uk, srm-cms-disk.gridpp.rl.ac.uk, srm-lhcb.gridpp.rl.ac.uk | SCHEDULED | WARNING | 14/12/2016 10:00 | 14/12/2016 17:00 | 7 hours | Warning while some disk servers have rolling firmware updates and are rebooted. Temporary loss of access to files.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
125553 | Green | Urgent | In Progress | 2016-12-13 | 2016-12-13 | CMS | Transfers failing from T1_UK_RAL_Buffer
125551 | Green | Urgent | In Progress | 2016-12-13 | 2016-12-13 | CMS | T1_UK_RAL SAM3_SRM Critical > 2h
125480 | Green | Less Urgent | In Progress | 2016-12-09 | 2016-12-12 | | total Physical and Logical CPUs values
125157 | Green | Less Urgent | In Progress | 2016-11-24 | 2016-12-07 | | Creation of a repository within the EGI CVMFS infrastructure
124876 | Green | Less Urgent | On Hold | 2016-11-07 | 2016-11-21 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
124606 | Red | Top Priority | Waiting for Reply | 2016-10-24 | 2016-12-09 | CMS | Consistency Check for T1_UK_RAL
124478 | Green | Less Urgent | In Progress | 2016-11-18 | 2016-12-13 | NA62 | Jobs submitted via RAL WMS stuck in state READY forever and ever and ever
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2016-12-07 | | CASTOR at RAL not publishing GLUE 2. Plan to revisit week starting 19th.
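
As a quick summary of the snapshot above, the short Python sketch below tallies the eight open tickets by state and by urgency; the ticket data is copied directly from the table.

  # Open GGUS tickets copied from the snapshot above: (ID, state, urgency).
  from collections import Counter

  tickets = [
      ("125553", "In Progress", "Urgent"),
      ("125551", "In Progress", "Urgent"),
      ("125480", "In Progress", "Less Urgent"),
      ("125157", "In Progress", "Less Urgent"),
      ("124876", "On Hold", "Less Urgent"),
      ("124606", "Waiting for Reply", "Top Priority"),
      ("124478", "In Progress", "Less Urgent"),
      ("117683", "On Hold", "Less Urgent"),
  ]

  print("By state:  ", dict(Counter(s for _, s, _ in tickets)))
  # -> {'In Progress': 5, 'On Hold': 2, 'Waiting for Reply': 1}
  print("By urgency:", dict(Counter(u for _, _, u in tickets)))
  # -> {'Urgent': 2, 'Less Urgent': 5, 'Top Priority': 1}
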
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 808); CMS HC = CMS HammerCloud

Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment
07/12/16 | 100 | 100 | 100 | 98 | 100 | N/A | 100 | Single SRM test failure: User timeout over
08/12/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 |
09/12/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 |
10/12/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 |
11/12/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 |
12/12/16 | 100 | 100 | 100 | 97 | 100 | N/A | 100 | Some user timeout failures on SRM tests.
13/12/16 | 100 | 100 | 100 | 47 | 100 | N/A | 100 | Load problem on CMS_Tape causing multiple test failures.
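
For reference, the sketch below (Python, figures copied from the table above, with the N/A HammerCloud column skipped) computes the simple weekly average of the daily availability percentages for each VO. This is only an unweighted mean of the seven daily values, not the official availability calculation.

  # Daily availability percentages for 07-13 Dec 2016, copied from the table.
  daily = {
      "OPS":   [100, 100, 100, 100, 100, 100, 100],
      "Alice": [100, 100, 100, 100, 100, 100, 100],
      "Atlas": [100, 100, 100, 100, 100, 100, 100],
      "CMS":   [ 98, 100, 100, 100, 100,  97,  47],
      "LHCb":  [100, 100, 100, 100, 100, 100, 100],
  }

  for vo, values in daily.items():
      avg = sum(values) / len(values)   # unweighted mean of the daily figures
      print(f"{vo}: {avg:.1f}%")        # CMS comes out at about 91.7%
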
Notes from Meeting.
  • The meeting did not take place as there were no VO representatives present.