RAL Tier1 Operations Report for 10th January 2018

Review of Issues during the week 3rd January 2018 to 10th January 2018
  • Disk server gdss745 (AtlasDataDisk - D1T0) has failed with the loss of all data on the server. The problem was triggered by a failed drive; errors seen on other disk drives while this one was rebuilding led to the loss of the RAID6 array (RAID6 tolerates two concurrent drive failures, so further drive errors during a rebuild can be fatal). There were around 960,000 files on the server, around half of which were unique. A post-mortem investigation of this incident will be carried out; a sketch of how the unique-file count is estimated follows this list.
  • There was a problem with the Castor GEN instance yesterday morning (Tuesday 9th). Staff restarted some of the Castor processes and the service was restored by the end of the morning.
  • Patching for recent security issues is ongoing.
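
The scale of the gdss745 loss can be cross-checked against a file catalogue: a file on the failed server is unrecoverable only if no other server holds a replica. Below is a minimal sketch of that check with made-up file names and catalogue contents; it is not the Tier1's actual tooling.

    # Sketch: count "unique" files on a failed server, i.e. files with no
    # replica elsewhere. The file list and catalogue here are hypothetical.
    def count_unique(lost_server_files, replica_catalogue, failed="gdss745"):
        unique = 0
        for path in lost_server_files:
            replicas = replica_catalogue.get(path, set())
            if not (replicas - {failed}):   # no copy outside the failed server
                unique += 1
        return unique

    lost = ["atlas/f1", "atlas/f2", "atlas/f3", "atlas/f4"]
    catalogue = {
        "atlas/f1": {"gdss745"},             # only copy -> lost for good
        "atlas/f2": {"gdss745", "gdss700"},  # replica survives elsewhere
        "atlas/f3": {"gdss745"},             # only copy -> lost for good
        "atlas/f4": {"gdss745", "gdss701"},  # replica survives elsewhere
    }
    print(count_unique(lost, catalogue))     # -> 2, i.e. half are unique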
Current operational status and issues
  • Ongoing security patching.
Resolved Castor Disk Server Issues
  • gdss736 (LHCb - D1T0) has been rebuilt and returned to production.
  • gdss756 (CMS - D1T0) - the server crashed following a double disk failure, with drives on ports 3, 8, 24 and 28 affected.
Ongoing Castor Disk Server Issues
  • None
Limits on concurrent batch system jobs.
  • CMS Multicore 550
Notable Changes made since the last meeting.
  • The termination of the WMS service had already been announced, with a drain (i.e. no longer accepting new jobs) planned to start on 1st February. However, as the WMS is not currently being used by VOs and security patches need to be applied urgently, the drain was brought forward and started this morning (10th January). A sketch of the drain pattern follows this list.
  • Other security patching done or underway.
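
For readers unfamiliar with the term, a drain simply means the service refuses new work while letting already-accepted work run to completion. A minimal illustrative sketch of that pattern follows; it is not the gLite WMS's actual mechanism.

    # Drain pattern: refuse new submissions, let queued work complete.
    import queue

    class DrainableService:
        def __init__(self):
            self.jobs = queue.Queue()
            self.draining = False

        def submit(self, job):
            if self.draining:
                raise RuntimeError("draining: new jobs refused")
            self.jobs.put(job)

        def start_drain(self):
            self.draining = True

        def run_remaining(self):
            while not self.jobs.empty():     # accepted work still runs
                print("finished", self.jobs.get())

    svc = DrainableService()
    svc.submit("job-1")
    svc.start_drain()
    svc.run_remaining()      # job-1 completes
    # svc.submit("job-2")    # would raise: the drain refuses new jobs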
Entries in GOC DB starting since the last report.

No downtime scheduled in the GOCDB between 2017-12-20 and 2018-01-03

Declared in the GOC DB
Service                                           | Scheduled? | Outage/At Risk | Start            | End              | Duration        | Reason
WMS Service: lcglb01, lcglb02, lcgwms04, lcgwms05 | SCHEDULED  | OUTAGE         | 12/01/2018 10:00 | 19/01/2018 12:00 | 7 days, 2 hours | WMS Decommissioning RAL Tier1
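
The declared duration follows directly from the start and end times in the table; a quick arithmetic check:

    # Verify the declared outage duration from its start and end times.
    from datetime import datetime

    start = datetime(2018, 1, 12, 10, 0)
    end = datetime(2018, 1, 19, 12, 0)
    print(end - start)   # 7 days, 2:00:00 -> "7 days, 2 hours" as declared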
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Ongoing or Pending - but not yet formally announced:

Listing by category:

  • Castor:
    • Update systems to use SL7, configured by Quattor/Aquilon (tape servers done).
    • Move to generic Castor headnodes.
  • Echo:
    • Update to next CEPH version ("Luminous").
  • Networking
    • Extend the number of services on the production network with IPv6 dual stack. (Done for Perfsonar, FTS3, all squids and the CVMFS Stratum-1 servers).
  • Services
    • Internal: DNS servers will be rolled out within the Tier1 network.
  • Infrastructure
    • Testing of power distribution boards in the R89 machine room is being scheduled for late July / early August. The effect of this on our services is being discussed.
Open GGUS Tickets (Snapshot during morning of meeting)
Ticket-ID | Type | VO      | Notified Site | Resp. Unit | Status                                                                         | Priority    | Creation            | Last Update         | ToI                | Subject
132748    | USER | ops     | RAL-LCG2      | NGI_UK     | in progress                                                                    | less urgent | 2018-01-08 14:45:00 | 2018-01-09 09:58:00 | Operations         | [Rod Dashboard] Issues detected at RAL-LCG2
132712    | USER | other   | RAL-LCG2      | NGI_UK     | in progress (assigned to lcg-support@gridpp.rl.ac.uk)                          | less urgent | 2018-01-04 16:22:00 | 2018-01-08 13:35:00 | Other              | support for the hyperk VO (RAL-LCG2)
132589    | TEAM | lhcb    | RAL-LCG2      | NGI_UK     | in progress                                                                    | very urgent | 2017-12-21 06:45:00 | 2018-01-03 15:21:00 | Local Batch System | Killed pilots at RAL
131815    | USER | t2k.org | RAL-LCG2      | NGI_UK     | in progress                                                                    | less urgent | 2017-11-13 14:42:00 | 2017-12-01 19:30:00 | Storage Systems    | Extremely long download times for T2K files on tape at RAL
127597    | USER | cms     | RAL-LCG2      | NGI_UK     | on hold (assigned to lcg-support@gridpp.rl.ac.uk; shared with sexton@fnal.gov) | urgent      | 2017-04-07 10:34:00 | 2017-10-05 09:14:00 | File Transfer      | Check networking and xrootd RAL-CERN performance
124876    | USER | ops     | RAL-LCG2      | NGI_UK     | on hold (assigned to lcg-support@gridpp.rl.ac.uk)                              | less urgent | 2016-11-07 12:06:00 | 2017-11-13 16:55:00 | Operations         | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
117683    | USER | none    | RAL-LCG2      | NGI_UK     | on hold (assigned to lcg-support@gridpp.rl.ac.uk)                              | less urgent | 2015-11-18 11:36:00 | 2018-01-03 15:26:00 | Information System | CASTOR at RAL not publishing GLUE 2
Availability Report
Day      | OPS | Alice | Atlas | CMS | LHCb | Atlas Echo | Comment
03/01/18 | 100 | 100   | 100   | 100 | 100  | 100        |
04/01/18 | 100 | 100   | 100   | 100 | 100  | 100        |
05/01/18 | 100 | 100   | 100   | 100 | 100  | 100        |
06/01/18 | 100 | 100   | 100   | 100 | 100  | 100        |
07/01/18 | 100 | 100   | 100   | 100 | 100  | 100        |
08/01/18 | 100 | 100   | 100   | 100 | 100  | 100        |
09/01/18 | 100 | 100   | 100   | 100 | 100  | 100        |
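
Every figure in the table above is 100%, so the weekly average is trivially 100% per column. For completeness, a short script of the kind that could compute such averages (values copied in by hand rather than pulled from the monitoring source):

    # Column averages for the availability table above.
    cols = ["OPS", "Alice", "Atlas", "CMS", "LHCb", "Atlas Echo"]
    rows = {
        "03/01/18": [100, 100, 100, 100, 100, 100],
        "04/01/18": [100, 100, 100, 100, 100, 100],
        "05/01/18": [100, 100, 100, 100, 100, 100],
        "06/01/18": [100, 100, 100, 100, 100, 100],
        "07/01/18": [100, 100, 100, 100, 100, 100],
        "08/01/18": [100, 100, 100, 100, 100, 100],
        "09/01/18": [100, 100, 100, 100, 100, 100],
    }
    for i, name in enumerate(cols):
        avg = sum(v[i] for v in rows.values()) / len(rows)
        print(f"{name}: {avg:.1f}%")   # 100.0% across the board this week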
Hammercloud Test Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_UCORE, Template 841); CMS HC = CMS HammerCloud

Day      | Atlas HC | CMS HC | Comment
03/01/18 | 100      | 100    |
04/01/18 | 100      | 100    |
05/01/18 | 100      | 100    |
06/01/18 | 100      | 100    |
07/01/18 | 100      | 100    |
08/01/18 | 100      | 100    |
09/01/18 | 100      | 100    |


Notes from Meeting.
  • None yet.