RAL Tier1 Operations Report for 18th June 2014

Review of Issues during the week 11th to 18th June 2014.
  • On Friday (13th Jun) there was a problem with one of the FTS2 front-end systems; it was fixed by restarting the appropriate daemon.
  • There was a problem with cream-ce01, which started failing tests on Friday (13th Jun). It was put into a downtime and drained over the weekend, after which its database was reset. It was returned to service on Monday (16th Jun).
Resolved Disk Server Issues
  • None
Current operational status and issues
  • The CMS Castor stager update to version 2.1.14-13 took place yesterday (Tuesday) as planned. Initially after the upgrade there were some difficulties caused by the disk server re-balancer; however, these were understood and the service was restored within the announced outage. Nevertheless there have been some ongoing problems, and the service was placed in a 'warning' state in the GOC DB for the rest of the day and overnight. These problems are being followed up with the developers at CERN.
Ongoing Disk Server Issues
  • None
Notable Changes made this last week.
  • Yesterday (17th June) the CMS Castor stager was updated to version 2.1.14-13.
  • On Monday (16th June) LHCb access was removed from cream-ce02.
Declared in the GOC DB
  • None
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
  • Dates for the Castor 2.1.14 stager upgrades: GEN - Tue 24th June; LHCb - Thu 26th June; Atlas - Tue 1st July.
  • We are starting to plan the termination of the FTS2 service now that almost all use is on FTS3.

Listing by category:

  • Databases:
    • Switch LFC/FTS/3D to new Database Infrastructure.
  • Castor:
    • The Castor 2.1.14 upgrade is underway.
    • The CIP is compatible with Castor version 2.1.14. There is an issue reported by LHCb to be investigated.
  • Networking:
    • Move switches connecting the 2011 disk servers batches onto the Tier1 mesh network.
    • Make routing changes to allow the removal of the UKLight Router.
  • Fabric:
    • We are phasing out the use of the software server used by the small VOs.
    • Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC).
    • There will be circuit testing of the remaining (i.e. non-UPS) circuits in the machine room during 2014.
Entries in GOC DB starting between the 11th and 18th June 2014.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
srm-cms.gridpp.rl.ac.uk | UNSCHEDULED | WARNING | 17/06/2014 16:30 | 18/06/2014 12:00 | 19 hours and 30 minutes | Investigating some problems following the Castor 2.1.14 update of the CMS stager.
srm-cms.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 17/06/2014 10:00 | 17/06/2014 16:30 | 6 hours and 30 minutes | CMS Castor instance down for Castor 2.1.14 Stager Update.
cream-ce01.gridpp.rl.ac.uk | UNSCHEDULED | OUTAGE | 13/06/2014 16:30 | 16/06/2014 13:11 | 2 days, 20 hours and 41 minutes | Problems on this CE being investigated. (All other CEs OK).
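As a quick cross-check, the Duration column can be recomputed from the Start and End timestamps. Below is a minimal Python sketch (not part of the report; it assumes the dd/mm/yyyy hh:mm format used in the table above):

  from datetime import datetime

  def duration(start, end, fmt="%d/%m/%Y %H:%M"):
      # Elapsed time between two GOC DB-style timestamps.
      return datetime.strptime(end, fmt) - datetime.strptime(start, fmt)

  # Rows taken from the table above.
  print(duration("17/06/2014 16:30", "18/06/2014 12:00"))  # 19:30:00
  print(duration("17/06/2014 10:00", "17/06/2014 16:30"))  # 6:30:00
  print(duration("13/06/2014 16:30", "16/06/2014 13:11"))  # 2 days, 20:41:00
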
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
106280 | Green | Less Urgent | In Progress | 2014-06-17 | 2014-06-18 | Atlas | Lost file in RAL-LCG2
105571 | Amber | Less Urgent | In Progress | 2014-05-21 | 2014-06-02 | LHCb | BDII and SRM publish inconsistent storage capacity numbers
105405 | Red | Urgent | In Progress | 2014-05-14 | 2014-06-10 |  | please check your Vidyo router firewall configuration
105100 | Red | Less Urgent | In Progress | 2014-05-02 | 2014-05-30 | CMS | T1_UK_RAL Consistency Check (May14)
98249 | Red | Urgent | In Progress | 2013-10-21 | 2014-06-17 | SNO+ | please configure cvmfs stratum-0 for SNO+ at RAL T1
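The Level colours (Green/Amber/Red) broadly reflect how long a ticket has been open. As an illustration only (the exact colour thresholds are not stated in this report), the ticket ages at the time of the meeting can be computed from the Creation column:

  from datetime import date

  meeting = date(2014, 6, 18)  # date of this report
  created = {106280: date(2014, 6, 17), 105571: date(2014, 5, 21),
             105405: date(2014, 5, 14), 105100: date(2014, 5, 2),
             98249: date(2013, 10, 21)}
  for ticket, opened in sorted(created.items()):
      print(ticket, (meeting - opened).days, "days open")
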
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud

Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment
11/06/14 | 100 | 91.3 | 100 | 99.3 | 100 | 99 | 99 | Problem with Argus server.
12/06/14 | 100 | 100 | 100 | 100 | 100 | 100 | 97 |
13/06/14 | 100 | 100 | 95.3 | 100 | 100 | 98 | 99 | Failed several SRM (Put) tests with "Invalid argument".
14/06/14 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
15/06/14 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
16/06/14 | 100 | 100 | 100 | 100 | 100 | 99 | 100 |
17/06/14 | 100 | 100 | 100 | 72.9 | 100 | 96 | 100 | CMS Castor stager 2.1.14 upgrade.
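The CMS figure for 17/06/14 is consistent with the declared Castor outage: 6 hours 30 minutes of downtime in a 24-hour day leaves roughly 72.9% of the day available. A minimal check, assuming availability is simply the fraction of the day outside the declared outage (an approximation of how the monitoring actually derives the figure):

  # 6 h 30 min CMS Castor stager outage on 17/06/14 (see GOC DB table above)
  outage_minutes = 6 * 60 + 30
  availability = 100 * (1 - outage_minutes / 1440)   # 1440 minutes in a day
  print(round(availability, 1))                      # 72.9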