Latest revision as of 12:26, 11 June 2014

RAL Tier1 Operations Report for 11th June 2014

Review of Issues during the fortnight 28th May to 11th June 2014.
  • On Sunday 1st June there was a problem with the Argus server affecting job submissions. The on-call team resolved the problem.
  • During the afternoon of Thursday 5th June a problem on one of the network switch stacks stopped access to some disk servers that provide the caches for the Atlas and LHCb tape (D0T1) service classes. This was declared in the GOC DB. The problem lasted around 75 minutes.
  • There was a further problem with the Argus server affecting job submission this morning (11th June).
  • The problems with the WMSs reported in previous weeks are no longer present. The WMS developers were involved in investigating them, although the root cause was never fully understood.
Resolved Disk Server Issues
  • GDSS586 (AtlasDataDisk - D1T0) failed to restart after kernel/errata updates were applied during the Castor update on 10th June. It was returned to production just before this meeting (11th June).
Current operational status and issues
  • None
Ongoing Disk Server Issues
  • None
Notable Changes made this last week.
  • Yesterday (10th June) the Castor nameserver was updated to version 2.1.14-13. While Castor was down, the opportunity was taken to update the firmware in some network switches and apply kernel/errata updates to the Castor disk servers.
  • A new ARC CE, arc-ce04 has been brought into production.
  • ARC CEs arc-ce02 & arc-ce03 have been upgraded to version 4.1.0. (All ARC CEs now updated).
  • The host certificate on arc-ce02 has been updated and the new certificate is SHA-2 signed. There was an initial error during the application of this certificate, but that was corrected and the service is now running correctly with the new (SHA-2) certificate.
  • On the 2nd June LHCb access was removed from cream-ce01.
  • Today (11th June) a new tape controller system (ACSLS) is being installed. There have been some problems with the new server; however, the most recent test (last week) was successful.
Declared in the GOC DB
  • None
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
  • Dates for the Castor 2.1.14 stager upgrades: CMS- Tue 17th June; LHCb - Thu 19th June; GEN - Tue 24th June; Atlas - Thu 26th June.
  • We are starting to plan the termination of the FTS2 service now that almost all use is on FTS3.

Listing by category:

  • Databases:
    • Switch LFC/FTS/3D to new Database Infrastructure.
  • Castor:
    • The Castor 2.1.14 upgrade is underway. Some checks are ongoing before finally deciding whether the stagers will go directly to minor version 2.1.14-13 (rather than 2.1.14-11 as previously planned).
    • CIP-2.2.15 publishes online resources correctly for CASTOR 2.1.14 but not nearline (they appear as zero, due to a modification to how data is held in CASTOR). CIP will be updated. We can double check the Rajanian Problem at the same time which was understood and fixed earlier.
  • Networking:
    • Move switches connecting the 2011 disk server batches onto the Tier1 mesh network.
    • Make routing changes to allow the removal of the UKLight Router.
  • Fabric:
    • We are phasing out the use of the software server used by the small VOs.
    • Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC)
    • There will be circuit testing of the remaining (i.e. non-UPS) circuits in the machine room during 2014.
Entries in GOC DB starting between the 28th May and 11th June 2014.
Service Scheduled? Outage/At Risk Start End Duration Reason
Castor (all SRM endpoints) and batch (all CEs) SCHEDULED OUTAGE 10/06/2014 08:50 10/06/2014 12:45 3 hours and 55 minutes Castor and batch services down during upgrade of Castor Nameserver to version 2.1.14.
arc-ce04.gridpp.rl.ac.uk UNSCHEDULED OUTAGE 10/06/2014 07:05 10/06/2014 12:45 5 hours and 40 minutes Stopping work on new CE around upgrade of Castor Storage System.
Castor (all SRM endpoints) and batch (all CEs) SCHEDULED OUTAGE 10/06/2014 06:50 10/06/2014 08:50 2 hours Castor and batch services down for Networking Change before upgrade of Castor Nameserver to version 2.1.14.
cream-ce01.gridpp.rl.ac.uk SCHEDULED OUTAGE 07/06/2014 10:00 10/06/2014 12:00 3 days, 2 hours EMI-3 update 14 upgrade
srm-lhcb-tape.gridpp.rl.ac.uk UNSCHEDULED OUTAGE 05/06/2014 15:25 05/06/2014 16:42 1 hour and 17 minutes Problem affecting disk cache in front of tape.
srm-atlas.gridpp.rl.ac.uk UNSCHEDULED WARNING 05/06/2014 15:25 05/06/2014 16:41 1 hour and 16 minutes Problem affecting disk cache in front of tape. Non-tape service classes unaffected.
All srm endpoints SCHEDULED WARNING 04/06/2014 08:00 04/06/2014 17:00 9 hours Warning (At Risk) on tape systems during testing of new tape library controller.
arc-ce02.gridpp.rl.ac.uk, arc-ce03.gridpp.rl.ac.uk SCHEDULED WARNING 02/06/2014 10:00 02/06/2014 12:00 2 hours Upgrade arc-ce02 and arc-ce03 to v. 4.1.0.
arc-ce01.gridpp.rl.ac.uk SCHEDULED WARNING 28/05/2014 10:00 28/05/2014 12:00 2 hours Upgrade of ARC CE to version 4.1.0.
All Castor (all srm endpoints): srm-alice, srm-atlas, srm-biomed, srm-cert, srm-cms, srm-dteam, srm-hone, srm-ilc, srm-lhcb, srm-mice, srm-minos, srm-na62, srm-preprod, srm-snoplus, srm-superb, srm-t2k SCHEDULED WARNING 28/05/2014 09:30 28/05/2014 11:30 2 hours At Risk on Castor (All SRM endpoints) during small internal network change.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID Level Urgency State Creation Last Update VO Subject
106090 Green Urgent In Progress 2014-06-11 2014-06-11 Atlas RAL-LCG2 DATADISK is failing as a source
105571 Yellow Less Urgent In Progress 2014-05-21 2014-06-02 LHCb BDII and SRM publish inconsistent storage capacity numbers
105405 Red Urgent In Progress 2014-05-14 2014-06-10 please check your Vidyo router firewall configuration
105100 Red Less Urgent In Progress 2014-05-02 2014-05-30 CMS T1_UK_RAL Consistency Check (May14)
98249 Red Urgent Waiting Reply 2013-10-21 2014-06-02 SNO+ please configure cvmfs stratum-0 for SNO+ at RAL T1
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud

Day OPS Alice Atlas CMS LHCb Atlas HC CMS HC Comment
28/05/14 100 100 100 100 100 100 99
29/05/14 100 100 100 100 100 100 100
30/05/14 100 100 100 100 100 100 99
31/05/14 100 100 100 100 100 100 100
01/06/14 100 100 100 94.8 100 100 100 Problem with Argus server.
02/06/14 100 100 100 100 100 100 100
03/06/14 100 100 100 100 100 100 100
04/06/14 100 100 100 100 100 100 100
05/06/14 100 100 99.1 100 100 100 99 One SRM Delete test failure ([SRM_INVALID_PATH] No such file or directory)
06/06/14 100 100 100 100 100 100 98
07/06/14 100 100 100 100 100 100 100
08/06/14 100 100 100 100 100 100 100
09/06/14 100 100 100 100 100 99 99
10/06/14 86.2 77.5 75.3 75.3 75.3 91 99 Castor Nameserver 2.1.14 update.