
RAL Tier1 Operations Report for 25th February 2015

Review of Issues during the week 18th to 25th February 2015.
  • A test of the problematic Tier1 router was carried out on Thursday morning, 19th Feb.
  • Yesterday (Tuesday) there was an outage of part of Castor as some racks containing disk servers (the 2011 batches) were shut down while they were moved within the machine room to make room for new deliveries.
Resolved Disk Server Issues
  • GDSS757 (CMSDisk - D1T0) failed early on Sunday morning (22nd Feb). Following checks it was returned to service the following morning.
Current operational status and issues
  • We are running with a single router connecting the Tier1 network to the site network, rather than a resilient pair.
Ongoing Disk Server Issues
  • None
Notable Changes made this last week.
  • The 2011 disk servers were moved within the machine room to make space for new deliveries. As part of this the connections to these servers were migrated to the Tier1 mesh network.
  • There were a couple of breaks in network connectivity between 07:00 and 08:00 yesterday (Tuesday 24th Feb) while core site network switches were upgraded.
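
Breaks like those in the last item show up as short gaps in periodic reachability probes. As an illustrative aside only (this is not the Tier1's actual monitoring, and the target host and port below are hypothetical), a minimal Python sketch of such a probe, logging when a TCP endpoint stops answering and for how long:

  import socket
  import time
  from datetime import datetime

  TARGET_HOST = "example.gridpp.ac.uk"   # hypothetical probe target
  TARGET_PORT = 443
  INTERVAL_S = 10                        # seconds between probes

  def reachable(host, port, timeout=3):
      """True if a TCP connection to host:port succeeds within timeout."""
      try:
          with socket.create_connection((host, port), timeout=timeout):
              return True
      except OSError:
          return False

  down_since = None
  while True:
      now = datetime.now()
      if not reachable(TARGET_HOST, TARGET_PORT):
          if down_since is None:
              down_since = now               # start of an outage window
              print(f"{now:%H:%M:%S} connectivity lost")
      elif down_since is not None:
          print(f"{now:%H:%M:%S} restored after {now - down_since}")
          down_since = None
      time.sleep(INTERVAL_S)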
Declared in the GOC DB

None

Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
  • Investigate problems on the primary Tier1 router. Discussions with the vendor are ongoing.

Listing by category:

  • Databases:
    • Switch LFC/3D to new Database Infrastructure.
    • Update to Oracle 11.2.0.4
  • Castor:
    • Update SRMs to new version (includes updating to SL6).
    • Fix discrepancies found in some of the Castor database tables and columns. (The issue has no operational impact; a schema-comparison sketch follows this list.)
    • Update Castor to 2.1-14-latest.
  • Networking:
    • Resolve problems with primary Tier1 Router
    • Make routing changes to allow the removal of the UKLight Router.
    • Enable the RIP protocol for updating routing tables on the Tier1 routers. (Install patch to Router software).
  • Fabric:
    • Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC)
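
On the Castor database discrepancies noted above: Castor's databases run on Oracle, but the kind of check involved, comparing the columns a table actually has against a reference definition, can be sketched with Python's standard-library sqlite3 module. The table and column names below are invented for illustration:

  import sqlite3

  def columns(conn, table):
      """Ordered column names of a table, via PRAGMA table_info."""
      return [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]

  # Toy stand-ins for a reference schema and a live database that has drifted.
  ref = sqlite3.connect(":memory:")
  ref.execute("CREATE TABLE diskcopy (id INTEGER, filesize INTEGER, status INTEGER)")

  live = sqlite3.connect(":memory:")
  live.execute("CREATE TABLE diskcopy (id INTEGER, filesize INTEGER)")

  expected, actual = set(columns(ref, "diskcopy")), set(columns(live, "diskcopy"))
  print("missing columns:   ", expected - actual)   # {'status'}
  print("unexpected columns:", actual - expected)   # set()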
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
Many Castor services (srm-alice, srm-atlas, srm-biomed, srm-dteam, srm-ilc, srm-lhcb-tape, srm-lhcb, srm-mice, srm-na62, srm-pheno, srm-snoplus, srm-superb, srm-t2k) | SCHEDULED | OUTAGE | 24/02/2015 08:30 | 24/02/2015 17:06 | 8 hours and 36 minutes | Outage of some parts of the Castor storage while disk servers are moved to make space for new equipment.
Whole site | SCHEDULED | WARNING | 19/02/2015 07:45 | 19/02/2015 08:30 | 45 minutes | Warning during test of network router. We expect a 5 minute break in connectivity to site sometime during this window.
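
The Duration column is simply End minus Start; as a quick check of the first entry (a sketch using only the timestamps shown above):

  from datetime import datetime

  fmt = "%d/%m/%Y %H:%M"
  start = datetime.strptime("24/02/2015 08:30", fmt)
  end = datetime.strptime("24/02/2015 17:06", fmt)
  print(end - start)   # 8:36:00 -> 8 hours and 36 minutes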
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
111856 | Green | Less Urgent | In Progress | 2015-02-19 | 2015-02-24 | LHCb | Stalled LHCb jobs over night
111699 | Green | Less Urgent | In Progress | 2015-02-10 | 2015-02-18 | Atlas | gLExec hammercloud jobs keep failing at RAL-LCG2 & RALPP
109694 | Red | Urgent | Waiting Reply | 2014-11-03 | 2015-02-24 | SNO+ | gfal-copy failing for files at RAL
108944 | Red | Urgent | In Progress | 2014-10-01 | 2015-02-24 | CMS | AAA access test failing at T1_UK_RAL
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud

Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment
18/02/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
19/02/15 | 100 | 100 | 98.0 | 100 | 100 | 98 | 97 | Single SRM test failure
20/02/15 | 100 | 100 | 98.0 | 100 | 100 | 100 | 100 | Single SRM test failure
21/02/15 | 100 | 100 | 100 | 100 | 100 | 100 | 99 |
22/02/15 | 100 | 100 | 98.0 | 100 | 100 | 100 | 99 | Single SRM test failure
23/02/15 | 100 | 100 | 100 | 100 | 100 | 97 | 99 |
24/02/15 | 100 | 100 | 62.0 | 58.0 | 59.0 | 94 | 92 | Planned move of disk server racks.
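
The "Single SRM test failure" comments line up with the ~2% dips in the Atlas column. Assuming, purely for illustration (the actual test cadence is not stated in this report), that the availability tests sample every half hour, one failed sample out of 48 gives:

  samples_per_day = 48   # assumed half-hourly test cadence (not stated in the report)
  failed = 1             # a single SRM test failure
  availability = 100 * (samples_per_day - failed) / samples_per_day
  print(f"{availability:.1f}%")   # 97.9%, consistent with the reported 98.0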