RAL Tier1 Operations Report for 4th March 2015

Review of Issues during the week 25th February to 4th March 2015.
  • On Thursday (26th Feb) there was a problem with our Argus server that stopped batch jobs from starting for an hour or so.
  • A problem was uncovered by LHCb when a particular file could not be read from Castor. A manual fix-up enabled access to the file again. However, investigation showed that the problem dated from when the file was created a few weeks ago; this was the second occurrence of a rare bug in Castor.
  • Five files have been declared lost to CMS. These were all picked up by the checksum checker at various times over the last few weeks (a minimal sketch of this kind of check follows this list).
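
The checksum checker referred to above works, broadly, by recomputing file checksums on disk and comparing them with the values recorded when the files were written; a mismatch flags the file as corrupt or lost. The following is only a minimal illustrative sketch of that kind of check in Python, not the actual Castor tooling; the catalogue_checksum lookup and the file list are assumptions for the example.

  import zlib

  def adler32_hex(path, chunk_size=1024 * 1024):
      """Recompute the ADLER32 checksum of a file as 8 lower-case hex digits."""
      value = 1  # standard adler32 seed
      with open(path, "rb") as f:
          for chunk in iter(lambda: f.read(chunk_size), b""):
              value = zlib.adler32(chunk, value)
      return format(value & 0xFFFFFFFF, "08x")

  def find_mismatches(files, catalogue_checksum):
      """Return files whose on-disk checksum no longer matches the catalogue.

      catalogue_checksum is a hypothetical callable mapping a path to the
      checksum stored when the file was written.
      """
      return [p for p in files if adler32_hex(p) != catalogue_checksum(p).lower()]
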
Resolved Disk Server Issues
  • GDSS757 (CMSDisk - D1T0) failed again early on Friday morning (27th Feb). As this server has failed multiple times it has been put back into service read-only and is being drained.
Current operational status and issues
  • We are running with a single router connecting the Tier1 network to the site network, rather than a resilient pair.
Ongoing Disk Server Issues
  • None
Notable Changes made this last week.
  • DNs have been removed from the FTS3 monitoring messages (an illustrative sketch follows this list).
  • The cap on the maximum number of ALICE batch jobs has been raised from 3500 to 6000.
  • A new Stratum-1 system for the cern.ch domain has been set up.
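
One of the changes above removed user DNs (certificate Distinguished Names) from the FTS3 monitoring messages. Purely as an illustrative sketch of that kind of anonymisation, and not the actual FTS3 message schema (the field names below are assumptions), a decoded message could be scrubbed like this:

  import json

  # Assumed names of fields that might carry a user DN; illustrative only.
  DN_FIELDS = {"user_dn", "vo_user_dn"}

  def strip_dns(message):
      """Recursively drop DN-like fields from a decoded monitoring message."""
      if isinstance(message, dict):
          return {k: strip_dns(v) for k, v in message.items() if k not in DN_FIELDS}
      if isinstance(message, list):
          return [strip_dns(item) for item in message]
      return message

  raw = '{"job_id": "abc-123", "user_dn": "/DC=ch/DC=cern/CN=someone", "retries": 0}'
  print(json.dumps(strip_dns(json.loads(raw))))  # prints the message without the DN
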
Declared in the GOC DB


Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
Whole site | SCHEDULED | WARNING | 11/03/2015 10:30 | 11/03/2015 11:00 | 30 minutes | Warning on site access during firmware updates on a pair of network switches.
lcgfts3.gridpp.rl.ac.uk | SCHEDULED | WARNING | 05/03/2015 10:00 | 05/03/2015 12:00 | 2 hours | Warning on FTS3 service during upgrade to version v3.2.32.
Whole site | UNSCHEDULED | WARNING | 05/03/2015 07:45 | 05/03/2015 08:30 | 45 minutes | Warning following installation of a replacement network router.
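
The GOC DB entries above use DD/MM/YYYY HH:MM timestamps. As a small local cross-check (a sketch only, not a GOC DB interface), the declared durations can be verified against the start and end times:

  from datetime import datetime

  # (service, start, end, declared duration) copied from the table above.
  downtimes = [
      ("Whole site", "11/03/2015 10:30", "11/03/2015 11:00", "30 minutes"),
      ("lcgfts3.gridpp.rl.ac.uk", "05/03/2015 10:00", "05/03/2015 12:00", "2 hours"),
      ("Whole site", "05/03/2015 07:45", "05/03/2015 08:30", "45 minutes"),
  ]

  fmt = "%d/%m/%Y %H:%M"
  for service, start, end, declared in downtimes:
      delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
      print(f"{service}: {int(delta.total_seconds() // 60)} minutes (declared: {declared})")
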


Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
  • Investigate problems on the primary Tier1 router. Discussions with the vendor are ongoing. (Swap with replacement unit planned for tomorrow morning, 5th March).
  • Update firmware on resilient pair of switches in the Tier1 network - planned for Wednesday 11th March.
  • FTS3 update planned for Thursday 5th March.

Listing by category:

  • Databases:
    • Switch LFC/3D to new Database Infrastructure.
    • Update to Oracle 11.2.0.4
  • Castor:
    • Update SRMs to new version (includes updating to SL6).
    • Fix discrepancies that were found in some of the Castor database tables and columns. (The issue has no operational impact.)
    • Update Castor to 2.1-14-latest.
  • Networking:
    • Resolve problems with the primary Tier1 router.
    • Make routing changes to allow the removal of the UKLight router.
    • Enable the RIP protocol for updating routing tables on the Tier1 routers. (This requires installing a patch to the router software.)
  • Fabric:
    • Firmware updates on the remaining EMC disk arrays (Castor, FTS/LFC).
Entries in GOC DB starting since the last report.
  • None
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
111856 | Green | Less Urgent | Waiting Reply | 2015-02-19 | 2015-03-03 | LHCb | Stalled LHCb jobs over night
111699 | Green | Less Urgent | In Progress | 2015-02-10 | 2015-02-27 | Atlas | gLExec hammercloud jobs keep failing at RAL-LCG2 & RALPP
109694 | Red | Urgent | In Progress | 2014-11-03 | 2015-03-04 | SNO+ | gfal-copy failing for files at RAL
108944 | Red | Urgent | Waiting Reply | 2014-10-01 | 2015-03-03 | CMS | AAA access test failing at T1_UK_RAL
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud

Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment
25/02/15 | 100 | 100 | 100 | 100 | 100 | 100 | 99 |
26/02/15 | 100 | 100 | 100 | 100 | 100 | 100 | 99 |
27/02/15 | 100 | 100 | 100 | 100 | 100 | 100 | 95 |
28/02/15 | 100 | 100 | 100 | 100 | 100 | 100 | 95 |
01/03/15 | 100 | 100 | 100 | 100 | 100 | 0 | 98 |
02/03/15 | 100 | 100 | 100 | 100 | 100 | 100 | 97 |
03/03/15 | 100 | 100 | 100 | 100 | 100 | 98 | 100 |
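
As a quick worked example on the daily figures above (a local sketch only; it is not how the official availability figures are produced), the per-column averages for the period can be derived directly from the table:

  # Daily availability percentages from the table above, 25/02/15 to 03/03/15.
  columns = ["OPS", "Alice", "Atlas", "CMS", "LHCb", "Atlas HC", "CMS HC"]
  rows = [
      [100, 100, 100, 100, 100, 100, 99],
      [100, 100, 100, 100, 100, 100, 99],
      [100, 100, 100, 100, 100, 100, 95],
      [100, 100, 100, 100, 100, 100, 95],
      [100, 100, 100, 100, 100,   0, 98],
      [100, 100, 100, 100, 100, 100, 97],
      [100, 100, 100, 100, 100,  98, 100],
  ]

  for name, values in zip(columns, zip(*rows)):
      print(f"{name}: {sum(values) / len(values):.1f}% average over {len(values)} days")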