RAL Tier1 Operations Report for 4th February 2015

Review of Issues during the week 28th January to 4th February 2015.
  • During the afternoon of Monday (2nd Feb) there was a problem with some database services running on non-optimal nodes within the RAC. After these were moved to their correct locations, some Castor services were restarted to ensure their connections to the database were correct.
  • A single file was reported lost to CMS. This was an old (2012) file picked up during a consistency check (a sketch of such a check follows below).
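
A consistency check of this kind compares the Castor namespace (the files the catalogue says the site holds) with what is actually present on the disk servers: files that are catalogued but missing on disk get reported to the experiment as lost. A minimal sketch of that comparison, assuming plain-text dumps with one file path per line (the dump file names and format here are illustrative, not the actual Castor tooling):

```python
# Compare a namespace dump against a disk-server dump (illustrative formats:
# one file path per line). Files in the namespace but not on disk are "lost";
# files on disk but not in the namespace are "dark" data.

def load_listing(path):
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

namespace = load_listing("namespace_dump.txt")   # what the catalogue records
on_disk = load_listing("diskserver_dump.txt")    # what the disk servers hold

lost = namespace - on_disk
dark = on_disk - namespace

print(f"{len(lost)} lost file(s), {len(dark)} dark file(s)")
for name in sorted(lost):
    print("LOST:", name)
```
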
Resolved Disk Server Issues
  • GDSS778 (LHCbDst - D1T0) did not come back after being shut down for a reboot during scheduled work on Castor on Monday (2nd Feb). It was found to have a faulty disk drive. After being checked out it was returned to production during Tuesday afternoon (3rd) in read-only mode. It was put back into full production (read/write) this morning (4th Feb).
Current operational status and issues
  • We are running with a single router connecting the Tier1 network to the site network, rather than a resilient pair.
  • There was a problem overnight with (some) CMS Condor job submissions. This is not yet understood; a diagnostic sketch follows below.
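
A usual first diagnostic for a submission problem like this is to inspect the held jobs on the CE schedd and their hold reasons. A sketch using the HTCondor Python bindings (assuming the bindings are installed on the CE; JobStatus == 5 is the standard ClassAd value for held jobs):

```python
# List held jobs and their hold reasons on the local schedd (a diagnostic
# sketch; requires the HTCondor Python bindings on the CE).
import htcondor

schedd = htcondor.Schedd()
held = schedd.query('JobStatus == 5',
                    ['ClusterId', 'ProcId', 'Owner', 'HoldReason'])

print(f"{len(held)} held job(s)")
for ad in held:
    print(f"{ad['ClusterId']}.{ad['ProcId']} ({ad.get('Owner', '?')}): "
          f"{ad.get('HoldReason', 'no reason recorded')}")
```
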
Ongoing Disk Server Issues
  • None
Notable Changes made this last week.
  • A number of systems were rebooted to pick up recent security patches.
  • Application of Oracle patches to some database nodes.
  • The backup CERN OPN link has been connected via a new route. Traffic was routed over the link for 24 hours as a test.
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
All Castor (All SRMs) | SCHEDULED | WARNING | 11/02/2015 08:30 | 11/02/2015 15:00 | 6 hours and 30 minutes | Castor services At Risk during application of regular patches to back end database systems.
Castor Atlas and GEN instances | SCHEDULED | WARNING | 04/02/2015 08:30 | 04/02/2015 15:00 | 6 hours and 30 minutes | Castor services At Risk during application of regular patches to back end database systems.
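
Downtimes like these can also be retrieved programmatically from the GOC DB public interface. A sketch, assuming the GOCDB public API's get_downtime method; the endpoint and XML field names (including the FORMATED_* spelling) are assumptions to be checked against the GOCDB documentation:

```python
# Fetch declared downtimes for RAL-LCG2 from the GOC DB public API.
# Endpoint and XML element names are assumed from the GOCDB programmatic
# interface; verify against the current GOCDB documentation.
import urllib.request
import xml.etree.ElementTree as ET

URL = ("https://goc.egi.eu/gocdbpi/public/"
       "?method=get_downtime&topentity=RAL-LCG2")

with urllib.request.urlopen(URL) as resp:
    root = ET.parse(resp).getroot()

for dt in root.findall("DOWNTIME"):
    print(dt.get("CLASSIFICATION"),          # SCHEDULED / UNSCHEDULED
          dt.findtext("SEVERITY"),           # WARNING / OUTAGE
          dt.findtext("FORMATED_START_DATE"),
          dt.findtext("FORMATED_END_DATE"),
          dt.findtext("DESCRIPTION"))
```
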
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
  • Investigate problems on the primary Tier1 router. Discussions with the vendor are ongoing.
  • Physical move of 2011 disk server racks to make space for new delivery.

Listing by category:

  • Databases:
    • Application of Oracle PSU patches to database systems (ongoing)
    • A new database (Oracle RAC) has been set up to host the Atlas 3D database. This is updated from CERN via Oracle GoldenGate. This system is yet to be brought into use. (Currently Atlas 3D/Frontier still uses the OGMA database system, although this was also changed to update from CERN using Oracle GoldenGate.)
    • Switch LFC/3D to new Database Infrastructure.
    • Update to Oracle 11.2.0.4
  • Castor:
    • Update SRMs to new version (includes updating to SL6).
    • Fix discrepancies found in some of the Castor database tables and columns. (The issue has no operational impact.)
    • Update Castor to 2.1-14-latest.
  • Networking:
    • Resolve problems with primary Tier1 Router
    • Move switches connecting the 2011 disk server batches onto the Tier1 mesh network.
    • Make routing changes to allow the removal of the UKLight Router.
    • Enable the RIP protocol for updating routing tables on the Tier1 routers. (Install patch to Router software).
  • Fabric:
    • Physical move of 2011 disk server racks to make space for new delivery.
    • Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC)
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
Atlas and GEN Castor instances. | SCHEDULED | WARNING | 04/02/2015 08:30 | 04/02/2015 15:00 | 6 hours and 30 minutes | Castor services At Risk during application of regular patches to back end database systems.
All Castor (all SRMs). | SCHEDULED | OUTAGE | 02/02/2015 10:00 | 02/02/2015 12:21 | 2 hours and 21 minutes | Castor Storage System Stop for updates and reboots.
srm-atlas.gridpp.rl.ac.uk | SCHEDULED | WARNING | 28/01/2015 09:00 | 28/01/2015 12:03 | 3 hours and 3 minutes | Warning while patching Castor disk servers
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
111477 | Green | Less Urgent | In Progress | 2015-01-29 | 2015-02-02 | CMS | RAL Staging Test for Run2
111347 | Green | Urgent | In Progress | 2015-01-22 | 2015-02-03 | CMS | T1_UK_RAL Consistency Check (January 2015)
111120 | Green | Less Urgent | Waiting Reply | 2015-01-12 | 2015-01-22 | Atlas | large transfer errors from RAL-LCG2 to BNL-OSG2
109694 | Red | Urgent | On Hold | 2014-11-03 | 2015-01-20 | SNO+ | gfal-copy failing for files at RAL
108944 | Red | Urgent | In Progress | 2014-10-01 | 2015-01-28 | CMS | AAA access test failing at T1_UK_RAL
107935 | Red | Less Urgent | On Hold | 2014-08-27 | 2015-01-28 | Atlas | BDII vs SRM inconsistent storage capacity numbers
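
As context for the SNO+ gfal-copy ticket above: such failures are typically reproduced with a minimal transfer against the site SRM endpoint, so the underlying error message can be attached to the ticket. A sketch using the gfal2 Python bindings (the source SURL is a hypothetical placeholder, not the ticket's actual file):

```python
# Minimal reproduction of an SRM copy in the spirit of the SNO+ gfal-copy
# ticket. Requires the gfal2 Python bindings and a valid grid proxy.
# The source SURL below is a hypothetical placeholder, not a real file.
import gfal2

src = "srm://srm-snoplus.gridpp.rl.ac.uk/some/path/to/a/file"  # placeholder
dst = "file:///tmp/gfal_copy_test"

ctx = gfal2.creat_context()
params = ctx.transfer_parameters()
params.timeout = 300  # fail within five minutes rather than hanging

try:
    ctx.filecopy(params, src, dst)
    print("copy OK")
except gfal2.GError as err:
    # The error message and code are what gets attached to the GGUS ticket.
    print(f"copy failed: {err.message} (code {err.code})")
```
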
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud

Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment
28/01/15 | 100 | 100 | 98.9 | 100 | 100 | 100 | 99 | Single SRM test failure. Could not open connection to srm-atlas.gridpp.rl.ac.uk
29/01/15 | 100 | 100 | 100 | 100 | 100 | 98 | 100 |
30/01/15 | 100 | 100 | 100 | 100 | 100 | 98 | n/a |
31/01/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
01/02/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
02/02/15 | 97.5 | 100 | 92 | 93 | 94 | 100 | 48 | Castor outage for system reboots to pick up security patch.
03/02/15 | 100 | 100 | 100 | 73 | 96 | 100 | 85 | CMS: CE tests failed overnight. LHCb: Single SRM test failure.