RAL Tier1 Operations Report for 11th February 2015

Review of Issues during the week 4th to 11th February 2015.
  • SAM tests for the ARC CEs for CMS failed during 3-5 February. The problem was a failing CMS test on the ARC CEs rather than an issue within the Tier1.
Resolved Disk Server Issues
  • None
Current operational status and issues
  • We are running with a single router connecting the Tier1 network to the site network, rather than a resilient pair.
Ongoing Disk Server Issues
  • None
Notable Changes made this last week.
  • Application of Oracle patches to some database nodes (ongoing).
  • We are now fully using cgroups to control job memory limits on the batch farm (see the illustrative sketch below).
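
As context for the cgroups item above, here is a minimal sketch, assuming a cgroup v1 memory controller mounted at /sys/fs/cgroup/memory, of how a process in a batch job slot could inspect the memory limit applied to its cgroup. The mount point, paths and function names are illustrative assumptions, not a description of the actual batch farm configuration.

# Minimal sketch: cgroup v1 assumed, memory controller mounted at
# /sys/fs/cgroup/memory. Illustrative only, not the RAL batch farm setup.

def memory_cgroup_path(pid="self"):
    """Return the memory-controller cgroup path for a process, or None."""
    with open("/proc/%s/cgroup" % pid) as f:
        for line in f:
            # Each line looks like "<hierarchy-id>:<controllers>:<path>"
            _, controllers, path = line.strip().split(":", 2)
            if "memory" in controllers.split(","):
                return path
    return None

def memory_limit_bytes(cgroup_path, mount="/sys/fs/cgroup/memory"):
    """Read the hard memory limit set on the cgroup (cgroup v1 interface)."""
    with open(mount + cgroup_path + "/memory.limit_in_bytes") as f:
        return int(f.read().strip())

if __name__ == "__main__":
    path = memory_cgroup_path()
    if path:
        print("cgroup:", path)
        print("memory limit (bytes):", memory_limit_bytes(path))

Run inside a job slot, this prints the cgroup the job landed in and the hard limit (memory.limit_in_bytes) that the batch system has applied to it.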
Declared in the GOC DB
Service Scheduled? Outage/At Risk Start End Duration Reason
All Castor (All SRMs) SCHEDULED WARNING 11/02/2015 08:30 11/02/2015 15:00 6 hours and 30 minutes Castor services At Risk during application of regular patches to back end database systems.
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
  • Investigate problems on the primary Tier1 router. Discussions with the vendor are ongoing.
  • Physical move of 2011 disk server racks to make space for new delivery.

Listing by category:

  • Databases:
    • Application of Oracle PSU patches to database systems (ongoing)
    • A new database (Oracle RAC) has been set up to host the Atlas 3D database. This is updated from CERN via Oracle GoldenGate. This system is yet to be brought into use. (Currently Atlas 3D/Frontier still uses the OGMA database system, although this was also changed to update from CERN using Oracle GoldenGate.)
    • Switch LFC/3D to new Database Infrastructure.
    • Update to Oracle 11.2.0.4
  • Castor:
    • Update SRMs to new version (includes updating to SL6).
    • Fix discrepancies found in some of the Castor database tables and columns. (The issue has no operational impact.)
    • Update Castor to 2.1-14-latest.
  • Networking:
    • Resolve problems with the primary Tier1 router.
    • Move the switches connecting the 2011 disk server batches onto the Tier1 mesh network.
    • Make routing changes to allow the removal of the UKLight Router.
    • Enable the RIP protocol for updating routing tables on the Tier1 routers. (Install patch to Router software).
  • Fabric:
    • Physical move of 2011 disk server racks to make space for new delivery.
    • Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC)
Entries in GOC DB starting since the last report.
Service Scheduled? Outage/At Risk Start End Duration Reason
All Castor instances (all SRMs) SCHEDULED WARNING 11/02/2015 08:30 11/02/2015 15:00 6 hours and 30 minutes Castor services At Risk during application of regular patches to back end database systems.
Castor Atlas & GEN instances. SCHEDULED WARNING 04/02/2015 08:30 04/02/2015 15:00 6 hours and 30 minutes Castor services At Risk during application of regular patches to back end database systems.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID Level Urgency State Creation Last Update VO Subject
111699 Green Less Urgent In Progress 2015-02-10 2015-02-11 Atlas gLExec hammercloud jobs keep failing at RAL-LCG2 & RALPP
111120 Green Less Urgent Waiting Reply 2015-01-12 2015-02-09 Atlas large transfer errors from RAL-LCG2 to BNL-OSG2
109694 Red Urgent On hold 2014-11-03 2015-01-20 SNO+ gfal-copy failing for files at RAL
108944 Red Urgent In Progress 2014-10-01 2015-02-10 CMS AAA access test failing at T1_UK_RAL
107935 Red Less Urgent On Hold 2014-08-27 2015-02-09 Atlas BDII vs SRM inconsistent storage capacity numbers
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud. Figures are percentage availability for the day.

Day OPS Alice Atlas CMS LHCb Atlas HC CMS HC Comment
04/02/15 100 100 99 33 100 100 99 CMS: CMS Problem affecting all CMS ARC CEs; Atlas: Single SRM Test failure: [SRM_FAILURE] Error trying to locate the file in the disk cache
05/02/15 100 100 100 75 100 94 99 CMS Problem affecting all CMS ARC CEs; Atlas: Trf exit code 40. trans: Athena crash
06/02/15 100 100 100 98 100 92 99 CMS Problem affecting all CMS ARC CEs; Atlas: Trf exit code 40. trans: Athena crash
07/02/15 100 100 100 100 91.65 100 98 LHCb failed 2 tests on Saturday with the error 'No such file or directory'
08/02/15 100 100 100 100 100 100 100
09/02/15 100 100 100 100 100 100 99
10/02/15 100 100 100 100 100 100 97