Latest revision as of 09:31, 8 July 2015

RAL Tier1 Operations Report for 8th July 2015

Review of Issues during the week 1st to 8th July 2015.
  • Two issues that emerged during testing of the new worker node configuration have been identified and solved. One of these was seen by LHCb (and reported last week), the other by Atlas.
  • A problem transferring some files from FNAL to RAL (reported last week) has been resolved by FNAL.
  • The 'OPS' SUM test has been failing at RAL, as for many other sites, owing to a problem with the test itself.
Resolved Disk Server Issues
  • None
Current operational status and issues
  • We are running with a single router connecting the Tier1 network to the site network, rather than a resilient pair.
  • The post-mortem review of the network incident on the 8th April is being finalised.
  • The intermittent, low-level, load-related packet loss over the OPN to CERN is still being tracked.
Ongoing Disk Server Issues
  • None
Notable Changes made since the last meeting.
  • A UPS/Generator load test was made successfully last Wednesday (1st July).
  • A test of running the Castor re-balancer more aggressively was made. However, this revealed some internal operational problems and the re-balancer has been tuned back down again.
  • The change to a database procedure to speed up file open times within Castor was made to the GEN instance on Wednesday afternoon (1st July). This change has now been rolled out to all Castor instances.
Declared in the GOC DB
Service Scheduled? Outage/At Risk Start End Duration Reason
arc-ce05.gridpp.rl.ac.uk SCHEDULED OUTAGE 10/06/2015 12:00 08/07/2015 12:00 28 days This particular ARC CE will be decommissioned. (Four other ARC CEs remain in production).
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
  • Vendor intervention on Tier1 Router provisionally scheduled for Tuesday 4th August.

Listing by category:

  • Databases:
    • Switch LFC/3D to new Database Infrastructure.
    • Update to Oracle 11.2.0.4. This will affect all services that use Oracle databases: Castor, Atlas Frontier (LFC done)
  • Castor:
    • Update SRMs to new version (includes updating to SL6).
    • Update the Oracle databases behind Castor to version 11.2.0.4. This will require some downtime (see above).
    • Update disk servers to SL6.
    • Update to Castor version 2.1.15.
  • Networking:
    • Resolve problems with primary Tier1 Router. Need to schedule a roughly half-day outage for the vendor to carry out investigations.
    • Enable the RIP protocol for updating routing tables on the Tier1 routers. (Install patch to Router software).
    • Increase bandwidth of the link from the Tier1 into the RAL internal site network to 40Gbit.
    • Make routing changes to allow the removal of the UKLight Router.
    • Cabling/switch changes to the network in the UPS room to improve resilience.
  • Fabric:
    • Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC)
Entries in GOC DB starting since the last report.
Service Scheduled? Outage/At Risk Start End Duration Reason
Whole site SCHEDULED WARNING 01/07/2015 10:00 01/07/2015 11:30 1 hour and 30 minutes Warning on site during quarterly UPS/Generator load test.
arc-ce05.gridpp.rl.ac.uk SCHEDULED OUTAGE 10/06/2015 12:00 08/07/2015 12:00 28 days This particular ARC CE will be decommissioned. (Four other ARC CEs remain in production).
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID Level Urgency State Creation Last Update VO Subject
114786 Green Less Urgent On Hold 2015-07-02 2015-07-07 OPS [Rod Dashboard] Issue detected : egi.eu.lowAvailability-/RAL-LCG2@RAL-LCG2_Availability
113910 Green Less Urgent Waiting Reply 2015-05-26 2015-06-23 SNO+ RAL data staging
113836 Yellow Less Urgent In Progress 2015-05-20 2015-06-24 GLUE 1 vs GLUE 2 mismatch in published queues
108944 Red Less Urgent In Progress 2014-10-01 2015-06-16 CMS AAA access test failing at T1_UK_RAL
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud

Day OPS Alice Atlas CMS LHCb Atlas HC CMS HC Comment
01/07/15 0 100 100 100 100 100 100 Problem with SAM monitoring of ARC-CEs.
02/07/15 0 100 100 100 100 100 100 Problem with SAM monitoring of ARC-CEs.
03/07/15 0 100 100 100 100 90 100 Problem with SAM monitoring of ARC-CEs.
04/07/15 0 100 100 100 100 80 73 Problem with SAM monitoring of ARC-CEs.
05/07/15 0 100 100 100 100 87 n/a Problem with SAM monitoring of ARC-CEs.
06/07/15 0 100 100 100 100 98 100 Problem with SAM monitoring of ARC-CEs.
07/07/15 57.9 100 100 96 100 100 100 CMS:Single SRM Get test failure; OPS: Problem with SAM monitoring of ARC-CEs.