RAL Tier1 Operations Report for 14th January 2015

Review of Issues during the week 7th to 14th January 2015.
  • On the morning of Thursday 8th January there was a short (around 15 minute) break in access to the Tier1 network. Diagnostic tests were being run on the faulty router; after these the router unexpectedly restarted and took over as the master of the pair. Around half an hour later it failed again.
  • We have seen very high load on the LHCb SRMs through most of the week, with intermittent timeouts seen on the tests. The number of LHCb batch jobs has been restricted to try to reduce the problem. In addition, yesterday (Tuesday 13th) the SRMs were found not to be responding and needed a restart.
  • Electrical circuit testing is underway. Two older batches of worker nodes were drained out beforehand. Some problems have been found: access to some worker nodes was lost for an hour or so, and access was also lost to ten disk servers in AtlasDataDisk, which are currently in 'readonly' mode. A similar problem occurred this morning (14th Jan). These problems were caused by a PDU becoming overloaded and tripping off some of its ports.
Resolved Disk Server Issues
  • GDSS757 (CMSDisk D1T0) failed on Saturday 3rd January. It was returned to service during the afternoon of Wednesday 7th January following testing.
Current operational status and issues
  • We are running with a single router connecting the Tier1 network to the site network, rather than a resilient pair.
  • There is a problem with xroot access to the Castor GEN instance (not affecting ALICE).
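
To make the xroot item above more concrete, the sketch below shows one way such access could be probed from the command line using the standard xrdfs client. This is only an illustration: the endpoint and path names are hypothetical placeholders, not the actual RAL Castor GEN endpoints.

    # Minimal sketch: probe xroot access by running "xrdfs <host> stat <path>"
    # and reporting whether it succeeds. Endpoint and path are placeholders.
    import subprocess

    ENDPOINT = "castor-gen.example.ac.uk"     # hypothetical xroot endpoint
    TEST_PATH = "/castor/example.ac.uk/test"  # hypothetical path to stat

    def xroot_ok(endpoint, path, timeout=30):
        """Return True if 'xrdfs stat' of the path succeeds within the timeout."""
        try:
            result = subprocess.run(["xrdfs", endpoint, "stat", path],
                                    capture_output=True, timeout=timeout)
        except (OSError, subprocess.TimeoutExpired):
            return False
        return result.returncode == 0

    if __name__ == "__main__":
        print("xroot access OK" if xroot_ok(ENDPOINT, TEST_PATH) else "xroot access FAILED")
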
Ongoing Disk Server Issues
  • None
Notable Changes made this last week.
  • In preparation for the electrical safety checks being carried out this week, some "transfer switches" (enabling dual powering of some network switches and older systems) were installed during the second part of last week.
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
Whole site | SCHEDULED | WARNING | 20/01/2015 08:30 | 22/01/2015 18:00 | 2 days, 9 hours and 30 minutes | Warning during safety checks on power circuits in machine room. Testing carried out during working hours on each day.
All Castor (All SRM endpoints) | SCHEDULED | WARNING | 19/01/2015 09:15 | 19/01/2015 16:00 | 6 hours and 45 minutes | Warning during OS upgrade (SL6) on Castor Nameservers.
Whole site | SCHEDULED | WARNING | 13/01/2015 18:30 | 15/01/2015 18:00 | 1 day, 23 hours and 30 minutes | Warning during safety checks on power circuits in machine room. Testing carried out during working hours on each day.
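
As a sanity check, the Duration column above is simply the difference between the Start and End timestamps. A minimal sketch of that arithmetic, using two entries from the table:

    # Reproduce the "Duration" column from Start/End timestamps (DD/MM/YYYY HH:MM).
    from datetime import datetime

    FMT = "%d/%m/%Y %H:%M"

    def duration(start, end):
        """Return the elapsed time between two GOC DB timestamps."""
        return datetime.strptime(end, FMT) - datetime.strptime(start, FMT)

    print(duration("20/01/2015 08:30", "22/01/2015 18:00"))  # 2 days, 9:30:00
    print(duration("13/01/2015 18:30", "15/01/2015 18:00"))  # 1 day, 23:30:00
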
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
  • Investigate problems on the primary Tier1 router.

Listing by category:

  • Databases:
    • A new database (Oracle RAC) has been set up to host the Atlas 3D database. This is updated from CERN via Oracle GoldenGate. This system is yet to be brought into use. (Currently Atlas 3D/Frontier still uses the OGMA database system, although this was also changed to update from CERN using Oracle GoldenGate.)
    • Switch LFC/3D to new Database Infrastructure.
    • Update to Oracle 11.2.0.4
  • Castor:
    • Update Castor headnodes to SL6 (Nameservers remain to be done).
    • Update SRMs to new version (includes updating to SL6).
    • Fix discrepancies found in some of the Castor database tables and columns. (The issue has no operational impact.)
    • Update Castor to 2.1-14-latest; this depends on SL6 being deployed.
  • Networking:
    • Resolve problems with the primary Tier1 router.
    • Move switches connecting the 2011 disk server batches onto the Tier1 mesh network.
    • Make routing changes to allow the removal of the UKLight Router.
    • Enable the RIP protocol for updating routing tables on the Tier1 routers (this requires a patch to the router software); an illustrative sketch of a RIP-style update is included after this list.
  • Fabric
    • Migration of data to new T10KD tapes. (Migration of CMS from 'B' to 'D' tapes finished; migration of GEN from 'A' to 'D' tapes to follow.)
    • Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC)
    • There will be circuit testing of the remaining (i.e. non-UPS) circuits in the machine room during January.
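
For background on the RIP item in the Networking list above: RIP is a distance-vector protocol in which each router periodically advertises its routing table to its neighbours, and a received route is adopted when the advertised metric plus one hop improves on the current entry (a metric of 16 meaning unreachable). The sketch below is purely illustrative of that update rule and does not reflect the actual Tier1 router configuration.

    # Purely illustrative sketch of the RIP distance-vector update rule.
    RIP_INFINITY = 16  # RIP treats a metric of 16 hops as "unreachable"

    def rip_update(table, neighbour, advertised):
        """Merge routes advertised by a directly connected neighbour.

        table:      dict mapping prefix -> (metric, next_hop) for this router
        advertised: dict mapping prefix -> metric as seen by the neighbour
        The link to the neighbour is assumed to cost one hop.
        """
        for prefix, metric in advertised.items():
            new_metric = min(metric + 1, RIP_INFINITY)
            current = table.get(prefix)
            # Adopt the route if it is new, strictly better, or re-advertised
            # by the next hop we already use (believed even if it got worse).
            if current is None or new_metric < current[0] or current[1] == neighbour:
                table[prefix] = (new_metric, neighbour)

    # Example: learn two prefixes advertised by a (hypothetical) peer router.
    routes = {"10.0.0.0/16": (1, "direct")}
    rip_update(routes, "router-b", {"10.1.0.0/16": 1, "10.2.0.0/16": 3})
    print(routes)
    # {'10.0.0.0/16': (1, 'direct'), '10.1.0.0/16': (2, 'router-b'), '10.2.0.0/16': (4, 'router-b')}
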
Entries in GOC DB starting between the 7th and 14th January 2015.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
Whole site | SCHEDULED | WARNING | 13/01/2015 18:30 | 15/01/2015 18:00 | 1 day, 23 hours and 30 minutes | Warning during safety checks on power circuits in machine room. Testing carried out during working hours on each day.
Whole site | SCHEDULED | WARNING | 07/01/2015 10:00 | 07/01/2015 12:00 | 2 hours | Warning on site during quarterly UPS/Generator load test.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
111120 | Green | Less Urgent | In Progress | 2015-01-12 | 2015-01-14 | Atlas | large transfer errors from RAL-LCG2 to BNL-OSG2
110605 | Green | Less Urgent | In Progress | 2014-12-08 | 2015-01-09 | ops | [Rod Dashboard] Issues detected at RAL-LCG2 (srm-cms-disk.gridpp.rl.ac.uk)
110382 | Yellow | Less Urgent | In Progress | 2014-11-26 | 2015-01-07 | N/A | RAL-LCG2: please reinstall your perfsonar hosts(s)
109712 | Red | Urgent | In Progress | 2014-10-29 | 2015-01-09 | CMS | Glexec exited with status 203; ...
109694 | Red | Urgent | On Hold | 2014-11-03 | 2014-12-18 | SNO+ | gfal-copy failing for files at RAL
108944 | Red | Urgent | In Progress | 2014-10-01 | 2015-01-07 | CMS | AAA access test failing at T1_UK_RAL
107935 | Red | Less Urgent | On Hold | 2014-08-27 | 2014-12-15 | Atlas | BDII vs SRM inconsistent storage capacity numbers
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud

Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment
07/01/15 | 100 | 100 | 100 | 100 | 100 | 100 | 99 |
08/01/15 | 100 | 100 | 100 | 100 | 66.0 | 99 | 100 | Load (stripping campaign) affecting LHCb SRMs. User timeouts on tests.
09/01/15 | 100 | 100 | 100 | 100 | 82.6 | 100 | 99 | As above.
10/01/15 | 100 | 100 | 100 | 100 | 89.8 | 98 | 100 | As above.
11/01/15 | 100 | 100 | 100 | 100 | 80.4 | 100 | 100 | As above.
12/01/15 | 100 | 100 | 100 | 100 | 90.4 | 100 | 100 | As above.
13/01/15 | 100 | 100 | 100 | 100 | 60.4 | 98 | 100 | As above, but the SRMs were also found not to be processing properly for some hours.
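
For reference, the daily figures above are the percentage of the day for which the availability tests passed. Assuming the figure is a simple fraction of the 24-hour day, the conversion to approximate downtime is straightforward; a minimal sketch, using the LHCb figure for 08/01/15:

    # Convert a daily availability percentage into approximate hours of
    # failing tests, assuming a simple fraction of the 24-hour day.
    def downtime_hours(availability_pct, hours_in_day=24.0):
        return (100.0 - availability_pct) / 100.0 * hours_in_day

    # Example: LHCb on 08/01/15 (66.0% available during the SRM load problems).
    print(round(downtime_hours(66.0), 1))  # ~8.2 hours of failing tests
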