RAL Tier1 Operations Report for 22nd October 2014

Review of Issues during the week 15th to 22nd October 2014.
  • A problem reported last week, in which a user was filling up the spool areas on the ARC CEs, has been fixed. The user was contacted (and responded) and, in addition, an automated process that cleans up the large log files has been put in place (a sketch of the kind of job involved is given below).
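The report does not describe the automated clean-up itself; the following is a minimal sketch, in Python, of the kind of scheduled job that could implement it. The spool path and the 100 MB per-file limit are illustrative assumptions, not values taken from the report.

  #!/usr/bin/env python
  # Illustrative sketch only: remove oversized job log files from an ARC CE
  # spool area. SPOOL_DIR and MAX_BYTES are assumptions, not site values.
  import os

  SPOOL_DIR = "/var/spool/arc/grid"   # hypothetical spool location
  MAX_BYTES = 100 * 1024 * 1024       # hypothetical 100 MB per-file limit

  def clean_spool(root=SPOOL_DIR, limit=MAX_BYTES):
      removed = []
      for dirpath, _dirs, files in os.walk(root):
          for name in files:
              path = os.path.join(dirpath, name)
              try:
                  if os.path.getsize(path) > limit:
                      os.remove(path)   # could truncate instead, to keep the file
                      removed.append(path)
              except OSError:
                  pass                  # file vanished mid-scan; ignore
      return removed

  if __name__ == "__main__":
      for path in clean_spool():
          print("removed oversized log: %s" % path)

A job of this sort would typically be run regularly from cron on each CE.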
Resolved Disk Server Issues
  • GDSS648 (LHCbUser - D1T0) failed on Monday afternoon (13th Oct). It was returned to service on 16th October after its network interface was replaced. It was initially in read-only mode as a precaution. This morning (22nd) it was reverted to full (read & write) operation.
  • GDSS720 (AtlasDataDisk - D1T0) has been completely drained pending more invasive investigation of the fault seen on this server.
Current operational status and issues
  • There have been problems with three out of the four ARC CEs since the end of last week. The process of tidying up and reporting completed jobs is taking a long time. ARC-CE01 has been declared down and is being drained as part of the investigations.
Ongoing Disk Server Issues
  • GDSS763 (AtlasDataDisk - D1T0) failed on Friday (17th) - the third time in around a month. It was returned to production on Monday (20th) in read-only mode. It is being drained ahead of further investigations.
Notable Changes made in the last week.
  • The production FTS3 service was upgraded to version 3.2.29-1 on Monday (20th Oct).
  • Yesterday, Tuesday 21st, an Oracle update was applied to the OGMA (Atlas 3D) database system. This morning it was switched to updating the Atlas 3D information from CERN using Oracle GoldenGate rather than Oracle Streams. The change was made during a planned Atlas test; however, it transpired that another site was having load issues with its Frontier/database system, and the updated OGMA/Frontier system was returned to service very quickly after the change was completed. (This alleviates a problem whereby the replacement database system for OGMA ("Cronos") was not performing well enough, owing to limitations of the disk array being used temporarily within that system.)
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
arc-ce01.gridpp.rl.ac.uk | UNSCHEDULED | OUTAGE | 20/10/2014 13:45 | 23/10/2014 17:00 | 3 days, 3 hours and 15 minutes | Investigating problems with the ARC CE.
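As a consistency check, the quoted duration follows directly from the start and end timestamps in the row above; a quick worked example in Python:

  from datetime import datetime

  fmt = "%d/%m/%Y %H:%M"
  start = datetime.strptime("20/10/2014 13:45", fmt)
  end = datetime.strptime("23/10/2014 17:00", fmt)
  delta = end - start
  hours, rem = divmod(delta.seconds, 3600)
  print("%d days, %d hours and %d minutes" % (delta.days, hours, rem // 60))
  # prints: 3 days, 3 hours and 15 minutes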
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
  • The regular Oracle "PSU" patches will be applied to the Pluto Castor standby database system on Monday (27th Oct) and to the Pluto production database on Wednesday (29th Oct). This is expected to be a transparent intervention. The 'Pluto' database hosts the Castor Nameserver as well as the CMS and LHCb stager databases.
  • The rollout of the RIP protocol to the Tier1 routers still has to be completed.
  • First quarter 2015: Circuit testing of the remaining (i.e. non-UPS) circuits in the machine room.

Listing by category:

  • Databases:
    • Apply latest Oracle patches (PSU) to the production database systems (Castor, LFC). (Underway).
    • A new database (Oracle RAC) has been set-up to host the Atlas3D database. This is updated from CERN via Oracle GoldenGate.
    • Switch LFC/3D to new Database Infrastructure.
  • Castor:
    • Update Castor headnodes to SL6.
    • Fix discrepancies found in some of the Castor database tables and columns. (The issue has no operational impact.)
  • Networking:
    • Move the switches connecting the 2011 batches of disk servers onto the Tier1 mesh network.
    • Make routing changes to allow the removal of the UKLight Router.
    • Enable the RIP protocol for updating routing tables on the Tier1 routers.
  • Fabric:
    • Migration of data to new T10KD tapes. (Migration of CMS from 'B' to 'D' tapes underway; migration of GEN from 'A' to 'D' tapes to follow.)
    • Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC).
    • There will be circuit testing of the remaining (i.e. non-UPS) circuits in the machine room (Expected first quarter 2015).
Entries in GOC DB starting between the 15th and 22nd October 2014.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
arc-ce01.gridpp.rl.ac.uk | UNSCHEDULED | OUTAGE | 20/10/2014 13:45 | 23/10/2014 17:00 | 3 days, 3 hours and 15 minutes | Investigating problems with the ARC CE.
lcgwms06.gridpp.rl.ac.uk | UNSCHEDULED | OUTAGE | 15/10/2014 12:00 | 15/10/2014 16:37 | 4 hours and 37 minutes | Problems following EMI update.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
109399 | Green | Less Urgent | In Progress | 2014-10-17 | 2014-10-21 |  | [Rod Dashboard] Issues detected at RAL-LCG2 (CE problems)
109360 | Green | Less Urgent | Waiting for Reply | 2014-10-20 | 2014-10-21 | SNO+ | Nagios tests failing at RAL
109329 | Green | Less Urgent | In Progress | 2014-10-14 | 2014-10-14 | CMS | access to lcgvo04.gridpp.rl.ac.uk
109276 | Green | Less Urgent | In Progress | 2014-10-11 | 2014-10-13 | CMS | Submissions to RAL FTS3 REST interface are failing for some users
109267 | Green | Urgent | Waiting for Reply | 2014-10-10 | 2014-10-16 | CMS | possible trouble accessing pileup dataset
108944 | Green | Urgent | In Progress | 2014-10-01 | 2014-10-17 | CMS | AAA access test failing at T1_UK_RAL
107935 | Red | Less Urgent | On Hold | 2014-08-27 | 2014-10-15 | Atlas | BDII vs SRM inconsistent storage capacity numbers
107880 | Red | Less Urgent | In Progress | 2014-08-26 | 2014-10-15 | SNO+ | srmcp failure
106324 | Red | Urgent | On Hold | 2014-06-18 | 2014-10-13 | CMS | pilots losing network connections at T1_UK_RAL
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud. All figures are percentages.

Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment
15/10/14 | 100 | 100 | 66.2 | 68.7 | 100 | 100 | 100 | Problem on ARC-CEs (started previous day). Plus, for Atlas, single SRM Put test failure "HANDLING TIMEOUT".
16/10/14 | 100 | 100 | 91.6 | 92.0 | 100 | 100 | 100 | CERN power cut (plus one Atlas SRM Put failure: HANDLING TIMEOUT).
17/10/14 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
18/10/14 | 100 | 100 | 99.2 | 100 | 100 | 100 | 100 | Single SRM Put test failure: user timeout.
19/10/14 | 100 | 100 | 100 | 100 | 100 | 0 | 100 |
20/10/14 | 100 | 100 | 99.0 | 100 | 100 | 100 | 100 | Single SRM Put test failure: could not open connection to srm-atlas.gridpp.rl.ac.uk.
21/10/14 | 100 | 100 | 100 | 91.6 | 100 | 100 | 99 | Problems with the ARC CEs.