
Latest revision as of 13:30, 10 September 2014

RAL Tier1 Operations Report for 10th September 2014

Review of Issues during the week 3rd to 10th September 2014.
  • On Monday (8th Sep) there was a break of a few minutes in network access to the Tier1 at around 5pm. Investigation showed that the primary router of our failover pair that connects the Tier1 network to the site rebooted at around 4pm. This caused two short (seconds) breaks in connectivity as the traffic failed over to the backup router and then failed back. The cause of the later problem at around 5pm, when connectivity was lost for a few minutes, is not yet understood.
  • A high rate of Atlas file access failures into/from Castor was seen during the day yesterday (9th Sep). A number of measures were taken and the problem stopped - although the underlying cause is not yet understood.
  • Checks on the disk servers to be removed from the GenTape cache show around 300 problematic files. These are likely to be the results of partly failed transfers into Castor in the past. These are being checked (see the sketch below) and will be followed up with the appropriate VOs.
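A minimal sketch of the kind of consistency check being run on those GenTape cache servers, assuming a namespace dump and a disk-server scan exported to hypothetical CSV files (the file names, columns and use of checksums here are illustrative only, not the actual Castor tooling):

    # Hypothetical cross-check of a Castor namespace dump against the files
    # actually present on a disk server: report size/checksum mismatches and
    # files missing on either side. All file names and columns are assumptions.
    import csv

    def load(path):
        # each row: filename,size,checksum
        with open(path, newline="") as f:
            return {row["filename"]: (int(row["size"]), row["checksum"])
                    for row in csv.DictReader(f)}

    namespace = load("namespace_dump.csv")   # what the namespace says is on the server
    on_disk = load("diskserver_scan.csv")    # what the server actually holds

    for name, expected in sorted(namespace.items()):
        if name not in on_disk:
            print("MISSING ON DISK:", name)
        elif on_disk[name] != expected:
            print("MISMATCH:", name, "namespace:", expected, "disk:", on_disk[name])

    for name in sorted(set(on_disk) - set(namespace)):
        print("NOT IN NAMESPACE:", name)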
Resolved Disk Server Issues
  • GDSS659 (AtlasDataDisk - D1T0) was reported last week as having problems. This server has been completely drained, the disk controller card firmware has been updated, and it is undergoing re-acceptance testing ahead of its re-use.
  • GDSS651 (LHCbDst - D1T0) failed on Sunday morning (7th). Following investigations it was put back into service yesterday (9th). One file was reported lost from the server.
Current operational status and issues
  • None.
Ongoing Disk Server Issues
  • None.
Notable Changes made this last week.
  • Seven additional disk servers have been added to the disk cache for GenTape. These have 10 Gbit network interfaces, which enables us to withdraw some older servers that have only 1 Gbit interfaces.
  • CMS are now writing to the newer T10KD tapes and migration of CMS data from 'B' to 'D' tapes is underway.
Declared in the GOC DB
Service Scheduled? Outage/At Risk Start End Duration Reason
lcgfts.gridpp.rl.ac.uk SCHEDULED OUTAGE 02/09/2014 11:00 02/10/2014 11:00 30 days Service being decommissioned.
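As a cross-check of the quoted duration, note that GOC DB timestamps are day/month/year, so this outage runs from 2nd September to 2nd October; a quick, purely illustrative calculation:

    # Confirm that the declared lcgfts.gridpp.rl.ac.uk downtime spans 30 days.
    # Dates are taken from the GOC DB entry above (dd/mm/yyyy HH:MM format).
    from datetime import datetime

    start = datetime.strptime("02/09/2014 11:00", "%d/%m/%Y %H:%M")
    end = datetime.strptime("02/10/2014 11:00", "%d/%m/%Y %H:%M")
    print(end - start)  # 30 days, 0:00:00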
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
  • The rollout of the RIP protocol to the Tier1 routers still has to be completed.
  • Access to the Cream CEs will be withdrawn, apart from access for ALICE. The proposed date for this is Tuesday 30th September.

Listing by category:

  • Databases:
    • Apply latest Oracle patches (PSU) to the production database systems (Castor, LFC).
    • A new database (Oracle RAC) is being set up that will host the Atlas3D database and be updated from CERN via a new method (Oracle GoldenGate).
    • Switch LFC/3D to new Database Infrastructure.
  • Castor:
    • Update Castor headnodes to SL6.
    • Fix discrepancies found in some of the Castor database tables and columns. (The issue has no operational impact.)
  • Networking:
    • Move the switches connecting the 2011 disk server batches onto the Tier1 mesh network.
    • Make routing changes to allow the removal of the UKLight Router.
    • Enable the RIP protocol for updating routing tables on the Tier1 routers.
  • Fabric
    • Migration of data to new T10KD tapes. (Migration of CMS from 'B' to 'D' tapes underway; migration of GEN from 'A' to 'D' tapes to follow.)
    • Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC).
    • There will be circuit testing of the remaining (i.e. non-UPS) circuits in the machine room (expected first quarter 2015).
Entries in GOC DB starting between the 3rd and 10th September 2014.
Service Scheduled? Outage/At Risk Start End Duration Reason
lcgfts.gridpp.rl.ac.uk SCHEDULED OUTAGE 02/09/2014 11:00 02/10/2014 11:00 30 days Service being decommissioned.
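Such downtime entries can also be retrieved programmatically; below is a sketch against the public GOC DB programmatic interface (the endpoint URL, method name, parameter and XML element names are assumptions and may need checking against the current GOC DB documentation):

    # Fetch declared downtimes for the RAL Tier1 (site name assumed to be
    # RAL-LCG2) from the public GOC DB programmatic interface and print a
    # brief summary. Endpoint, parameters and element names are assumptions.
    import urllib.request
    import xml.etree.ElementTree as ET

    URL = ("https://goc.egi.eu/gocdbpi/public/"
           "?method=get_downtime&topentity=RAL-LCG2")

    with urllib.request.urlopen(URL) as response:
        root = ET.parse(response).getroot()

    for downtime in root.findall("DOWNTIME"):
        print(downtime.findtext("HOSTNAME"),
              downtime.findtext("SEVERITY"),
              downtime.findtext("START_DATE"),
              downtime.findtext("END_DATE"),
              downtime.findtext("DESCRIPTION"))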
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID Level Urgency State Creation Last Update VO Subject
107935 Green Less Urgent In Progress 2014-08-27 2014-09-02 Atlas BDII vs SRM inconsistent storage capacity numbers
107880 Yellow Less Urgent In Progress 2014-08-26 2014-09-02 SNO+ srmcp failure
106324 Red Urgent On Hold 2014-06-18 2014-08-14 CMS pilots losing network connections at T1_UK_RAL
105405 Red Urgent On Hold 2014-05-14 2014-07-29 Please check your Vidyo router firewall configuration
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud

Day OPS Alice Atlas CMS LHCb Atlas HC CMS HC Comment
03/09/14 100 100 100 100 100 100 97
04/09/14 100 100 100 100 100 98 97
05/09/14 100 100 97.0 100 100 100 99 Several SRM test failures.
06/09/14 100 100 100 100 100 100 n/a
07/09/14 100 100 100 100 100 100 n/a
08/09/14 100 100 100 100 100 95 88
09/09/14 100 100 94.5 100 100 95 98 A number of SRM test failures (PUT tests failing with "Invalid argument"); Atlas Castor performance was poor at the time.
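To put the sub-100% Atlas figures in context, a rough back-of-envelope conversion of a daily availability percentage into minutes of failed tests (illustrative only; the published figures come from discrete SAM test results rather than a continuous clock):

    # Rough translation of daily availability percentages into downtime minutes.
    def downtime_minutes(availability_percent):
        return round((100.0 - availability_percent) / 100.0 * 24 * 60)

    for day, avail in [("05/09/14", 97.0), ("09/09/14", 94.5)]:
        print(day, str(avail) + "% ->", downtime_minutes(avail), "minutes of test failures")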