RAL Tier1 Operations Report for 3rd September 2014

Review of Issues during the week 27th August to 3rd September 2014.
  • A network switch failed overnight Friday-Saturday (29/30 Aug). Staff attended on site and the immediate problem was resolved. However, further problems were then found with a number of VMs providing services, and these took some time to fix. Not all services were affected; the site (except Castor) was declared down for around 6 hours on Saturday.
Resolved Disk Server Issues
  • GDSS748 (AtlasDataDisk - D1T0) was found to be unresponsive in the early morning of Thursday (28th Aug). It failed to restart after a reboot. A failed disk was found and replaced. The system was returned to service later that day.
Current operational status and issues
  • Discrepancies were found in some of the Castor database tables and columns. The Castor team are considering options for fixing these. The issue has no operational impact.
  • We are still investigating xroot access to CMS Castor following the upgrade on the 17th June. The service has improved but there may still be work to be done.
Ongoing Disk Server Issues
  • GDSS659 (AtlasDataDisk - D1T0) has had a number of problems. The server initially failed on Thursday (28th Aug). It was returned to service the following day but failed again over the weekend, a problem only found on Monday morning. Following a further RAID disk rebuild it was returned to service yesterday morning (Tuesday 2nd September). The server again stopped serving files at around 05:30 this morning. It is now being drained (during which time it continues to serve files).
Notable Changes made this last week.
  • The FTS2 service was ended yesterday (2nd September) and the servers were shut down.
  • The Software Server that was used by the smaller VOs has been stopped.
Declared in the GOC DB
Service                 | Scheduled? | Outage/At Risk | Start            | End              | Duration | Reason
lcgfts.gridpp.rl.ac.uk, | SCHEDULED  | OUTAGE         | 02/09/2014 11:00 | 02/10/2014 11:00 | 30 days  | Service being decommissioned.
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
  • The rollout of the RIP protocol to the Tier1 routers still has to be completed.
  • Access to the Cream CEs will be withdrawn, apart from continued access for ALICE. The proposed date for this is Tuesday 23rd September.

Listing by category:

  • Databases:
    • Apply latest Oracle patches (PSU) to the production database systems (Castor, Atlas3D).
    • Switch LFC/3D to new Database Infrastructure.
  • Castor:
    • None.
  • Networking:
    • Move the switches connecting the 2011 batches of disk servers onto the Tier1 mesh network.
    • Make routing changes to allow the removal of the UKLight Router.
    • Enable the RIP protocol for updating routing tables on the Tier1 routers.
  • Fabric
    • Migration of data to new T10KD tapes. (Migration of CMS from 'B' to 'D' tapes; migration of GEN from 'A' to 'D' tapes.)
    • Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC).
    • There will be circuit testing of the remaining (i.e. non-UPS) circuits in the machine room - date to be decided.
Entries in the GOC DB starting between 27th August and 3rd September 2014.
Service                     | Scheduled?  | Outage/At Risk | Start            | End              | Duration                       | Reason
cream-ce01.gridpp.rl.ac.uk, | UNSCHEDULED | OUTAGE         | 02/09/2014 15:01 | 02/09/2014 16:16 | 1 hour and 15 minutes          | draining before re-configuration
lcgfts.gridpp.rl.ac.uk,     | SCHEDULED   | OUTAGE         | 02/09/2014 11:00 | 02/10/2014 11:00 | 30 days                        | Service being decommissioned.
All services except Castor  | UNSCHEDULED | WARNING        | 30/08/2014 14:00 | 01/09/2014 09:43 | 1 day, 19 hours and 43 minutes | WARNING following network problems on virtual machine
All services except Castor  | UNSCHEDULED | OUTAGE         | 30/08/2014 09:00 | 30/08/2014 14:24 | 5 hours and 24 minutes         | Putting all services except CASTOR into downtime while we investigate network related problems on the HyperV systems
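
The downtimes above were declared through the GOC DB. As an aside, a minimal sketch of retrieving the same downtime records from the GOC DB public programmatic interface is shown below; the endpoint URL, query parameters and XML element names are assumptions from memory and should be checked against the GOC DB PI documentation before use.

    # Sketch only: query the GOC DB programmatic interface for RAL-LCG2 downtimes.
    # The URL, the query parameters and the <DOWNTIME> element name are assumptions.
    import xml.etree.ElementTree as ET
    import requests

    GOCDB_PI = "https://goc.egi.eu/gocdbpi/public/"
    params = {"method": "get_downtime", "topentity": "RAL-LCG2"}

    response = requests.get(GOCDB_PI, params=params, timeout=30)
    response.raise_for_status()

    root = ET.fromstring(response.content)
    for downtime in root.iter("DOWNTIME"):
        # Print every child field rather than assuming particular tag names.
        print("---")
        for field in downtime:
            print(f"{field.tag}: {field.text}")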
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency     | State       | Creation   | Last Update | VO    | Subject
107935  | Green | Less Urgent | In Progress | 2014-08-27 | 2014-09-02  | Atlas | BDII vs SRM inconsistent storage capacity numbers
107880  | Green | Less Urgent | In Progress | 2014-08-26 | 2014-09-02  | SNO+  | srmcp failure
106324  | Red   | Urgent      | On Hold     | 2014-06-18 | 2014-08-14  | CMS   | pilots losing network connections at T1_UK_RAL
105405  | Red   | Urgent      | On Hold     | 2014-05-14 | 2014-07-29  |       | Please check your Vidyo router firewall configuration
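
Ticket 107935 reports that the storage capacities published in the BDII do not match those reported by the SRM. A minimal sketch of dumping the GLUE 1.3 storage-area sizes a BDII publishes for one storage element, so they can be compared with the SRM's own accounting, is given below; the BDII host, the SRM endpoint name and the attribute list are illustrative assumptions and should be checked against the site's information system.

    # Sketch only: read published storage-area sizes from an (assumed) top-level BDII.
    # The host name, SRM endpoint and GLUE attribute choices are assumptions for illustration.
    from ldap3 import ALL, SUBTREE, Connection, Server

    BDII_HOST = "lcg-bdii.gridpp.rl.ac.uk"      # assumed top-level BDII host
    SE_UNIQUE_ID = "srm-atlas.gridpp.rl.ac.uk"  # assumed SRM endpoint for the Atlas instance

    server = Server(BDII_HOST, port=2170, get_info=ALL)
    conn = Connection(server, auto_bind=True)   # BDIIs accept anonymous binds

    # One GlueSA (storage area) entry is published per space token / service class.
    conn.search(
        search_base="o=grid",
        search_filter=f"(&(objectClass=GlueSA)(GlueChunkKey=GlueSEUniqueID={SE_UNIQUE_ID}))",
        search_scope=SUBTREE,
        attributes=["GlueSALocalID", "GlueSATotalOnlineSize", "GlueSAUsedOnlineSize"],
    )

    for entry in conn.entries:
        # GLUE 1.3 online sizes are published in GB.
        print(entry.GlueSALocalID, entry.GlueSATotalOnlineSize, entry.GlueSAUsedOnlineSize)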
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud. All figures are daily percentages.

Day      | OPS | Alice | Atlas | CMS  | LHCb | Atlas HC | CMS HC | Comment
27/08/14 | 100 | 100   | 100   | 100  | 100  | 98       | 100    |
28/08/14 | 100 | 100   | 100   | 100  | 100  | 100      | 97     |
29/08/14 | 100 | 100   | 100   | 100  | 100  | 100      | 97     |
30/08/14 | 100 | 100   | 99.4  | 94.3 | 100  | 100      | 98     | A network switch failed. This was worked around but the VM infrastructure exhibited some network problems too.
31/08/14 | 100 | 100   | 100   | 100  | 100  | 94       | n/a    |
01/09/14 | 100 | 100   | 100   | 100  | 100  | 100      | 96     |
02/09/14 | 100 | 100   | 100   | 100  | 100  | 96       | 96     |