Difference between revisions of "Tier1 Operations Report 2014-09-17"


Latest revision as of 14:07, 17 September 2014

RAL Tier1 Operations Report for 17th September 2014

Review of Issues during the week 10th to 17th September 2014.
  • On Saturday (13th Sep) there was a problem with the Atlas Castor instance that persisted into the beginning of Sunday. A number of measures were taken to improve it, although the root cause remains unknown.
  • For the second half of last week there were problems with cream-ce02.
  • This morning (Wednesday 17th Sep) there was a problem with some machines that run as VMs: their networking stopped. Restarting the networking fixed the problem. This is similar to a problem seen on 30th August. The configuration of the network interface on these systems has been changed to work around this. One of the systems affected was the Argus server, which caused a problem for batch job submissions for an hour or so.
Resolved Disk Server Issues
  • None
Current operational status and issues
  • None.
Ongoing Disk Server Issues
  • None.
Notable Changes made this last week.
  • VO Londongrid enabled on LFC.
Declared in the GOC DB
  • None
Advance warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
  • The rollout of the RIP protocol to the Tier1 routers still has to be completed.
  • Access to the Cream CEs will be withdrawn, apart from leaving access for ALICE. The proposed date for this is Tuesday 30th September.

  • The Atlas Frontier service will be switched on 24th Sep to use the new database system that is updated from CERN using Oracle "GoldenGate".

Listing by category:

  • Databases:
    • Apply latest Oracle patches (PSU) to the production database systems (Castor, LFC).
    • A new database (Oracle RAC) has been set-up to host the Atlas3D database. This is updated from CERN via Oracle GoldenGate.
    • Switch LFC/3D to new Database Infrastructure.
  • Castor:
    • Update Castor headnodes to SL6.
    • Fix the discrepancies found in some of the Castor database tables and columns. (The issue has no operational impact.)
  • Networking:
    • Move the switches connecting the 2011 batches of disk servers onto the Tier1 mesh network.
    • Make routing changes to allow the removal of the UKLight Router.
    • Enable the RIP protocol for updating routing tables on the Tier1 routers.
  • Fabric:
    • Migration of data to new T10KD tapes. (Migration of CMS from 'B' to 'D' tapes underway; migration of GEN from 'A' to 'D' tapes to follow.)
    • Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC)
    • There will be circuit testing of the remaining (i.e. non-UPS) circuits in the machine room (Expected first quarter 2015).
Entries in GOC DB starting between the 10th and 17th September 2014.
  • None
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID Level Urgency State Creation Last Update VO Subject
108546 Green Less Urgent In Progress 2014-09-16 2014-09-16 Atlas RAL-LCG2_HIMEM_SL6: production jobs failed
107935 Yellow Less Urgent In Progress 2014-08-27 2014-09-02 Atlas BDII vs SRM inconsistent storage capacity numbers
107880 Amber Less Urgent In Progress 2014-08-26 2014-09-02 SNO+ srmcp failure
106324 Red Urgent On Hold 2014-06-18 2014-08-14 CMS pilots losing network connections at T1_UK_RAL
105405 Red Urgent On Hold 2014-05-14 2014-09-12 Please check your Vidyo router firewall configuration
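The Green/Yellow/Amber/Red levels in the snapshot above track how long each ticket has been open. As a rough cross-check, the ages at the meeting date can be derived from the creation dates in the table; this is a minimal sketch, not an official GGUS tool, and the exact age thresholds behind each colour are not stated in the report.

```python
# Compute ticket ages at the meeting date from the GGUS snapshot above.
# IDs, levels and creation dates are copied from the table; the colour
# thresholds themselves are not given in the report, so only ages are derived.
from datetime import date

MEETING = date(2014, 9, 17)

tickets = [
    ("108546", "Green",  date(2014, 9, 16)),
    ("107935", "Yellow", date(2014, 8, 27)),
    ("107880", "Amber",  date(2014, 8, 26)),
    ("106324", "Red",    date(2014, 6, 18)),
    ("105405", "Red",    date(2014, 5, 14)),
]

# Age in whole days at the time of the meeting.
ages = {tid: (MEETING - created).days for tid, _, created in tickets}

for tid, level, _ in tickets:
    print(f"{tid}: {level}, {ages[tid]} days open")
```

The two "Red" tickets are the two oldest (around three and four months), consistent with the colour ordering in the snapshot.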
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud

Day OPS Alice Atlas CMS LHCb Atlas HC CMS HC Comment
10/09/14 100 100 99.2 100 100 96 97 Single SRM test failure on GET - [SRM_FILE_BUSY]
11/09/14 100 100 100 100 100 100 99
12/09/14 100 100 100 100 100 100 96
13/09/14 100 100 82.2 100 100 54 99 Problems with Atlas Castor instance
14/09/14 100 100 91.8 100 100 84 98 Problems with Atlas Castor instance (continued)
15/09/14 100 100 100 100 100 99 98
16/09/14 100 100 100 100 100 98 99
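The daily figures above can be reduced to a weekly number per VO; the sketch below uses a simple arithmetic mean of the daily percentages (an assumption — the official availability calculation may weight or combine days differently), with the Atlas values copied from the table.

```python
# Weekly availability per VO from the daily table above.
# Daily values for 10/09/14 .. 16/09/14, copied from the report.
atlas = [99.2, 100, 100, 82.2, 91.8, 100, 100]
ops   = [100] * 7   # OPS was 100% every day

def weekly_mean(values):
    """Arithmetic mean of the daily percentages, rounded to one decimal.

    This simple mean is an assumption for illustration; the official
    availability algorithm may differ."""
    return round(sum(values) / len(values), 1)

print("Atlas weekly:", weekly_mean(atlas))
print("OPS weekly:  ", weekly_mean(ops))
```

The Atlas dip is dominated by the Castor instance problems on 13th–14th September noted in the Comment column.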