RAL Tier1 Operations Report for 17th December 2014

  • The Tier1's plans for the Christmas and New Year holiday can be seen on our blog: http://www.gridpp.rl.ac.uk/blog/2014/12/17/ral-tier1-plans-for-christmas-new-year-holiday-3/


Review of Issues during the week 10th to 17th December 2014.
  • On Saturday (13th Dec) network problems severely affected Tier1 services. A network switch was found to have problems coincident with the service issues. A member of staff came on site and resolved the switch problem. However, this turned out not to be the principal underlying cause of the service problems, which were then traced to a DNS server that was not responding. Systems were re-configured not to use that as the primary DNS server (a minimal sketch of this kind of change is given below); in parallel, a member of Networking staff attended on site to fix the DNS server. The problems lasted from around 07:00 to 22:00 that day.
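
A minimal sketch (illustration only, not the Tier1 tooling) of the kind of resolver re-ordering described above: each nameserver listed in /etc/resolv.conf is probed with a bare DNS A-record query, and the file is rewritten so that any unresponsive server is no longer tried first. The file path, the placeholder address of the bad server and the example query name are assumptions; on the day the underlying fix also required repairing the DNS server itself.

  #!/usr/bin/env python3
  # Illustrative sketch only. Rewriting /etc/resolv.conf needs root and assumes
  # a plain resolv.conf (no resolvconf/NetworkManager management).
  import socket
  import struct

  RESOLV_CONF = "/etc/resolv.conf"
  BAD_SERVER = "192.0.2.1"   # placeholder: address of the unresponsive DNS server

  def dns_responds(server, name="www.gridpp.rl.ac.uk", timeout=2.0):
      """Send a minimal A-record query over UDP and report whether a reply arrives."""
      header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)   # ID, flags (RD), counts
      qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
      query = header + qname + struct.pack(">HH", 1, 1)             # QTYPE=A, QCLASS=IN
      s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      s.settimeout(timeout)
      try:
          s.sendto(query, (server, 53))
          s.recvfrom(512)
          return True
      except OSError:          # timeout, network unreachable, port unreachable, ...
          return False
      finally:
          s.close()

  def demote_unresponsive(path=RESOLV_CONF, bad=BAD_SERVER):
      """Rewrite resolv.conf so non-responding nameservers are listed last."""
      with open(path) as f:
          lines = f.readlines()
      good, demoted, other = [], [], []
      for line in lines:
          if line.startswith("nameserver"):
              server = line.split()[1]
              (good if server != bad and dns_responds(server) else demoted).append(line)
          else:
              other.append(line)
      with open(path, "w") as f:
          f.writelines(other + good + demoted)

  if __name__ == "__main__":
      demote_unresponsive()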
Resolved Disk Server Issues
  • GDSS778 (LHCbDst D1T0) failed in the early hours of Monday 15th December. Tests revealed faulty RAM, which was replaced. The server was returned to production around 09:15 this morning (Wed. 17th Dec).
Current operational status and issues
  • None
Ongoing Disk Server Issues
  • None.
Notable Changes made this last week.
  • On Tuesday morning (16th Dec) the firmware in the Tier1 router pair was updated to the latest production version. This is ahead of a patch to be applied in the New Year that should fix the ongoing RIP protocol problem.
  • Following the restriction on the number of CMS batch jobs imposed during the problems a week or so ago, the CMS job limits on the farm have been progressively increased.
Declared in the GOC DB
  • None.
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
  • The rollout of the RIP protocol to the Tier1 routers still has to be completed. A software patch from the vendors will be applied to the Tier1 Routers on Tuesday 6th January.
  • The next quarterly UPS/Generator load test will take place on Wednesday 7th January.
  • Circuit testing of the remaining (i.e. non-UPS) circuits in the machine room: Tue-Thu 13-15 January & Tue-Thu 20-22 January. There are some systems that need to be re-powered in preparation for this work.
  • Completing Castor headnode upgrades to SL6: Tuesday 6th Jan - GEN; Wednesday 7th Jan - Nameserver (transparent - at risk)

Listing by category:

  • Databases:
    • A new database (Oracle RAC) has been set up to host the Atlas 3D database. This is updated from CERN via Oracle GoldenGate. The system is yet to be brought into use. (Currently Atlas 3D/Frontier still uses the OGMA database system, although this was also changed to update from CERN using Oracle GoldenGate.)
    • Switch LFC/3D to new Database Infrastructure.
    • Update to Oracle 11.2.0.4
  • Castor:
    • Update Castor headnodes to SL6 (ongoing).
    • Update SRMs to new version (includes updating to SL6).
    • Fix discrepancies found in some of the Castor database tables and columns. (The issue has no operational impact.)
    • Update Castor to 2.1-14-latest; this depends on SL6 being deployed.
  • Networking:
    • Move the switches connecting the 2011 batches of disk servers onto the Tier1 mesh network.
    • Make routing changes to allow the removal of the UKLight Router.
    • Enable the RIP protocol for updating routing tables on the Tier1 routers.
  • Fabric
    • Migration of data to new T10KD tapes. (Migration of CMS from 'B' to 'D' tapes finished; migration of GEN from 'A' to 'D' tapes to follow.)
    • Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC)
    • There will be circuit testing of the remaining (i.e. non-UPS) circuits in the machine room during January.
Entries in GOC DB starting between the 10th and 17th December 2014.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
srm-atlas, srm-cms-disk, srm-cms, srm-lhcb | UNSCHEDULED | OUTAGE | 13/12/2014 14:30 | 13/12/2014 22:21 | 7 hours and 51 minutes | Correcting warning on SRMs to an Outage.
srm-atlas, srm-cms, srm-lhcb | UNSCHEDULED | WARNING | 13/12/2014 07:00 | 13/12/2014 22:21 | 15 hours and 21 minutes | Castor instances under investigation
srm-atlas | SCHEDULED | OUTAGE | 10/12/2014 10:00 | 10/12/2014 11:43 | 1 hour and 43 minutes | OS upgrade (SL6) on headnodes for Atlas Castor instance.
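
The Duration column above is simply the gap between the Start and End timestamps, e.g. the unscheduled outage on 13/12/2014 ran from 14:30 to 22:21, which is 7 hours and 51 minutes. A small illustrative check (timestamps taken from the first entry; the code itself is not part of the report tooling):

  from datetime import datetime

  # Start/end of the unscheduled SRM outage on 13/12/2014 (values from the table above).
  start = datetime(2014, 12, 13, 14, 30)
  end = datetime(2014, 12, 13, 22, 21)

  minutes = int((end - start).total_seconds()) // 60
  print("%d hours and %d minutes" % divmod(minutes, 60))   # -> 7 hours and 51 minutes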
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
110776 | Green | Urgent | Waiting for Reply | 2014-12-15 | 2014-12-17 | CMS | Phedex Node Name Transition
110605 | Green | Less Urgent | In Progress | 2014-12-08 | 2014-12-12 | ops | [Rod Dashboard] Issues detected at RAL-LCG2 (srm-cms-disk.gridpp.rl.ac.uk)
110382 | Green | Less Urgent | In Progress | 2014-11-26 | 2014-12-15 | N/A | RAL-LCG2: please reinstall your perfsonar hosts(s)
109712 | Amber | Urgent | In Progress | 2014-10-29 | 2014-11-27 | CMS | Glexec exited with status 203; ...
109694 | Yellow | Urgent | On hold | 2014-11-03 | 2014-12-15 | SNO+ | gfal-copy failing for files at RAL
108944 | Red | Urgent | In Progress | 2014-10-01 | 2014-12-09 | CMS | AAA access test failing at T1_UK_RAL
107935 | Red | Less Urgent | On Hold | 2014-08-27 | 2014-12-15 | Atlas | BDII vs SRM inconsistent storage capacity numbers
106324 | Red | Urgent | In Progress | 2014-06-18 | 2014-12-12 | CMS | pilots losing network connections at T1_UK_RAL
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud

Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment
10/12/14 | 100 | 100 | 92.8 | 100 | 100 | 99 | n/a | Upgrade of Atlas Castor headnodes to SL6.
11/12/14 | 100 | 100 | 100 | 100 | 100 | 100 | n/a |
12/12/14 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
13/12/14 | 71.1 | 100 | 31.9 | 35.3 | 33.6 | 90 | 100 | Problems with a DNS server.
14/12/14 | 100 | 100 | 100 | 100 | 100 | 97 | 100 |
15/12/14 | 100 | 100 | 99.0 | 100 | 95.8 | 100 | n/a | Single SRM test failure in each case.
16/12/14 | 100 | 100 | 100 | 100 | 100 | 97 | 100 |