Tier1 Operations Report 2014-12-17

From GridPP Wiki

RAL Tier1 Operations Report for 17th December 2014

  • The Tier1's plans for the Christmas and New Year holiday can be seen on our blog.


Review of Issues during the week 10th to 17th December 2014.
  • On Saturday (13th Dec) network problems severely affected Tier1 services. A network switch was found to have problems coincident with the service issues. A member of staff came on site and resolved the switch problem. However, this turned out not to be the principal underlying cause of the service problems, which were then traced to a DNS server that was not responding. Systems were re-configured not to use that as the primary DNS server, while in parallel a member of Networking staff attended on site to fix the DNS server. The problems lasted from around 07:00 to 22:00 that day.
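The re-configuration described above amounts to reordering the resolvers on affected hosts so that the unresponsive server is no longer queried first. A minimal sketch of that kind of change on a Linux host (the addresses and option values here are illustrative assumptions, not the actual RAL DNS servers):

```shell
# /etc/resolv.conf -- illustrative addresses only, not the real RAL servers.
#
# Before: the unresponsive server was listed first, so every lookup
# waited for a timeout before falling back to the working server:
#   nameserver 130.246.0.1    # unresponsive
#   nameserver 130.246.0.2
#
# After: the healthy server is queried first, with shorter timeouts
# so a dead resolver degrades lookups less severely.
nameserver 130.246.0.2
nameserver 130.246.0.1
options timeout:2 attempts:2
```

With the failed server listed first, each lookup blocks for the full resolver timeout before retrying, which is consistent with the widespread service degradation seen during the incident.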
Resolved Disk Server Issues
  • GDSS778 (LHCbDst D1T0) failed in the early hours of Monday 15th December. Tests revealed faulty RAM which was replaced. The server was returned to production around 09:15 this morning (Wed. 17th Dec).
Current operational status and issues
  • None
Ongoing Disk Server Issues
  • None.
Notable Changes made this last week.
  • On Tuesday morning (16th Dec) the firmware in the Tier1 router pair was updated to the latest production version. This is ahead of a patch to be applied in the New Year that should fix the ongoing RIP protocol problem.
  • Following the restriction on the number of CMS batch jobs imposed during the problems a week or so ago, the CMS job limits on the farm have been progressively increased.
Declared in the GOC DB
  • None.
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
  • The rollout of the RIP protocol to the Tier1 routers still has to be completed. A software patch from the vendors will be applied to the Tier1 Routers on Tuesday 6th January.
  • The next quarterly UPS/Generator load test will take place on Wednesday 7th January.
  • Circuit testing of the remaining (i.e. non-UPS) circuits in the machine room: Tue-Thu 13-15 January & Tue-Thu 20-22 January. There are some systems that need to be re-powered in preparation for this work.
  • Completing Castor headnode upgrades to SL6: Tuesday 6th Jan - GEN; Wednesday 7th Jan - Nameserver (transparent - at risk)

Listing by category:

  • Databases:
    • A new database (Oracle RAC) has been set-up to host the Atlas 3D database. This is updated from CERN via Oracle GoldenGate. This system is yet to be brought into use. (Currently Atlas 3D/Frontier still uses the OGMA database system, although this was also changed to update from CERN using Oracle GoldenGate.)
    • Switch LFC/3D to new Database Infrastructure.
    • Update to Oracle 11.2.0.4
  • Castor:
    • Update Castor headnodes to SL6 (ongoing).
    • Update SRMs to new version (includes updating to SL6).
    • Fix discrepancies found in some of the Castor database tables and columns. (The issue has no operational impact.)
    • Update Castor to 2.1-14-latest; this depends on SL6 being deployed.
  • Networking:
    • Move the switches connecting the 2011 batches of disk servers onto the Tier1 mesh network.
    • Make routing changes to allow the removal of the UKLight Router.
    • Enable the RIP protocol for updating routing tables on the Tier1 routers.
  • Fabric:
    • Migration of data to new T10KD tapes. (Migration of CMS from 'B' to 'D' tapes finished; migration of GEN from 'A' to 'D' tapes to follow.)
    • Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC)
    • There will be circuit testing of the remaining (i.e. non-UPS) circuits in the machine room during January.
Entries in GOC DB starting between the 10th and 17th December 2014.
Service Scheduled? Outage/At Risk Start End Duration Reason
srm-atlas, srm-cms-disk, srm-cms, srm-lhcb UNSCHEDULED OUTAGE 13/12/2014 14:30 13/12/2014 22:21 7 hours and 51 minutes Correcting warning on SRMs to an Outage.
srm-atlas, srm-cms, srm-lhcb UNSCHEDULED WARNING 13/12/2014 07:00 13/12/2014 22:21 15 hours and 21 minutes Castor instances under investigation
srm-atlas SCHEDULED OUTAGE 10/12/2014 10:00 10/12/2014 11:43 1 hour and 43 minutes OS upgrade (SL6) on headnodes for Atlas Castor instance.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID Level Urgency State Creation Last Update VO Subject
110776 Green Urgent Waiting for Reply 2014-12-15 2014-12-17 CMS Phedex Node Name Transition
110605 Green Less Urgent In Progress 2014-12-08 2014-12-12 ops [Rod Dashboard] Issues detected at RAL-LCG2 (srm-cms-disk.gridpp.rl.ac.uk)
110382 Green Less Urgent In Progress 2014-11-26 2014-12-15 N/A RAL-LCG2: please reinstall your perfsonar hosts(s)
109712 Amber Urgent In Progress 2014-10-29 2014-11-27 CMS Glexec exited with status 203; ...
109694 Yellow Urgent On Hold 2014-11-03 2014-12-15 SNO+ gfal-copy failing for files at RAL
108944 Red Urgent In Progress 2014-10-01 2014-12-09 CMS AAA access test failing at T1_UK_RAL
107935 Red Less Urgent On Hold 2014-08-27 2014-12-15 Atlas BDII vs SRM inconsistent storage capacity numbers
106324 Red Urgent In Progress 2014-06-18 2014-12-12 CMS pilots losing network connections at T1_UK_RAL
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud

Day OPS Alice Atlas CMS LHCb Atlas HC CMS HC Comment
10/12/14 100 100 92.8 100 100 99 n/a Upgrade of Atlas Castor headnodes to SL6.
11/12/14 100 100 100 100 100 100 n/a
12/12/14 100 100 100 100 100 100 100
13/12/14 71.1 100 31.9 35.3 33.6 90 100 Problems with a DNS server.
14/12/14 100 100 100 100 100 97 100
15/12/14 100 100 99.0 100 95.8 100 n/a Single SRM test failure in each case.
16/12/14 100 100 100 100 100 97 100