Tier1 Operations Report 2014-10-15


RAL Tier1 Operations Report for 15th October 2014

Review of Issues during the week 8th to 15th October 2014.
  • Problems with Castor, particularly affecting Atlas, were reported last week. These were caused by the failure of the battery used by a cache in a disk array that hosts the Castor databases. The Castor databases were reconfigured to alleviate the problem. On Wednesday (8th) the battery pack was replaced and the disk array's performance recovered. The following day the re-synchronisation of the standby and primary databases was started. Yesterday (Tuesday 14th October) there was a scheduled outage of the Atlas and GEN Castor instances while the database configuration was put back into its normal operating state.
  • During yesterday (Tuesday) evening a problem arose on all four of the ARC-CEs: very large log file output from batch jobs filled up the relevant disk area and stopped batch work. The problem was resolved this morning. A separate problem on our Nagios monitoring box during the latter part of yesterday evening meant that we did not receive the callout that would have been expected for this; a sketch of the kind of free-space check involved is given after this list.
  • The CIP stopped updating when the Castor database configuration changed. It goes into a failsafe mode and stops publishing rather than giving out partial information. Now that the database configuration has been put back, the problem is resolved. More generally, the CIP's database connectors need updating whenever the connection details change (see the second sketch after this list).
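As an illustration of the ARC-CE item above, here is a minimal sketch of the kind of Nagios-style free-space check that should have produced a callout before the job log area filled. The path, thresholds and plugin layout are assumptions for illustration only, not the configuration used on the production CEs or on our Nagios box.

#!/usr/bin/env python
"""Minimal sketch: warn/go critical on low free space in a batch job log area,
using Nagios plugin exit-code conventions.  Path and thresholds are illustrative."""

import os
import sys

SESSION_DIR = "/var/spool/arc/sessiondir"   # hypothetical job log/session area on a CE
WARN_FREE_PCT = 15.0                        # WARNING below 15% free
CRIT_FREE_PCT = 5.0                         # CRITICAL (callout) below 5% free


def free_percent(path):
    """Percentage of free space on the filesystem holding 'path'."""
    st = os.statvfs(path)
    return 100.0 * st.f_bavail / st.f_blocks


def main():
    try:
        free = free_percent(SESSION_DIR)
    except OSError as err:
        print("UNKNOWN: cannot stat %s: %s" % (SESSION_DIR, err))
        return 3
    msg = "%.1f%% free on %s" % (free, SESSION_DIR)
    if free < CRIT_FREE_PCT:
        print("CRITICAL: " + msg)
        return 2
    if free < WARN_FREE_PCT:
        print("WARNING: " + msg)
        return 1
    print("OK: " + msg)
    return 0


if __name__ == "__main__":
    sys.exit(main())

Run from cron or as a Nagios plugin, exit code 2 is what would normally trigger the callout.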
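For the CIP item, a minimal sketch of the failsafe behaviour described above is given here, assuming the provider caches its last complete snapshot and reads the connection details from configuration. The file locations, DSN and stubbed query are hypothetical and are not the real CIP internals.

"""Sketch of 'failsafe' publishing: if the database cannot be reached with the
configured connection details, re-use the last complete snapshot (or publish
nothing) rather than emit partial information."""

import json
import os

STATE_FILE = "/var/cache/cip/last_good.json"                 # hypothetical cache of the last complete snapshot
DB_DSN = os.environ.get("CIP_DB_DSN", "castor-db:1521/cip")  # connection details kept in config/environment


def fetch_current_state(dsn):
    """Query the database for the values to publish.  Stubbed here:
    pretend the connection details are stale so the query fails."""
    raise ConnectionError("cannot reach database at %s" % dsn)


def publish(state):
    """Stand-in for writing out the information the CIP would normally publish."""
    print(json.dumps(state, indent=2))


def main():
    try:
        state = fetch_current_state(DB_DSN)
    except Exception as err:
        # Failsafe: never publish partial information.  Fall back to the last
        # complete snapshot if one exists, otherwise publish nothing.
        if os.path.exists(STATE_FILE):
            with open(STATE_FILE) as fh:
                state = json.load(fh)
        else:
            print("no complete data available, publishing nothing: %s" % err)
            return
    else:
        with open(STATE_FILE, "w") as fh:
            json.dump(state, fh)
    publish(state)


if __name__ == "__main__":
    main()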
Resolved Disk Server Issues
  • None
Current operational status and issues
  • The problems reported last week with the Atlas Frontier systems were caused by the performance of the disk array that is being used temporarily by the Cronos back-end database. An alternative, faster, disk array is being tested.
Ongoing Disk Server Issues
  • GDSS648 (LHCbUser - D1T0) failed on Monday afternoon (13th Oct). The problem is with its networking and investigations are ongoing.
  • GDSS720 (AtlasDataDisk - D1T0) is in production in read-only mode. Following a number of problems, this server is being drained ahead of further investigations.
Notable Changes made this last week.
  • ARC CEs have been upgraded to version 4.2.0.
  • Yesterday (14th Oct) the production FTS3 service was upgraded to version 3.2.28.
Declared in the GOC DB
  • None
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
  • The rollout of the RIP protocol to the Tier1 routers still has to be completed.
  • First quarter 2015: Circuit testing of the remaining (i.e. non-UPS) circuits in the machine room.

Listing by category:

  • Databases:
    • Apply latest Oracle patches (PSU) to the production database systems (Castor, LFC). (Underway).
    • A new database (Oracle RAC) has been set up to host the Atlas3D database. This is updated from CERN via Oracle GoldenGate.
    • Switch LFC/3D to new Database Infrastructure.
  • Castor:
    • Update Castor headnodes to SL6.
    • Fix discrepancies found in some of the Castor database tables and columns. (The issue has no operational impact.)
  • Networking:
    • Move the switches connecting the 2011 batches of disk servers onto the Tier1 mesh network.
    • Make routing changes to allow the removal of the UKLight Router.
    • Enable the RIP protocol for updating routing tables on the Tier1 routers.
  • Fabric
    • Migration of data to new T10KD tapes. (Migration of CMS from 'B' to 'D' tapes underway; migration of GEN from 'A' to 'D' tapes to follow.)
    • Firmware updates on the remaining EMC disk arrays (Castor, FTS/LFC).
    • There will be circuit testing of the remaining (i.e. non-UPS) circuits in the machine room (Expected first quarter 2015).
Entries in GOC DB starting between the 8th and 15th October 2014.
Service Scheduled? Outage/At Risk Start End Duration Reason
lcgfts3.gridpp.rl.ac.uk SCHEDULED WARNING 14/10/2014 12:00 14/10/2014 13:00 1 hour Upgrade FTS3 servers to version 3.2.28
Atlas and GEN Castor instances (srm-alice, srm-atlas, srm-biomed, srm-dteam, srm-hone, srm-ilc, srm-mice, srm-minos, srm-na62, srm-snoplus, srm-superb, srm-t2k) SCHEDULED OUTAGE 14/10/2014 10:00 14/10/2014 12:00 2 hours Outage of Castor instances ATLAS and GEN (Alice plus non-LHC VOs) during database re-organisation.
arc-ce02, arc-ce03, arc-ce04 SCHEDULED WARNING 13/10/2014 11:00 13/10/2014 12:00 1 hour Upgrade of ARC CEs to version 4.2.0 (Delayed from 9th October).
lcgwms05.gridpp.rl.ac.uk SCHEDULED OUTAGE 12/10/2014 09:00 14/10/2014 17:00 2 days, 8 hours Update to EMI 3.20
lcgwms04.gridpp.rl.ac.uk SCHEDULED OUTAGE 09/10/2014 14:00 10/10/2014 16:35 1 day, 2 hours and 35 minutes Draining before EMI 3.20 update
arc-ce02, arc-ce03, arc-ce04 SCHEDULED WARNING 09/10/2014 10:00 09/10/2014 11:00 1 hour Upgrade of ARC CEs to version 4.2.0
All Castor UNSCHEDULED WARNING 07/10/2014 15:00 08/10/2014 16:27 1 day, 1 hour and 27 minutes Extending warning while databases running with degraded RAID battery
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID Level Urgency State Creation Last Update VO Subject
109359 Green Less Urgent In Progress 2014-10-15 2014-10-15 SNO+ RAL WMS unavailable
109329 Green Less Urgent In Progress 2014-10-14 2014-10-14 CMS access to lcgvo04.gridpp.rl.ac.uk
109276 Green Less Urgent In Progress 2014-10-11 2014-10-13 CMS Submissions to RAL FTS3 REST interface are failing for some users
109267 Green Urgent In Progress 2014-10-10 2014-10-14 CMS possible trouble accessing pileup dataset
109253 Green Top Priority Waiting for Reply 2014-10-09 2014-10-15 Atlas RAL FTS3 issue
108944 Green Urgent In Progress 2014-10-01 2014-10-14 CMS AAA access test failing at T1_UK_RAL
107935 Red Less Urgent On Hold 2014-08-27 2014-10-15 Atlas BDII vs SRM inconsistent storage capacity numbers
107880 Red Less Urgent Waiting for Reply 2014-08-26 2014-10-15 SNO+ srmcp failure
106324 Red Urgent On Hold 2014-06-18 2014-10-13 CMS pilots losing network connections at T1_UK_RAL
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud

Day OPS Alice Atlas CMS LHCb Atlas HC CMS HC Comment
08/10/14 100 100 100 100 100 91 100
09/10/14 100 100 100 100 100 95 100
10/10/14 100 100 100 100 100 94 94
11/10/14 100 100 100 100 100 95 93
12/10/14 100 100 100 100 100 98 98
13/10/14 100 100 100 100 100 99 98
14/10/14 100 100 88.6 96.5 100 91 99 For Atlas: An outage of two hours during scheduled Castor DB re-configuration. Atlas & CMS: Evening problems with ARC CEs.
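A rough check of the 14/10 figures, assuming the availabilities are simple fractions of the day for which the tests passed: the two-hour scheduled Castor outage alone would give Atlas about 22/24 ≈ 91.7%, so the reported 88.6% implies roughly a further 45 minutes of failed tests from the evening ARC CE problem, while the CMS figure of 96.5% corresponds to a little under an hour of test failures.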