Tier1 Operations Report 2014-10-29

From GridPP Wiki

RAL Tier1 Operations Report for 29th October 2014

Review of Issues during the week 22nd to 29th October 2014.
  • A problem reported last week, with a user filling up the spool areas on the ARC CEs, has been fixed. The user was contacted (and responded) and, in addition, an automated process that cleans up the large log files has been put in place.
  • There have been problems with three of the four ARC CEs since the end of last week: the process of tidying up and reporting completed jobs took a long time. ARC-CE01 was declared down and was drained and re-installed last Friday (24th Oct). ARC-CE02 and ARC-CE03 were then drained over the weekend and re-installed on Monday (27th Oct).
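The automated clean-up mentioned above could take many forms; the report does not describe the actual RAL implementation. As a purely illustrative sketch, a periodic job might scan the CE spool area and delete log files that exceed a size threshold or a maximum age. The directory path, file pattern, and thresholds below are all assumptions, not details from the report.

```python
# Hypothetical sketch of an automated spool clean-up (the real RAL process,
# paths, and thresholds are not documented in this report).
import os
import time

def clean_spool(spool_dir, max_bytes=100 * 1024 * 1024, max_age_days=7):
    """Delete *.log files under spool_dir that are larger than max_bytes
    or older than max_age_days; return the paths removed."""
    removed = []
    cutoff = time.time() - max_age_days * 86400
    for root, _dirs, files in os.walk(spool_dir):
        for name in files:
            if not name.endswith(".log"):
                continue
            path = os.path.join(root, name)
            st = os.stat(path)
            if st.st_size > max_bytes or st.st_mtime < cutoff:
                os.remove(path)
                removed.append(path)
    return removed
```

Such a script would typically run from cron on each CE so that a single misbehaving user cannot fill the spool area again.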
Resolved Disk Server Issues
  • GDSS673 (lhcbRawRdst - D0T1) failed around 7am on Sunday morning (26th Oct). A faulty disk drive was found. The server was checked out and firmware updated. It was returned to service during the working day yesterday (Tuesday 28th). One file was declared lost from this machine.
  • As reported last week: GDSS763 (AtlasDataDisk - D1T0) had failed for the third time in around a month. This system has been completely drained and is undergoing further investigation.
Current operational status and issues
  • None
Ongoing Disk Server Issues
  • None
Notable Changes made this last week.
  • Oracle patching is ongoing with the pluto standby database patched on Monday (27th). The pluto primary is being patched today (29th).
  • Increased the number of jobs Alice is allowed to run to 3500. (23rd October)
Declared in the GOC DB

None

Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
  • The Oracle regular "PSU" patches will be applied to the Pluto Castor standby database system on Monday (27th Oct) and to the Pluto production database on Wednesday (29th Oct). This is expected to be a transparent intervention. The 'Pluto' database hosts the Castor Nameserver as well as the CMS and LHCb stager databases.
  • The rollout of the RIP protocol to the Tier1 routers still has to be completed.
  • First quarter 2015: Circuit testing of the remaining (i.e. non-UPS) circuits in the machine room.

Listing by category:

  • Databases:
    • Apply latest Oracle patches (PSU) to the production database systems (Castor, LFC). (Underway).
    • A new database (Oracle RAC) has been set-up to host the Atlas3D database. This is updated from CERN via Oracle GoldenGate.
    • Switch LFC/3D to new Database Infrastructure.
  • Castor:
    • Update Castor headnodes to SL6.
    • Fix discrepancies found in some of the Castor database tables and columns. (The issue has no operational impact.)
  • Networking:
    • Move the switches connecting the 2011 batches of disk servers onto the Tier1 mesh network.
    • Make routing changes to allow the removal of the UKLight Router.
    • Enable the RIP protocol for updating routing tables on the Tier1 routers.
  • Fabric
    • Migration of data to new T10KD tapes. (Migration of CMS from 'B' to 'D' tapes underway; migration of GEN from 'A' to 'D' tapes to follow.)
    • Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC)
    • There will be circuit testing of the remaining (i.e. non-UPS) circuits in the machine room (Expected first quarter 2015).
Entries in GOC DB starting between the 22nd and 29th October 2014.
Service Scheduled? Outage/At Risk Start End Duration Reason
arc-ce02.gridpp.rl.ac.uk, arc-ce03.gridpp.rl.ac.uk UNSCHEDULED OUTAGE 24/10/2014 13:06 28/10/2014 13:00 3 days, 23 hours and 54 minutes There are ongoing problems with arc-ce02 and arc-ce03. We are investigating.
arc-ce01.gridpp.rl.ac.uk UNSCHEDULED OUTAGE 23/10/2014 17:00 24/10/2014 12:40 19 hours and 40 minutes Ongoing problems with the CE.
arc-ce02.gridpp.rl.ac.uk, arc-ce03.gridpp.rl.ac.uk UNSCHEDULED WARNING 22/10/2014 13:00 27/10/2014 15:00 5 days, 2 hours We are investigating a problem with these CEs.
arc-ce01.gridpp.rl.ac.uk UNSCHEDULED OUTAGE 20/10/2014 13:45 23/10/2014 17:00 3 days, 3 hours and 15 minutes Investigating problems with the ARC CE.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID Level Urgency State Creation Last Update VO Subject
109713 Green Less Urgent In Progress 2014-10-29 2014-10-29 Add the icecube repository
109712 Green Urgent In Progress 2014-10-29 2014-10-29 CMS Glexec exited with status 203; ...
109708 Green Less Urgent In Progress 2014-10-29 2014-10-29 Ops [Rod Dashboard] Issues detected at RAL-LCG2
109608 Green Urgent Waiting for Reply 2014-10-24 2014-10-28 t2k LFC denies existence of new user
109276 Green Less Urgent In Progress 2014-10-11 2014-10-28 CMS Submissions to RAL FTS3 REST interface are failing for some users
108944 Green Urgent In Progress 2014-10-01 2014-10-27 CMS AAA access test failing at T1_UK_RAL
107935 Red Less Urgent On Hold 2014-08-27 2014-10-15 Atlas BDII vs SRM inconsistent storage capacity numbers
107880 Red Less Urgent In Progress 2014-08-26 2014-10-29 SNO+ srmcp failure
106324 Red Urgent On Hold 2014-06-18 2014-10-13 CMS pilots losing network connections at T1_UK_RAL
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud

Day OPS Alice Atlas CMS LHCb Atlas HC CMS HC Comment
22/10/14 100 100 100 98.9 100 99 100 Problems with the ARC CEs.
23/10/14 100 100 98.3 72.6 88.0 99 99 Some SRM test failures at around 08:00 - 09:30 local time, compounded by some (ARC) CE test failures.
24/10/14 100 100 98.3 99.3 100 100 100 Atlas: Pair of SRM test failures (User timeout): CMS: Batch (ARC CE issues)
25/10/14 100 100 100 98.6 100 100 98 Batch (ARC CE issues)
26/10/14 100 100 100 99.4 99.1 100 n/a CMS: Batch (ARC CE issues); LHCb: SRM test failure (No such file or directory).
27/10/14 100 100 100 100 100 100 100
28/10/14 100 100 98.4 100 100 100 100 Some file not copied errors on SRM.