Tier1 Operations Report 2014-10-01

From GridPP Wiki

RAL Tier1 Operations Report for 1st October 2014

Review of Issues during the week 24th September to 1st October 2014.
  • There were problems with the Atlas Castor instance over the weekend. These were linked to the draining of a disk server, which subsequently came under very heavy load.
  • There were problems with the SAM tests on the CEs from Tuesday through to Wednesday, believed to be due to a problem in one of the site BDIIs.
  • This morning, Wed 1st October, one of the Neptune standby databases crashed. Cause unknown. A Service Request has been opened with Oracle.
Resolved Disk Server Issues
  • GDSS763 (AtlasDataDisk - D1T0) failed during Saturday evening. It was restarted and tested but no fault found. The system was returned to service yesterday (30th Sep).
  • GDSS720 (AtlasDataDisk - D1T0) was reported as having had problems (and being returned to service) in last week's report. This server has suffered four failures in the last 14 months and is being drained pending further investigation.
Current operational status and issues
  • We are investigating why the Atlas Frontier systems had problems using the new Cronos database.
Ongoing Disk Server Issues
  • None.
Notable Changes made this last week.
  • One batch of worker nodes (64 machines) has had Linux cgroups configured to enforce memory limits. (Previously the limits were advisory only.)
  • Oracle patches (PSU) applied to the standby Neptune database (Castor Atlas & GEN) yesterday (Tuesday 30th Sep).
  • Assorted system (Linux) updates.
  • Migration of CMS data from 'B' to 'D' tapes is continuing.
  • There was a successful UPS/Generator load test this morning (Wed 1st Oct).
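The cgroups change above corresponds to moving from the cgroup v1 memory controller's soft limit (advisory: reclaimed against only under memory pressure) to its hard limit (enforced: the kernel OOM killer acts when it is exceeded). A minimal, illustrative libcgroup cgconfig.conf fragment for an SL6-era worker node is sketched below; the group name and limit values are hypothetical and not the actual Tier1 configuration:

```
# /etc/cgconfig.conf -- hypothetical sketch, not the RAL Tier1 config
group batch/jobslot {
    memory {
        # Advisory: reclaimed against only when the node is short of memory
        memory.soft_limit_in_bytes = 4294967296;   # 4 GiB
        # Enforced: allocations beyond this trigger the OOM killer
        memory.limit_in_bytes = 4294967296;        # 4 GiB
    }
}
```

With only the soft limit set, a job exceeding its allocation is reclaimed against only when the whole node is under memory pressure; setting the hard limit makes the kernel act on the offending job directly, which is the enforcement referred to above.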
Declared in the GOC DB
Service Scheduled? Outage/At Risk Start End Duration Reason
arc-ce02, arc-ce03, arc-ce04 SCHEDULED WARNING 09/10/2014 10:00 09/10/2014 11:00 1 hour Upgrade of ARC CEs to version 4.2.0
arc-ce01 SCHEDULED WARNING 07/10/2014 10:00 07/10/2014 11:00 1 hour Upgrade of ARC CE to version 4.2.0


Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
  • The rollout of the RIP protocol to the Tier1 routers still has to be completed.
  • Access to the Cream CEs will be withdrawn, except for ALICE, which will retain access. This was proposed for yesterday (Tuesday 30th September) but has been delayed.
  • First quarter 2015: Circuit testing of the remaining (i.e. non-UPS) circuits in the machine room.

Listing by category:

  • Databases:
    • Apply latest Oracle patches (PSU) to the production database systems (Castor, LFC). (Underway).
    • A new database (Oracle RAC) has been set up to host the Atlas3D database. This is updated from CERN via Oracle GoldenGate.
    • Switch LFC/3D to new Database Infrastructure.
  • Castor:
    • Update Castor headnodes to SL6.
    • Fix discrepancies found in some of the Castor database tables and columns. (The issue has no operational impact.)
  • Networking:
    • Move switches connecting the 2011 disk servers batches onto the Tier1 mesh network.
    • Make routing changes to allow the removal of the UKLight Router.
    • Enable the RIP protocol for updating routing tables on the Tier1 routers.
  • Fabric:
    • Migration of data to new T10KD tapes. (Migration of CMS from 'B' to 'D' tapes underway; migration of GEN from 'A' to 'D' tapes to follow.)
    • Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC)
    • There will be circuit testing of the remaining (i.e. non-UPS) circuits in the machine room (Expected first quarter 2015).
Entries in GOC DB starting between the 24th September and 1st October 2014.
Service Scheduled? Outage/At Risk Start End Duration Reason
Whole Site SCHEDULED WARNING 01/10/2014 10:00 01/10/2014 12:00 2 hours RAL Tier1 site in warning state due to UPS/generator test.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID Level Urgency State Creation Last Update VO Subject
108944 Green Urgent In Progress 2014-10-01 2014-10-01 CMS AAA access test failing at T1_UK_RAL
108886 Green Less Urgent In Progress 2014-09-29 2014-09-30 dteam RAL stratum one, cernvmfs.gridpp. connections
108845 Green Urgent In Progress 2014-09-27 2014-09-30 Atlas RAL-LCG2: Source connection timeout plus globus_ftp_client error
108546 Yellow Less Urgent In Progress 2014-09-16 2014-09-22 Atlas RAL-LCG2_HIMEM_SL6: production jobs failed
107935 Red Less Urgent In Progress 2014-08-27 2014-09-02 Atlas BDII vs SRM inconsistent storage capacity numbers
107880 Red Less Urgent In Progress 2014-08-26 2014-09-02 SNO+ srmcp failure
106324 Red Urgent In Progress 2014-06-18 2014-09-23 CMS pilots losing network connections at T1_UK_RAL
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud

Day OPS Alice Atlas CMS LHCb Atlas HC CMS HC Comment
24/09/14 100 100 100 100 100 100 99
25/09/14 100 100 100 100 100 99 100
26/09/14 100 100 93.5 100 100 98 98 Problems with Atlas Castor instance (triggered by draining).
27/09/14 100 100 86.1 100 100 92 n/a Problems with Atlas Castor instance (triggered by draining).
28/09/14 100 100 85.9 100 100 93 n/a Problems with Atlas Castor instance (triggered by draining).
29/09/14 100 100 97.9 100 100 97 n/a Atlas test failure.
30/09/14 94.6 100 100 95.5 100 98 n/a Multiple CE test failures. Possibly due to problem on one of our site BDIIs.