Tier1 Operations Report 2014-08-06


DRAFT RAL Tier1 Operations Report for 6th August 2014 DRAFT

Review of Issues during the week 30th July to 6th August 2014.
  • There are problems with draining disk servers for Atlas under Castor 2.1.14. This is under investigation.
  • Discrepancies were found in some of the Castor database tables and columns. The Castor team are considering options for fixing these.
  • The problem with the dteam SRM regional Nagios tests has been fixed with a CIP update.
  • The fix to the problems with RIP on the Tier1 routers was rolled out on Thursday 31st July as announced. However, RIP did not start on the routers and this is being investigated.
  • At about 22:15 on Friday 1st August a network fault on site caused the Tier1 routers to fail over the active routing system from rtr-x670v-1 to rtr-x670v-2; this is the expected behaviour when a link fails. The active router was switched back on Tuesday 5th August at 10:15.
Resolved Disk Server Issues
  • Gdss680 was returned to production on Thursday 31st July.
Current operational status and issues
  • We are still investigating xroot access to CMS Castor following the upgrade on the 17th June. The initial problems are understood, but further optimisations still need to be investigated and heavily accessed ("hot") data sets have also been a factor.
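As background, one quick way to exercise the xroot path into a Castor instance is via the XRootD Python bindings. The sketch below is illustrative only: the redirector URL and file path are hypothetical placeholders, not the actual CMS Castor endpoints.

    # Minimal sketch: probe xroot (XRootD) read access to a storage endpoint.
    # The URL below is a hypothetical placeholder, not the CMS Castor redirector.
    from XRootD import client
    from XRootD.client.flags import OpenFlags

    URL = 'root://xrootd.example.rl.ac.uk//castor/example/cms/store/file.root'

    f = client.File()
    status, _ = f.open(URL, OpenFlags.READ)
    if not status.ok:
        print('open failed:', status.message)
    else:
        status, data = f.read(0, 1024)   # read the first 1 kB as a basic check
        print('read', 'ok' if status.ok else 'failed', '-', len(data or b''), 'bytes')
        f.close()

Such a probe only confirms basic connectivity and read access; it does not reproduce the load patterns associated with hot data sets.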


Ongoing Disk Server Issues
  • None
Notable Changes made this last week.
  • 10 disk servers deployed into lhcbDst
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
perfsonar-ps01.gridpp.rl.ac.uk, perfsonar-ps02.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 14/07/2014 11:00 | 14/08/2014 11:00 | 31 days | Systems being decommissioned. They have been replaced by lcgps01.gridpp.rl.ac.uk and lcgps02.gridpp.rl.ac.uk.
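For reference, declared downtimes can also be cross-checked against the GOC DB programmatic interface. The sketch below is a minimal example, assuming the public "get_downtime" method with the "topentity" and "ongoing_only" parameters and the XML field names shown; consult the GOC DB documentation for the authoritative API.

    # Minimal sketch: list downtimes declared for RAL-LCG2 via the GOC DB
    # public programmatic interface (method/parameter names as assumed above).
    import urllib.request
    import xml.etree.ElementTree as ET

    URL = ('https://goc.egi.eu/gocdbpi/public/'
           '?method=get_downtime&topentity=RAL-LCG2&ongoing_only=yes')

    with urllib.request.urlopen(URL) as resp:
        root = ET.parse(resp).getroot()

    for downtime in root:                 # one child element per declared downtime
        fields = {c.tag: (c.text or '').strip() for c in downtime}
        print(fields.get('HOSTNAME'), fields.get('SEVERITY'),
              fields.get('START_DATE'), fields.get('END_DATE'),
              fields.get('DESCRIPTION'))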
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
  • A problem prevented the rollout of the RIP protocol to the Tier1 routers when the rest of the site was done. The fix applied on Thursday 31st July did not take effect (RIP did not start on the routers), so a further intervention will be needed to enable RIP on the Tier1 router failover pair.
  • We are planning the termination of the FTS2 service (announced for 2nd September) now that almost all use is on FTS3. (An illustrative FTS3 submission sketch follows this list.)
  • The removal of the (NFS) software server is scheduled for the 2nd September.
  • We are planning to stop access to the CREAM CEs, although they may be left available to ALICE for some time. No date has yet been specified for this.
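Since the FTS2 termination assumes users have migrated to FTS3, the following is a minimal sketch of what an FTS3 submission looks like, assuming the fts3-rest Python "easy" bindings and a valid grid proxy; the endpoint and file URLs are hypothetical placeholders rather than actual RAL service names.

    # Minimal sketch: submit a single transfer to an FTS3 service using the
    # fts3-rest "easy" bindings. Endpoint and SURLs are placeholders.
    import fts3.rest.client.easy as fts3

    endpoint = 'https://fts3.example.org:8446'
    context = fts3.Context(endpoint)       # picks up the grid proxy by default

    transfer = fts3.new_transfer(
        'srm://source-se.example.org/vo/data/file.dat',
        'srm://dest-se.example.org/vo/data/file.dat')
    job = fts3.new_job([transfer], verify_checksum=True, retry=3)

    job_id = fts3.submit(context, job)
    print(job_id, fts3.get_job_status(context, job_id)['job_state'])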

Listing by category:

  • Databases:
    • Switch LFC/FTS/3D to new Database Infrastructure.
  • Castor:
    • None.
  • Networking:
    • Move switches connecting the 2011 disk server batches onto the Tier1 mesh network.
    • Make routing changes to allow the removal of the UKLight Router.
  • Fabric:
    • We are phasing out the use of the software server used by the small VOs.
    • Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC).
    • There will be circuit testing of the remaining (i.e. non-UPS) circuits in the machine room during 2014.
Entries in GOC DB starting between the 23rd and 30th July 2014.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
perfsonar-ps01.gridpp.rl.ac.uk, perfsonar-ps02.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 14/07/2014 11:00 | 14/08/2014 11:00 | 31 days | Systems being decommissioned. They have been replaced by lcgps01.gridpp.rl.ac.uk and lcgps02.gridpp.rl.ac.uk.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
106655 | Yellow | Less Urgent | In Progress | 2014-07-04 | 2014-07-23 | Ops | [Rod Dashboard] Issues detected at RAL-LCG2 (srm-dteam)
106324 | Red | Urgent | In Progress | 2014-06-18 | 2014-07-07 | CMS | pilots losing network connections at T1_UK_RAL
105405 | Red | Urgent | On Hold | 2014-05-14 | 2014-07-01 | | please check your Vidyo router firewall configuration
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud

Day OPS Alice Atlas CMS LHCb Atlas HC CMS HC Comment
23/07/14 100 100 100 100 100 94 99
24/07/14 100 100 100 100 100 96 100
25/07/14 100 100 100 100 100 98 100
26/07/14 100 100 100 100 100 100 99
27/07/14 100 100 100 100 100 98 99
28/07/14 100 100 97.6 100 100 89 99 Three SRM GET test failures (all SRM_FILE_BUSY).
29/07/14 100 100 97.2 100 100 100 100 A couple of SRM test failures, which correspond to an srmserver restart.