Tier1 Operations Report 2017-11-15

From GridPP Wiki
RAL Tier1 Operations Report for 15th November 2017

Review of Issues during the week 9th to 15th November 2017.
  • There was a failure of the CMS Castor stager headnode in the early morning of 7th November - the processor failed. The physical box was replaced (using one from the Castor 'preprod' system) and the CMS Castor service resumed during the afternoon.
  • There was a short (15-minute) network problem affecting some connectivity to the Tier1 from the RAL core network on the morning of 8th November. It was traced to the failure of a router, which was taken out of service.
Current operational status and issues
  • There is a problem with the site firewall that is disrupting some specific data flows. Discussions have been taking place with the vendor. Data transfers that pass through the firewall (such as those to/from the worker nodes) are affected.
Resolved Disk Server Issues
  • None
Ongoing Disk Server Issues
  • None
Limits on concurrent batch system jobs.
  • CMS Multicore: 550
Notable Changes made since the last meeting.
  • Re-distribution of data in Echo onto the 2015 capacity hardware is ongoing. This is expected to complete in a week or two.
  • A start has been made on updating Castor tape servers to SL7.
  • The "Test" FTS service (used by Atlas) has been enabled for IPv6 (i.e. dual stack enabled).
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
srm-cms.gridpp.rl.ac.uk | Unscheduled | Outage | 07/11/2017 06:00 | 07/11/2017 16:00 | 10 hours | Hardware problems with the Stager server - CMS Castor instance
Declared in the GOC DB
  • None
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Ongoing or Pending - but not yet formally announced:

  • Re-distribute the data in Echo onto the 2015 capacity hardware. (Ongoing)
  • Update the LHCb Castor SRMs so that timeouts can be configured.
  • The Production FTS service will be enabled for IPv6 (i.e. dual stack enabled).

Listing by category:

  • Castor:
    • Update systems (initially tape servers) to use SL7, with configuration managed by Quattor/Aquilon.
    • Move to generic Castor headnodes.
  • Echo:
    • Re-distribute the data in Echo onto the remaining 2015 capacity hardware.
    • Update to next CEPH version ("Luminous").
  • Networking:
    • Extend the number of services on the production network with IPv6 dual stack. (Done for Perfsonar, all squids and the CVMFS Stratum-1 servers).
  • Services:
    • The Production and "Test" (Atlas) FTS3 services will be merged and will make use of a resilient distributed database.
  • Internal:
    • DNS servers will be rolled out within the Tier1 network.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
130207 | Red | Urgent | On Hold | 2017-08-24 | 2017-11-13 | MICE | Timeouts when copying MICE reco data to CASTOR
127597 | Red | Urgent | Assigned | 2017-04-07 | 2017-10-05 | CMS | Check networking and xrootd RAL-CERN performance
124876 | Red | Less Urgent | On Hold | 2016-11-07 | 2017-11-13 | Ops | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
117683 | Red | Less Urgent | Waiting for Reply | 2015-11-18 | 2017-11-06 | | CASTOR at RAL not publishing GLUE 2
Availability Report
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas Echo | Comment
08/11/17 | 100 | 100 | 98 | 46 | 100 | 100 |
09/11/17 | 100 | 100 | 100 | 100 | 100 | 100 |
10/11/17 | 100 | 100 | 100 | 100 | 100 | 100 |
11/11/17 | 100 | 100 | 100 | 100 | 100 | 100 |
12/11/17 | 100 | 100 | 100 | 100 | 100 | 100 |
13/11/17 | 100 | 100 | 100 | 100 | 100 | 100 |
14/11/17 | 100 | 100 | 85 | 83 | 100 | 100 | CASTOR patch update.
Hammercloud Test Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC Echo = Atlas Echo (Template 841); CMS HC = CMS HammerCloud

Day | Atlas HC | Atlas HC Echo | CMS HC | Comment
08/11/17 | 100 | 100 | 46 |
09/11/17 | 100 | 98 | 100 |
10/11/17 | 100 | 97 | 100 |
11/11/17 | 100 | 100 | 100 |
12/11/17 | 100 | 100 | 100 |
13/11/17 | 100 | 100 | 100 |
14/11/17 | 96 | 100 | 39 |
Notes from Meeting.
  • Following the merger of AtlasScratchDisk with AtlasDataDisk in Castor some months ago, all data on the older disk servers that made up AtlasScratchDisk has now been moved (or removed). These servers will be decommissioned.