Tier1 Operations Report 2014-04-09

RAL Tier1 Operations Report for 9th April 2014

Review of Issues during the week 2nd to 9th April 2014.
  • During the afternoon of Thursday 3rd April all three WMS systems reported problems. The problems cleared without our intervention and are believed to have been triggered by the content of jobs being submitted.
  • Maintenance on the Primary OPN link overnight Saturday to Sunday (5/6 April) took the link down for a few hours. The failover to the backup link did not work properly; the effect of this can be seen in the failure of the SUM tests from CERN during this time.
  • It was reported last week that around 50 files in tape-backed service classes (mainly in GEN) had been found not to have been migrated to tape. This has now been fixed. (A hedged illustration of checking migration status follows this list.)
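As a minimal sketch, assuming the standard CASTOR name-server client: nsls -l marks a file that has a copy on tape with a leading 'm' in the mode string, while '-' indicates a file not yet migrated. The path, file name and ownership below are hypothetical, for illustration only:

  # Hypothetical file; the leading 'm' shows a copy exists on tape.
  $ nsls -l /castor/ads.rl.ac.uk/prod/gen/example.dat
  mrw-r--r--   1 genprod  gen    1048576 Apr 03 12:00 example.dat
  # A leading '-' instead of 'm' would mean the file had not yet been migrated.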
Resolved Disk Server Issues
  • Last Wednesday (2nd April) GDSS239 (AtlasHotDisk) crashed. As AtlasHotDisk was about to be merged into another space token, and files in it should have multiple copies spread across its servers, it was decided to withdraw the server from use rather than spend time investigating the failure. In fact 329 files were unique to this server; following discussion with Atlas these were copied back in from other sites rather than investigating the server problems.
  • In the early hours of Sunday 6th April GDSS600 (AtlasDataDisk - D1T0) failed, with the disk controller reporting multiple disk failures. The system was returned to production yesterday evening (8th April) and is being drained; it will be decommissioned once the files have been copied off.
Current operational status and issues
  • The load-related problems reported for the CMS Castor instance have not been seen this last week. However, work is underway to tackle these problems: in particular, servers with faster network connections will be moved into the disk cache in front of CMS_Tape when they become available.
  • The Castor Team are now able to reproduce the intermittent failures of Castor access via the SRM that have been reported in recent weeks. Understanding of the problem is significantly advanced and further investigations are ongoing using the Castor Preprod instance. Ideas for a workaround are being developed.
  • As reported before, working with Atlas we have somewhat improved the file deletion rate. However, there is still a problem that needs to be understood.
  • Problems with the infrastructure used to host many of our non-Castor services have largely been worked around, although not yet fixed. Some additional migrations of VMs have been necessary.
Ongoing Disk Server Issues
  • None.
Notable Changes made this last week.
  • The rollout of WNs updated to the EMI-3 version has been completed.
  • The EMI-3 Argus server is now in use everywhere in the batch farm.
  • Since the meeting last week three new disk servers have been deployed to CMSDisk and eight to AtlasDataDisk. (These are in addition to the nine servers added to LHCbDst as reported last week.)
  • Batch farm fairshares have been adjusted for the 2014 pledges. (A hedged configuration sketch follows this list.)
  • Atlas have resumed using the RAL FTS3 server for many file transfers. (An illustrative submission command follows this list.)
  • A required change to ACLs in a router enabled our two new perfSONAR nodes to become active.
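As a hedged sketch of the kind of change implied by the fairshare adjustment, assuming an HTCondor-style batch farm using negotiator group quotas; all group names and quota fractions below are invented for illustration and are not the actual 2014 pledges:

  # Illustrative HTCondor negotiator configuration fragment (invented values).
  GROUP_NAMES = group_atlas, group_cms, group_lhcb, group_alice
  GROUP_QUOTA_DYNAMIC_group_atlas = 0.55
  GROUP_QUOTA_DYNAMIC_group_cms   = 0.20
  GROUP_QUOTA_DYNAMIC_group_lhcb  = 0.20
  GROUP_QUOTA_DYNAMIC_group_alice = 0.05
  # Let groups use idle capacity beyond their quota.
  GROUP_ACCEPT_SURPLUS = True

After such an edit the negotiator would be reconfigured (e.g. with condor_reconfig) for the new shares to take effect.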
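Likewise, an illustrative FTS3 submission using the standard command-line client; the endpoint URL and SURLs below are placeholders, not the actual RAL endpoint or storage paths:

  # Placeholder endpoint and SURLs for illustration only.
  $ fts-transfer-submit -s https://fts3.example.ac.uk:8446 \
      srm://source.example.org/pnfs/example.org/data/atlas/file1 \
      srm://srm-atlas.example.ac.uk/castor/example.ac.uk/atlas/file1

The command returns a job ID whose progress can then be followed with fts-transfer-status.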
Declared in the GOC DB
Service Scheduled? Outage/At Risk Start End Duration Reason
Whole Site SCHEDULED OUTAGE 29/04/2014 07:00 29/04/2014 17:00 10 hours Site outage during Network Upgrade.
lcgrbp01.gridpp.rl.ac.uk SCHEDULED OUTAGE 02/04/2014 12:00 01/05/2014 12:00 29 days System being decommissioned. (Replaced by myproxy.gridpp.rl.ac.uk.)
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Listing by category:

  • Databases:
    • Switch LFC/FTS/3D to new Database Infrastructure.
  • Castor:
    • Castor 2.1.14 testing is largely complete. (A non-Tier1 production Castor instance has been successfully upgraded.) We are starting to look at possible dates for rolling this out (probably around May).
  • Networking:
    • Update the core Tier1 network and change the connection to site and OPN, including:
      • Install new Routing layer for Tier1 & change the way the Tier1 connects to the RAL network. (Scheduled for 29th April)
      • These changes will lead to the removal of the UKLight Router.
  • Fabric:
    • We are phasing out the use of the software server used by the small VOs.
    • Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC)
    • There will be circuit testing of the remaining (i.e. non-UPS) circuits in the machine room during 2014.
Entries in GOC DB starting between the 2nd and 9th April 2014.


Service Scheduled? Outage/At Risk Start End Duration Reason
lcgwms04, lcgwms05, lcgwms06 UNSCHEDULED WARNING 03/04/2014 17:00 04/04/2014 09:25 16 hours and 25 minutes We are investigating problems with these WMS systems.
srm-lhcb-tape.gridpp.rl.ac.uk UNSCHEDULED WARNING 03/04/2014 08:00 03/04/2014 09:30 1 hour and 30 minutes Warning during further testing of the new tape interface (ACSLS).
lcgrbp01.gridpp.rl.ac.uk SCHEDULED OUTAGE 02/04/2014 12:00 01/05/2014 12:00 29 days System being decommissioned. (Replaced by myproxy.gridpp.rl.ac.uk.)
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID Level Urgency State Creation Last Update VO Subject
103197 Green Less Urgent Waiting Reply 2014-04-09 2014-04-09 RAL myproxy server and GridPP wiki
102611 Yellow Urgent In Progress 2014-03-24 2014-03-24 NAGIOS *eu.egi.sec.Argus-EMI-1* failed on argusngi.gridpp.rl.ac.uk@RAL-LCG2
101968 Red Less Urgent On Hold 2014-03-11 2014-04-01 Atlas RAL-LCG2_SCRATCHDISK: One dataset to delete is causing 1379 deletion errors
101079 Red Less Urgent In Progress 2014-02-09 2014-04-01 ARC CEs have VOViews with a default SE of "0"
98249 Red Urgent In Progress 2013-10-21 2014-03-13 SNO+ please configure cvmfs stratum-0 for SNO+ at RAL T1
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud

Day OPS Alice Atlas CMS LHCb Atlas HC CMS HC Comment
02/04/14 100 100 100 100 100 100 98
03/04/14 100 100 100 100 100 100 99
04/04/14 100 100 100 100 100 100 99
05/04/14 100 100 100 100 100 100 100
06/04/14 100 100 93.6 95.5 93.6 100 100 Primary OPN link to CERN down. Failover to backup link didn't work properly.
07/04/14 100 100 86.3 86.2 81.5 100 100 Primary OPN link to CERN down. Failover to backup link didn't work properly.
08/04/14 100 100 100 100 100 100 100