Tier1 Operations Report 2014-05-07

RAL Tier1 Operations Report for 7th May 2014

Review of Issues during the week from 30th April to 7th May 2014.
  • Problems with "CMSDisk" in Castor reported last week have been resolved. CMS deleted files, freeing up space, and three more disk servers were added to the service class. (Although one of the latter has subsequently failed and is currently out of service.)
  • There was a problem on Thursday with the batch farm caused by a particular (biomed) user running very large jobs, which led to problems for other VOs. The jobs were killed and the user was banned on the CEs. The problem recurred over the weekend because the original ban had not been placed in Quattor, and the user was banned again (a sketch of a CE ban list is given below). A ticket about this from LHCb was raised to 'alarm' status; it was responded to, but the alert did not page out correctly.
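For reference, a ban of this kind is typically expressed as a list of blocked certificate DNs on the CEs. The fragment below is a minimal illustrative sketch assuming an LCAS-style ban_users.db file; the path and DN shown are placeholders rather than the actual RAL configuration, which is managed through Quattor precisely so that such entries are not lost on redeployment.

  # /opt/glite/etc/lcas/ban_users.db   (assumed LCAS ban list; actual path/mechanism may differ)
  # One quoted certificate DN per line; jobs presenting these DNs are refused.
  # The DN below is a placeholder, not the user concerned.
  "/DC=org/DC=example/OU=biomed/CN=Placeholder Banned User"
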
Resolved Disk Server Issues
  • GDSS758 (CMSDisk - D1T0) failed on Friday morning (2nd May). This was a new server that had only gone into production the previous day. Following initial investigation the server was returned to service later that day. Since then the server has been drained and is undergoing further investigation.
Current operational status and issues
  • The load related problems reported for the CMS Castor instance have not been seen for a few weeks. However, work is underway to tackle them: in particular, servers with faster network connections will be moved into the disk cache in front of CMS_Tape when they become available.
  • The Castor Team are now able to reproduce the intermittent failures of Castor access via the SRM that have been reported in recent weeks. Understanding of the problem is significantly advanced and further investigations are ongoing using the Castor Preprod instance. Ideas for a workaround are being developed.
  • As reported previously, the file deletion rate has been somewhat improved in work carried out with Atlas. However, there is still a problem that needs to be understood.
  • Problems with the infrastructure used to host many of our non-Castor services have largely been worked around, although not yet fixed. Some additional migrations of VMs have been necessary.
  • We have now had repeated instances where the OPN link has not cleanly failed over to the backup link during problems with the primary.
  • One of the network uplinks (for the 2012 disk servers) has been running at full capacity. We have a plan to move the switch into the new Tier1 mesh network to alleviate this.
Ongoing Disk Server Issues
  • None
Notable Changes made this last week.
  • The RAL Tier1 network uplink has been migrated to the new Tier1 router pair.
  • WMS nodes (lcgwms04, lcgwms05, lcgwms06) upgraded to EMI-3 update 15.
  • L&B nodes (lcglb01, lcglb02) upgraded to EMI-3 update 12.
  • EMI-3 update 15 applied to top-BDII nodes (lcgbdii01, lcgbdii03, lcgbdii04).
  • EMI v3.15.0 BDII-site applied to lcgsbdii01 and lcgsbdii02 nodes.
  • We are starting to test CVMFS Client version 2.1.19.
  • Condor has been upgraded from version 8.0.4 to version 8.0.6, and the hyperthreading settings have been altered so that many machines now run at full hyperthreaded capacity (a sketch of the relevant settings follows this list).
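The hyperthreading change amounts to letting each worker node advertise its logical (hyperthreaded) cores rather than only its physical ones. The fragment below is a minimal illustrative sketch of the HTCondor configuration knobs involved; it is not the actual RAL configuration (which is managed through Quattor), and the single partitionable-slot layout shown is an assumption.

  # condor_config.local on a worker node (illustrative sketch only)
  # Count logical (hyperthreaded) cores when detecting CPUs, so the
  # machine advertises its full hyperthreaded capacity.
  COUNT_HYPERTHREAD_CPUS = True

  # One partitionable slot covering all detected cores; jobs carve
  # dynamic slots out of it as they start.
  NUM_SLOTS_TYPE_1 = 1
  SLOT_TYPE_1 = 100%
  SLOT_TYPE_1_PARTITIONABLE = True

Note that changes to the slot layout generally require a restart of the startd, rather than just a reconfig, before they take effect.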


Declared in the GOC DB
Service Scheduled? Outage/At Risk Start End Duration Reason
srm-lhcb-tape.gridpp.rl.ac.uk SCHEDULED OUTAGE 13/05/2014 08:00 13/05/2014 11:00 3 hours Outage of tape system for update of library controller.
All Castor (SRM) endpoints SCHEDULED WARNING 13/05/2014 08:00 13/05/2014 11:00 3 hours Outage of tape system for update of library controller.
lcgui02.gridpp.rl.ac.uk SCHEDULED OUTAGE 30/04/2014 14:00 29/05/2014 13:00 28 days, 23 hours Service being decommissioned.
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Listing by category:

  • Databases:
    • Switch LFC/FTS/3D to new Database Infrastructure.
  • Castor:
    • Castor 2.1.14 testing is largely complete, although a new minor version (2.1.14-12) will be released soon.
  • Networking:
    • Move switches connecting the recent disk server batches ('11, '12) onto the Tier1 mesh network.
    • Update core Tier1 network and change connection to site and OPN including:
      • Make routing changes to allow the removal of the UKLight Router.
  • Fabric
    • We are phasing out the use of the software server used by the small VOs.
    • Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC)
    • There will be circuit testing of the remaining (i.e. non-UPS) circuits in the machine room during 2014.
Entries in GOC DB starting between the 30th April and 7th May 2014.
Service Scheduled? Outage/At Risk Start End Duration Reason
lcgui02.gridpp.rl.ac.uk SCHEDULED OUTAGE 30/04/2014 14:00 29/05/2014 13:00 28 days, 23 hours Service being decommissioned.
Whole site SCHEDULED WARNING 30/04/2014 10:00 30/04/2014 12:00 2 hours RAL Tier1 site in warning state due to UPS/generator test.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID Level Urgency State Creation Last Update VO Subject
105161 Green Less Urgent In Progress 2014-05-05 2014-05-06 H1 hone jobs submitted into CREAM queues through lcgwms05.gridpp.rl.ac.uk & lcgwms06.gridpp.rl.ac.uk WMSs are in Ready status for a long time (more than 5 hours)
105100 Green Urgent In Progress 2014-05-02 2014-05-06 CMS T1_UK_RAL Consistency Check (May14)
103197 Red Less Urgent Waiting Reply 2014-04-09 2014-04-09 RAL myproxy server and GridPP wiki
101968 Red Less Urgent In Progress 2014-03-11 2014-04-01 Atlas RAL-LCG2_SCRATCHDISK: One dataset to delete is causing 1379 deletion errors
98249 Red Urgent In Progress 2013-10-21 2014-04-23 SNO+ please configure cvmfs stratum-0 for SNO+ at RAL T1
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud

Day OPS Alice Atlas CMS LHCb Atlas HC CMS HC Comment
30/04/14 100 100 100 100 100 99 99
01/05/14 100 100 100 100 100 90 99
02/05/14 100 100 100 100 100 98 99
03/05/14 100 100 100 100 100 90 99 Some RAL batch problems followed by problem with Atlas Hammercloud monitoring.
04/05/14 100 100 100 100 100 87 99 Some RAL batch problems followed by problem with Atlas Hammercloud monitoring.
05/05/14 100 100 100 100 100 100 100
06/05/14 100 100 100 100 100 100 99