Tier1 Operations Report 2014-07-30

RAL Tier1 Operations Report for 30th July 2014

Review of Issues during the week 23rd to 30th July 2014.
  • The problems with the SRM processes for the Castor GEN instance were reported last week as solved, but they then recurred. The cause was a malformed file name, sent by a user, that was not trapped by the relevant SRM code; the user had been asked to change their code. This morning a workaround (code to trap and fix up the malformed filename) was put into the Castor GEN instance (a minimal sketch of such a trap-and-fix follows this list).
  • There have been some problems with the Atlas SRM/Castor instance over the last couple of days; these are under investigation.
  • The batch farm has been quiet over the last couple of days. The cap on the allowed number of Alice jobs was temporarily increased (up to 3000). The number of allowed multicore jobs was also increased (from 600 to 1000).
  • The problems with the Atlas FAX redirector reported last week were fixed on Wednesday afternoon (23rd July).
  • Problems were found with running jobs for the Hyper-K VO after it was enabled on the ARC CEs a week ago. These have now been fixed.
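
The workaround code itself is not included in this report. As a rough illustration, below is a minimal sketch in Python (the Castor SRM is not itself written in Python) of what trapping and fixing up a malformed filename can look like. The character whitelist and the percent-encoding fix-up rule are assumptions for illustration, not details of the actual change.

    import re

    # Characters accepted in a storage path; anything else is treated as
    # malformed. This whitelist is an assumption for illustration, not the
    # rule used in the actual Castor SRM workaround.
    _ALLOWED = re.compile(r'^[A-Za-z0-9._/:+-]+$')

    def fixup_filename(path):
        """Trap a malformed path and rewrite it into an acceptable form.

        Here the fix-up simply percent-encodes each offending character so
        the request can proceed instead of wedging the request handler.
        """
        if _ALLOWED.match(path):
            return path  # already well-formed, pass through untouched
        return ''.join(
            c if _ALLOWED.match(c) else '%{:02X}'.format(ord(c))
            for c in path
        )

    # Example: a stray control character in a user-supplied path
    print(fixup_filename('/castor/example/prod/bad\x07name'))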
Resolved Disk Server Issues
  • None
Current operational status and issues
  • We are still investigating xroot access to CMS Castor following the upgrade on the 17th June. The initial problems are understood, but further optimisations still need to be investigated, and there have been problems with hot data sets.
  • There is a problem with the dteam SRM regional Nagios tests, which may be caused by the way dteam is published by the CIP (a query sketch for checking this follows below).
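
One way to check how dteam is published is to query the BDII directly for the storage-area records it serves. The sketch below uses python-ldap; the BDII hostname is a placeholder (2170 is the standard BDII port), and the attribute names are the usual GLUE 1.3 ones for storage-area access rules.

    import ldap  # python-ldap

    # Site BDII endpoint: the hostname here is a placeholder for
    # illustration; 2170 is the standard BDII port.
    BDII_URI = "ldap://site-bdii.gridpp.rl.ac.uk:2170"

    conn = ldap.initialize(BDII_URI)
    conn.simple_bind_s("", "")  # BDIIs allow anonymous binds

    # Look for storage areas that publish an access rule for dteam
    # (GLUE 1.3 schema attributes).
    results = conn.search_s(
        "o=grid",
        ldap.SCOPE_SUBTREE,
        "(&(objectClass=GlueSA)(GlueSAAccessControlBaseRule=*dteam*))",
        ["GlueSALocalID", "GlueSAPath", "GlueSAAccessControlBaseRule"],
    )

    for dn, attrs in results:
        print(dn)
        for attr, values in attrs.items():
            for v in values:
                print("  %s: %s" % (attr, v.decode()))

Comparing what this returns against what the Nagios test expects should show whether the CIP's publishing of dteam is the cause.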
Ongoing Disk Server Issues
  • GDSS680 (AtlasDataDisk D1T0) crashed this morning at around 08:30. Investigations are ongoing.
Notable Changes made this last week.
  • The Castor compatibility mode was switched off on the Tier1 Castor Nameserver on Thursday 24th July, completing the final step of the 2.1.14-13 upgrade.
  • The CMS site local configuration was changed so that the xrootd fallback SUM tests (and only these tests) go through the firewall.
  • The maximum number of multicore jobs was increased from 600 to 1000.
  • A workaround (code to trap and fix up the malformed filename) was inserted into the Castor GEN instance today (30th July).
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
perfsonar-ps01.gridpp.rl.ac.uk, perfsonar-ps02.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 14/07/2014 11:00 | 14/08/2014 11:00 | 31 days | Systems being decommissioned. They have been replaced by lcgps01.gridpp.rl.ac.uk and lcgps02.gridpp.rl.ac.uk.
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
  • A problem prevented the rollout of the RIP protocol to the Tier1 routers when the rest of the site was done a week ago. The problem is understood and will be resolved tomorrow morning (Thursday 31st July), allowing the RIP protocol to be enabled on the Tier1 router failover pair.
  • We are planning the termination of the FTS2 service (announced for 2nd September) now that almost all use is on FTS3.
  • The removal of the (NFS) software server is scheduled for the 2nd September.
  • We are planning to stop access to the CREAM CEs, although they may be left available to ALICE for some time. No date has yet been specified for this.

Listing by category:

  • Databases:
    • Switch LFC/FTS/3D to new Database Infrastructure.
  • Castor:
    • None.
  • Networking:
    • Move the switches connecting the 2011 batches of disk servers onto the Tier1 mesh network.
    • Make routing changes to allow the removal of the UKLight Router.
  • Fabric:
    • We are phasing out the use of the software server used by the small VOs.
    • Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC).
    • There will be circuit testing of the remaining (i.e. non-UPS) circuits in the machine room during 2014.
Entries in GOC DB starting between the 23rd and 30th July 2014.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
perfsonar-ps01.gridpp.rl.ac.uk, perfsonar-ps02.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 14/07/2014 11:00 | 14/08/2014 11:00 | 31 days | Systems being decommissioned. They have been replaced by lcgps01.gridpp.rl.ac.uk and lcgps02.gridpp.rl.ac.uk.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
106655 | Yellow | Less Urgent | In Progress | 2014-07-04 | 2014-07-23 | Ops | [Rod Dashboard] Issues detected at RAL-LCG2 (srm-dteam)
106324 | Red | Urgent | In Progress | 2014-06-18 | 2014-07-07 | CMS | pilots losing network connections at T1_UK_RAL
105405 | Red | Urgent | On Hold | 2014-05-14 | 2014-07-01 |  | please check your Vidyo router firewall configuration
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud. All figures are percentages.

Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment
23/07/14 | 100 | 100 | 100 | 100 | 100 | 94 | 99 |
24/07/14 | 100 | 100 | 100 | 100 | 100 | 96 | 100 |
25/07/14 | 100 | 100 | 100 | 100 | 100 | 98 | 100 |
26/07/14 | 100 | 100 | 100 | 100 | 100 | 100 | 99 |
27/07/14 | 100 | 100 | 100 | 100 | 100 | 98 | 99 |
28/07/14 | 100 | 100 | 97.6 | 100 | 100 | 89 | 99 | Three SRM GET test failures (all SRM_FILE_BUSY).
29/07/14 | 100 | 100 | 97.2 | 100 | 100 | 100 | 100 | A couple of SRM test failures, corresponding to an srmserver restart.
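
As a back-of-the-envelope check on the two Atlas dips above: a daily availability figure maps directly onto minutes of the day counted as unavailable. The sketch below does that arithmetic; how the SAM framework actually discretises individual test failures into unavailable intervals is not described in this report, so the per-failure reading in the comment is an assumption.

    MINUTES_PER_DAY = 24 * 60

    def unavailable_minutes(availability_pct):
        """Minutes of the day counted as unavailable for a daily figure."""
        return (100.0 - availability_pct) / 100.0 * MINUTES_PER_DAY

    # 28/07: 97.6% -> ~34.6 minutes flagged as unavailable, i.e. roughly
    # 11-12 minutes per failed test if the three SRM GET failures are
    # weighted equally (an assumption, not the SAM algorithm).
    for day, pct in [("28/07/14", 97.6), ("29/07/14", 97.2)]:
        print(day, "%.1f min unavailable" % unavailable_minutes(pct))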