RAL Tier1 weekly operations CASTOR 22/09/2014

From GridPP Wiki

Operations News

  • A plan to ensure that PreProd represents production in terms of hardware generation is underway
  • Disk server redeployments continue (e.g. D1T0 servers reused as D0T1) ... 5 servers left in LHCb
  • New VO (pheno) added to GEN scratch
  • SL6 headnode work is progressing well - rollout hoped for in November
  • xrootd security advisory concerning the FAX component within xrootd

Operations Problems

  • Atlas load: high wait I/O on several servers – many xrootd jobs. Suggestion to identify which workflow is running
  • cmsd (the xrootd cluster management daemon) crashing
  • Atlas disk issue yesterday
  • Database failover (not a physical machine failure) … the longstanding issue already reported to Oracle
  • Checksum issues on Atlas – the same file (nsfileid) on multiple disk servers … possibly the source file was corrupt and the write to disk failed?
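
The duplicate-nsfileid checksum problem above can be investigated by recomputing the Adler-32 checksum of each replica and comparing the results against each other and the nameserver's stored value. A minimal sketch (the replica paths shown in the comments are hypothetical):

```python
import zlib

def adler32_of(path, chunk=1 << 20):
    """Compute the Adler-32 checksum of a file, streamed in 1 MiB chunks."""
    value = 1  # Adler-32 seed value
    with open(path, "rb") as f:
        while True:
            data = f.read(chunk)
            if not data:
                break
            value = zlib.adler32(data, value)
    return format(value & 0xFFFFFFFF, "08x")

# Hypothetical replica paths on two disk servers:
# replicas = ["/srv/castor/ds01/12345678", "/srv/castor/ds02/12345678"]
# sums = {p: adler32_of(p) for p in replicas}
# If the replicas' checksums differ, one copy is corrupt; if both differ from
# the stored checksum, the file was likely already corrupt at write time.
```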


Blocking Issues

  • GridFTP bug in SL6 - stops any Globus copy if a client is using a particular library. This is a show-stopper for SL6 on disk servers.


Planned, Scheduled and Cancelled Interventions

  • Juan to apply PSU patches to the CASTOR databases at the beginning of November – standard change
  • CASTOR 2.1.14-14 testing in PreProd
  • A Tier 1 database cleanup is planned to eliminate a number of excess tables and other entities left over from previous CASTOR versions. This will be change-controlled in the near future.


Advanced Planning

Tasks

  • Possible future upgrade to CASTOR 2.1.14-15.
  • Switch admin machines: from lcgccvm02 to lcgcadm05
  • A new VM configured to run against the standby CASTOR database will be created as a front-end for dark-data and similar queries.
  • Replace DLF with Elasticsearch
  • Correct the partition alignment issue (3rd partition) on new CASTOR disk servers
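
The alignment issue above amounts to a partition whose start offset does not fall on the expected boundary (commonly 1 MiB with modern partitioning tools). A minimal check, assuming the start sector is read from `/sys/block/.../start` or a `parted` listing (the sector values in the comments are illustrative):

```python
def is_aligned(start_sector, sector_size=512, boundary=1 << 20):
    """Return True if the partition's byte offset is a multiple of the boundary (default 1 MiB)."""
    return (start_sector * sector_size) % boundary == 0

# Illustrative values: a partition starting at sector 2048 (exactly 1 MiB in)
# is aligned; one starting at sector 63 (legacy DOS layout) is not.
```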


Interventions

  • None

Staffing

  • CASTOR on-call person
    • Rob


  • Staff absence/out of the office:
    • CASTOR team - out Mon/Tue at the CASTOR face-to-face meeting
    • Brian – Out all week
    • Shaun - Out all week