RAL Tier1 weekly operations castor 02/02/2015


List of CASTOR meetings

Operations News

  • Draining: ATLAS draining is currently on hold due to lack of free disk space (currently 85 TB)
  • Facilities CASTOR has been patched for kernel/errata updates (not the GHOST vulnerability)
  • CASTOR disk server kernel patching: LHCb is complete; around 10 ATLAS disk servers had been done when the GHOST vulnerability was identified, at which point patching was paused
  • Certificates on fdsdss20 to fdsdss30 were updated
  • The initial CASTOR DB cleanup was successful - is a further change control required?


Operations Problems

  • The CASTOR functional test on lcgccvm02 is causing problems - Gareth is reviewing
  • Problems with StorageD retrievals from CASTOR - investigation is ongoing
  • The 150k zero-size files reported last week have almost all been dealt with; CMS files are still outstanding
  • Files in CASTOR with no namespace (ns) or xattr checksum value are failing transfers from RAL to BNL using the BNL FTS3 server (see the scan sketch after this list)
  • CMS heavy load and high job failure rate - hot-spotting on 3 servers; the files were spread to many servers, which then also became loaded. CMS has moved away from RAL for these files. This issue needs to be discussed in more detail.
  • Files unroutable to tape - a few recent test files (sonar.test) from ATLAS. A little more investigation is needed.
  • fetch-crl (certificate revocation list updates) was not running on any SL6 head node - now resolved

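A rough sketch of how the zero-size and missing-checksum scans above might be run on a disk server (Python 3; the mount point and the xattr key name are illustrative assumptions, not confirmed CASTOR values):

    #!/usr/bin/env python3
    # Sketch: walk a disk server data partition looking for zero-length files and
    # files that carry no checksum extended attribute.  Mount point and xattr key
    # are assumptions for illustration, not confirmed CASTOR names.

    import os

    MOUNT = "/exportstage/castor1"                 # hypothetical data partition
    CKSUM_ATTR = "user.castor.checksum.value"      # assumed xattr key name

    zero_size, no_checksum = [], []

    for root, _dirs, files in os.walk(MOUNT):
        for name in files:
            path = os.path.join(root, name)
            try:
                if os.path.getsize(path) == 0:
                    zero_size.append(path)
            except OSError:
                continue                           # file vanished mid-scan
            try:
                os.getxattr(path, CKSUM_ATTR)      # raises OSError if the attribute is absent
            except OSError:
                no_checksum.append(path)

    print("zero-size files: %d" % len(zero_size))
    print("files missing %s: %d" % (CKSUM_ATTR, len(no_checksum)))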

Blocking Issues

  • GridFTP bug in SL6 - stops any globus copy if a client is using a particular library. This is a show-stopper for SL6 on disk servers.


Planned, Scheduled and Cancelled Interventions

  • CASTOR stop for patching (kernel and errata): Monday 10:00–13:30. Matt/Chris/others?
  • Oracle PSU patching of Neptune (ATLAS and GEN): Tuesday on the backup (no outage) and Wednesday on the primary (short CASTOR outage)
  • Oracle upgrade of PreProd on 2nd Feb - will require a short outage
  • Upgrade Oracle DB to version 11.2.0.4 (late February?)
  • Upgrade CASTOR to version 2.1.14-14 or 2.1.14-15 (February)


Advanced Planning

Tasks

  • The DB team need to plan some work which will put the DBs under load for approximately 1 hour - not terribly urgent, but it needs to be done in the new year
  • Provide a new VM(?) to provide CASTOR client functionality for querying the backup DBs
  • Plans to ensure PreProd represents production in terms of hardware generation are underway
  • Possible future upgrade to CASTOR 2.1.14-15 post-Christmas
  • Switch admin machines: from lcgccvm02 to lcgcadm05
  • Correct the partition alignment issue (3rd CASTOR partition) on the new CASTOR disk servers (see the alignment sketch below)
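
A minimal sketch of how the partition alignment could be inspected from sysfs (the device name, partition number and the 1 MiB alignment target are assumptions for illustration):

    #!/usr/bin/env python3
    # Sketch: read a partition's start offset from sysfs and test it against a
    # 1 MiB alignment boundary.  Device/partition names and the 1 MiB target are
    # assumed, not taken from the actual disk server layout.

    def partition_start_bytes(disk, part):
        """Return the partition start offset in bytes (sysfs reports 512-byte sectors)."""
        with open("/sys/block/%s/%s/start" % (disk, part)) as f:
            return int(f.read().strip()) * 512

    if __name__ == "__main__":
        offset = partition_start_bytes("sdb", "sdb3")    # e.g. the 3rd CASTOR partition
        aligned = offset % (1024 * 1024) == 0            # 1 MiB alignment target (assumed)
        state = "aligned" if aligned else "NOT aligned"
        print("sdb3 starts at byte %d: %s" % (offset, state))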

Interventions


Actions

  • Rob to pick up DB cleanup change control
  • Bruno to document the procedures for controlling services previously managed by Puppet
  • Gareth to arrange a CASTOR/Fabric/Production meeting to discuss the decommissioning procedures
  • Chris/Rob to arrange a meeting to discuss CMS performance/xroot issues (is performance appropriate; if not, plan how to resolve it) - inc. Shaun, Rob, Brian, Gareth
  • Gareth to investigate providing checks for /etc/noquattor on production nodes and checks for fetch-crl (see the sketch after this list)
  • Matt to identify a suitable time to patch Facilities CASTOR - the DB team will synchronise patching of Juno with this outage

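A minimal sketch of the kind of node check meant above (the paths are typical SL6 locations and are assumed rather than confirmed against the RAL configuration; a real check would presumably hook into the existing monitoring):

    #!/usr/bin/env python3
    # Sketch of a node check: warn if the /etc/noquattor flag file is present,
    # or if fetch-crl does not appear to be installed/scheduled.

    import os
    import sys

    problems = []

    # /etc/noquattor disables Quattor management of the node
    if os.path.exists("/etc/noquattor"):
        problems.append("/etc/noquattor present - node not under Quattor control")

    # fetch-crl is normally installed with a cron fragment on SL6
    if not (os.path.exists("/etc/cron.d/fetch-crl") or os.path.exists("/usr/sbin/fetch-crl")):
        problems.append("fetch-crl cron entry / binary not found")

    if problems:
        print("WARNING: " + "; ".join(problems))
        sys.exit(1)                              # non-zero exit for Nagios-style alerting
    print("OK")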

Staffing

  • CASTOR on-call person
    • Rob (TBC)
  • Staff absence/out of the office:
    • Rob out Monday
    • Chris out Tues/Wed/Thurs
    • Matt out at CERN from Tuesday (TBC)