RAL Tier1 weekly operations castor 27/04/2015

List of CASTOR meetings

Operations News

  • Draining - Atlas still draining
  • SL6 Disk and Tape server config is production ready - change control in for tape servers
  • patch preprod Oracle DB to 11.2.0.4 - change control in
  • testing CASTOR rebalancer (new version in 2.1.14-15)

Operations Problems

  • GDSS757 cmsDisk / 763 atlasStripInput - new motherboards, fabric acceptance test finishes 23/4/15
  • Elastic tape data corruption - CASTOR team assisting with extraction of data for review
  • Seeing a high volume of duplicate file errors in the SRM DB - to discuss with Shaun this week
  • A higher number of checksum errors (Alice) - investigation ongoing
  • CMS high load / possibly still suffering from hot files - these have been duplicated on other disk servers
  • OLDER>>>>
  • Deadlocks on atlas stager DB (post 2.1.14-15) - not serious but investigation ongoing
  • Issue with missing files in LHCb – race condition reported to CERN
  • Atlas are putting files (sonar.test files) into un-routable paths - this looks like an issue with the space token used. Brian / Rob actions below.
  • There is a problem with the latest gfal libraries - not a new problem, but Atlas are starting to exercise functionality and identifying these issues
  • storageD retrieval from castor problems - investigation ongoing
  • CMS heavy load & high job failure - hot-spotting on 3 servers; files were spread to many servers, which then also became loaded. CMS moved away from RAL for these files. This issue needs discussing in more detail.
  • castor functional test on lcgccvm02 causing problems - Gareth reviewing
  • 150k zero size files reported last week have almost all been dealt with, CMS files outstanding
  • Files with no ns or xattr checksum value in castor are failing transfers from RAL to BNL using the BNL FTS3 server (see the checksum-lookup sketch after this list)
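
For the last item above (files with no namespace or xattr checksum), a minimal sketch of how affected files could be picked out before retrying the RAL-to-BNL transfers. This is not the production tooling: it assumes the gfal2 Python bindings and a valid grid proxy are available, reads SURLs from stdin, and simply asks the endpoint for its stored ADLER32 checksum.

  #!/usr/bin/env python
  # Hedged sketch: list files for which no ADLER32 checksum can be retrieved,
  # i.e. candidates whose namespace checksum needs (re)populating before an
  # FTS3 transfer that enforces source checksums can succeed.
  import sys
  import gfal2

  def missing_checksums(surls):
      ctx = gfal2.creat_context()
      bad = []
      for surl in surls:
          try:
              cksum = ctx.checksum(surl, "ADLER32")
          except gfal2.GError:
              cksum = None
          if not cksum:
              bad.append(surl)
      return bad

  if __name__ == "__main__":
      surls = [line.strip() for line in sys.stdin if line.strip()]
      for surl in missing_checksums(surls):
          print("no checksum: %s" % surl)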


Blocking Issues

  • GridFTP bug in SL6 - stops any Globus copy if a client is using a particular library. This is a show-stopper for SL6 on disk servers (see the reproduction sketch below).
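
A hedged sketch for confirming whether a given SL6 disk server is affected: drive a small test transfer with globus-url-copy and report whether it completes. The hostname and destination path below are placeholders, and a valid grid proxy plus the Globus client tools are assumed.

  #!/usr/bin/env python
  # Hedged sketch: copy 1 MiB of test data to a candidate SL6 disk server
  # over GridFTP and report success/failure.
  import os
  import subprocess
  import tempfile

  def test_gridftp(host, remote_dir):
      src = tempfile.NamedTemporaryFile(delete=False)
      src.write(os.urandom(1024 * 1024))  # 1 MiB of test data
      src.close()
      dest = "gsiftp://%s%s/gridftp-sl6-test" % (host, remote_dir)
      rc = subprocess.call(["globus-url-copy", "-vb", "file://" + src.name, dest])
      os.unlink(src.name)
      return rc == 0

  if __name__ == "__main__":
      # placeholder endpoint, not a real production disk server
      ok = test_gridftp("sl6-diskserver.example", "/castor/test")
      print("transfer %s" % ("succeeded" if ok else "failed"))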


Planned, Scheduled and Cancelled Interventions

Advanced Planning

Tasks

  • SL6 tape server
  • Bruno working on SL6 disk servers
  • DB team need to plan some work which will result in the DBs being under load for approx 1h
  • Provide a new VM(?) with castor client functionality to query the backup DBs
  • Plans to ensure PreProd represents production in terms of hardware generation are underway
  • Switch from admin machine lcgccvm02 to lcgcadm05
  • Correct the partition alignment issue (3rd CASTOR partition) on new castor disk servers (see the alignment check sketch below)
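
For the partition alignment item, a rough check along these lines could flag misaligned partitions. It assumes a Linux /sys layout, start sectors reported in 512-byte units and a 1 MiB alignment target; the device and partition names are examples only.

  #!/usr/bin/env python
  # Hedged sketch: report whether a partition (e.g. the 3rd CASTOR data
  # partition, sda3) starts on a 1 MiB boundary.
  import sys

  SECTOR_SIZE = 512
  ALIGN_BYTES = 1024 * 1024  # 1 MiB boundary

  def partition_aligned(disk, part):
      # start sector is exposed by the kernel, e.g. /sys/block/sda/sda3/start
      with open("/sys/block/%s/%s/start" % (disk, part)) as f:
          start_sector = int(f.read().strip())
      offset = start_sector * SECTOR_SIZE
      return offset % ALIGN_BYTES == 0, offset

  if __name__ == "__main__":
      disk, part = sys.argv[1], sys.argv[2]  # e.g. sda sda3
      ok, offset = partition_aligned(disk, part)
      print("%s starts at byte %d: %s" % (part, offset, "aligned" if ok else "MISALIGNED"))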

Interventions


Actions

  • Rob - revisit deployment plans (especially availability of people in the office towards the end of March)
  • Brian - On Wed, tell Tim if he can start repacking mctape
  • Rob - change checksum validator so it maintains case when displaying
  • Rob - talk with Shaun to see if it's possible to reject anything that does not have a spacetoken
  • Brian - to discuss unroutable files / spacetokens at the DDM meeting on Tuesday
  • Rob to pick up DB cleanup change control
  • Bruno to document processes to control services previously controlled by puppet
  • Gareth to arrange a meeting between castor/fab/production to discuss the decommissioning procedures
  • Chris/Rob to arrange a meeting to discuss CMS performance/xroot issues (is performance appropriate; if not, plan to resolve) - inc. Shaun, Rob, Brian, Gareth
  • Gareth to investigate providing checks for /etc/noquattor on production nodes & checks for fetch-crl (see the sketch below)
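
For the last action, a hedged sketch of what such a Nagios-style check might look like. The paths and the 24-hour CRL age threshold are assumptions, not an agreed monitoring policy.

  #!/usr/bin/env python
  # Hedged sketch: warn if /etc/noquattor is present (node pinned out of
  # Quattor control) or if the CRLs written by fetch-crl look stale.
  import os
  import sys
  import time

  NOQUATTOR_FLAG = "/etc/noquattor"
  CRL_DIR = "/etc/grid-security/certificates"  # where fetch-crl writes .r0 files
  MAX_CRL_AGE = 24 * 3600                      # flag CRLs older than 24 hours

  def check():
      problems = []
      if os.path.exists(NOQUATTOR_FLAG):
          problems.append("%s present - node excluded from Quattor" % NOQUATTOR_FLAG)
      crls = [f for f in os.listdir(CRL_DIR) if f.endswith(".r0")]
      if not crls:
          problems.append("no CRLs found in %s" % CRL_DIR)
      else:
          newest = max(os.path.getmtime(os.path.join(CRL_DIR, f)) for f in crls)
          if time.time() - newest > MAX_CRL_AGE:
              problems.append("newest CRL older than 24h - is fetch-crl running?")
      return problems

  if __name__ == "__main__":
      problems = check()
      for p in problems:
          print("WARNING: " + p)
      sys.exit(1 if problems else 0)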


Staffing

  • Castor on Call person
    • Matt
  • Staff absence/out of the office:
    • Brian – out Wed/Thurs for GridPP