RAL Tier1 weekly operations castor 08/06/2015


List of CASTOR meetings

Operations News

  • New LHCb tape pools created
  • CASTOR rebalancing from Monday
  • DiRAC small VO being created


  • MICE (CASTOR Gen) will be operating overnight and are able to call the primary on-call
  • All tape servers have now been upgraded to SL6 and are running smoothly
  • On a related note, the disk server SL6 configuration is ready but waiting for the Oracle updates to be completed.
    • We are examining options for running this in a slow-and-steady fashion with CASTOR up.
  • We are testing the CASTOR rebalancer on preproduction and developing the associated tools. We hope to use the rebalancer to prevent future hotspotting issues.
  • 13-generation disk servers are being prepared for deployment into CASTOR production
  • The move of the standby DB racks to R26 has been successfully completed. Some issues remained with the hardware following the move, resulting in an unplanned 'at risk' period, but these were resolved.
  • We are examining options for the upgrade of the CASTOR DBs to Oracle version 11.2.0.4. The experiments are keen to avoid downtime early in Run 2, so some careful scheduling will be necessary.


Operations Problems

  • Atlas files without nameserver (NS) checksums are causing file copies to fail (may affect other VOs) - see the sketch below
  • CMS pileup - the suggested workaround is to reduce the disk server xroot weighting (to reduce the likelihood of file open timeouts)
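A minimal sketch of how files missing NS checksums could be flagged is given below. It assumes the CASTOR nsls client is on the PATH and accepts a --checksum listing option; the exact flag name, output layout and the example path are assumptions to be checked against the installed client, not a confirmed procedure.

#!/usr/bin/env python
# Sketch: flag CASTOR files that have no nameserver (NS) checksum.
# Assumption: nsls accepts a --checksum listing option that appends the
# checksum type and value to the long-format line; confirm the flag name
# and column layout against the installed CASTOR client version.

import subprocess

def ns_listing(path):
    """Return the raw NS long-listing line for path, or None if nsls fails."""
    try:
        out = subprocess.check_output(["nsls", "-l", "--checksum", path])
    except subprocess.CalledProcessError:
        return None
    return out.decode().strip()

def has_ns_checksum(path):
    """Heuristic: a long listing with a checksum has more columns than one without."""
    line = ns_listing(path)
    return line is not None and len(line.split()) > 9

if __name__ == "__main__":
    # Hypothetical path - replace with the list of files reported by the VO.
    for p in ["/castor/ads.rl.ac.uk/prod/atlas/example/file.root"]:
        if not has_ns_checksum(p):
            print("missing NS checksum: %s" % p)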


  • A possible problem was identified when creating the new service class for DiRAC; castor external has been emailed
  • The standby DBs for CASTOR are occasionally 10 or 15 minutes behind before returning to sync - see the lag-check sketch after this list
  • xroot redirection - works with our redirector but not with others; this started during a past upgrade. Shaun is debugging.
    • We have determined that the most serious instances of this problem are due to a number of hot datasets that are located almost entirely on one node. Shaun has implemented a process to redistribute this data across the rest of the cmsDisk pool.
  • Retrieval errors from Facilities CASTOR - a number of similar incidents have been seen in the last few weeks.
  • GDSS757 (cmsDisk) / GDSS763 (atlasStripInput) - new motherboards fitted, fabric acceptance tests complete, ready for deployment.
  • A higher than usual number of checksum errors (Alice) - found to be due to VO actions. This is being cleared up.
  • Atlas are putting files (sonar.test files) into unroutable paths - this looks like an issue with the space token used. Brian is working on this.
    • This is related to a problem with the latest gfal libraries - not a new problem, but Atlas are starting to exercise this functionality and are identifying these issues.
  • The CASTOR functional test on lcgccvm02 is causing problems - Gareth is reviewing.
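As a starting point for keeping an eye on the standby lag mentioned above, a minimal sketch that polls the standard Oracle Data Guard view v$dataguard_stats is given below; the connection string, account and 15-minute concern threshold are placeholders/assumptions, not current settings.

# Sketch: report the apply/transport lag on a Data Guard standby so that a
# drift of more than ~15 minutes on the CASTOR standby DBs can be spotted.
# Assumptions: cx_Oracle is installed, the monitoring account can read
# v$dataguard_stats on the standby, and the DSN below is a placeholder.

import cx_Oracle

def standby_lag(dsn="monitor/secret@standby-db"):  # placeholder credentials/alias
    conn = cx_Oracle.connect(dsn)
    try:
        cur = conn.cursor()
        cur.execute(
            "SELECT name, value FROM v$dataguard_stats "
            "WHERE name IN ('apply lag', 'transport lag')"
        )
        return dict(cur.fetchall())
    finally:
        conn.close()

if __name__ == "__main__":
    for name, value in sorted(standby_lag().items()):
        # Values come back as interval strings such as '+00 00:12:30'; a real
        # check would parse them before comparing against a 15-minute limit.
        print("%s: %s" % (name, value))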


Blocking Issues

  • GridFTP bug in SL6 - stops any globus copy if a client is using a particular library. This is a show-stopper for SL6 on the disk servers.


Planned, Scheduled and Cancelled Interventions

Advanced Planning

Tasks

  • Disk deployments - SL6 disk server deployments; development is mostly complete, deployment planning is needed
  • SRM 2.1.14 - functional testing has been positive
  • Switch admin machine from lcgccvm02 to lcgcadm05
  • Correct the partition alignment issue (3rd CASTOR partition) on the new CASTOR disk servers
  • Intervention to upgrade CASTOR DBs to Oracle 11.2.0.4

Interventions

  • Tier 1 router flip test - possibly on 16th June (TBC); the purpose of this test is to flush out any Facilities issues. This will involve a CASTOR and batch farm stop.


Staffing

  • CASTOR on-call person next week
    • Shaun
  • Staff absence/out of the office:
    • Matt – out Mon-Wed


Actions

  • Brian to call other sites to identify if they are experiencing issues with CMS jobs
  • Rob to get the jobs thought to cause the CMS pileup
  • Rob/Shaun to drop the xroot weighting to 1 on one server on Monday, then on all servers next week (cmsDisk)
  • Rob to prep CMS hotdisk as a plan B for pileup
  • Bruno to put SL6 on preprod disk
  • Bruno / Rob to write change control doc for SL6 disk
  • Shaun testing/working on the gfal-copy RPMs
  • Rob/Alastair to clarify what we are doing with 'broken' disk servers
  • Bruno - castor template documentation/training
  • Someone - MICE: what access protocol do they use?
  • Rob rebalancing change control
  • Gareth/Matt - suggestion to change monitoring to call out in daytime only
  • Gareth to ensure that there is a ping test etc to the atlas building
  • Chris to raise a ticket in the CASTOR queue to track the xroot redirection bug
  • Rob/Jens to add SNO+ to the CIP, plus possibly DiRAC.
  • Rob/Shaun to try to reproduce the SRM DB duplicates issue with FTS transfers
  • Bruno to document the procedures for controlling services previously controlled by Puppet
  • Gareth to arrange a CASTOR/Fabric/Production meeting to discuss the decommissioning procedures
  • Gareth to investigate providing checks for /etc/noquattor on production nodes and checks for fetch-crl - see the sketch after this list
  • Rob/Bruno: Rob to send the changes made to xroot timeouts to Bruno for implementation in Quattor.
  • Rob to remove Facilities disk servers from cedaRetrieve to go back to Fabric for acceptance testing.
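For the /etc/noquattor and fetch-crl action above, a minimal Nagios-style probe sketch is given below; the CRL directory and the 24-hour freshness window are assumptions rather than agreed settings.

#!/usr/bin/env python
# Sketch of a Nagios-style probe for two of the actions above:
#  * warn if /etc/noquattor exists (node silently excluded from Quattor runs)
#  * warn if fetch-crl has not refreshed any CRL recently
# Assumptions: CRLs live under /etc/grid-security/certificates (*.r0 files),
# and the 24h freshness window is arbitrary.

import glob
import os
import sys
import time

MAX_CRL_AGE = 24 * 3600  # seconds

def newest_crl_age(cert_dir="/etc/grid-security/certificates"):
    crls = glob.glob(os.path.join(cert_dir, "*.r0"))
    if not crls:
        return None
    return time.time() - max(os.path.getmtime(p) for p in crls)

problems = []
if os.path.exists("/etc/noquattor"):
    problems.append("/etc/noquattor present - Quattor runs disabled")

age = newest_crl_age()
if age is None or age > MAX_CRL_AGE:
    problems.append("fetch-crl: no CRL updated in the last 24h")

if problems:
    print("WARNING: " + "; ".join(problems))
    sys.exit(1)
print("OK: quattor enabled, CRLs fresh")
sys.exit(0)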

Completed Actions

  • Brian - on Wednesday, tell Tim whether he can start repacking mctape (Done)
  • Shaun to send plots to Matt to support above action
  • Rob to book a meeting to discuss possible workarounds/investigations for the CMS issue - xroot timeouts, server read-ahead or network tuning, a CASTOR bug, or deploying more disk servers
  • Rob - talk with Shaun to see if it's possible to reject anything that does not have a space token (answer: this is not a good idea)
  • Brian - to discuss unroutable files / space tokens at the DDM meeting on Tuesday (Done)
  • Rob to pick up DB cleanup change control (Done)
  • Chris/Rob to arrange a meeting to discuss CMS performance/xroot issues (is performance appropriate, if not plan to resolve) - inc. Shaun, Rob, Brian, Gareth (Potential fix implemented)
  • Rob and Shaun to continue fixing CMS