RAL Tier1 weekly operations castor 07/12/2018
Revision as of 10:07, 7 December 2018
Standing agenda
1. Problems encountered this week
2. Upgrades/improvements made this week
3. What are we planning to do next week?
4. Long-term project updates (if not already covered)
5. Special topics
6. Actions
7. Review Fabric tasks
   1. Link
8. AoTechnicalB
9. Availability for next week
10. On-Call
11. AoOtherB
Operation problems
* Argo tests for CMS failed temporarily while preparing for the CMS migration
* Intensive CMS recall activity caused timeouts on PhEDEx transfers
Operation news
* CMS migrated to the new CASTOR instance
* Three new disk servers deployed into Facilities diamondRecall pool
* LHCb are staging data for the major reprocessing of Run1 and Run2 data (4.7 PB) that will be carried out in 2019
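A staging campaign like the LHCb one above is driven by prestage requests against the CASTOR stager. A minimal sketch using the standard CASTOR client tools; the file path and service class here are placeholders, not the actual LHCb configuration:

```shell
# Illustrative only: path and service class are hypothetical placeholders.
export STAGE_SVCCLASS=lhcbRaw                          # target disk pool (placeholder)
stager_get -M /castor/example.org/lhcb/run1/file.raw   # request recall from tape to disk
stager_qry -M /castor/example.org/lhcb/run1/file.raw   # poll status (e.g. STAGEIN, STAGED)
```

In practice a bulk campaign submits many such requests in batches so the tape system can reorder recalls by tape mount.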
Plans for next few weeks
* Complete kernel patching on CASTOR hosts
* Oracle/kernel patching for CASTOR Facilities DB
Long-term projects
* New CASTOR WLCGTape instance. Outstanding task: create a separate xrootd redirector for ALICE
* CASTOR disk server migration to Aquilon: gdss742 has been compiled with a draft Aquilon profile, but there are problems with the SL7 installation (RT216885)
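The ALICE redirector task above amounts to running a separate xrootd manager (with its cmsd partner) that ALICE clients contact first. A minimal sketch of the manager-side xrootd configuration; hostnames, ports, and paths are placeholder assumptions, not the RAL production setup:

```
# Minimal xrootd redirector (manager role) sketch.
# Hostnames/ports/paths are placeholders, not the production configuration.
all.role manager                                 # this node redirects clients, serves no data
all.manager alice-redirector.example.org:3121    # cmsd manager endpoint the data servers report to
all.export /castor/example.org/alice             # namespace offered to clients
xrd.port 1094                                    # standard xrootd client port
```

Data servers would carry a matching `all.role server` and the same `all.manager` line so the redirector knows where each file lives.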
Actions
Staffing
* RA out until 10/12