RAL Tier1 weekly operations castor 08/12/2017

Draft agenda

1. Problems encountered this week

2. Upgrades/improvements made this week

3. What are we planning to do next week?

  1. Facilities Headnode Replacement

4. Long-term project updates (if not already covered)

  1. SL5 elimination from CIP and tape verification server
  2. CASTOR stress test improvement
  3. Generic CASTOR headnode setup
  4. Aquilonised headnodes

5. Special topics

6. Actions

7. Anything for CASTOR-Fabric?

8. AoTechnicalB

9. Availability for next week

10. On-Call

11. AoOtherB

Operation problems

LHCb had trouble entering SRM requests ("No idle thread...") from Thursday, believed to be due to thread exhaustion on the central Name Server (NS). Increasing the DB thread count on the central NS boxes fixed it; on Monday the plan is to reduce the count back towards the original setting (possibly in 2-3 steps).
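
A rough way to watch for a recurrence is to tally the "No idle thread" errors over time. The sketch below is illustrative only, not the team's actual tooling: the log path and the "YYYY-MM-DD HH" timestamp prefix are assumptions.

 # Count "No idle thread" errors per hour in a daemon log (hypothetical path).
 from collections import Counter
 LOG = "/var/log/castor/ns.log"  # assumed location; adjust to the real log
 hourly = Counter()
 with open(LOG) as fh:
     for line in fh:
         if "No idle thread" in line:
             hourly[line[:13]] += 1  # assumes lines start "YYYY-MM-DD HH"
 for hour, count in sorted(hourly.items()):
     print(hour, count)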


SOLID access/certificate issues (GGUS 131840). Waiting for a reply from SOLID.

Operation news

Upgraded the non-LHCb SRMs to 2.1.16-18; the LHCb SRMs were upgraded during November.

CA versions are pinned to an old version by the Aquilon administrators because of a problem with the new version. Not a CASTOR team action to resolve.

Plans for next week

Behind-the-scenes work on the "cattle" headnodes.

Long-term projects

CASTOR stress test improvement - script writing in progress; awaiting testing and validation. A sketch of the general approach follows.
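
The actual stress test script is not reproduced here; as an illustration of the kind of load generator involved, the sketch below fires concurrent metadata requests via the gfal2 Python bindings. The endpoint URL is a placeholder, and the worker/request counts are arbitrary assumptions.

 # Fire many concurrent stat() requests at an endpoint and report failures.
 import gfal2
 from concurrent.futures import ThreadPoolExecutor
 ENDPOINT = "srm://srm-test.example.ac.uk/castor/test/stressfile"  # placeholder
 def one_stat(_):
     # one gfal2 context per task keeps the workers independent
     ctx = gfal2.creat_context()
     try:
         ctx.stat(ENDPOINT)
         return True
     except gfal2.GError:
         return False
 with ThreadPoolExecutor(max_workers=16) as pool:
     results = list(pool.map(one_stat, range(200)))
 print("ok:", results.count(True), "failed:", results.count(False))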

Headnode migration to Aquilon - stager configuration is almost complete and functional tests pass; the focus is on Aquilon features rather than personalities.

All tape servers are now on SL7.

Target: Combined headnodes running on SL7/Aquilon - implement CERN-style 'Macro' headnodes.

Draining of the remainder of the 12-generation HW is waiting on the CMS migration to Echo; no draining is currently ongoing.

Actions

GP to ask Tim about any other settings he has changed manually on the tape servers, which will have been lost in the move to Aquilon.

RA to check whether the disk server setting, changed to bring disk servers back more quickly after a CASTOR shutdown, is still in place.

RA/GP: Run GFAL unit tests against CASTOR. Get them here: https://gitlab.cern.ch/dmc/gfal2/tree/develop/test/
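
A minimal gfal2-python smoke test in the same spirit (the SURL below is a placeholder; the real unit tests are in the GitLab tree linked above):

 # Stat a file through the SRM interface, then copy it to local disk.
 import gfal2
 SURL = "srm://srm-example.gridpp.rl.ac.uk/castor/example/testfile"  # placeholder
 ctx = gfal2.creat_context()
 info = ctx.stat(SURL)           # metadata lookup via SRM
 print("size:", info.st_size)
 params = ctx.transfer_parameters()
 params.overwrite = True         # replace any previous local copy
 ctx.filecopy(params, SURL, "file:///tmp/gfal_smoke_test")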

GP: Turn CASTOR SRMs & other Aquilonised nodes from clients to servers

GP/RA to chase up the Fabric team about getting action on RT197296 (use fdscspr05 as the preprod stager, a replacement for ccse08).

CP to follow up on Diamond's directory structure changes.

RA to consider Philippe's suggestion to move disk from LHCb's tape buffer to lhcbDst.

Staffing

GP out from Tuesday until mid-January.

RA on call from Monday.