RAL Tier1 weekly operations castor 1/9/2017

Latest revision as of 10:05, 4 September 2017

Draft agenda

1. Problems encountered this week

2. Upgrades/improvements made this week

3. What are we planning to do next week?

4. Long-term project updates (if not already covered)

  1. SL7 upgrade on tape servers
  2. SL5 elimination from CASTOR functional test boxes and tape verification server
  3. CASTOR stress test improvement
  4. Generic CASTOR headnode setup

5. Special topics

6. Actions

7. Anything for CASTOR-Fabric?

8. AoTechnicalB

9. Availability for next week

10. On-Call

11. AoOtherB

Operation problems

Lots of files in the canbemigr state on ATLAS, CMS and Gen (RT194731). CMS and Gen are still competing for tape drives

lcgcts22 and lcgcts27 were down

Problems with the SL6 upgrade of the tape verification servers (RT193596)

Operation news

The new facilities headnodes have been deployed

Plans for next week

Deployment of the new version of the ral-castor tools

Continue work on tape-server migration to aquilon

Complete the deployment of the six V12/OCF12 disk servers into lhcbRawRdst

Long-term projects

Tape-server migration to aquilon and SL7 upgrade

CASTOR stress test improvement

Configure multiple instances of a CASTOR generic headnode

Drain and decommission/recommission the 12 generation disk servers

Actions

RA to check if the disk server setting, changed to bring disk servers back more quickly after a CASTOR shutdown, is still in place

Ensure that Fabric is on track with the deployment of the new DB hardware

RA to ensure that proper Nagios checks are in place for the xroot manager boxes

RA and JK to do the new service checklist for the new functional test box (castor-functional-test1)

GP to talk to AL about service ownership/handover of the xroot manager boxes

Staffing

GP on call until Friday, then RA