RAL Tier1 weekly operations castor 03/11/2017

Parent article: https://www.gridpp.ac.uk/wiki/RAL_Tier1_weekly_operations_castor

Draft agenda

1. Problems encountered this week

2. Upgrades/improvements made this week

3. What are we planning to do next week?

  1. Facilities Headnode Replacement

4. Long-term project updates (if not already covered)

  1. SL7 upgrade on tape servers
  2. SL5 elimination from CASTOR functional test boxes and tape verification server
  3. CASTOR stress test improvement
  4. Generic CASTOR headnode setup

5. Special topics

6. Actions

7. Anything for CASTOR-Fabric?

8. AoTechnicalB

9. Availability for next week

10. On-Call

11. AoOtherB

Operation problems

Tape usage plots for the Facilities instance of CASTOR do not match those for the Tier1 instance (RT196714: https://helpdesk.gridpp.rl.ac.uk/Ticket/Display.html?id=196714)

SAM tests are failing because of a nameserver stub file in CASTOR (RT196707: https://helpdesk.gridpp.rl.ac.uk/Ticket/Display.html?id=196707)

SOLID cannot write into CASTOR

Operation news

lcgcts22 (Tier-1) and fdscts12 tape servers have been running on SL7 for a week now

Preprod SRMs have been upgraded to 2.1.16-18 and patched

Plans for next week

Facilities headnode replacement planned for Wed 08/11

Continue the roll-out of the SL7 upgrade on CASTOR tape servers

Miguel to confirm with Martin the date for patching the Orisa DB

Long-term projects

CASTOR stress test improvement

Configure multiple instances of a CASTOR generic headnode

All '12 generation disk servers from Atlas have been drained and recommissioned; the ones from cmsDisk still need to be done

Special topics

Test gfal commands against CASTOR (email from Alastair); a sketch of such a test follows
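
A minimal sketch of what such a test could look like using the gfal2 Python bindings; the SRM endpoint and path below are placeholders, not the real Tier1 values:

  import gfal2

  # Placeholder endpoint and test area; substitute the real CASTOR
  # SRM endpoint and a writable test directory before running.
  BASE = "srm://srm-example.gridpp.rl.ac.uk:8443/castor/example.rl.ac.uk/test"

  ctx = gfal2.creat_context()

  # Write test: copy a local file into CASTOR through the SRM interface
  # (the local file /tmp/gfal_probe.txt must exist beforehand)
  params = ctx.transfer_parameters()
  ctx.filecopy(params, "file:///tmp/gfal_probe.txt", BASE + "/gfal_probe.txt")

  # Read test: stat the new entry and list the directory to confirm the write
  info = ctx.stat(BASE + "/gfal_probe.txt")
  print("size:", info.st_size)
  print(ctx.listdir(BASE))

The same checks can be run from the shell with the gfal-copy, gfal-stat and gfal-ls command-line tools if the Python bindings are not installed on the test box.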

Actions

RA to check if the disk server setting, changed to bring disk servers back more quickly after a CASTOR shutdown, is still in place

GP to talk to AL about service ownership/handover of the xroot manager boxes

Staffing

RA on call

GP out on Fri 10/11