RAL Tier1 weekly operations castor 30/6/2017

Draft agenda

1. Problems encountered this week

2. Upgrades/improvements made this week

3. What are we planning to do next week?

4. Long-term project updates (if not already covered)

  1. SL7 upgrade on tape servers
  2. SL5 elimination from CASTOR functional test boxes and tape verification server
  3. CASTOR stress test improvement
  4. Generic CASTOR headnode setup
  5. Virtualisation of the CASTOR headnodes

5. Special topics

  1. OCF 14 firmware update

6. Actions

7. Anything for CASTOR-Fabric?

8. AoTechnicalB

9. Availability for next week

10. On-Call

11. AoOtherB

Operation problems

Problems with srmbed on the Atlas SRMs RT190742 (https://helpdesk.gridpp.rl.ac.uk/Ticket/Display.html?id=190742)

High rate of Atlas transfer failures of type "TRANSFER [70] TRANSFER globus_xio: Unable to connect to gdss644.gridpp.rl.ac.uk:51630 globus_xio: System error in connect: Connection refused globus_xio: A system call failed: Connection refused" GGUS129098 (https://ggus.eu/?mode=ticket_info&ticket_id=129098)
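
The failure above points at a specific disk server and data port. A minimal probe such as the sketch below (illustrative only, not part of the Tier-1 tooling) can confirm from a remote host whether the server refuses connections on its GridFTP ports; 2811 is the usual GridFTP control port and 51630 is the data port quoted in the error.

  import socket

  HOST = "gdss644.gridpp.rl.ac.uk"   # disk server named in the error
  PORTS = [2811, 51630]              # GridFTP control port, plus the data port from the error

  for port in PORTS:
      sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      sock.settimeout(5)
      try:
          sock.connect((HOST, port))
          print("%s:%d accepting connections" % (HOST, port))
      except socket.error as err:
          # A "Connection refused" here reproduces the globus_xio error in the ticket
          print("%s:%d failed: %s" % (HOST, port, err))
      finally:
          sock.close()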

Operation news

A trigger was deployed to the Atlas SRM DB to fix the issue of srmbed not staying up due to incoming requests with multiple slashes RT190742 (https://helpdesk.gridpp.rl.ac.uk/Ticket/Display.html?id=190742)
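
The trigger itself lives in the Oracle SRM database; the normalisation it is assumed to apply is simply to collapse any run of consecutive slashes in an incoming request path into a single slash. The Python sketch below illustrates that behaviour only and is not the deployed fix.

  import re

  def collapse_slashes(path):
      """Collapse any run of consecutive slashes into a single slash."""
      return re.sub(r"/{2,}", "/", path)

  # Example: doubled slashes in a request path are reduced to single ones
  assert collapse_slashes("/castor//example///path") == "/castor/example/path"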

Plans for next week

Roll-out of the WAN tuning parameters on all CASTOR Tier-1 instances
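
The roll-out itself goes through the usual configuration management; as a sanity check, something along the lines of the sketch below could compare a host's live TCP settings against the intended profile. The parameter names are the standard Linux sysctls, but the target values shown are placeholders, not the agreed Tier-1 WAN tuning profile.

  # Placeholder targets; substitute the agreed WAN tuning values before use.
  EXPECTED = {
      "/proc/sys/net/core/rmem_max": "67108864",
      "/proc/sys/net/core/wmem_max": "67108864",
      "/proc/sys/net/ipv4/tcp_rmem": "4096 87380 67108864",
      "/proc/sys/net/ipv4/tcp_wmem": "4096 65536 67108864",
  }

  for path, want in EXPECTED.items():
      with open(path) as handle:
          have = " ".join(handle.read().split())   # normalise tabs/newlines
      status = "OK" if have == want else "MISMATCH (have %s)" % have
      print("%-40s %s" % (path, status))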

Long-term projects

SL6 upgrade on the functional test boxes and tape verification server: Aquilon configuration is complete for the functional test box, all Nagios tests are in place, and the creation of a Hyper-V VM to replace the old box is underway. The tests for the tape verification server are pending.

Tape-server migration to Aquilon and SL7 upgrade

CASTOR stress test improvement

Configure multiple instances of the generic CASTOR headnode

Virtualise the CASTOR headnodes; currently enquiring whether to move to VMware or Hyper-V

Special actions

Future CASTOR upgrade methodology: CERN need to provide RAL with a custom DB downgrade script for the on-the-fly CASTOR upgrade procedure

Actions

Ensure that Fabric is on track with the deployment of the new DB hardware

Drain and decommission/recommission the 12 generation disk servers

RA to ensure that proper Nagios checks are in place for the xroot manager boxes (see the example check after this list)

GP to talk to AL about service ownership/handover of the xroot manager boxes

RA to continue nagging Martin to replace Facilities headnodes
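
For the xroot manager action above, one option would be a check along the lines of the sketch below: confirm that the xrootd and cmsd ports are accepting TCP connections and report the standard Nagios exit codes. The ports (1094 for xrootd, 1213 for cmsd) are the usual defaults, and the exact checks to deploy remain RA's call.

  import socket
  import sys

  HOST = "localhost"                      # run locally via NRPE, or point at a manager box
  PORTS = {"xrootd": 1094, "cmsd": 1213}  # usual defaults; adjust to the local setup

  down = []
  for name, port in PORTS.items():
      sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      sock.settimeout(5)
      try:
          sock.connect((HOST, port))
      except socket.error:
          down.append("%s:%d" % (name, port))
      finally:
          sock.close()

  if down:
      print("CRITICAL: not listening: %s" % ", ".join(down))
      sys.exit(2)
  print("OK: xrootd and cmsd ports are listening")
  sys.exit(0)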

Staffing

RA on call until Thursday, then GP