RAL Tier1 weekly operations castor 14/7/2017

== Draft agenda ==

1. Problems encountered this week

2. Upgrades/improvements made this week

3. What are we planning to do next week?

4. Long-term project updates (if not already covered)

  1. SL7 upgrade on tape servers
  2. SL5 elimination from CASTOR functional test boxes and tape verification server
  3. CASTOR stress test improvement
  4. Generic CASTOR headnode setup
  5. Virtualisation of the CASTOR headnodes

5. Special topics

  1. OCF 14 firmware update

6. Actions

7. Anything for CASTOR-Fabric?

8. AoTechnicalB

9. Availability for next week

10. On-Call

11. AoOtherB

== Operation problems ==

Problems with the Atlas SRMs: nodes became (almost) unresponsive to external requests, causing SAM tests to fail and triggering two callouts (see [https://helpdesk.gridpp.rl.ac.uk/Ticket/Display.html?id=191431 RT191431] and [https://helpdesk.gridpp.rl.ac.uk/Ticket/Display.html?id=191510 RT191510]). The last message logged before the nodes became unresponsive was the notorious "No idle thread in pool...". In the first case restarting the SRM services resolved the problem; in the second, the thread count for the srmfed was increased until this caused a further callout, for the Neptune Oracle server ([https://helpdesk.gridpp.rl.ac.uk/Ticket/Display.html?id=191512 RT191512]).
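
Since the failure mode is the SRM logging "No idle thread in pool..." shortly before going unresponsive, a log watcher could raise the alarm before the SAM tests start failing. A minimal sketch in Python; the log path and the alert hook are assumptions, not the actual RAL setup:

<pre>
#!/usr/bin/env python
# Follow the SRM log and alert on thread-pool exhaustion.
# The LOG path and the alert action are hypothetical placeholders.
import time

LOG = "/var/log/castor/srmbed.log"   # hypothetical path
PATTERN = "No idle thread in pool"

def follow(path):
    """Yield lines appended to the file, like 'tail -f'."""
    with open(path) as f:
        f.seek(0, 2)                 # start at the end of the file
        while True:
            line = f.readline()
            if line:
                yield line
            else:
                time.sleep(1.0)

for line in follow(LOG):
    if PATTERN in line:
        # Hook the real callout mechanism (Nagios, pager) in here.
        print("ALERT: SRM thread pool exhausted: %s" % line.strip())
</pre>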

SRM SAM tests have been failing for CMS; ~90% availability as of Fri 14/07/2017 (some DB network problems on Wed 12/07/2017)
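
For scale, SAM availability is essentially the fraction of passed test slots over the window (the production ARGO computation also folds in test criticality and scheduled downtime). A toy illustration with made-up numbers:

<pre>
# Toy availability figure: passed slots over total slots.
# 22 of 24 hourly slots passing gives ~91.7%, i.e. roughly the
# ~90% reported for CMS; the counts here are invented.
results = [True] * 22 + [False] * 2
availability = 100.0 * sum(results) / len(results)
print("availability = %.1f%%" % availability)
</pre>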

Gen CASTOR is failing Argo tests ([https://helpdesk.gridpp.rl.ac.uk/Ticket/Display.html?id=191235 RT191235]), presumably because genScratch is still declared in srm2_storagemap.conf.
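
Stale entries of this kind could be caught mechanically by diffing the service classes declared in srm2_storagemap.conf against a list of the classes still in production. A sketch, assuming (hypothetically) that the first whitespace-separated field of each non-comment line names the service class:

<pre>
# Flag storage-map entries whose service class is no longer live.
# The file-format assumption and the LIVE_CLASSES list are illustrative.
LIVE_CLASSES = {"atlasTape", "cmsTape", "genTape"}   # hypothetical

def declared_classes(path="/etc/castor/srm2_storagemap.conf"):
    classes = set()
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                classes.add(line.split()[0])
    return classes

stale = declared_classes() - LIVE_CLASSES
if stale:
    print("stale storage map entries: %s" % ", ".join(sorted(stale)))
</pre>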

fdsdss22 is showing a lot of media errors
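
Media errors normally show up in the drive's SMART counters, so the degradation on fdsdss22 can be tracked rather than eyeballed from syslog. A sketch using smartctl; the device path and watched attributes are placeholders for whatever the server actually exports:

<pre>
# Print the raw reallocated/pending sector counts from smartctl.
# DEVICE is a placeholder; run once per drive on the server.
import subprocess

DEVICE = "/dev/sda"
WATCHED = ("Reallocated_Sector_Ct", "Current_Pending_Sector")

out = subprocess.check_output(["smartctl", "-A", DEVICE]).decode()
for line in out.splitlines():
    fields = line.split()
    if len(fields) >= 10 and fields[1] in WATCHED:
        print("%-24s raw=%s" % (fields[1], fields[9]))
</pre>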

== Operation news ==

A firmware upgrade was carried out on two OCF14 disk servers

The WAN tuning parameters were deployed on all CASTOR disk servers ([https://elog.gridpp.rl.ac.uk/Tier1/5673 e-log])
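
The notes don't record which parameters made up the WAN tuning set; deployments like this are usually verified by reading the TCP sysctls back on each disk server. A sketch with an illustrative (not the actual) parameter list:

<pre>
# Compare live kernel TCP settings against an expected set by
# reading /proc/sys. The values below are illustrative only.
EXPECTED = {
    "net.core.rmem_max": "67108864",
    "net.core.wmem_max": "67108864",
    "net.ipv4.tcp_congestion_control": "htcp",
}

for key, want in sorted(EXPECTED.items()):
    path = "/proc/sys/" + key.replace(".", "/")
    with open(path) as f:
        got = f.read().strip()
    flag = "OK" if got == want else "MISMATCH (%s)" % got
    print("%-40s %s" % (key, flag))
</pre>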

The Stack Clash patch was applied on Eurynome (preprod) and Orpheus (vcert and vcert2) Oracle servers

Six '12 generation disk servers were deployed into atlasTape and draining of the nine '11 generation servers began
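
Draining nine servers is long-running, so a rough completion estimate from two occupancy samples helps with scheduling the decommissioning. A back-of-the-envelope sketch with invented figures:

<pre>
# Linear drain-rate estimate from two (hours, TiB-remaining) samples.
# All numbers are made up for illustration.
t0, d0 = 0.0, 180.0    # TiB left to drain at the first sample
t1, d1 = 24.0, 150.0   # TiB left a day later

rate = (d0 - d1) / (t1 - t0)          # TiB per hour
eta_days = d1 / rate / 24.0
print("drain rate %.2f TiB/h, ~%.1f days to empty" % (rate, eta_days))
</pre>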

== Plans for next week ==

Firmware upgrade on the rest of OCF14 disk servers

== Long-term projects ==

SL6 upgrade on functional test boxes and tape verification server: the Aquilon configuration is complete for the functional test box, all Nagios tests are in place, and creation of a HyperV VM to replace the old box is underway. The test for the tape verification server is pending.
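
For reference, the Nagios tests mentioned above follow the standard plugin contract: print one status line and exit 0/1/2/3 for OK/WARNING/CRITICAL/UNKNOWN. A minimal sketch of such a check; the marker file it inspects is a placeholder, not one of the actual RAL tests:

<pre>
#!/usr/bin/env python
# Minimal Nagios-style plugin: the exit code carries the status.
# The freshness check on MARKER is a hypothetical example test.
import os, sys, time

MARKER = "/var/spool/castor-functional-test/last_ok"   # hypothetical
WARN, CRIT = 3600, 7200                                # seconds

try:
    age = time.time() - os.path.getmtime(MARKER)
except OSError:
    print("UNKNOWN: %s missing" % MARKER)
    sys.exit(3)

if age > CRIT:
    print("CRITICAL: last success %.0f s ago" % age)
    sys.exit(2)
if age > WARN:
    print("WARNING: last success %.0f s ago" % age)
    sys.exit(1)
print("OK: last success %.0f s ago" % age)
sys.exit(0)
</pre>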

Tape-server migration to Aquilon and SL7 upgrade

CASTOR stress test improvement

Configure multiple instances of a CASTOR generic headnode

Virtualise CASTOR headnodes; currently enquiring whether we move to VMware or HyperV

== Special actions ==

Future CASTOR upgrade methodology: CERN need to provide RAL with a custom DB downgrade script for the on-the-fly CASTOR upgrade procedure

== Actions ==

Ensure that Fabric is on track with the deployment of the new DB hardware

Drain and decommission/recommission the '12 generation disk servers

RA to ensure that proper Nagios checks are in place for the xroot manager boxes

GP to talk to AL about service ownership/handover of the xroot manager boxes

RA to continue nagging Martin to replace Facilities headnodes

Ensure that httpd on lcgcadm04 is up and running and the machine points to the right DB ([https://helpdesk.gridpp.rl.ac.uk/Ticket/Display.html?id=191377 RT191377])
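
Closing this action amounts to two checks: httpd answers, and the configured DB alias is the expected one. A sketch; the URL, config path, and alias are all assumptions for illustration:

<pre>
# Probe httpd and grep a (hypothetical) config for the DB alias.
from urllib.request import urlopen   # Python 3

URL = "http://lcgcadm04.gridpp.rl.ac.uk/"   # hypothetical endpoint
CONF = "/etc/castor/webadm.conf"            # hypothetical config file
EXPECTED_DB = "castor_prod"                 # hypothetical DB alias

print("httpd: HTTP %d" % urlopen(URL, timeout=10).getcode())

with open(CONF) as f:
    found = any(EXPECTED_DB in line for line in f)
print("DB alias %s: %s" % (EXPECTED_DB, "found" if found else "MISSING"))
</pre>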

== Staffing ==

All in

RA on A/L