RAL Tier1 weekly operations castor 28/7/2017


Draft agenda

1. Problems encountered this week

2. Upgrades/improvements made this week

3. What are we planning to do next week?

4. Long-term project updates (if not already covered)

  1. SL7 upgrade on tape servers
  2. SL5 elimination from CASTOR functional test boxes and tape verification server
  3. CASTOR stress test improvement
  4. Generic CASTOR headnode setup

5. Special topics

6. Actions

7. Anything for CASTOR-Fabric?

8. AoTechnicalB

9. Availability for next week

10. On-Call

11. AoOtherB

Operation problems

Blocking sessions in the CMS Stager DB (RT191741); a query sketch for spotting these is at the end of this section

cmsDisk ran out of space, causing problems for CMS production (191752)

Long-standing issue of SAM test failures for CMS that needs investigation (191755)

An FTS upgrade at CERN, combined with a severe network outage on Tuesday, caused failed transfers for all VOs

40% of the total transfers out of atlasScratchDisk failed, and 90% of these failures were associated with the globus-xio error message. We need to investigate whether the cause is related to a specific ATLAS workflow that uses atlasScratchDisk and/or the obsolete hardware

John is moaning about failing regional Nagios tests for Gen
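
The blocking-sessions problem above is usually diagnosed directly against the Oracle database behind the stager. Below is a minimal, read-only sketch of such a check; the account name, password and DSN are placeholders rather than the production values, and the query uses only the standard Oracle v$session view, nothing CASTOR-specific.

 # Sketch: list blocking sessions on a stager database (cf. RT191741).
 # Connection details are placeholders; v$session and its columns
 # (SID, SERIAL#, BLOCKING_SESSION, ...) are standard Oracle.
 import cx_Oracle
 
 BLOCKERS_SQL = """
     SELECT sid, serial#, username, event, blocking_session, seconds_in_wait
       FROM v$session
      WHERE blocking_session IS NOT NULL
 """
 
 def report_blocking_sessions(dsn="stager-db.example:1521/CMSSTAGER"):
     # A read-only monitoring account is assumed here.
     with cx_Oracle.connect("monitor", "monitor_pw", dsn) as conn:
         cur = conn.cursor()
         cur.execute(BLOCKERS_SQL)
         for sid, serial, user, event, blocker, wait_s in cur:
             print(f"session {sid},{serial} ({user}) blocked by session "
                   f"{blocker} on '{event}' for {wait_s}s")
 
 if __name__ == "__main__":
     report_blocking_sessions()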

Operation news

The Stack Clash patch was applied to all Tier-1 CASTOR instances

Plans for next week

Firmware upgrade on the rest of the OCF14 disk servers

Long-term projects

SL6 upgrade on the functional test boxes and tape verification server: Aquilon configuration is complete for the functional test box, all Nagios tests are in place, and creation of a Hyper-V VM is underway to replace the old box. The test for the tape verification server is pending.

Tape-server migration to Aquilon and SL7 upgrade

CASTOR stress test improvement (a small load-generator sketch follows at the end of this list)

Configure multiple instances of a CASTOR generic headnode

Drain and decommission/recommission the 12-generation disk servers
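
For the stress-test improvement noted above, a very small client-side load generator along the following lines could be a starting point. This is only a sketch: it assumes the xrootd client tools (xrdcp) are installed, and the endpoint, target path, transfer count and payload size are illustrative assumptions, not the real test configuration.

 # Sketch: fire N parallel xrdcp writes at a test endpoint and report
 # how many succeed. Endpoint and target path are placeholders.
 import os
 import subprocess
 import tempfile
 from concurrent.futures import ThreadPoolExecutor
 
 ENDPOINT = "root://castor-test.example.ac.uk"      # placeholder endpoint
 TARGET_DIR = "/castor/example.ac.uk/test/stress"   # placeholder path
 
 def one_transfer(index, payload):
     # xrdcp -f overwrites an existing destination, -s keeps output quiet.
     dest = f"{ENDPOINT}/{TARGET_DIR}/stress_{index}"
     return subprocess.run(["xrdcp", "-f", "-s", payload, dest]).returncode == 0
 
 def run_stress(n_transfers=50, n_workers=10, size_mb=10):
     # Write one random payload file locally, then push it in parallel.
     with tempfile.NamedTemporaryFile(delete=False) as tmp:
         tmp.write(os.urandom(size_mb * 1024 * 1024))
         payload = tmp.name
     try:
         with ThreadPoolExecutor(max_workers=n_workers) as pool:
             results = list(pool.map(lambda i: one_transfer(i, payload),
                                     range(n_transfers)))
     finally:
         os.unlink(payload)
     print(f"{sum(results)}/{n_transfers} transfers succeeded")
 
 if __name__ == "__main__":
     run_stress()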

Special actions

Future CASTOR upgrade methodology: CERN needs to provide RAL with a customised DB downgrade script for the on-the-fly CASTOR upgrade procedure

Actions

Ensure that Fabric is on track with the deployment of the new DB hardware

RA to ensure that proper Nagios checks are in place for the xroot manager boxes (a minimal check sketch is at the end of this section)

GP to talk to AL about service ownership/handover of the xroot manager boxes

RA to continue nagging Martin to replace Facilities headnodes

Ensure that httpd on lcgcadm04 is up and running and that the machine points to the right DB (RT191377)
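
Both the xroot-manager Nagios check and the lcgcadm04 httpd check above could start from a plugin along these lines. This is only a sketch: the hostname and port passed on the command line are examples, and the exit codes simply follow the usual Nagios plugin convention (0 = OK, 2 = CRITICAL).

 # Sketch of a Nagios-style TCP check usable for xrootd managers or httpd.
 import socket
 import sys
 
 def check_tcp(host, port, timeout=5.0):
     # Try to open a TCP connection to host:port within the timeout.
     try:
         with socket.create_connection((host, port), timeout=timeout):
             print(f"OK - {host}:{port} is accepting connections")
             return 0
     except OSError as exc:
         print(f"CRITICAL - {host}:{port} unreachable: {exc}")
         return 2
 
 if __name__ == "__main__":
     # Example usage (hostname is illustrative):
     #   python check_tcp_service.py lcgcadm04.example.ac.uk 80
     target_host, target_port = sys.argv[1], int(sys.argv[2])
     sys.exit(check_tcp(target_host, target_port))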

Staffing

All in