RAL Tier1 weekly operations castor 04/05/2018
Latest revision as of 09:58, 4 May 2018

Draft agenda

1. Problems encountered this week

2. Upgrades/improvements made this week

3. What are we planning to do next week?

4. Long-term project updates (if not already covered)

  1. SL5 elimination from CIP and tape verification server
  2. CASTOR stress test improvement
  3. Generic CASTOR headnode setup
  4. Aquilonised headnodes

5. Special topics

6. Actions

7. Review Fabric tasks

  1.   Link

8. AoTechnicalB

9. Availability for next week

10. On-Call

11. AoOtherB

Operation problems

  * gdss737 failed and was brought back into production after a drive replacement
  * genTape load issues again; ilc and na62 were trying to do things at the same time. Plan: try a bulk recall
    of ca. 50 files and see how fast it completes (a timing sketch is included after this list).
      Noted that gdss745 and gdss752 have odd ganglia readings, i.e. no data is read from them
  * Transfers from RAL_Disk to Florida (CMS) are failing (GGUS ticket: https://ggus.eu/index.php?mode=ticket_info&ticket_id=134769)
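
A possible way to run the genTape bulk-recall test mentioned above is sketched below: it submits recalls through the standard CASTOR stager CLI (stager_get/stager_qry) and reports how long the batch takes to stage. The file list, service-class name, polling interval and timeout are placeholders, not values taken from this page.

 #!/usr/bin/env python
 # Sketch: time a bulk recall of ~50 files via the CASTOR stager CLI.
 # Assumes stager_get/stager_qry are on PATH; the service class and the
 # input file list are placeholders for this test.
 import subprocess
 import time
 
 FILE_LIST = "recall_test_files.txt"   # one /castor/... path per line (placeholder)
 SVCCLASS = "genTape"                  # service class name is an assumption
 POLL_SECONDS = 60
 GIVE_UP_SECONDS = 12 * 3600
 
 with open(FILE_LIST) as f:
     files = [line.strip() for line in f if line.strip()]
 
 start = time.time()
 
 # Submit all recall requests up front.
 for path in files:
     subprocess.call(["stager_get", "-S", SVCCLASS, "-M", path])
 
 # Poll until every file reports STAGED, or give up after the timeout.
 pending = set(files)
 while pending and time.time() - start < GIVE_UP_SECONDS:
     time.sleep(POLL_SECONDS)
     for path in list(pending):
         try:
             out = subprocess.check_output(["stager_qry", "-S", SVCCLASS, "-M", path])
         except subprocess.CalledProcessError:
             continue
         if b"STAGED" in out:
             pending.discard(path)
     print("%d/%d files staged after %.0f min"
           % (len(files) - len(pending), len(files), (time.time() - start) / 60.0))
 
 print("Bulk recall of %d files took %.0f minutes (still unstaged: %d)"
       % (len(files), (time.time() - start) / 60.0, len(pending)))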

Operation news

Plans for next few weeks

   * RA to finish developing the new information system for tape
   * Decommission 2011 genTape disk servers

Long-term projects

Headnode migration to Aquilon - The former vCert headnodes have been configured as CASTOR macroheadnodes and pass functional tests, but still have some problems with the SRM. SL7 name server installs are OK; next step: HAProxy

No point in moving ATLAS and CMS to the macroheadnode setup since their d1t0 data are moving to Echo over the next two months

HA-proxyfication of the CASTOR SRMs: the HAProxy setup has been tested on the SRMs

RA writing a new accounting system to match WLCG archival requirements
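
For context on the accounting work, the sketch below shows the kind of per-VO tape summary such a system might emit as JSON. The input tuples and all field names are illustrative assumptions; they are not the WLCG archival-accounting schema nor the actual RAL implementation.

 #!/usr/bin/env python
 # Sketch: emit a per-VO tape usage summary as JSON.
 # Input data and output field names are assumptions for illustration only.
 import json
 import time
 
 # Hypothetical input: (vo, files_on_tape, bytes_on_tape), e.g. the result
 # of a query against the CASTOR name-server database.
 tape_usage = [
     ("atlas", 12345678, 23400000000000000),
     ("cms",    9876543, 18700000000000000),
     ("na62",   1234567,  2100000000000000),
 ]
 
 report = {
     "storageservice": {
         "name": "RAL-LCG2 CASTOR tape",      # placeholder service name
         "latestupdate": int(time.time()),
         "storageshares": [
             {"vo": vo, "totalfiles": nfiles, "usedsize": used_bytes}
             for vo, nfiles, used_bytes in tape_usage
         ],
     }
 }
 print(json.dumps(report, indent=2))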

Actions

JK to investigate why ralreplicas fails and submit a "bug" report

RA/BD: Run GFAL unit tests against CASTOR. Get them here: https://gitlab.cern.ch/dmc/gfal2/tree/develop/test/
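
Until the full gfal2 test suite from the GitLab link above is wired in, a hand-rolled smoke test along these lines could exercise the basic gfal2-python operations against a CASTOR SRM endpoint; the endpoint URL and test path below are placeholders, not the real RAL values.

 #!/usr/bin/env python
 # Quick gfal2-python smoke test against a CASTOR SRM endpoint.
 # The SURL below is a placeholder; substitute a real test path.
 import sys
 import gfal2
 
 SURL = "srm://srm-example.gridpp.rl.ac.uk/castor/example.path/test"  # placeholder
 
 ctx = gfal2.creat_context()
 
 try:
     st = ctx.stat(SURL)
     print("stat OK: size=%d mode=%o" % (st.st_size, st.st_mode))
     entries = ctx.listdir(SURL)
     print("listdir OK: %d entries" % len(entries))
 except gfal2.GError as err:
     print("gfal2 operation failed: %s" % err)
     sys.exit(2)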

GP/RA to write a Nagios test to check for a large number of requests on the NewRequests DB table that remain there for a long time (a sketch follows below)
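
One possible shape for that Nagios check is sketched below. Only the NewRequests table name comes from the action item; the connection details, the timestamp column name and type, and the warning/critical thresholds are all assumptions that would need to match the actual stager schema.

 #!/usr/bin/env python
 # Sketch of a Nagios-style check for long-lived rows in NewRequests.
 # Credentials, the "creationtime" DATE column and thresholds are assumptions.
 import sys
 import cx_Oracle
 
 DSN = "stager_db_alias"                # placeholder TNS alias
 WARN_COUNT, CRIT_COUNT = 100, 1000
 MAX_AGE_MINUTES = 30
 
 try:
     conn = cx_Oracle.connect("castor_ro", "secret", DSN)  # placeholder account
     cur = conn.cursor()
     # Count requests older than MAX_AGE_MINUTES still sitting in NewRequests.
     cur.execute(
         "SELECT COUNT(*) FROM NewRequests "
         "WHERE creationtime < SYSDATE - :age / 1440",
         age=MAX_AGE_MINUTES)
     (stale,) = cur.fetchone()
 except cx_Oracle.DatabaseError as err:
     print("UNKNOWN: cannot query NewRequests (%s)" % err)
     sys.exit(3)
 
 if stale >= CRIT_COUNT:
     print("CRITICAL: %d requests older than %d min" % (stale, MAX_AGE_MINUTES))
     sys.exit(2)
 elif stale >= WARN_COUNT:
     print("WARNING: %d requests older than %d min" % (stale, MAX_AGE_MINUTES))
     sys.exit(1)
 print("OK: %d requests older than %d min" % (stale, MAX_AGE_MINUTES))
 sys.exit(0)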

Staffing

GP on call