RAL Tier1 weekly operations castor 11/05/2018


Draft agenda

1. Problems encountered this week

2. Upgrades/improvements made this week

3. What are we planning to do next week?

4. Long-term project updates (if not already covered)

  1. SL5 elimination from CIP and tape verification server
  2. CASTOR stress test improvement
  3. Generic CASTOR headnode setup
  4. Aquilonised headnodes

5. Special topics

6. Actions

7. Review Fabric tasks

  1.   Link

8. AoTechnicalB

9. Availability for next week

10. On-Call

11. AoOtherB

Operation problems

  * genTape load issue again; ilc and na62 were trying to do things at the same time. Plan: try a bulk recall of ca. 50 files
    and see how fast it completes.
    Noted that gdss745 and gdss752 have odd ganglia readings, i.e. no data is read from them (see the cross-check sketch after this list).
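
A minimal sketch of how such a cross-check could be done, assuming it is run directly on one of the affected disk servers (e.g. gdss745) while a read test is active; the network interface name below is a placeholder, not taken from the notes:

 #!/usr/bin/env python
 # Hedged sketch: sample /proc/net/dev locally on a genTape disk server to
 # cross-check the ganglia "no data read" readings. The interface name is a
 # placeholder; client reads from the server show up as TX traffic.
 import time
 
 IFACE = "eth0"      # placeholder interface name
 INTERVAL = 10       # seconds between samples
 
 def tx_bytes(iface):
     """Return the transmit byte counter for iface from /proc/net/dev."""
     with open("/proc/net/dev") as f:
         for line in f:
             if line.strip().startswith(iface + ":"):
                 fields = line.split(":", 1)[1].split()
                 return int(fields[8])   # 9th data field = TX bytes
     raise RuntimeError("interface %s not found" % iface)
 
 def main():
     prev = tx_bytes(IFACE)
     while True:
         time.sleep(INTERVAL)
         cur = tx_bytes(IFACE)
         print("%s TX rate: %.1f MB/s" % (IFACE, (cur - prev) / INTERVAL / 1e6))
         prev = cur
 
 if __name__ == "__main__":
     main()

If this shows real outbound traffic while ganglia still reports none, the problem is on the monitoring side rather than on the servers themselves.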

Operation news

  * The old genTape disk servers (2011 generation) have been decommissioned
  * Kernel patching on the Oracle DB machines Orpheus and Eurynome
  * CASTOR merge plan completed and reviewed

Plans for next few weeks

   * RA to finish developing the new information system for tape
   * Need to gracefully shut down the current information system
   * Sort out status of the remaining SL5 tape verification server 

Long-term projects

Headnode migration to Aquilon - The former vcert headnodes have been configured as CASTOR macroheadnodes and they pass functional tests; SL7 name server installs work OK. Next step: HAProxy.

There is no point moving Atlas and CMS to the macroheadnode setup, since their d1t0 data are moving to Echo over the next two months.

HA-proxyfication of the CASTOR SRMs: the HAProxy setup has been tested on the SRMs.

RA is writing a new accounting system to match the WLCG archival requirements.

Actions

Try a bulk recall of ca. 50 files and check the network performance of the different genTape disk servers (a scripted sketch follows below)
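
A minimal sketch of how the bulk recall could be scripted, assuming the standard CASTOR client commands stager_get and stager_qry are available on the test host; the file-list path and service class name below are placeholders, not taken from the notes:

 #!/usr/bin/env python
 # Hedged sketch: submit a bulk recall of ~50 CASTOR files and report how long
 # each one takes to become STAGED. File list and service class are placeholders.
 import subprocess
 import time
 
 FILE_LIST = "recall_test_files.txt"   # one /castor/... path per line (placeholder)
 SVC_CLASS = "genTape"                 # placeholder service class name
 
 def staged(path):
     """Return True if stager_qry reports the file as STAGED."""
     out = subprocess.run(["stager_qry", "-S", SVC_CLASS, "-M", path],
                          capture_output=True, text=True).stdout
     return "STAGED" in out
 
 def main():
     with open(FILE_LIST) as f:
         files = [line.strip() for line in f if line.strip()]
 
     start = time.time()
     # Submit all recall requests up front so the tape system sees one bulk batch.
     for path in files:
         subprocess.run(["stager_get", "-S", SVC_CLASS, "-M", path], check=False)
 
     pending = set(files)
     while pending:
         time.sleep(60)                         # poll once a minute
         done = {p for p in pending if staged(p)}
         pending -= done
         for p in done:
             print("%s staged after %.0f s" % (p, time.time() - start))
     print("All %d files staged in %.0f s" % (len(files), time.time() - start))
 
 if __name__ == "__main__":
     main()

Watching the per-server network graphs (or the /proc/net/dev sketch above) while this runs would show how the load spreads across the genTape disk servers.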

JK to investigate why ralreplicas fails and submit a "bug" report

RA/BD: Run GFAL unit tests against CASTOR. Get them here: https://gitlab.cern.ch/dmc/gfal2/tree/develop/test/
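
The proper tests are the GFAL unit tests linked above; as a quick, hedged smoke test of the gfal2 Python bindings against a CASTOR endpoint first (the endpoint URL below is a placeholder, not a real RAL one), something like the following could be used:

 #!/usr/bin/env python
 # Hedged smoke test of the gfal2 Python bindings against a CASTOR endpoint.
 # The endpoint URL and path are placeholders, not real RAL values.
 import gfal2
 
 ENDPOINT = "root://castor.example.ac.uk//castor/example.ac.uk/test"  # placeholder
 
 def main():
     ctx = gfal2.creat_context()          # gfal2 spells it 'creat_context'
     info = ctx.stat(ENDPOINT)            # basic metadata lookup
     print("mode=%o size=%d" % (info.st_mode, info.st_size))
     for entry in ctx.listdir(ENDPOINT):  # directory listing, if ENDPOINT is a directory
         print(entry)
 
 if __name__ == "__main__":
     main()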

GP/RA to write a Nagios test to check for a large number of requests in the NewRequests DB table that remain there for a long time - the check will run every xxx min (to be agreed with Production) and raise an alarm if the number exceeds 1,000 for an extended period of time (15 min); a sketch follows below
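
A minimal sketch of what such a Nagios check could look like, assuming the stager DB is reachable via cx_Oracle and that NewRequests has a creation-time column; the connection details and the 'creation' column name are assumptions, not taken from the notes. The 15-minute persistence could be handled in the query (as below) or via Nagios soft-state retries:

 #!/usr/bin/env python
 # Hedged sketch of a Nagios plugin: alarm if the NewRequests table holds more
 # than 1,000 requests that have been sitting there for over 15 minutes.
 # DB credentials and the 'creation' column (assumed to be a DATE) are assumptions.
 import sys
 import cx_Oracle
 
 DSN         = "stagerdb.example.ac.uk/castor"   # placeholder DSN
 USER        = "nagios_ro"                       # placeholder read-only account
 PASSWORD    = "changeme"                        # placeholder
 THRESHOLD   = 1000                              # alarm level from the notes
 MAX_AGE_MIN = 15                                # "extended period" from the notes
 
 # Standard Nagios plugin exit codes
 OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3
 
 def main():
     try:
         conn = cx_Oracle.connect(USER, PASSWORD, DSN)
         cur = conn.cursor()
         # Count requests older than MAX_AGE_MIN minutes (column name assumed).
         cur.execute(
             "SELECT COUNT(*) FROM NewRequests "
             "WHERE creation < SYSDATE - :age / 1440",
             age=MAX_AGE_MIN)
         count = cur.fetchone()[0]
         conn.close()
     except cx_Oracle.Error as exc:
         print("UNKNOWN: cannot query stager DB: %s" % exc)
         sys.exit(UNKNOWN)
 
     if count > THRESHOLD:
         print("CRITICAL: %d requests older than %d min in NewRequests" % (count, MAX_AGE_MIN))
         sys.exit(CRITICAL)
     print("OK: %d requests older than %d min in NewRequests" % (count, MAX_AGE_MIN))
     sys.exit(OK)
 
 if __name__ == "__main__":
     main()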

Staffing

RA on call