RAL Tier1 weekly operations castor 23/02/2018
Latest revision as of 12:28, 23 February 2018
Draft agenda
1. Problems encountered this week
2. Upgrades/improvements made this week
3. What are we planning to do next week?
4. Long-term project updates (if not already covered)
   1. SL5 elimination from CIP and tape verification server
   2. CASTOR stress test improvement
   3. Generic CASTOR headnode setup
   4. Aquilonised headnodes
5. Special topics
6. Actions
7. Anything for CASTOR-Fabric?
8. AoTechnicalB
9. Availability for next week
10. On-Call
11. AoOtherB
Operation problems
gdss762 was removed from production; it is now back in read-only mode
aliceDisk ran out of space - this is the VO's responsibility
Operation news
vCert is back in production
Patching of the CASTOR standby environments
Plans for next week
Patching of the Neptune and Pluto DBs and testing of switching over to R26 on Thu 01/03: rolling update and test of the switchover to standby. CASTOR downtime from
Patching of the Juno database on 7th of March: CASTOR downtime from 10:30 to 13:30
GP: Fix up the Aquilon SRM profiles: 1) move the nscd feature to a sub-dir; 2) make castor/cron-jobs/srmbed-monitoring part of the castor/daemons/srmbed feature
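In Pan template terms, folding the monitoring cron jobs into the daemon feature might look something like the sketch below. The template paths and layout are illustrative only, not the actual profiles in the repository:

```
# Hypothetical sketch: make the srmbed-monitoring cron jobs part of the
# srmbed daemon feature, so a single include pulls in both.
# Paths are placeholders, not the real repository layout.
unique template features/castor/daemons/srmbed/config;

# previously included separately as castor/cron-jobs/srmbed-monitoring
include 'features/castor/cron-jobs/srmbed-monitoring/config';
```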
Long-term projects
Headnode migration to Aquilon - Stager, scheduler, utility and nameserver configuration mainly complete. Stager, scheduler and utility tested separately and all together on preprod. Will start combining the stager, scheduler and utility features into one node
HA-proxyfication of the CASTOR SRMs: HA proxy is back and can be tested on preprod
Target: Combined headnodes running on SL7/Aquilon - implement CERN-style 'Macro' headnodes.
Draining of 4 x 13-generation disk servers from ATLAS; they will be redeployed on genTape
Draining of 10% of the 14 generation disk servers
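A minimal HAProxy front-end for the SRMs, as it could be tested on preprod, might look like the fragment below. Hostnames, ports and backend names are placeholders, not the actual RAL configuration:

```
# Hypothetical haproxy.cfg fragment: balance SRM traffic across
# back-end SRM daemons in TCP mode. Hostnames/ports are placeholders.
frontend srm_front
    bind *:8443
    mode tcp
    default_backend srm_back

backend srm_back
    mode tcp
    balance roundrobin
    server srm01 srm01.example.ac.uk:8443 check
    server srm02 srm02.example.ac.uk:8443 check
```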
Actions
RA/BD: Run GFAL unit tests against CASTOR. Get them here: https://gitlab.cern.ch/dmc/gfal2/tree/develop/test/
RA to organise a meeting with the Fabric team to discuss outstanding issues with Data Services hardware
GP to talk to Alastair about draining of 10% of 14 gen disk servers
GP/RA to write a Nagios test to check for a large number of requests that remain pending for a long time
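The core of such a Nagios check could be sketched as below. This is a minimal illustration, not the actual plugin: the data source is stubbed out, and thresholds, names and the way request ages are obtained (e.g. parsing stager query output or querying the request DB) are all assumptions.

```python
# Hypothetical Nagios-style check: flag CASTOR requests that have been
# pending longer than a threshold. The data source is stubbed out; a
# real plugin would obtain per-request ages from the stager or its DB.

# Standard Nagios plugin exit codes
OK, WARNING, CRITICAL = 0, 1, 2

def check_stuck_requests(ages_seconds, max_age=3600, warn_at=5, crit_at=20):
    """Return (nagios_state, stuck_count) given per-request ages in seconds."""
    stuck = sum(1 for age in ages_seconds if age > max_age)
    if stuck >= crit_at:
        return CRITICAL, stuck
    if stuck >= warn_at:
        return WARNING, stuck
    return OK, stuck

# Example with stubbed request ages (seconds):
state, stuck = check_stuck_requests([30, 45, 7200, 9000], warn_at=2)
print(f"{stuck} requests older than 1h -> exit code {state}")
```

A wrapper script would print a one-line status and `sys.exit(state)` so Nagios picks up the result.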
Staffing
RA on call