Vac configuration for GridPP DIRAC

Revision as of 21:21, 22 March 2016

This page explains how to run GridPP DIRAC virtual machines on Vac factory machines. Please see the Vac website for Vac's Admin Guide and man pages, which explain how to install and configure Vac itself and get a working Vac factory. These instructions are based on Vac 00.21 or above.

Requirements

Before configuring Vac for GridPP DIRAC, you need to follow these steps:

  • When you configure Vac, you need to choose a Vac space name. This will be used as the Computing Element (CE) name in DIRAC.
  • One or more CEs are grouped together to form a site, which will take the form VAC.Example.cc, where Example is derived from your institutional name and cc is the country code, e.g. VAC.CERN-PROD.ch or VAC.UKI-NORTHGRID-MAN-HEP.uk. Site names are allocated and registered in the DIRAC configuration service by the GridPP DIRAC service admins. Vac site names for UK sites are VAC.GOCDB_SITENAME.uk.
  • Obtain a host certificate which the VMs can use as a client certificate to fetch work from the central DIRAC task queue. One certificate can be used for all GridPP DIRAC VMs at a site. You should normally use a name which is specific to GridPP but is part of your site's DNS space. It doesn't need to correspond to a real host or really exist as an entry on your DNS servers: you just need to be entitled to register it. So if your site's domain name is example.cc, then a certificate for gridpp-vm.example.cc with a DN like /C=CC/O=XYZ/CN=gridpp-vm.example.cc would be a good choice.
  • Place the certificate's hostcert.pem and hostkey.pem in the gridpp (or similar) subdirectory of /var/lib/vac/machinetypes.
  • Contact one of the DIRAC service admins (i.e. lcg-site-admin AT imperial.ac.uk) to agree a site name and to register your CE, site, and certificate DN in the central GridPP DIRAC configuration.
  • Create a volume group vac_volume_group which is big enough to hold one 40GB logical volume for each VM the factory machine will run at the same time.
  • Identify a squid HTTP caching proxy to use with cvmfs. If you already have a proxy set up for cvmfs on gLite/EMI worker nodes at your site then you can use that too. You may be able to run without a proxy, but failures during job execution will be more likely.
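The certificate and volume-group steps above can be sketched as follows. This is a sketch under assumptions, not part of the official instructions: the certificate source directory and the spare disk /dev/sdb are placeholders to be replaced with your site's own values.

```shell
# Place the GridPP client certificate where the gridpp machinetype expects it
# (the /root/certs source path is an assumption -- use your real certificate files)
mkdir -p /var/lib/vac/machinetypes/gridpp
cp /root/certs/hostcert.pem /root/certs/hostkey.pem /var/lib/vac/machinetypes/gridpp/
chmod 600 /var/lib/vac/machinetypes/gridpp/hostkey.pem

# Create the vac_volume_group LVM volume group on a spare disk
# (/dev/sdb is an assumption); size it for one 40GB logical volume
# per VM the factory will run concurrently
pvcreate /dev/sdb
vgcreate vac_volume_group /dev/sdb
```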

Updating /etc/vac.conf

The details of the vac.conf options are given in the vac.conf(5) man page. However, the gridpp section should look like this, with suitable replacements for the target_share and user_data_option__ and user_data_file__ values:

[machinetype gridpp]
target_share = 1
user_data_option_dirac_site = VAC.Example.cc
user_data_option_vo = gridpp
accounting_fqan = /gridpp/Role=NULL/Capability=NULL
user_data_option_cvmfs_proxy = http://squid-cache.example.cc:3128
user_data_file_hostcert = hostcert.pem
user_data_file_hostkey = hostkey.pem 
user_data = https://repo.gridpp.ac.uk/vacproject/gridpp/user_data
vm_model = cernvm3
root_image = https://repo.gridpp.ac.uk/vacproject/gridpp/cernvm3.iso
rootpublickey = /root/.ssh/id_rsa.pub
backoff_seconds = 3600 
fizzle_seconds = 600
heartbeat_file = heartbeat
heartbeat_seconds = 600
max_wallclock_seconds = 100000

The vo option should be gridpp to get jobs from the default pool of jobs submitted by members of gridpp_user.

Vac will destroy the VM if it runs for more than max_wallclock_seconds, and you may want to experiment with shorter values. Most modern machines should be able to run jobs comfortably within 24 hours (86400 seconds).

If no work is available from the central DIRAC task queue and a VM stops with 'Nothing to do', backoff_seconds determines how long Vac waits before trying to run a GridPP VM again. This waiting is co-ordinated between all factory machines in a space using Vac's VacQuery UDP protocol. For testing, you may want to set this to 0, but please do not leave it at that to avoid unnecessarily loading the central service.
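For instance, while commissioning the machinetype a site might temporarily use values like these in the [machinetype gridpp] section, restoring the production values above before leaving the factory unattended (86400 is simply 24 × 3600):

```
max_wallclock_seconds = 86400
backoff_seconds = 0
```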

You can omit the rootpublickey option, but it is extremely useful for debugging. See the Vac Admin Guide for more about how to set it up.
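With rootpublickey set as in the example above, one way to inspect a running VM is to ssh in as root using the matching private key. The VM address below is a placeholder, not a real name: take the actual address from your factory's logs as described in the Vac Admin Guide.

```shell
# Log in to a running gridpp VM for debugging
# (vm-address.example.cc is a placeholder -- substitute the VM's real address)
ssh -i /root/.ssh/id_rsa root@vm-address.example.cc
```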

Vac re-reads its configuration files at every cycle (once a minute or so), so changes to vac.conf take effect almost immediately. You should see Vac creating gridpp VMs in /var/log/vacd-factory, and the VMs themselves attempting to contact the DIRAC matcher to fetch work in the joboutputs subdirectories under /var/lib/vac/machines.
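A minimal way to watch this happening on the factory machine, using only the paths mentioned above:

```shell
# Factory-side decisions: VM creation, 'Nothing to do' backoffs, etc.
tail -f /var/log/vacd-factory

# Output written by each VM as it contacts the DIRAC matcher and runs work
ls /var/lib/vac/machines/*/joboutputs/
```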