Vac configuration for GridPP DIRAC

This page explains how to run GridPP DIRAC Service virtual machines on Vac factory machines. Please see the Vac website (http://www.gridpp.ac.uk/vac/) for Vac's Admin Guide and man pages, which explain how to install and configure Vac itself and get a working Vac factory. These instructions are based on Vac 3.0 or above.

Requirements

Before configuring Vac for the GridPP DIRAC Service, you need to follow these steps:

  • When you configure Vac, you need to choose a Vac space name. This will be used as the Computing Element (CE) name in DIRAC, and is equivalent to a CE in ARC or CREAM.
  • One or more CEs are grouped together to form a site, whose name takes the form VAC.Example.cc, where Example is derived from your institution's name and cc is the country code, e.g. VAC.CERN-PROD.ch or VAC.UKI-NORTHGRID-MAN-HEP.uk. Site names are allocated and registered in the DIRAC configuration service by the GridPP DIRAC service admins. Vac site names for UK sites take the form VAC.GOCDB_SITENAME.uk.
  • Obtain a host certificate which the VMs can use as a client certificate to fetch work from the central DIRAC task queue. One certificate can be used for all GridPP DIRAC VMs at a site. You should normally use a name which is specific to GridPP but is part of your site's DNS space. It doesn't need to correspond to a real host or exist as an entry on your DNS servers; you just need to be entitled to register it. So if your site's domain name is example.cc, then a certificate for gds-vm.example.cc with a DN like /C=CC/O=XYZ/CN=gds-vm.example.cc would be a good choice.
  • Place the hostcert.pem and hostkey.pem of the certificate in the files subdirectory of the gds (or similar) subdirectory of /var/lib/vac/machinetypes, i.e. as /var/lib/vac/machinetypes/gds/files/hostcert.pem and /var/lib/vac/machinetypes/gds/files/hostkey.pem (see the sketch after this list).
  • Contact one of the DIRAC service admins (i.e. lcg-site-admin AT imperial.ac.uk) to agree a site name and to register your CE, Site, and certificate DN in the central GridPP DIRAC Service configuration.
  • Create a volume group vac_volume_group which is big enough to hold one 40 GB logical volume for each VM the factory machine will run at the same time (see the sketch after this list).
  • Identify a squid HTTP caching proxy to use with cvmfs. If you already have a proxy set up for cvmfs on worker nodes at your site then you can use that too. You may be able to run without a proxy, but failures during job execution will be more likely.
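
For the certificate placement and volume group steps above, a minimal shell sketch follows. The gds machinetype name matches the example used on this page, but the disk device (/dev/sdb) and the location of the certificate files are assumptions for illustration; substitute your own values.

# Install the host certificate and key into the machinetype's files directory
mkdir -p /var/lib/vac/machinetypes/gds/files
cp hostcert.pem hostkey.pem /var/lib/vac/machinetypes/gds/files/
chmod 600 /var/lib/vac/machinetypes/gds/files/hostkey.pem

# Create the vac_volume_group on a spare disk (device name is an assumption)
pvcreate /dev/sdb
vgcreate vac_volume_group /dev/sdb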

Updating /etc/vac.conf

The details of the vac.conf options are given in the vac.conf(5) man page. You should specify the location of the cvmfs proxy to use in the [settings] section which applies to all machine types:

[settings]
user_data_option_cvmfs_proxy = http://squid01.example.cc

The vacuum_pipe section for gds should look like this, with a suitable replacement for the target_share:

[vacuum_pipe gds]
target_share = 1.0
vacuum_pipe_url = https://repo.gridpp.ac.uk/vacproject/gds/all-vos.pipe

This causes Vac to fetch the specified vacuum pipe JSON file and then create a machinetype section in the Vac configuration for each VO supported by the GridPP DIRAC Service, defining how to create VMs for that VO. You can see the resulting expanded configuration in the /var/log/vacd-factory log file. All the VOs use the host certificate and key in the /var/lib/vac/machinetypes/gds/files/ directory, but each machinetype is given appropriate values for the other Vac configuration options, including the FQANs to use for APEL accounting. The total target_share given in the vacuum_pipe section is shared out amongst the VOs in the vacuum pipe file, according to the target_share values given inside the vacuum pipe JSON file.
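
If you want to see which VOs and shares the pipe defines before Vac expands it, you can fetch and pretty-print the JSON file yourself. This is just an optional check, assuming curl and Python are available on the factory machine:

curl -s https://repo.gridpp.ac.uk/vacproject/gds/all-vos.pipe | python -m json.tool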

You can override values for individual machinetypes by creating a partial machinetype section in your configuration file, which Vac will merge with the existing options for that machinetype. For example, to give one VO a higher target_share (the override sets the final value used by Vac in its share calculations; it is not relative to the other machinetypes in the pipe):

[machinetype gds-vm-pheno]
target_share = 2.0

The configuration written to /var/log/vacd-factory can be used to discover the names of the machinetypes which are created when expanding the pipe values, and to check that your override(s) have taken effect.
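
For example, assuming the expanded machinetype names begin with gds-vm- as in the override above, something like the following will show the relevant lines of the expanded configuration in the factory log:

grep gds-vm /var/log/vacd-factory | tail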

If you replace the vacuum_pipe_url with https://repo.gridpp.ac.uk/vacproject/gds/all-vos-zero-shares.pipe then the machinetypes are all created with target_share zero. You can use this to selectively enable a subset of GridPP DIRAC Service VOs by creating machinetype sections, as above, for just the VOs you want to run; see the sketch below.
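
For instance, here is a sketch of a configuration that runs only the pheno VO, reusing the gds-vm-pheno machinetype name from the earlier example (check the expanded names in /var/log/vacd-factory for your own setup):

[vacuum_pipe gds]
target_share = 1.0
vacuum_pipe_url = https://repo.gridpp.ac.uk/vacproject/gds/all-vos-zero-shares.pipe

[machinetype gds-vm-pheno]
target_share = 1.0

All other machinetypes created from the pipe keep their target_share of zero, so only gds-vm-pheno VMs are started.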

Vac re-reads its configuration files at every cycle (once a minute or so) and so the changes to vac.conf will take effect almost immediately. You should see Vac creating gds-vm-* VMs in /var/log/vacd-factory and the VMs themselves attempting to contact the DIRAC matcher to fetch work in the joboutputs subdirectories under /var/lib/vac/machines.
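
To watch this happening, you can follow the factory log and inspect the job output directories. The glob below assumes one directory per VM under /var/lib/vac/machines, each with a joboutputs subdirectory, as described above:

tail -f /var/log/vacd-factory
ls /var/lib/vac/machines/*/joboutputs/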