ATLAS Vacuum VMs

This page explains how to use the ATLAS VMs maintained by GridPP with Vac or Vcycle, which are currently based on CernVM 4 and provide a CentOS 7 environment.

To use the ATLAS Vacuum VMs, you first need to agree with ATLAS what the PanDA queue name for your Vac or Vcycle space will be (usually ending in _MCOREVAC).

Normally you should use the ATLAS vacuum pipe JSON file provided by GridPP, which defines the VMs in terms of image, contextualization file, and parameters. To use the current ATLAS pipe you need Vac 3.0 or greater. The pipe contains definitions of multiple types of VM, but the running VMs will normally all be of the atlas-vm-mcore machinetype: the name you assign to the ATLAS vacuum pipe (usually "atlas") followed by the vm-mcore suffix specified in the VM definition.
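
For orientation, a vacuum pipe is a JSON document containing a list of machinetype definitions. The fragment below is only a sketch of that general shape; the key names and URLs shown are assumptions for illustration, not a copy of the real ATLAS pipe, which you can fetch from the URL in the next section and inspect directly.

{
  "machinetypes": [
    { "machinetype_name": "vm-mcore",
      "root_image": "https://...",
      "user_data": "https://..." }
  ]
}

In this sketch, "vm-mcore" is the suffix that, combined with the pipe name you choose (usually "atlas"), gives the atlas-vm-mcore machinetype described above.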

Vac/Vcycle configuration

The configuration for ATLAS in your Vac or Vcycle conf files can simply be:

[vacuum_pipe atlas]
vacuum_pipe_url = https://repo.gridpp.ac.uk/vacproject/atlas/atlas.pipe
target_share = CHANGEME
where you set CHANGEME to an appropriate value relative to your other experiments' VMs (Vac or Vcycle normalises the shares to 100%).
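
For example, if your space also runs VMs for a second experiment via another vacuum pipe (shown here with a placeholder name and URL), target shares of 6 and 4 would be normalised to 60% of the capacity for ATLAS and 40% for the other experiment:

[vacuum_pipe atlas]
vacuum_pipe_url = https://repo.gridpp.ac.uk/vacproject/atlas/atlas.pipe
target_share = 6

[vacuum_pipe otherexpt]
vacuum_pipe_url = https://example.cc/otherexpt.pipe
target_share = 4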

Vac settings configuration

The ATLAS VMs make heavy use of CernVM-FS, so you must provide them with a Squid cache they can access. In Vac 3.0 this can be set globally with

user_data_option_cvmfs_proxy = CHANGEME
in the [settings] section of the Vac configuration.

To run 8-processor ATLAS MCORE jobs, you need Vac to create 8-processor VMs by including this in your [settings] section:

processors_per_superslot = 8

From Vac 00.21 onwards, it is not necessary to specify the amount of disk per VM, as Vac will share out the space in the vac_volume_group automatically. However, you should ensure there is at least 40 GB per VM in the volume group.
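
Putting the proxy and superslot options together, the ATLAS-relevant lines of your [settings] section would look like this sketch, where the Squid URL is a placeholder for your own cache and your other site-wide options stay in the same section:

[settings]
user_data_option_cvmfs_proxy = http://squid.example.cc:3128
processors_per_superslot = 8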

Host certificate / key

You need to obtain a host certificate and key, which the VMs use to authenticate with ATLAS, from your usual grid certificate authority. You should normally use a DNS hostname which is specific to ATLAS but is part of your site's DNS space. The hostname does not need to correspond to a real host or even exist as an entry on your DNS servers; it is enough that you are entitled to register it. So if your site's domain name is example.cc, then a certificate for atlas-vm.example.cc with a DN like /C=CC/O=XYZ/CN=atlas-vm.example.cc would be a good choice.

The x509cert.pem and x509key.pem for this certificate should be placed in the ATLAS vacuum pipe's machinetype directory, such as /var/lib/vac/machinetypes/atlas/ (for Vac) or /var/lib/vcycle/spaces/SPACENAME/machinetypes/atlas/ (for Vcycle). Again, "atlas" in these paths is the vacuum pipe name. They must not be placed in the files subdirectory, because the ATLAS VMs request that Vac/Vcycle create an X.509 proxy from the certificate and key on the fly, rather than just passing files from the files subdirectory directly into the VM.
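
For example, with the vacuum pipe named "atlas" as above, a Vac factory would hold:

/var/lib/vac/machinetypes/atlas/x509cert.pem
/var/lib/vac/machinetypes/atlas/x509key.pem

and a Vcycle machine:

/var/lib/vcycle/spaces/SPACENAME/machinetypes/atlas/x509cert.pem
/var/lib/vcycle/spaces/SPACENAME/machinetypes/atlas/x509key.pem

not in a files/ subdirectory under either path.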

Monitoring

The bigpanda.cern.ch dashboard is extremely useful for monitoring the outcome of jobs. Go to the list of PanDA resources, find your Vac or Vcycle entry, and look at the per-workernode and per-job listings to see what is going on. You can also look at similar PanDA resources at your own site, or at other sites running Vac or Vcycle, to see whether problems are specific to you.