Big Bubbles (no troubles)

What sucks, who sucks and you suck

Creating vSphere VMs With Ansible

Ansible now contains a decent set of modules for managing virtual machines in VMware vSphere. As ever with Ansible, the key is not so much in knowing how to use these modules (which the docs explain fairly clearly) as in knowing how to organise the playbooks that call them. Here’s one example based on our own recent practice.

To use this, you should be running at least Ansible 2.7.5, as the module was broken in earlier 2.7 releases. I assume you already have the prerequisites in place, including an account with administrator privileges on your vCenter, a basic knowledge of how to create a new VM in vSphere, and so forth.

To enable us to create new VMs for the systems we manage with our playbooks, we first wrap the vmware_guest module in a role. The role uses a combination of standard default values, global variables for common settings and per-host variables specific to the VM in question. For our own purposes, we only need basic Linux VMs of a mostly similar specification, so we don’t worry about customising the configuration for different OS platforms.

For example, the role defaults might be:

# roles/create-vm/defaults/main.yml
vmware_scsi: 'lsilogic'
vmware_firmware: 'bios'
vmware_disktype: 'thin'
vmware_hw_vers: 13
vmware_netdev: 'vmxnet3'

This defines our standard VM SCSI controller, firmware, disk provisioning, hardware version and network device (all of these are compatible with CentOS, for example).

The global settings are defined in the group variables for ‘all’ hosts, and specify the local vCenter, site-specific names like the vSphere data centre and overall common settings for the VMware modules:

# group_vars/all.yml
vmware_user: "{{ lookup('env', 'USER') }}"
vmware_vc: "vcenter.our.domain"
vmware_datacenter: 'Main Datacenter'

Here we authenticate to the vCenter using our central directory, so we use the logged-in ID of the person running the playbook as the VMware username. Alternatively, you can create a specific account with limited privileges in vSphere for Ansible to use.
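If you do go the service-account route, the credentials can be kept in an encrypted vault file rather than prompted for. A minimal sketch, assuming an account and vault variable names of your own choosing (the names below are illustrative):

```yaml
# group_vars/all/vault.yml - encrypted with 'ansible-vault encrypt'
# vault_vmware_user: 'ansible-svc'
# vault_vmware_passwd: 'changeme'

# group_vars/all.yml - reference the vaulted values
vmware_user: "{{ vault_vmware_user }}"
vmware_passwd: "{{ vault_vmware_passwd }}"
```

Run the playbook with `--ask-vault-pass` (or a vault password file) and drop the vars_prompt from the top-level play.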

Finally, we configure the VM details in the host variables file under host_vars/:

# host_vars/vm1.yml
vmware_cluster: 'Firewalled Production'
vmware_folder: '/Main Datacenter/vm/prod/servers/linux'
vmware_datastore: 'main-datastore-1'
vmware_vms:
  - name: "vm1"
    cpus: 4
    mem: 8
    diskgb: 80
    net: 'PRODNET'
    mac: '00:50:56:de:ad:be'
    os: 'centos7_64Guest'

(VM folder names are prefixed with the data centre name followed by ‘/vm’. Note that with this structure, you can define several VMs together in one list - e.g. within the group variables - but you don’t have to. In most cases, it’s probably cleaner to keep each VM config in its own host variables file.)
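For completeness, defining several VMs in one list at group level would look like this (a sketch; the group name and VM details are invented for illustration):

```yaml
# group_vars/webservers.yml - several VMs in one list
vmware_vms:
  - name: "web1"
    cpus: 2
    mem: 4
    diskgb: 40
    net: 'PRODNET'
    mac: '00:50:56:aa:bb:01'
    os: 'centos7_64Guest'
  - name: "web2"
    cpus: 2
    mem: 4
    diskgb: 40
    net: 'PRODNET'
    mac: '00:50:56:aa:bb:02'
    os: 'centos7_64Guest'
```

The create-vm role loops over whatever list it finds, so either layout works unchanged.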

In the top level playbook, we also need to request the password for the vCenter user (or fetch it from a secure vault if using a specific account for Ansible):

# create-vms.yml
- hosts: all
  become: false
  gather_facts: false
  vars_prompt:
    - name: "vmware_passwd"
      prompt: "vCenter password"
      private: yes
  roles:
    - { role: create-vm, when: vmware_vms is defined }

We need to disable fact gathering because the hosts we’re creating may not exist yet, so Ansible can’t connect to them.

Finally, we pull all these variables into a task defined within the create-vm role.

# roles/create-vm/tasks/main.yml
- name: create VMs
  vmware_guest:
    hostname: "{{ vmware_vc }}"
    username: "{{ vmware_user }}"
    password: "{{ vmware_passwd }}"
    cluster: "{{ vmware_cluster }}"
    datacenter: "{{ vmware_datacenter }}"
    datastore: "{{ vmware_datastore }}"
    cdrom:
      type: 'none'
    disk:
      - size_gb: "{{ item.diskgb }}"
        type: "{{ vmware_disktype }}"
    folder: "{{ vmware_folder }}"
    guest_id: "{{ item.os }}"
    hardware:
      boot_firmware: "{{ vmware_firmware }}"
      scsi: "{{ vmware_scsi }}"
      memory_mb: "{{ item.mem * 1024 }}"
      num_cpus: "{{ item.cpus }}"
      version: "{{ vmware_hw_vers }}"
    name: "{{ item.name }}"
    networks:
      - name: "{{ item.net }}"
        device_type: "{{ vmware_netdev }}"
        mac: "{{ item.mac }}"
    state: present
  with_items: "{{ vmware_vms }}"
  register: vms_deployed
  delegate_to: localhost

The key thing here is that the task is delegated to ‘localhost’, i.e. the Ansible control node, so the connection to the vCenter to create the VM is made from the host where Ansible is run. (You can use a different host, such as a dedicated vSphere management server, but Ansible must be able to connect to it and it must have the pyVmomi library installed.) This task loops through the vmware_vms list and creates each VM defined there through the vCenter.

If you change the settings for any VM, Ansible will attempt to modify its configuration in vSphere if possible. For example, you can adjust the allocated memory in a running VM (assuming the guest OS supports it and hot-adding memory is enabled for the VM) but attempting to shrink a virtual disk returns an error.

Currently, due to Ansible bug #34105, vmware_guest isn’t fully idempotent if you’re using distributed switches in your vSphere networking configuration; the task will report ‘changed’ every time it is run and you will see a “Reconfigure Virtual Machine” task logged in the vCenter, even if no aspect of the VM has been altered. (There’s a PR for this bug but it doesn’t appear to have been merged yet.) If this concerns you, you can first run a vmware_guest_find task to search for the listed VMs in vCenter, register a variable and use the result of that to drive the creation of any VMs that return ‘failed’ (see my previous post on using multiple values in a registered variable).
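A sketch of that workaround, assuming the same variables as above (vmware_guest_find fails for a VM that doesn’t exist, so a failed result marks a VM that still needs creating):

```yaml
# roles/create-vm/tasks/main.yml - idempotency workaround sketch
- name: look for existing VMs
  vmware_guest_find:
    hostname: "{{ vmware_vc }}"
    username: "{{ vmware_user }}"
    password: "{{ vmware_passwd }}"
    name: "{{ item.name }}"
  with_items: "{{ vmware_vms }}"
  register: vm_search
  ignore_errors: true
  delegate_to: localhost

# Loop over the registered results; 'item.item' is the original VM
# spec from vmware_vms, available to the included task file.
- name: create only the VMs that were not found
  include_tasks: create-one-vm.yml   # hypothetical task file holding the vmware_guest task
  with_items: "{{ vm_search.results }}"
  when: item is failed
```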

Obviously, at this point you’d still need to power on the new VM and install an OS on it. In fact, you’d probably instead deploy from a pre-built template, using the template and customization parameters of vmware_guest to configure it. The vmware_guest_powerstate module could then be used to power it up and initialise it, followed by vmware_guest_tools_wait to pause until it’s ready.
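Those follow-up steps might be sketched like this (the template name is an assumption; substitute your own):

```yaml
# follow-up tasks sketch: deploy from template, power on, wait for the guest
- name: deploy VMs from template
  vmware_guest:
    hostname: "{{ vmware_vc }}"
    username: "{{ vmware_user }}"
    password: "{{ vmware_passwd }}"
    datacenter: "{{ vmware_datacenter }}"
    cluster: "{{ vmware_cluster }}"
    folder: "{{ vmware_folder }}"
    name: "{{ item.name }}"
    template: 'centos7-template'   # hypothetical pre-built template
    state: present
  with_items: "{{ vmware_vms }}"
  delegate_to: localhost

- name: power on VMs
  vmware_guest_powerstate:
    hostname: "{{ vmware_vc }}"
    username: "{{ vmware_user }}"
    password: "{{ vmware_passwd }}"
    name: "{{ item.name }}"
    state: powered-on
  with_items: "{{ vmware_vms }}"
  delegate_to: localhost

- name: wait for VMware Tools in each guest
  vmware_guest_tools_wait:
    hostname: "{{ vmware_vc }}"
    username: "{{ vmware_user }}"
    password: "{{ vmware_passwd }}"
    name: "{{ item.name }}"
  with_items: "{{ vmware_vms }}"
  delegate_to: localhost
```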