Domain XML format

This section describes the XML format used to represent domains. There are variations on the format based on the kind of domain run and the options used to launch it. For hypervisor-specific details, consult the driver docs.

Element and attribute overview

The root element required for all virtual machines is named domain. It has two attributes: type specifies the hypervisor used for running the domain. The allowed values are driver-specific, but include "xen", "kvm", "qemu", "lxc" and "kqemu". The second attribute, id, is a unique integer identifier for the running guest machine. Inactive machines have no id value.

General metadata

<domain type='xen' id='3'>
  <name>fv0</name>
  <uuid>4dea22b3-1d52-d8f3-2516-782e98ab3fa0</uuid>
  <title>A short description - title - of the domain</title>
  <description>Some human readable description</description>
  <metadata>
    <app1:foo xmlns:app1="http://app1.org/app1/">..</app1:foo>
    <app2:bar xmlns:app2="http://app2.org/app2/">..</app2:bar>
  </metadata>
  ...
name
The content of the name element provides a short name for the virtual machine. This name should consist only of alpha-numeric characters and is required to be unique within the scope of a single host. It is often used to form the filename for storing the persistent configuration file. Since 0.0.1
uuid
The content of the uuid element provides a globally unique identifier for the virtual machine. The format must be RFC 4122 compliant, e.g. 3e3fce45-4f53-4fa7-bb32-11f34168b82b. If omitted when defining/creating a new machine, a random UUID is generated. It is also possible to provide the UUID via a sysinfo specification (see the sketch after this list). Since 0.0.1, sysinfo since 0.8.7
title
The optional element title provides space for a short description of the domain. The title should not contain any newlines. Since 0.9.10.
description
The content of the description element provides a human readable description of the virtual machine. This data is not used by libvirt in any way; it can contain any information the user wants. Since 0.7.2
metadata
The metadata node can be used by applications to store custom metadata in the form of XML nodes/trees. Applications must use custom namespaces on their XML nodes/trees, with only one top-level element per namespace (if the application needs structure, it should have sub-elements to its namespace element). Since 0.9.10
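
As a minimal sketch of combining the uuid element with a sysinfo specification (the UUID value and the domain name are illustrative only), note that when both are present the two identifiers must match:

<domain type='kvm'>
  <name>demo</name>
  <uuid>3e3fce45-4f53-4fa7-bb32-11f34168b82b</uuid>
  <sysinfo type='smbios'>
    <system>
      <!-- must match the top-level uuid element above -->
      <entry name='uuid'>3e3fce45-4f53-4fa7-bb32-11f34168b82b</entry>
    </system>
  </sysinfo>
  ...
</domain>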

Operating system booting

There are a number of different ways to boot virtual machines, each with its own pros and cons.

BIOS bootloader

Booting via the BIOS is available for hypervisors supporting full virtualization. In this case the BIOS has a boot order priority (floppy, hard disk, cdrom, network) determining where to obtain/find the boot image.

  ...
  <os>
    <type>hvm</type>
    <loader>/usr/lib/xen/boot/hvmloader</loader>
    <boot dev='hd'/>
    <boot dev='cdrom'/>
    <bootmenu enable='yes'/>
    <smbios mode='sysinfo'/>
    <bios useserial='yes' rebootTimeout='0'/>
  </os>
  ...
type
The content of the type element specifies the type of operating system to be booted in the virtual machine. hvm indicates that the OS is one designed to run on bare metal, so requires full virtualization. linux (badly named!) refers to an OS that supports the Xen 3 hypervisor guest ABI. There are also two optional attributes, arch specifying the CPU architecture to virtualize, and machine referring to the machine type. The Capabilities XML provides details on allowed values for these. Since 0.0.1
loader
The optional loader tag refers to a firmware blob used to assist the domain creation process. At this time, it is only needed by Xen fully virtualized domains. Since 0.1.0
boot
The dev attribute takes one of the values "fd", "hd", "cdrom" or "network" and is used to specify the next boot device to consider. The boot element can be repeated multiple times to set up a priority list of boot devices to try in turn. Multiple devices of the same type are sorted according to their targets while preserving the order of buses. After defining the domain, its XML configuration returned by libvirt (through virDomainGetXMLDesc) lists devices in the sorted order. Once sorted, the first device is marked as bootable. Thus, e.g., a domain configured to boot from "hd" with vdb, hda, vda, and hdc disks assigned to it will boot from vda (the sorted list is vda, vdb, hda, hdc). A similar domain with hdc, vda, vdb, and hda disks will boot from hda (sorted disks are: hda, hdc, vda, vdb). This can be tricky to configure in the desired way, which is why per-device boot elements (see disks, network interfaces, and USB and PCI devices sections below) were introduced; they are the preferred way of providing full control over the boot order (see the sketch after this list). The boot element and per-device boot elements are mutually exclusive. Since 0.1.3, per-device boot since 0.8.8
bootmenu
Whether or not to enable an interactive boot menu prompt on guest startup. The enable attribute can be either "yes" or "no". If not specified, the hypervisor default is used. Since 0.8.3
smbios
How to populate SMBIOS information visible in the guest. The mode attribute must be specified, and is either "emulate" (let the hypervisor generate all values), "host" (copy all of Block 0 and Block 1, except for the UUID, from the host's SMBIOS values; the virConnectGetSysinfo call can be used to see what values are copied), or "sysinfo" (use the values in the sysinfo element). If not specified, the hypervisor default is used. Since 0.8.7
bios
This element has attribute useserial with possible values yes or no. It enables or disables the Serial Graphics Adapter, which allows users to see BIOS messages on a serial port; therefore, one needs to have a serial port defined. Since 0.9.4. Since 0.10.2 (QEMU only) there is another attribute, rebootTimeout, that controls whether and after how long the guest should start booting again in case the boot fails (according to the BIOS). The value is in milliseconds with a maximum of 65535, and the special value -1 disables the reboot.
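
As a sketch of per-device boot ordering (the disk path and network name are placeholders, not taken from this document):

  ...
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/guest.img'/>
      <target dev='vda' bus='virtio'/>
      <boot order='1'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
      <boot order='2'/>
    </interface>
  </devices>
  ...

With this configuration the guest tries the vda disk first and falls back to booting from the network interface; no boot dev elements should appear in the os element, since the two forms are mutually exclusive.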

Host bootloader

Hypervisors employing paravirtualization do not usually emulate a BIOS; instead, the host is responsible for kicking off the operating system boot. This may use a pseudo-bootloader in the host to provide an interface to choose a kernel for the guest. An example is pygrub with Xen.

  ...
  <bootloader>/usr/bin/pygrub</bootloader>
  <bootloader_args>--append single</bootloader_args>
  ...
bootloader
The content of the bootloader element provides a fully qualified path to the bootloader executable in the host OS. This bootloader will be run to choose which kernel to boot. The required output of the bootloader is dependent on the hypervisor in use. Since 0.1.0
bootloader_args
The optional bootloader_args element allows command line arguments to be passed to the bootloader. Since 0.2.3

Direct kernel boot

When installing a new guest OS it is often useful to boot directly from a kernel and initrd stored in the host OS, allowing command line arguments to be passed directly to the installer. This capability is usually available for both paravirtualized and fully virtualized guests.

  ...
  <os>
    <type>hvm</type>
    <loader>/usr/lib/xen/boot/hvmloader</loader>
    <kernel>/root/f8-i386-vmlinuz</kernel>
    <initrd>/root/f8-i386-initrd</initrd>
    <cmdline>console=ttyS0 ks=http://example.com/f8-i386/os/</cmdline>
    <dtb>/root/ppc.dtb</dtb>
  </os>
  ...
type
This element has the same semantics as described earlier in the BIOS boot section.
loader
This element has the same semantics as described earlier in the BIOS boot section.
kernel
The contents of this element specify the fully-qualified path to the kernel image in the host OS.
initrd
The contents of this element specify the fully-qualified path to the (optional) ramdisk image in the host OS.
cmdline
The contents of this element specify arguments to be passed to the kernel (or installer) at boot time. This is often used to specify an alternate primary console (e.g. serial port), or the installation media source / kickstart file.
dtb
The contents of this element specify the fully-qualified path to the (optional) device tree binary (dtb) image in the host OS. Since 1.0.4

Container boot

When booting a domain using container-based virtualization, instead of a kernel / boot image, a path to the init binary is required, using the init element. By default this will be launched with no arguments. To specify the initial argv, use the initarg element, repeated as many times as required. The cmdline element, if set, will be used to provide an equivalent to /proc/cmdline but will not affect init argv.

  <os>
    <type arch='x86_64'>exe</type>
    <init>/bin/systemd</init>
    <initarg>--unit</initarg>
    <initarg>emergency.service</initarg>
  </os>
    

If you want to enable user namespaces, set the idmap element. The uid and gid elements have three attributes:

start
First user ID in the container.
target
The first user ID in the container will be mapped to this target user ID on the host.
count
How many users in the container are allowed to map to the host's users.
  <idmap>
    <uid start='0' target='1000' count='10'/>
    <gid start='0' target='1000' count='10'/>
  </idmap>
    

SMBIOS System Information

Some hypervisors allow control over what system information is presented to the guest (for example, SMBIOS fields can be populated by a hypervisor and inspected via the dmidecode command in the guest). The optional sysinfo element covers all such categories of information. Since 0.8.7

  ...
  <os>
    <smbios mode='sysinfo'/>
    ...
  </os>
  <sysinfo type='smbios'>
    <bios>
      <entry name='vendor'>LENOVO</entry>
    </bios>
    <system>
      <entry name='manufacturer'>Fedora</entry>
      <entry name='product'>Virt-Manager</entry>
      <entry name='version'>0.9.4</entry>
    </system>
  </sysinfo>
  ...

The sysinfo element has a mandatory attribute type that determines the layout of sub-elements, with supported values of:

smbios
Sub-elements call out specific SMBIOS values, which will affect the guest if used in conjunction with the smbios sub-element of the os element. Each sub-element of sysinfo names a SMBIOS block, and within those elements can be a list of entry elements that describe a field within the block. The following blocks and entries are recognized:
bios
This is block 0 of SMBIOS, with entry names drawn from:
vendor
BIOS Vendor's Name
version
BIOS Version
date
BIOS release date. If supplied, it must be in either mm/dd/yy or mm/dd/yyyy format. If the year portion of the string is two digits, the year is assumed to be 19yy.
release
System BIOS Major and Minor release number values concatenated together as one string separated by a period, for example, 10.22.
system
This is block 1 of SMBIOS, with entry names drawn from:
manufacturer
Manufacturer of the system
product
Product Name
version
Version of the product
serial
Serial number
uuid
Universal Unique ID number. If this entry is provided alongside a top-level uuid element, then the two values must match.
sku
SKU number to identify a particular configuration.
family
Identify the family a particular computer belongs to.
NB: Incorrectly supplied entries in either the bios or system blocks will be ignored without error. Other than uuid validation and date format checking, all values are passed as strings to the hypervisor driver.
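
A fuller sysinfo sketch touching each documented entry (all values shown are illustrative only; the uuid entry, if present, must match the domain's top-level uuid element):

  <sysinfo type='smbios'>
    <bios>
      <entry name='vendor'>LENOVO</entry>
      <entry name='version'>6FET82WW</entry>
      <entry name='date'>01/01/2011</entry>
      <entry name='release'>10.22</entry>
    </bios>
    <system>
      <entry name='manufacturer'>Fedora</entry>
      <entry name='product'>Virt-Manager</entry>
      <entry name='version'>0.9.4</entry>
      <entry name='serial'>1234567890</entry>
      <entry name='uuid'>3e3fce45-4f53-4fa7-bb32-11f34168b82b</entry>
      <entry name='sku'>1234</entry>
      <entry name='family'>Virtual Machines</entry>
    </system>
  </sysinfo>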

CPU Allocation

<domain>
  ...
  <vcpu placement='static' cpuset="1-4,^3,6" current="1">2</vcpu>
  ...
</domain>
vcpu
The content of this element defines the maximum number of virtual CPUs allocated for the guest OS, which must be between 1 and the maximum supported by the hypervisor. Since 0.4.4, this element can contain an optional cpuset attribute, which is a comma-separated list of physical CPU numbers that the domain process and virtual CPUs can be pinned to by default. (NB: The pinning policy of the domain process and virtual CPUs can be specified separately by cputune. If the emulatorpin attribute of cputune is specified, the cpuset specified by vcpu here will be ignored; similarly, virtual CPUs that have vcpupin specified ignore the cpuset given here, while virtual CPUs without vcpupin are pinned to the physical CPUs specified by this cpuset.) Each element in that list is either a single CPU number, a range of CPU numbers, or a caret followed by a CPU number to be excluded from a previous range. Since 0.8.5, the optional attribute current can be used to specify whether fewer than the maximum number of virtual CPUs should be enabled. Since 0.9.11 (QEMU and KVM only), the optional attribute placement can be used to indicate the CPU placement mode for the domain process; its value can be either "static" or "auto", and it defaults to the placement of numatune, or to "static" if cpuset is specified. "auto" indicates the domain process will be pinned to the advisory nodeset from querying numad, and the value of the cpuset attribute will be ignored if it's specified. If neither cpuset nor placement is specified, or if placement is "static" but no cpuset is specified, the domain process will be pinned to all the available physical CPUs.
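
A sketch contrasting the two placement modes (only one vcpu element would appear in a real configuration; the values are illustrative):

  <!-- static placement: pin to CPUs 1,2,4,6 and start with 2 of 4 vCPUs online -->
  <vcpu placement='static' cpuset='1-4,^3,6' current='2'>4</vcpu>

  <!-- auto placement: any cpuset is ignored, pinning follows the advisory nodeset from numad -->
  <vcpu placement='auto'>4</vcpu>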

CPU Tuning

<domain>
  ...
  <cputune>
    <vcpupin vcpu="0" cpuset="1-4,^2"/>
    <vcpupin vcpu="1" cpuset="0,1"/>
    <vcpupin vcpu="2" cpuset="2,3"/>
    <vcpupin vcpu="3" cpuset="0,4"/>
    <emulatorpin cpuset="1-3"/>
    <shares>2048</shares>
    <period>1000000</period>
    <quota>-1</quota>
    <emulator_period>1000000</emulator_period>
    <emulator_quota>-1</emulator_quota>
  </cputune>
  ...
</domain>
cputune
The optional cputune element provides details regarding the cpu tunable parameters for the domain. Since 0.9.0
vcpupin
The optional vcpupin element specifies which of the host's physical CPUs the domain vCPU will be pinned to. If this is omitted, and the cpuset attribute of element vcpu is not specified, the vCPU is pinned to all the physical CPUs by default. It contains two required attributes: the attribute vcpu specifies the vcpu id, and the attribute cpuset is the same as the cpuset attribute of element vcpu. (NB: only supported by the QEMU driver) Since 0.9.0
emulatorpin
The optional emulatorpin element specifies which of the host's physical CPUs the "emulator" (a subset of the domain, not including vcpu threads) will be pinned to. If this is omitted, and the cpuset attribute of element vcpu is not specified, the "emulator" is pinned to all the physical CPUs by default. It contains one required attribute cpuset specifying which physical CPUs to pin to. NB, emulatorpin is not allowed if the placement attribute of element vcpu is "auto".
shares
The optional shares element specifies the proportional weighted share for the domain. If this is omitted, it defaults to the OS provided defaults. NB, there is no unit for the value; it's a relative measure based on the setting of other VMs, e.g. a VM configured with value 2048 will get twice as much CPU time as a VM configured with value 1024. Since 0.9.0
period
The optional period element specifies the enforcement interval (unit: microseconds). Within period, each vcpu of the domain will not be allowed to consume more than quota worth of runtime. The value should be in the range [1000, 1000000]. A period with value 0 means no value. Only supported by the QEMU driver since 0.9.4, LXC since 0.9.10
quota
The optional quota element specifies the maximum allowed bandwidth (unit: microseconds). A domain with quota set to any negative value indicates that the domain has infinite bandwidth, which means that it is not bandwidth controlled. The value should be in the range [1000, 18446744073709551] or less than 0. A quota with value 0 means no value. You can use this feature to ensure that all vcpus run at the same speed (see the sketch after this list). Only supported by the QEMU driver since 0.9.4, LXC since 0.9.10
emulator_period
The optional emulator_period element specifies the enforcement interval (unit: microseconds). Within emulator_period, emulator threads (those excluding vcpus) of the domain will not be allowed to consume more than emulator_quota worth of runtime. The value should be in the range [1000, 1000000]. A period with value 0 means no value. Only supported by the QEMU driver since 0.10.0
emulator_quota
The optional emulator_quota element specifies the maximum allowed bandwidth (unit: microseconds) for the domain's emulator threads (those excluding vcpus). A domain with emulator_quota set to any negative value indicates that the domain has infinite bandwidth for emulator threads (those excluding vcpus), which means that it is not bandwidth controlled. The value should be in the range [1000, 18446744073709551] or less than 0. A quota with value 0 means no value. Only supported by the QEMU driver since 0.10.0
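
Given the semantics above, the ratio quota/period is effectively the fraction of one host CPU each vCPU may consume. A sketch capping each vCPU at roughly half a CPU:

  <cputune>
    <!-- each vCPU may run at most 500000us in every 1000000us interval, i.e. ~50% of one host CPU -->
    <period>1000000</period>
    <quota>500000</quota>
  </cputune>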

Memory Allocation

<domain>
  ...
  <memory unit='KiB'>524288</memory>
  <currentMemory unit='KiB'>524288</currentMemory>
  ...
</domain>
memory
The maximum allocation of memory for the guest at boot time. The units for this value are determined by the optional attribute unit, which defaults to "KiB" (kibibytes, 2^10 or blocks of 1024 bytes). Valid units are "b" or "bytes" for bytes, "KB" for kilobytes (10^3 or 1,000 bytes), "k" or "KiB" for kibibytes (1024 bytes), "MB" for megabytes (10^6 or 1,000,000 bytes), "M" or "MiB" for mebibytes (2^20 or 1,048,576 bytes), "GB" for gigabytes (10^9 or 1,000,000,000 bytes), "G" or "GiB" for gibibytes (2^30 or 1,073,741,824 bytes), "TB" for terabytes (10^12 or 1,000,000,000,000 bytes), or "T" or "TiB" for tebibytes (2^40 or 1,099,511,627,776 bytes). However, the value will be rounded up to the nearest kibibyte by libvirt, and may be further rounded to the granularity supported by the hypervisor. Some hypervisors also enforce a minimum, such as 4000KiB. In the case of a crash, the optional attribute dumpCore can be used to control whether the guest memory should be included in the generated coredump or not (values "on", "off"). See the sketch after this list for unit equivalences. unit since 0.9.11, dumpCore since 0.10.2 (QEMU only)
currentMemory
The actual allocation of memory for the guest. This value can be less than the maximum allocation, to allow for ballooning up the guest's memory on the fly. If this is omitted, it defaults to the same value as the memory element. The unit attribute behaves the same as for memory.
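
For instance, the following specifications all request the same 512 MiB allocation; only one memory element would appear in a real configuration:

  <memory unit='KiB'>524288</memory>
  <memory unit='MiB'>512</memory>
  <memory unit='bytes'>536870912</memory>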

Memory Backing

<domain>
  ...
  <memoryBacking>
    <hugepages/>
    <nosharepages/>
    <locked/>
  </memoryBacking>
  ...
</domain>

The optional memoryBacking element may contain several elements that influence how virtual memory pages are backed by host pages.

hugepages
This tells the hypervisor that the guest should have its memory allocated using hugepages instead of the normal native page size.
nosharepages
Instructs hypervisor to disable shared pages (memory merge, KSM) for this domain. Since 1.0.6
locked
When set and supported by the hypervisor, memory pages belonging to the domain will be locked in the host's memory and the host will not be allowed to swap them out. For QEMU/KVM this requires the hard_limit memory tuning element to be used and set to the maximum memory configured for the domain plus any memory consumed by the QEMU process itself. Since 1.0.6

Memory Tuning

<domain>
  ...
  <memtune>
    <hard_limit unit='G'>1</hard_limit>
    <soft_limit unit='M'>128</soft_limit>
    <swap_hard_limit unit='G'>2</swap_hard_limit>
    <min_guarantee unit='bytes'>67108864</min_guarantee>
  </memtune>
  ...
</domain>
memtune
The optional memtune element provides details regarding the memory tunable parameters for the domain. If this is omitted, it defaults to the OS provided defaults. For QEMU/KVM, the parameters are applied to the QEMU process as a whole. Thus, when counting them, one needs to add up guest RAM, guest video RAM, and some memory overhead of QEMU itself. The last piece is hard to determine so one needs to guess and try. For each tunable, it is possible to designate which unit the number is in on input, using the same values as for <memory>. For backwards compatibility, output is always in KiB. unit since 0.9.11
hard_limit
The optional hard_limit element is the maximum memory the guest can use. The units for this value are kibibytes (i.e. blocks of 1024 bytes). However, users of QEMU and KVM are strongly advised not to set this limit as the domain may get killed by the kernel if the value is too low (determining the memory needed for a process to run is an undecidable problem).
soft_limit
The optional soft_limit element is the memory limit to enforce during memory contention. The units for this value are kibibytes (i.e. blocks of 1024 bytes)
swap_hard_limit
The optional swap_hard_limit element is the maximum memory plus swap the guest can use. The units for this value are kibibytes (i.e. blocks of 1024 bytes). This has to be more than the hard_limit value provided.
min_guarantee
The optional min_guarantee element is the guaranteed minimum memory allocation for the guest. The units for this value are kibibytes (i.e. blocks of 1024 bytes)

NUMA Node Tuning

<domain>
  ...
  <numatune>
    <memory mode="strict" nodeset="1-4,^3"/>
  </numatune>
  ...
</domain>
numatune
The optional numatune element provides details of how to tune the performance of a NUMA host by controlling the NUMA policy for the domain process. NB, currently only supported by the QEMU driver. Since 0.9.3
memory
The optional memory element specifies how to allocate memory for the domain process on a NUMA host. It contains several optional attributes. Attribute mode is either 'interleave', 'strict', or 'preferred'; it defaults to 'strict'. Attribute nodeset specifies the NUMA nodes, using the same syntax as the cpuset attribute of element vcpu. Attribute placement (since 0.9.12) can be used to indicate the memory placement mode for the domain process; its value can be either "static" or "auto", and it defaults to the placement of vcpu, or to "static" if nodeset is specified. "auto" indicates the domain process will only allocate memory from the advisory nodeset returned from querying numad, and the value of the nodeset attribute will be ignored if it's specified. If the placement of vcpu is 'auto', and numatune is not specified, a default numatune with placement 'auto' and mode 'strict' will be added implicitly. Since 0.9.3

Block I/O Tuning

<domain>
  ...
  <blkiotune>
    <weight>800</weight>
    <device>
      <path>/dev/sda</path>
      <weight>1000</weight>
    </device>
    <device>
      <path>/dev/sdb</path>
      <weight>500</weight>
    </device>
  </blkiotune>
  ...
</domain>
blkiotune
The optional blkiotune element provides the ability to tune Blkio cgroup tunable parameters for the domain. If this is omitted, it defaults to the OS provided defaults. Since 0.8.8
weight
The optional weight element is the overall I/O weight of the guest. The value should be in the range [100, 1000]. After kernel 2.6.39, the value could be in the range [10, 1000].
device
The domain may have multiple device elements that further tune the weights for each host block device in use by the domain. Note that multiple guest disks can share a single host block device, if they are backed by files within the same host file system, which is why this tuning parameter is at the global domain level rather than associated with each guest disk device (contrast this to the <iotune> element which can apply to an individual <disk>). Each device element has two mandatory sub-elements, path describing the absolute path of the device, and weight giving the relative weight of that device, in the range [100, 1000]. After kernel 2.6.39, the value could be in the range [10, 1000]. Since 0.9.8

Resource partitioning

Hypervisors may allow for virtual machines to be placed into resource partitions, potentially with nesting of said partitions. The resource element groups together configuration related to resource partitioning. It currently supports a child element partition whose content defines the path of the resource partition in which to place the domain. If no partition is listed, then the domain will be placed in a default partition. It is the responsibility of the app/admin to ensure that the partition exists prior to starting the guest. Only the (hypervisor specific) default partition can be assumed to exist by default.

  ...
  <resource>
    <partition>/virtualmachines/production</partition>
  </resource>
  ...

Resource partitions are currently supported by the QEMU and LXC drivers, which map partition paths to cgroups directories, in all mounted controllers. Since 1.0.5

CPU model and topology

Requirements for CPU model, its features and topology can be specified using the following collection of elements. Since 0.7.5

  ...
  <cpu match='exact'>
    <model fallback='allow'>core2duo</model>
    <vendor>Intel</vendor>
    <topology sockets='1' cores='2' threads='1'/>
    <feature policy='disable' name='lahf_lm'/>
  </cpu>
  ...
  <cpu mode='host-model'>
    <model fallback='forbid'/>
    <topology sockets='1' cores='2' threads='1'/>
  </cpu>
  ...
  <cpu mode='host-passthrough'/>
  ...

In case no restrictions need to be put on CPU model and its features, a simpler cpu element can be used. Since 0.7.6

  ...
  <cpu>
    <topology sockets='1' cores='2' threads='1'/>
  </cpu>
  ...
cpu
The cpu element is the main container for describing guest CPU requirements. Its match attribute specifies how strictly the virtual CPU provided to the guest must match these requirements. Since 0.7.6 the match attribute can be omitted if topology is the only element within cpu. Possible values for the match attribute are:
minimum
The specified CPU model and features describe the minimum requested CPU.
exact
The virtual CPU provided to the guest will exactly match the specification.
strict
The guest will not be created unless the host CPU exactly matches the specification.
Since 0.8.5 the match attribute can be omitted and will default to exact. Since 0.9.10, an optional mode attribute may be used to make it easier to configure a guest CPU to be as close to host CPU as possible. Possible values for the mode attribute are:
custom
In this mode, the cpu element describes the CPU that should be presented to the guest. This is the default when no mode attribute is specified. This mode makes it so that a persistent guest will see the same hardware no matter what host the guest is booted on.
host-model
The host-model mode is essentially a shortcut to copying the host CPU definition from capabilities XML into domain XML. Since the CPU definition is copied just before starting a domain, exactly the same XML can be used on different hosts while still providing the best guest CPU each host supports. The match attribute can't be used in this mode. Specifying a CPU model is not supported either, but the model's fallback attribute may still be used. Using the feature element, specific flags may be enabled or disabled in addition to the host model. This may be used to fine-tune features that can be emulated. (Since 1.1.1) Libvirt does not model every aspect of each CPU so the guest CPU will not match the host CPU exactly. On the other hand, the ABI provided to the guest is reproducible. During migration, the complete CPU model definition is transferred to the destination host so the migrated guest will see exactly the same CPU model for the running instance even if the destination host contains more capable CPUs; but shutting down and restarting the guest may present different hardware to the guest according to the capabilities of the new host. Beware, due to the way libvirt detects the host CPU and due to the fact that libvirt does not talk to QEMU/KVM when creating the CPU model, a CPU configuration created using host-model may not work as expected. The guest CPU may differ from the configuration and it may also confuse the guest OS by using a combination of CPU features and other parameters (such as CPUID level) that don't work. Until these issues are fixed, it's a good idea to avoid using host-model and use custom mode with just the CPU model from the host capabilities XML.
host-passthrough
With this mode, the CPU visible to the guest should be exactly the same as the host CPU, even in aspects that libvirt does not understand. The downside of this mode is that the guest environment cannot be reproduced on different hardware; thus, if you hit any bugs, you are on your own. Neither model nor feature elements are allowed in this mode.
In both host-model and host-passthrough mode, the real (approximate in host-passthrough mode) CPU definition which would be used on the current host can be determined by specifying the VIR_DOMAIN_XML_UPDATE_CPU flag when calling the virDomainGetXMLDesc API. When running a guest that might be prone to operating system reactivation when presented with different hardware, and which will be migrated between hosts with different capabilities, you can use this output to rewrite the XML to the custom mode for more robust migration.
model
The content of the model element specifies the CPU model requested by the guest. The list of available CPU models and their definition can be found in the cpu_map.xml file installed in libvirt's data directory. If a hypervisor is not able to use the exact CPU model, libvirt automatically falls back to the closest model supported by the hypervisor while maintaining the list of CPU features. Since 0.9.10, an optional fallback attribute can be used to forbid this behavior, in which case an attempt to start a domain requesting an unsupported CPU model will fail. Supported values for the fallback attribute are: allow (this is the default), and forbid. The optional vendor_id attribute (Since 0.10.0) can be used to set the vendor id seen by the guest. It must be exactly 12 characters long. If not set, the vendor id of the host is used. Typical possible values are "AuthenticAMD" and "GenuineIntel" (see the sketch after this list).
vendor
Since 0.8.3 the content of the vendor element specifies the CPU vendor requested by the guest. If this element is missing, the guest can be run on a CPU matching the given features regardless of its vendor. The list of supported vendors can be found in cpu_map.xml.
topology
The topology element specifies the requested topology of the virtual CPU provided to the guest. Three non-zero values have to be given for sockets, cores, and threads: the total number of CPU sockets, the number of cores per socket, and the number of threads per core, respectively.
feature
The cpu element can contain zero or more feature elements used to fine-tune features provided by the selected CPU model. The list of known feature names can be found in the same file as CPU models. The meaning of each feature element depends on its policy attribute, which has to be set to one of the following values:
force
The virtual CPU will claim the feature is supported regardless of it being supported by the host CPU.
require
Guest creation will fail unless the feature is supported by the host CPU.
optional
The feature will be supported by the virtual CPU if and only if it is supported by the host CPU.
disable
The feature will not be supported by the virtual CPU.
forbid
Guest creation will fail if the feature is supported by the host CPU.
Since 0.8.5 the policy attribute can be omitted and will default to require.
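
A sketch combining the fallback and vendor_id attributes with an explicit feature policy (the model name "SandyBridge" and the feature name "vmx" are assumed to exist in cpu_map.xml):

  <cpu match='exact'>
    <model fallback='forbid' vendor_id='GenuineIntel'>SandyBridge</model>
    <feature policy='require' name='vmx'/>
  </cpu>

With fallback='forbid', starting the domain fails if the hypervisor cannot provide the requested model exactly.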

Guest NUMA topology can be specified using the numa element. Since 0.9.8

  ...
  <cpu>
    ...
    <numa>
      <cell cpus='0-3' memory='512000'/>
      <cell cpus='4-7' memory='512000'/>
    </numa>
    ...
  </cpu>
  ...

Each cell element specifies a NUMA cell or a NUMA node. cpus specifies the CPU or range of CPUs that are part of the node. memory specifies the node memory in kibibytes (i.e. blocks of 1024 bytes). Each cell or node is assigned a cellid or nodeid in increasing order starting from 0.

This guest NUMA specification is currently available only for QEMU/KVM.

Events configuration

It is sometimes necessary to override the default actions taken on various events. Not all hypervisors support all events and actions. The actions may be taken as a result of calls to libvirt APIs virDomainReboot, virDomainShutdown, or virDomainShutdownFlags. Using virsh reboot or virsh shutdown would also trigger the event.

  ...
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <on_lockfailure>poweroff</on_lockfailure>
  ...

The following collections of elements allow the actions to be specified when a guest OS triggers a lifecycle operation. A common use case is to force a reboot to be treated as a poweroff when doing the initial OS installation. This allows the VM to be re-configured for the first post-install bootup.

on_poweroff
The content of this element specifies the action to take when the guest requests a poweroff.
on_reboot
The content of this element specifies the action to take when the guest requests a reboot.
on_crash
The content of this element specifies the action to take when the guest crashes.

Each of these states allows for the same four possible actions.

destroy
The domain will be terminated completely and all resources released.
restart
The domain will be terminated and then restarted with the same configuration.
preserve
The domain will be terminated and its resources preserved to allow analysis.
rename-restart
The domain will be terminated and then restarted with a new name.

QEMU/KVM supports the on_poweroff and on_reboot events handling the destroy and restart actions. The preserve action for an on_reboot event is treated as a destroy and the rename-restart action for an on_poweroff event is treated as a restart event.

The on_crash event supports these additional actions since 0.8.4.

coredump-destroy
The crashed domain's core will be dumped, and then the domain will be terminated completely and all resources released
coredump-restart
The crashed domain's core will be dumped, and then the domain will be restarted with the same configuration

The on_lockfailure element (since 1.0.0) may be used to configure what action should be taken when a lock manager loses resource locks. The following actions are recognized by libvirt, although not all of them need to be supported by individual lock managers. When no action is specified, each lock manager will take its default action.

poweroff
The domain will be forcefully powered off.
restart
The domain will be powered off and started up again to reacquire its locks.
pause
The domain will be paused so that it can be manually resumed when lock issues are solved.
ignore
Keep the domain running as if nothing happened.

Power Management

Since 0.10.2 it is possible to forcibly enable or disable BIOS advertisements to the guest OS. (NB: only supported by the QEMU driver)

  ...
  <pm>
    <suspend-to-disk enabled='no'/>
    <suspend-to-mem enabled='yes'/>
  </pm>
  ...
pm
These elements enable ('yes') or disable ('no') BIOS support for S3 (suspend-to-mem) and S4 (suspend-to-disk) ACPI sleep states. If nothing is specified, then the hypervisor will be left with its default value.

Hypervisor features

Hypervisors may allow certain CPU / machine features to be toggled on/off.

  ...
  <features>
    <pae/>
    <acpi/>
    <apic/>
    <hap/>
    <privnet/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='4096'/>
    </hyperv>
    <pvspinlock/>

  </features>
  ...

All features are listed within the features element; omitting a togglable feature tag turns it off. The available features can be found by asking for the capabilities XML, but a common set for fully virtualized domains are:

pae
Physical address extension mode allows 32-bit guests to address more than 4 GB of memory.
acpi
ACPI is useful for power management, for example, with KVM guests it is required for graceful shutdown to work.
apic
APIC allows the use of programmable IRQ management. Since 0.10.2 (QEMU only) there is an optional attribute eoi with values on and off which toggles the availability of EOI (End of Interrupt) for the guest.
hap
Enable use of Hardware Assisted Paging if available in the hardware.
viridian
Enable Viridian hypervisor extensions for paravirtualizing guest operating systems
privnet
Always create a private network namespace. This is automatically set if any interface devices are defined. This feature is only relevant for container based virtualization drivers, such as LXC.
hyperv
Enable various features improving behavior of guests running Microsoft Windows.
Feature     Description                   Value                               Since
relaxed     Relax constraints on timers   on, off                             1.0.0 (QEMU only)
vapic       Enable virtual APIC           on, off                             1.1.0 (QEMU only)
spinlocks   Enable spinlock support       on, off; retries - at least 4095    1.1.0 (QEMU only)
pvspinlock
Notify the guest that the host supports paravirtual spinlocks, for example by exposing the pvticketlocks mechanism. This feature can be explicitly disabled by using the state='off' attribute.
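
A sketch toggling some of the optional attributes described above:

  <features>
    <acpi/>
    <apic eoi='on'/>
    <hyperv>
      <relaxed state='on'/>
      <spinlocks state='on' retries='8191'/>
    </hyperv>
    <pvspinlock state='off'/>
  </features>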

Time keeping

The guest clock is typically initialized from the host clock. Most operating systems expect the hardware clock to be kept in UTC, and this is the default. Windows, however, expects it to be in so-called 'localtime'.

  ...
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup' track='guest'>
      <catchup threshold='123' slew='120' limit='10000'/>
    </timer>
    <timer name='pit' tickpolicy='delay'/>
  </clock>
  ...
clock

The offset attribute takes four possible values, allowing fine grained control over how the guest clock is synchronized to the host. NB, not all hypervisors support all modes.

utc
The guest clock will always be synchronized to UTC when booted. Since 0.9.11 'utc' mode can be converted to 'variable' mode, which can be controlled by using the adjustment attribute. If the value is 'reset', the conversion is never done (not all hypervisors can synchronize to UTC on each boot; use of 'reset' will cause an error on those hypervisors). A numeric value forces the conversion to 'variable' mode using the value as the initial adjustment. The default adjustment is hypervisor specific.
localtime
The guest clock will be synchronized to the host's configured timezone when booted, if any. Since 0.9.11, the adjustment attribute behaves the same as in 'utc' mode.
timezone
The guest clock will be synchronized to the requested timezone using the timezone attribute. Since 0.7.7
variable
The guest clock will have an arbitrary offset applied relative to UTC or localtime, depending on the basis attribute. The delta relative to UTC (or localtime) is specified in seconds, using the adjustment attribute. The guest is free to adjust the RTC over time and expect that it will be honored at next reboot. This is in contrast to 'utc' and 'localtime' mode (with the optional attribute adjustment='reset'), where the RTC adjustments are lost at each reboot. Since 0.7.7 Since 0.9.11 the basis attribute can be either 'utc' (default) or 'localtime'.
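
For example, a guest whose RTC should start one hour ahead of UTC could use a sketch like:

  <clock offset='variable' basis='utc' adjustment='3600'/>

The adjustment is given in seconds, so 3600 places the guest clock exactly one hour ahead of UTC at boot.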

A clock may have zero or more timer sub-elements. Since 0.8.0

timer

Each timer element requires a name attribute, and has other optional attributes that depend on the name specified. Various hypervisors support different combinations of attributes.

name
The name attribute selects which timer is being modified, and can be one of "platform" (currently unsupported), "hpet" (libxl, xen, qemu), "kvmclock" (qemu), "pit" (qemu), "rtc" (qemu), "tsc" (libxl) or "hypervclock" (qemu - since 1.2.2). The hypervclock timer adds support for the reference time counter and the reference page for iTSC feature for guests running the Microsoft Windows operating system.
track
The track attribute specifies what the timer tracks, and can be "boot", "guest", or "wall". Only valid for name="rtc" or name="platform".
tickpolicy

The tickpolicy attribute determines what happens when QEMU misses a deadline for injecting a tick to the guest:

delay
Continue to deliver ticks at the normal rate. The guest time will be delayed due to the late tick.
catchup
Deliver ticks at a higher rate to catch up with the missed tick. The guest time should not be delayed once catchup is complete.
merge
Merge the missed tick(s) into one tick and inject. The guest time may be delayed, depending on how the OS reacts to the merging of ticks.
discard
Throw away the missed tick(s) and continue with future injection normally. The guest time may be delayed, unless the OS has explicit handling of lost ticks.

If the policy is "catchup", there can be further details in the catchup sub-element.

catchup
The catchup element has three optional attributes, each a positive integer. The attributes are threshold, slew, and limit.

Note that hypervisors are not required to support all policies across all time sources

frequency
The frequency attribute is an unsigned integer specifying the frequency at which name="tsc" runs.
mode
The mode attribute controls how the name="tsc" timer is managed, and can be "auto", "native", "emulate", "paravirt", or "smpsafe". Other timers are always emulated.
present
The present attribute can be "yes" or "no" to specify whether a particular timer is available to the guest.
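
A sketch combining several timer sub-elements; whether a given combination is accepted depends on the hypervisor:

  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup' track='guest'/>
    <timer name='hpet' present='no'/>
    <timer name='tsc' mode='native'/>
  </clock>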

Devices

The final set of XML elements are all used to describe devices provided to the guest domain. All devices occur as children of the main devices element. Since 0.1.3

  ...
  <devices>
    <emulator>/usr/lib/xen/bin/qemu-dm</emulator>
  </devices>
  ...
emulator
The contents of the emulator element specify the fully qualified path to the device model emulator binary. The capabilities XML specifies the recommended default emulator to use for each particular domain type / architecture combination.

Hard drives, floppy disks, CDROMs

Any device that looks like a disk, be it a floppy, hard disk, cdrom, or paravirtualized driver, is specified via the disk element.

  ...
  <devices>
    <disk type='file' snapshot='external'>
      <driver name="tap" type="aio" cache="default"/>
      <source file='/var/lib/xen/images/fv0' startupPolicy='optional'>
        <seclabel relabel='no'/>
      </source>
      <target dev='hda' bus='ide'/>
      <iotune>
        <total_bytes_sec>10000000</total_bytes_sec>
        <read_iops_sec>400000</read_iops_sec>
        <write_iops_sec>100000</write_iops_sec>
      </iotune>
      <boot order='2'/>
      <encryption type='...'>
        ...
      </encryption>
      <shareable/>
      <serial>
        ...
      </serial>
    </disk>
      ...
    <disk type='network'>
      <driver name="qemu" type="raw" io="threads" ioeventfd="on" event_idx="off"/>
      <source protocol="sheepdog" name="image_name">
        <host name="hostname" port="7000"/>
      </source>
      <target dev="hdb" bus="ide"/>
      <boot order='1'/>
      <transient/>
      <address type='drive' controller='0' bus='1' unit='0'/>
    </disk>
    <disk type='network'>
      <driver name="qemu" type="raw"/>
      <source protocol="rbd" name="image_name2">
        <host name="hostname" port="7000"/>
      </source>
      <target dev="hdd" bus="ide"/>
      <auth username='myuser'>
        <secret type='ceph' usage='mypassid'/>
      </auth>
    </disk>
    <disk type='block' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <target dev='hdc' bus='ide' tray='open'/>
      <readonly/>
    </disk>
    <disk type='network' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source protocol="http" name="url_path">
        <host name="hostname" port="80"/>
      </source>
      <target dev='hdc' bus='ide' tray='open'/>
      <readonly/>
    </disk>
    <disk type='network' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source protocol="https" name="url_path">
        <host name="hostname" port="443"/>
      </source>
      <target dev='hdc' bus='ide' tray='open'/>
      <readonly/>
    </disk>
    <disk type='network' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source protocol="ftp" name="url_path">
        <host name="hostname" port="21"/>
      </source>
      <target dev='hdc' bus='ide' tray='open'/>
      <readonly/>
    </disk>
    <disk type='network' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source protocol="ftps" name="url_path">
        <host name="hostname" port="990"/>
      </source>
      <target dev='hdc' bus='ide' tray='open'/>
      <readonly/>
    </disk>
    <disk type='network' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source protocol="tftp" name="url_path">
        <host name="hostname" port="69"/>
      </source>
      <target dev='hdc' bus='ide' tray='open'/>
      <readonly/>
    </disk>
    <disk type='block' device='lun'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/sda'/>
      <target dev='sda' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='3' unit='0'/>
    </disk>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/sda'/>
      <geometry cyls='16383' heads='16' secs='63' trans='lba'/>
      <blockio logical_block_size='512' physical_block_size='4096'/>
      <target dev='hda' bus='ide'/>
    </disk>
    <disk type='volume' device='disk'>
      <driver name='qemu' type='raw'/>
      <source pool='blk-pool0' volume='blk-pool0-vol0'/>
      <target dev='hda' bus='ide'/>
    </disk>
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <source protocol='iscsi' name='iqn.2013-07.com.example:iscsi-nopool/2'>
        <host name='example.com' port='3260'/>
      </source>
      <auth username='myuser'>
        <secret type='chap' usage='libvirtiscsi'/>
      </auth>
      <target dev='vda' bus='virtio'/>
    </disk>
    <disk type='network' device='lun'>
      <driver name='qemu' type='raw'/>
      <source protocol='iscsi' name='iqn.2013-07.com.example:iscsi-nopool/1'>
        <host name='example.com' port='3260'/>
      </source>
      <auth username='myuser'>
        <secret type='chap' usage='libvirtiscsi'/>
      </auth>
      <target dev='sda' bus='scsi'/>
    </disk>
    <disk type='volume' device='disk'>
      <driver name='qemu' type='raw'/>
      <source pool='iscsi-pool' volume='unit:0:0:1' mode='host'/>
      <auth username='myuser'>
        <secret type='chap' usage='libvirtiscsi'/>
      </auth>
      <target dev='vda' bus='virtio'/>
    </disk>
    <disk type='volume' device='disk'>
      <driver name='qemu' type='raw'/>
      <source pool='iscsi-pool' volume='unit:0:0:2' mode='direct'/>
      <auth username='myuser'>
        <secret type='chap' usage='libvirtiscsi'/>
      </auth>
      <target dev='vda' bus='virtio'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/domain.qcow'/>
      <backingStore type='file'>
        <format type='qcow2'/>
        <source file='/var/lib/libvirt/images/snapshot.qcow'/>
        <backingStore type='block'>
          <format type='raw'/>
          <source dev='/dev/mapper/base'/>
          <backingStore/>
        </backingStore>
      </backingStore>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
  ...
disk
The disk element is the main container for describing disks (since 0.0.3).
type attribute since 0.0.3
Valid values are "file", "block", "dir" (since 0.7.5), "network" (since 0.8.7), or "volume" (since 1.0.5) and refer to the underlying source for the disk.
device attribute since 0.1.4
Indicates how the disk is to be exposed to the guest OS. Possible values for this attribute are "floppy", "disk", "cdrom", and "lun", defaulting to "disk".

Using "lun" (since 0.9.10) is only valid when type is "block", and behaves identically to "disk", except that generic SCSI commands from the guest are accepted and passed through to the physical device. Also note that device='lun' will only be recognized for actual raw devices, but never for individual partitions or LVM partitions (in those cases, the kernel will reject the generic SCSI commands, making it identical to device='disk').

rawio attribute since 0.9.10
Indicates whether the disk needs rawio capability; valid settings are "yes" or "no" (default is "no"). If any one disk in a domain has rawio='yes', rawio capability will be enabled for all disks in the domain (because, in the case of QEMU, this capability can only be set on a per-process basis). This attribute is only valid when device is "lun". NB, rawio intends to confine the capability per-device; however, the current QEMU implementation gives the domain process broader capability than that (per-process basis, affecting all the domain's disks). To confine the capability as much as possible for the QEMU driver at this stage, sgio is recommended; it's more secure than rawio.
sgio attribute since 1.0.2
Indicates whether the kernel will filter unprivileged SG_IO commands for the disk; valid settings are "filtered" or "unfiltered". Defaults to "filtered". Similar to rawio, sgio is only valid for device 'lun'.
snapshot attribute since 0.9.5
Indicates the default behavior of the disk during disk snapshots: "internal" requires a file format such as qcow2 that can store both the snapshot and the data changes since the snapshot; "external" will separate the snapshot from the live data; and "no" means the disk will not participate in snapshots. Read-only disks default to "no", while the default for other disks depends on the hypervisor's capabilities. Some hypervisors allow a per-snapshot choice as well, during domain snapshot creation. Not all snapshot modes are supported; for example, snapshot='yes' with a transient disk generally does not make sense.
source
Representation of the disk source depends on the disk type attribute value as follows:
type='file' since 0.0.3
The file attribute specifies the fully-qualified path to the file holding the disk.
type='block' since 0.0.3
The dev attribute specifies the path to the host device to serve as the disk.
type='dir' since 0.7.5
The dir attribute specifies the fully-qualified path to the directory to use as the disk.
type='network' since 0.8.7
The protocol attribute specifies the protocol to access to the requested image. Possible values are "nbd", "iscsi", "rbd", "sheepdog" or "gluster". If the protocol attribute is "rbd", "sheepdog" or "gluster", an additional attribute name is mandatory to specify which volume/image will be used. For "nbd", the name attribute is optional. For "iscsi" (since 1.0.4), the name attribute may include a logical unit number, separated from the target's name by a slash (e.g., iqn.2013-07.com.example:iscsi-pool/1). If not specified, the default LUN is zero.
type='volume' since 1.0.5
The underlying disk source is represented by attributes pool and volume. Attribute pool specifies the name of the storage pool (managed by libvirt) where the disk source resides. Attribute volume specifies the name of storage volume (managed by libvirt) used as the disk source. The value for the volume attribute will be the output from the "Name" column of a virsh vol-list [pool-name] command.

Use the attribute mode (since 1.1.1) to indicate how to represent the LUN as the disk source. Valid values are "direct" and "host". If mode is not specified, the default is to use "host". Using "direct" as the mode value indicates to use the storage pool's source element host attribute as the disk source to generate the libiscsi URI (e.g. 'file=iscsi://example.com:3260/iqn.2013-07.com.example:iscsi-pool/1'). Using "host" as the mode value indicates to use the LUN's path as it shows up on host (e.g. 'file=/dev/disk/by-path/ip-example.com:3260-iscsi-iqn.2013-07.com.example:iscsi-pool-lun-1').

With "file", "block", and "volume", one or more optional sub-elements seclabel, described below (and since 0.9.9), can be used to override the domain security labeling policy for just that source file. (NB, for "volume" type disk, seclabel is only valid when the specified storage volume is of 'file' or 'block' type).

When the disk type is "network", the source may have zero or more host sub-elements used to specify the hosts to connect to.

For a "file" or "volume" disk type which represents a cdrom or floppy (the device attribute), it is possible to define policy what to do with the disk if the source file is not accessible. (NB, startupPolicy is not valid for "volume" disk unless the specified storage volume is of "file" type). This is done by the startupPolicy attribute (since 0.9.7), accepting these values:

mandatory
fail if missing for any reason (the default)
requisite
fail if missing on boot up, drop if missing on migrate/restore/revert
optional
drop if missing at any start attempt

Since 1.1.2 the startupPolicy is extended to support hard disks besides cdrom and floppy. On guest cold bootup, if a certain disk is not accessible or its disk chain is broken, with startupPolicy 'optional' the guest will drop this disk. This feature doesn't support migration currently.

backingStore
This element describes the backing store used by the disk specified by sibling source element. It is currently ignored on input and only used for output to describe the detected backing chains. Since 1.2.4. An empty backingStore element means the sibling source is self-contained and is not based on any backing store. The following attributes and sub-elements are supported in backingStore:
type attribute
The type attribute represents the type of disk used by the backing store, see disk type attribute above for more details and possible values.
index attribute
This attribute is only valid in output (and ignored on input) and it can be used to refer to a specific part of the disk chain when doing block operations (such as via the virDomainBlockRebase API). For example, vda[2] refers to the backing store with index='2' of the disk with vda target.
format sub-element
The format element contains type attribute which specifies the internal format of the backing store, such as raw or qcow2.
source sub-element
This element has the same structure as the source element in disk. It specifies which file, device, or network location contains the data of the described backing store.
backingStore sub-element
If the backing store is not self-contained, the next element in the chain is described by nested backingStore element.
mirror
This element is present if the hypervisor has started a block copy operation (via the virDomainBlockCopy API), where the mirror location in attribute file will eventually have the same contents as the source, and with the file format in attribute format (which might differ from the format of the source). If attribute ready is present, then it is known the disk is ready to pivot; otherwise, the disk is probably still copying. For now, this element is only valid in output; it is ignored on input. Since 0.9.12
target
The target element controls the bus / device under which the disk is exposed to the guest OS. The dev attribute indicates the "logical" device name. The actual device name specified is not guaranteed to map to the device name in the guest OS. Treat it as a device ordering hint. The optional bus attribute specifies the type of disk device to emulate; possible values are driver specific, with typical values being "ide", "scsi", "virtio", "xen", "usb", "sata", or "sd" ("sd" since 1.1.2). If omitted, the bus type is inferred from the style of the device name (e.g. a device named 'sda' will typically be exported using a SCSI bus). The optional attribute tray indicates the tray status of removable disks (i.e. CDROM or Floppy disk); the value can be either "open" or "closed", and defaults to "closed". NB, the value of tray could be updated while the domain is running. The optional attribute removable sets the removable flag for USB disks, and its value can be either "on" or "off", defaulting to "off". Since 0.0.3; bus attribute since 0.4.3; tray attribute since 0.9.11; "usb" attribute value since after 0.4.4; "sata" attribute value since 0.9.7; "removable" attribute value since 1.1.3
iotune
The optional iotune element provides the ability to provide additional per-device I/O tuning, with values that can vary for each device (contrast this to the <blkiotune> element, which applies globally to the domain). Currently, the only tuning available is Block I/O throttling for qemu. This element has optional sub-elements; any sub-element not specified or given with a value of 0 implies no limit. Since 0.9.8
total_bytes_sec
The optional total_bytes_sec element is the total throughput limit in bytes per second. This cannot appear with read_bytes_sec or write_bytes_sec.
read_bytes_sec
The optional read_bytes_sec element is the read throughput limit in bytes per second.
write_bytes_sec
The optional write_bytes_sec element is the write throughput limit in bytes per second.
total_iops_sec
The optional total_iops_sec element is the total I/O operations per second. This cannot appear with read_iops_sec or write_iops_sec.
read_iops_sec
The optional read_iops_sec element is the read I/O operations per second.
write_iops_sec
The optional write_iops_sec element is the write I/O operations per second.
driver
The optional driver element allows specifying further details related to the hypervisor driver used to provide the disk. Since 0.1.8
  • If the hypervisor supports multiple backend drivers, then the name attribute selects the primary backend driver name, while the optional type attribute provides the sub-type. For example, xen supports a name of "tap", "tap2", "phy", or "file", with a type of "aio", while qemu only supports a name of "qemu", but multiple types including "raw", "bochs", "qcow2", and "qed".
  • The optional cache attribute controls the cache mechanism, possible values are "default", "none", "writethrough", "writeback", "directsync" (like "writethrough", but it bypasses the host page cache) and "unsafe" (host may cache all disk io, and sync requests from guest are ignored). Since 0.6.0, "directsync" since 0.9.5, "unsafe" since 0.9.7
  • The optional error_policy attribute controls how the hypervisor will behave on a disk read or write error, possible values are "stop", "report", "ignore", and "enospace". Since 0.8.0, "report" since 0.9.7 The default setting of error_policy is "report". There is also an optional rerror_policy that controls behavior for read errors only. Since 0.9.7. If no rerror_policy is given, error_policy is used for both read and write errors. If rerror_policy is given, it overrides the error_policy for read errors. Also note that "enospace" is not a valid policy for read errors, so if error_policy is set to "enospace" and no rerror_policy is given, the read error policy will be left at its default, which is "report".
  • The optional io attribute controls specific policies on I/O; qemu guests support "threads" and "native". Since 0.8.8
  • The optional ioeventfd attribute allows users to set domain I/O asynchronous handling for the disk device. The default is left to the discretion of the hypervisor. Accepted values are "on" and "off". Enabling this allows qemu to execute the VM while a separate thread handles I/O. Typically guests experiencing high system CPU utilization during I/O will benefit from this. On the other hand, on an overloaded host it could increase guest I/O latency. Since 0.9.3 (QEMU and KVM only) In general you should leave this option alone, unless you are very certain you know what you are doing.
  • The optional event_idx attribute controls some aspects of device event processing. The value can be either 'on' or 'off' - if it is on, it will reduce the number of interrupts and exits for the guest. The default is determined by QEMU; usually if the feature is supported, default is on. In case there is a situation where this behavior is suboptimal, this attribute provides a way to force the feature off. Since 0.9.5 (QEMU and KVM only) In general you should leave this option alone, unless you are very certain you know what you are doing.
  • The optional copy_on_read attribute controls whether to copy read backing file into the image file. The value can be either "on" or "off". Copy-on-read avoids accessing the same backing file sectors repeatedly and is useful when the backing file is over a slow network. By default copy-on-read is off. Since 0.9.10 (QEMU and KVM only)
  • The optional discard attribute controls whether discard requests (also known as "trim" or "unmap") are ignored or passed to the filesystem. The value can be either "unmap" (allow the discard request to be passed) or "ignore" (ignore the discard request). Since 1.0.6 (QEMU and KVM only)
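Combining several of the attributes above, a qcow2 disk served by the qemu backend with host caching disabled, native AIO, and discard passthrough might use a driver line such as the following (illustrative only):
  <driver name='qemu' type='qcow2' cache='none' io='native' discard='unmap'/>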
boot
Specifies that the disk is bootable. The order attribute determines the order in which devices will be tried during boot sequence. The per-device boot elements cannot be used together with general boot elements in BIOS bootloader section. Since 0.8.8
encryption
If present, specifies how the volume is encrypted. See the Storage Encryption page for more information.
readonly
If present, this indicates the device cannot be modified by the guest. For now, this is the default for disks with attribute device='cdrom'.
shareable
If present, this indicates the device is expected to be shared between domains (assuming the hypervisor and OS support this), which means that caching should be deactivated for that device.
transient
If present, this indicates that changes to the device contents should be reverted automatically when the guest exits. With some hypervisors, marking a disk transient prevents the domain from participating in migration or snapshots. Since 0.9.5
serial
If present, this element specifies the serial number of the virtual hard drive. For example, it may look like <serial>WD-WMAP9A966149</serial>. Since 0.7.1
wwn
If present, this element specifies the WWN (World Wide Name) of a virtual hard disk or CD-ROM drive. It must be composed of 16 hexadecimal digits. Since 0.10.1
vendor
If present, this element specifies the vendor of a virtual hard disk or CD-ROM device. It must not be longer than 8 printable characters. Since 1.0.1
product
If present, this element specifies the product of a virtual hard disk or CD-ROM device. It must not be longer than 16 printable characters. Since 1.0.1
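Purely as an illustration (all identifiers below are made up), these identification elements sit directly under the disk element:
  <disk type='file' device='disk'>
    ...
    <serial>WD-WMAP9A966149</serial>
    <wwn>0123456789abcdef</wwn>
    <vendor>Acme</vendor>
    <product>TurboDisk</product>
  </disk>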
host
The host element supports 4 attributes, viz. "name", "port", "transport" and "socket", which specify the hostname, the port number, transport type and path to socket, respectively. The meaning of this element and the number of the elements depend on the protocol attribute.
Protocol   Meaning                                                    Number of hosts   Default port
nbd        a server running nbd-server                                only one          10809
iscsi      an iSCSI server                                            only one          3260
rbd        monitor servers of RBD                                     one or more       6789
sheepdog   one of the sheepdog servers (default is localhost:7000)    zero or one       7000
gluster    a server running glusterd daemon                           only one          24007
gluster supports "tcp", "rdma", "unix" as valid values for the transport attribute. nbd supports "tcp" and "unix". Others only support "tcp". If nothing is specified, "tcp" is assumed. If the transport is "unix", the socket attribute specifies the path to an AF_UNIX socket.
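For example (host names and the pool/image name here are hypothetical), an RBD-backed network disk with two monitor hosts might be written as:
  <disk type='network' device='disk'>
    <driver name='qemu' type='raw'/>
    <source protocol='rbd' name='rbdpool/image'>
      <host name='mon1.example.org' port='6789'/>
      <host name='mon2.example.org' port='6789'/>
    </source>
    <target dev='vdb' bus='virtio'/>
  </disk>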
address
If present, the address element ties the disk to a given slot of a controller (the actual <controller> device can often be inferred by libvirt, although it can be explicitly specified). The type attribute is mandatory, and is typically "pci" or "drive". For a "pci" controller, additional attributes for bus, slot, and function must be present, as well as optional domain and multifunction. Multifunction defaults to 'off'; any other value requires QEMU 0.13 and libvirt 0.9.7. For a "drive" controller, additional attributes controller, bus, target (libvirt 0.9.11), and unit are available, each defaulting to 0.
auth
If present, the auth element provides the authentication credentials needed to access the source. It includes a mandatory attribute username, which identifies the username to use during authentication, as well as a sub-element secret with mandatory attribute type, to tie back to a libvirt secret object that holds the actual password or other credentials (the domain XML intentionally does not expose the password, only the reference to the object that does manage the password). For now, the known secret types are "ceph", for Ceph RBD network sources, and "iscsi", for CHAP authentication of iSCSI targets. Both require either a uuid attribute with the UUID of the secret object, or a usage attribute matching the key that was specified in the secret object. libvirt 0.9.7
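A minimal sketch of an auth element for a Ceph RBD source, assuming a libvirt secret object was defined with usage key 'mypassid' (both names are hypothetical):
  <auth username='admin'>
    <secret type='ceph' usage='mypassid'/>
  </auth>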
geometry
The optional geometry element provides the ability to override geometry settings; an example follows the attribute list below. This is mostly useful for S390 DASD-disks or older DOS-disks. Since 0.10.0
cyls
The cyls attribute is the number of cylinders.
heads
The heads attribute is the number of heads.
secs
The secs attribute is the number of sectors per track.
trans
The optional trans attribute is the BIOS translation mode (none, lba or auto).
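For example, overriding the geometry of a disk might look like this (the values are illustrative):
  <geometry cyls='16383' heads='16' secs='63' trans='lba'/>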
blockio
If present, the blockio element allows overriding any of the block device properties listed below (see the example after the list). Since 0.10.2 (QEMU and KVM)
logical_block_size
The logical block size the disk will report to the guest OS. For Linux this would be the value returned by the BLKSSZGET ioctl and describes the smallest units for disk I/O.
physical_block_size
The physical block size the disk will report to the guest OS. For Linux this would be the value returned by the BLKPBSZGET ioctl and describes the disk's hardware sector size which can be relevant for the alignment of disk data.
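For instance, a disk that should report 512-byte logical and 4096-byte physical sectors to the guest could carry:
  <blockio logical_block_size='512' physical_block_size='4096'/>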

Filesystems

A directory on the host that can be accessed directly from the guest. since 0.3.3, since 0.8.5 for QEMU/KVM

  ...
  <devices>
    <filesystem type='template'>
      <source name='my-vm-template'/>
      <target dir='/'/>
    </filesystem>
    <filesystem type='mount' accessmode='passthrough'>
      <driver type='path' wrpolicy='immediate'/>
      <source dir='/export/to/guest'/>
      <target dir='/import/from/host'/>
      <readonly/>
    </filesystem>
    <filesystem type='file' accessmode='passthrough'>
      <driver name='loop' type='raw'/>
      <driver type='path' wrpolicy='immediate'/>
      <source file='/export/to/guest.img'/>
      <target dir='/import/from/host'/>
      <readonly/>
    </filesystem>
    ...
  </devices>
  ...
filesystem
The filesystem attribute type specifies the type of the source. The possible values are:
type='mount'
A host directory to mount in the guest. Used by LXC, OpenVZ (since 0.6.2) and QEMU/KVM (since 0.8.5). This is the default type if one is not specified. This mode also has an optional sub-element driver, with an attribute type='path' or type='handle' (since 0.9.7). The driver block has an optional attribute wrpolicy that further controls interaction with the host page cache; omitting the attribute gives default behavior, while the value immediate means that a host writeback is immediately triggered for all pages touched during a guest file write operation (since 0.9.10).
type='template'
OpenVZ filesystem template. Only used by OpenVZ driver.
type='file'
A host file will be treated as an image and mounted in the guest. The filesystem format will be autodetected. Only used by LXC driver.
type='block'
A host block device to mount in the guest. The filesystem format will be autodetected. Only used by LXC driver (since 0.9.5).
type='ram'
An in-memory filesystem, using memory from the host OS. The source element has a single attribute usage which gives the memory usage limit in KiB, unless units are specified by the units attribute. Only used by LXC driver. (since 0.9.13)
type='bind'
A directory inside the guest will be bound to another directory inside the guest. Only used by LXC driver (since 0.9.13)
The filesystem block has an optional attribute accessmode which specifies the security mode for accessing the source (since 0.8.5). Currently this only works with type='mount' for the QEMU/KVM driver. The possible values are:
accessmode='passthrough'
The source is accessed with the permissions of the user inside the guest. This is the default accessmode if one is not specified.
accessmode='mapped'
The source is accessed with the permissions of the hypervisor (QEMU process).
accessmode='squash'
Similar to 'passthrough', the exception is that failures of privileged operations like 'chown' are ignored. This makes a passthrough-like mode usable for people who run the hypervisor as non-root.
driver
The optional driver element allows specifying further details related to the hypervisor driver used to provide the filesystem. Since 1.0.6
  • If the hypervisor supports multiple backend drivers, then the type attribute selects the primary backend driver name, while the format attribute provides the format type. For example, LXC supports a type of "loop", with a format of "raw" or "nbd" with any format. QEMU supports a type of "path" or "handle", but no formats.
source
The resource on the host that is being accessed in the guest. The name attribute must be used with type='template', and the dir attribute must be used with type='mount'. The usage attribute is used with type='ram' to set the memory limit in KiB, unless units are specified by the units attribute.
target
Where the source can be accessed in the guest. For most drivers this is an automatic mount point, but for QEMU/KVM this is merely an arbitrary string tag that is exported to the guest as a hint for where to mount.
readonly
Enables exporting the filesystem as a readonly mount for the guest; by default read-write access is given (currently only works for the QEMU/KVM driver).
space_hard_limit
Maximum space available to this guest's filesystem. Since 0.9.13
space_soft_limit
Maximum space available to this guest's filesystem. The container is permitted to exceed its soft limits for a grace period of time. Afterwards the hard limit is enforced. Since 0.9.13
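As an illustrative sketch (the placement of the limit elements and the 'G' unit are assumptions here, not normative), the limits are given as sub-elements of the filesystem element:
    <filesystem type='mount'>
      <source dir='/export/to/guest'/>
      <target dir='/import/from/host'/>
      <space_hard_limit unit='G'>10</space_hard_limit>
      <space_soft_limit unit='G'>8</space_soft_limit>
    </filesystem>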

Device Addresses

Many devices have an optional <address> sub-element to describe where the device is placed on the virtual bus presented to the guest. If an address (or any optional attribute within an address) is omitted on input, libvirt will generate an appropriate address; but an explicit address is required if more control over layout is required. See below for device examples including an address element.

Every address has a mandatory attribute type that describes which bus the device is on. The choice of which address to use for a given device is constrained in part by the device and the architecture of the guest. For example, a <disk> device uses type='drive', while a <console> device would use type='pci' on i686 or x86_64 guests, or type='spapr-vio' on PowerPC64 pseries guests. Each address type has further optional attributes that control where on the bus the device will be placed:

type='pci'
PCI addresses have the following additional attributes: domain (a 2-byte hex integer, not currently used by qemu), bus (a hex value between 0 and 0xff, inclusive), slot (a hex value between 0x0 and 0x1f, inclusive), and function (a value between 0 and 7, inclusive). Also available is the multifunction attribute, which controls turning on the multifunction bit for a particular slot/function in the PCI control register (since 0.9.7, requires QEMU 0.13). multifunction defaults to 'off', but should be set to 'on' for function 0 of a slot that will have multiple functions used.
type='drive'
Drive addresses have the following additional attributes: controller (a 2-digit controller number), bus (a 2-digit bus number), target (a 2-digit target number), and unit (a 2-digit unit number on the bus).
type='virtio-serial'
Each virtio-serial address has the following additional attributes: controller (a 2-digit controller number), bus (a 2-digit bus number), and slot (a 2-digit slot within the bus).
type='ccid'
A CCID address, for smart-cards, has the following additional attributes: bus (a 2-digit bus number), and slot attribute (a 2-digit slot within the bus). Since 0.8.8.
type='usb'
USB addresses have the following additional attributes: bus (a hex value between 0 and 0xfff, inclusive), and port (a dotted notation of up to four octets, such as 1.2 or 2.1.3.1).
type='spapr-vio'
On PowerPC pseries guests, devices can be assigned to the SPAPR-VIO bus. It has a flat 64-bit address space; by convention, devices are generally assigned at a non-zero multiple of 0x1000, but other addresses are valid and permitted by libvirt. Each address has the following additional attribute: reg (the hex value address of the starting register). Since 0.9.9.
type='ccw'
s390 guests with a machine value of s390-ccw-virtio use the native CCW bus for I/O devices. CCW bus addresses have the following additional attributes: cssid (a hex value between 0 and 0xfe, inclusive), ssid (a value between 0 and 3, inclusive) and devno (a hex value between 0 and 0xffff, inclusive). Partially specified bus addresses are not allowed. If omitted, libvirt will assign a free bus address with cssid=0xfe and ssid=0. Virtio devices for s390 must have their cssid set to 0xfe in order to be recognized by the guest operating system. Since 1.0.4
type='isa'
ISA addresses have the following additional attributes: iobase and irq. Since 1.2.1
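
Purely for illustration, a few address forms side by side (the specific values are arbitrary):

  <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
  <address type='drive' controller='0' bus='0' target='0' unit='2'/>
  <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0042'/>
  <address type='spapr-vio' reg='0x3000'/>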

Controllers

Depending on the guest architecture, some device buses can appear more than once, with a group of virtual devices tied to a virtual controller. Normally, libvirt can automatically infer such controllers without requiring explicit XML markup, but sometimes it is necessary to provide an explicit controller element.

  ...
  <devices>
    <controller type='ide' index='0'/>
    <controller type='virtio-serial' index='0' ports='16' vectors='4'/>
    <controller type='virtio-serial' index='1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
    </controller>
    ...
  </devices>
  ...

Each controller has a mandatory attribute type, which must be one of "ide", "fdc", "scsi", "sata", "usb", "ccid", "virtio-serial" or "pci", and a mandatory attribute index which is the decimal integer describing in which order the bus controller is encountered (for use in controller attributes of <address> elements). The "virtio-serial" controller has two additional optional attributes ports and vectors, which control how many devices can be connected through the controller. A "scsi" controller has an optional attribute model, which is one of "auto", "buslogic", "ibmvscsi", "lsilogic", "lsisas1068", "lsisas1078", "virtio-scsi" or "vmpvscsi". A "usb" controller has an optional attribute model, which is one of "piix3-uhci", "piix4-uhci", "ehci", "ich9-ehci1", "ich9-uhci1", "ich9-uhci2", "ich9-uhci3", "vt82c686b-uhci", "pci-ohci" or "nec-xhci". Additionally, since 0.10.0, if the USB bus needs to be explicitly disabled for the guest, model='none' may be used. Since 1.0.5, no default USB controller will be built on s390. The PowerPC64 "spapr-vio" addresses do not have an associated controller.

For controllers that are themselves devices on a PCI or USB bus, an optional sub-element <address> can specify the exact relationship of the controller to its master bus, with semantics given above.

An optional sub-element driver can specify driver-specific options. Currently it only supports the attribute queues (1.0.5, QEMU and KVM only), which specifies the number of queues for the controller. For best performance, it's recommended to specify a value matching the number of vCPUs.
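
For example, a virtio-scsi controller configured with four queues (to match a four-vCPU guest) might be written as follows; this is an illustrative sketch, not output from a real configuration:

  <controller type='scsi' index='0' model='virtio-scsi'>
    <driver queues='4'/>
  </controller>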

USB companion controllers have an optional sub-element <master> to specify the exact relationship of the companion to its master controller. A companion controller is on the same bus as its master, so the companion's index value should equal the master's.

  ...
  <devices>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0' bus='0' slot='4' function='7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0' bus='0' slot='4' function='0' multifunction='on'/>
    </controller>
    ...
  </devices>
  ...

PCI controllers have an optional model attribute with possible values pci-root, pcie-root, pci-bridge, or dmi-to-pci-bridge. The root controllers (pci-root and pcie-root) have an optional pcihole64 element specifying how big (in kilobytes, or in the unit specified by pcihole64's unit attribute) the 64-bit PCI hole should be. Some guests (like Windows XP or Windows Server 2003) might crash when QEMU and SeaBIOS are recent enough to support 64-bit PCI holes, unless this is disabled (set to 0). Since 1.1.2 (QEMU only)
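
For instance, assuming the pcihole64 sub-element form sketched here, the 64-bit hole could be disabled for such a guest with:

  <controller type='pci' index='0' model='pci-root'>
    <pcihole64 unit='KiB'>0</pcihole64>
  </controller>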

For machine types which provide an implicit PCI bus, the pci-root controller with index=0 is auto-added and required to use PCI devices. pci-root has no address. PCI bridges are auto-added if there are too many devices to fit on the one bus provided by pci-root, or a PCI bus number greater than zero was specified. PCI bridges can also be specified manually, but their addresses should only refer to PCI buses provided by already specified PCI controllers. Leaving gaps in the PCI controller indexes might lead to an invalid configuration. (pci-root and pci-bridge since 1.0.5)

  ...
  <devices>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='pci' index='1' model='pci-bridge'>
      <address type='pci' domain='0' bus='0' slot='5' function='0' multifunction='off'/>
    </controller>
  </devices>
  ...

For machine types which provide an implicit PCI Express (PCIe) bus (for example, the machine types based on the Q35 chipset), the pcie-root controller with index=0 is auto-added to the domain's configuration. pcie-root also has no address, provides 31 slots (numbered 1-31) and can only be used to attach PCIe devices. In order to connect standard PCI devices on a system which has a pcie-root controller, a pci controller with model='dmi-to-pci-bridge' is automatically added. A dmi-to-pci-bridge controller plugs into a PCIe slot (as provided by pcie-root), and itself provides 31 standard PCI slots (which are not hot-pluggable). In order to have hot-pluggable PCI slots in the guest system, a pci-bridge controller will also be automatically created and connected to one of the slots of the auto-created dmi-to-pci-bridge controller; all guest devices with PCI addresses that are auto-determined by libvirt will be placed on this pci-bridge device. (since 1.1.2).

  ...
  <devices>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='dmi-to-pci-bridge'>
      <address type='pci' domain='0' bus='0' slot='0xe' function='0'/>
    </controller>
    <controller type='pci' index='2' model='pci-bridge'>
      <address type='pci' domain='0' bus='1' slot='1' function='0'/>
    </controller>
  </devices>
  ...

Device leases

When using a lock manager, it may be desirable to record device leases against a VM. The lock manager will ensure the VM won't start unless the leases can be acquired.

  ...
  <devices>
    ...
    <lease>
      <lockspace>somearea</lockspace>
      <key>somekey</key>
      <target path='/some/lease/path' offset='1024'/>
    </lease>
    ...
  </devices>
  ...
lockspace
This is an arbitrary string, identifying the lockspace within which the key is held. Lock managers may impose extra restrictions on the format, or length of the lockspace name.
key
This is an arbitrary string, uniquely identifying the lease to be acquired. Lock managers may impose extra restrictions on the format, or length of the key.
target
This is the fully qualified path of the file associated with the lockspace. The offset specifies where the lease is stored within the file. If the lock manager does not require an offset, just pass 0.

Host device assignment

USB / PCI / SCSI devices

USB, PCI and SCSI devices attached to the host can be passed through to the guest using the hostdev element. since after 0.4.4 for USB, 0.6.0 for PCI (KVM only) and 1.0.6 for SCSI (KVM only):

  ...
  <devices>
    <hostdev mode='subsystem' type='usb'>
      <source startupPolicy='optional'>
        <vendor id='0x1234'/>
        <product id='0xbeef'/>
      </source>
      <boot order='2'/>
    </hostdev>
  </devices>
  ...

or:

  ...
  <devices>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address bus='0x06' slot='0x02' function='0x0'/>
      </source>
      <boot order='1'/>
      <rom bar='on' file='/etc/fake/boot.bin'/>
    </hostdev>
  </devices>
  ...

or:

  ...
  <devices>
    <hostdev mode='subsystem' type='scsi'>
      <source>
        <adapter name='scsi_host0'/>
        <address type='scsi' bus='0' target='0' unit='0'/>
      </source>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </hostdev>
  </devices>
  ...
hostdev
The hostdev element is the main container for describing host devices. For USB, PCI and SCSI device passthrough, mode is always "subsystem" and type is "usb" for a USB device, "pci" for a PCI device and "scsi" for a SCSI device. When managed is "yes" for a PCI device, it is detached from the host before being passed on to the guest, and reattached to the host after the guest exits. If managed is omitted or "no", and for USB devices, the user is responsible for calling virNodeDeviceDettach (or virsh nodedev-dettach) before starting the guest or hot-plugging the device, and virNodeDeviceReAttach (or virsh nodedev-reattach) after hot-unplug or stopping the guest. For SCSI devices, the user is responsible for making sure the device is not used by the host. The optional sgio (since 1.0.6) attribute indicates whether the kernel will filter unprivileged SG_IO commands for the disk, valid settings are "filtered" or "unfiltered". Defaults to "filtered".
source
The source element describes the device as seen from the host. The USB device can either be addressed by vendor / product id using the vendor and product elements, or by the device's address on the host using the address element. PCI devices on the other hand can only be described by their address. SCSI devices are described by both the adapter and address elements. Since 1.0.0, the source element of USB devices may contain a startupPolicy attribute which can be used to define the policy for what to do if the specified host USB device is not found. The attribute accepts the following values:
mandatory fail if missing for any reason (the default)
requisite fail if missing on boot up, drop if missing on migrate/restore/revert
optional drop if missing at any start attempt
vendor, product
The vendor and product elements each have an id attribute that specifies the USB vendor and product id. The ids can be given in decimal, hexadecimal (starting with 0x) or octal (starting with 0) form.
boot
Specifies that the device is bootable. The order attribute determines the order in which devices will be tried during boot sequence. The per-device boot elements cannot be used together with general boot elements in BIOS bootloader section. Since 0.8.8 for PCI devices, Since 1.0.1 for USB devices.
rom
The rom element is used to change how a PCI device's ROM is presented to the guest. The optional bar attribute can be set to "on" or "off", and determines whether or not the device's ROM will be visible in the guest's memory map. (In PCI documentation, the "rombar" setting controls the presence of the Base Address Register for the ROM). If no rom bar is specified, the qemu default will be used (older versions of qemu used a default of "off", while newer qemus have a default of "on"). Since 0.9.7 (QEMU and KVM only). The optional file attribute is used to point to a binary file to be presented to the guest as the device's ROM BIOS. This can be useful, for example, to provide a PXE boot ROM for a virtual function of an sr-iov capable ethernet device (which has no boot ROMs for the VFs). Since 0.9.10 (QEMU and KVM only).
address
The address element for USB devices has a bus and device attribute to specify the USB bus and device number the device appears at on the host. The values of these attributes can be given in decimal, hexadecimal (starting with 0x) or octal (starting with 0) form. For PCI devices the element carries 3 attributes allowing the device to be designated as it is found with lspci or with virsh nodedev-list. See above for more details on the address element.
driver
PCI devices can have an optional driver subelement that specifies which backend driver to use for PCI device assignment. Use the name attribute to select either "vfio" (for the new VFIO device assignment backend, which is compatible with UEFI SecureBoot) or "kvm" (the legacy device assignment handled directly by the KVM kernel module). Since 1.0.5 (QEMU and KVM only, requires kernel 3.6 or newer) When specified, device assignment will fail if the requested method of device assignment isn't available on the host. When not specified, the default is "vfio" on systems where the VFIO driver is available and loaded, and "kvm" on older systems, or those where the VFIO driver hasn't been loaded. Since 1.1.3 (prior to that the default was always "kvm").
readonly
Indicates that the device is readonly, only supported by SCSI host device now. Since 1.0.6 (QEMU and KVM only)
shareable
If present, this indicates the device is expected to be shared between domains (assuming the hypervisor and OS support this). Only supported by SCSI host device. Since 1.0.6

Note: Although shareable was introduced in 1.0.6, it did not work as expected until 1.2.2.

Block / character devices

Block / character devices from the host can be passed through to the guest using the hostdev element. This is only possible with container based virtualization. since after 1.0.1 for LXC:

...
<hostdev mode='capabilities' type='storage'>
  <source>
    <block>/dev/sdf1</block>
  </source>
</hostdev>
...
    
...
<hostdev mode='capabilities' type='misc'>
  <source>
    <char>/dev/input/event3</char>
  </source>
</hostdev>
...
    
...
<hostdev mode='capabilities' type='net'>
  <source>
    <interface>eth0</interface>
  </source>
</hostdev>
...
    
hostdev
The hostdev element is the main container for describing host devices. For block/character device passthrough, mode is always "capabilities" and type is "block" for a block device, "char" for a character device and "net" for a host network interface.
source
The source element describes the device as seen from the host. For block devices, the path to the block device in the host OS is provided in the nested "block" element, while for character devices the "char" element is used. For network interfaces, the name of the interface is provided in the "interface" element.

Redirected devices

USB device redirection through a character device is supported since after 0.9.5 (KVM only):

  ...
  <devices>
    <redirdev bus='usb' type='tcp'>
      <source mode='connect' host='localhost' service='4000'/>
      <boot order='1'/>
    </redirdev>
    <redirfilter>
      <usbdev class='0x08' vendor='0x1234' product='0xbeef' version='2.00' allow='yes'/>
      <usbdev allow='no'/>
    </redirfilter>
  </devices>
  ...
redirdev
The redirdev element is the main container for describing redirected devices. bus must be "usb" for a USB device. An additional attribute type is required, matching one of the supported serial device types, to describe the host side of the tunnel; type='tcp' or type='spicevmc' (which uses the usbredir channel of a SPICE graphics device) are typical. The redirdev element has an optional sub-element <address> which can tie the device to a particular controller. Further sub-elements, such as <source>, may be required according to the given type, although a <target> sub-element is not required (since the consumer of the character device is the hypervisor itself, rather than a device visible in the guest).
boot
Specifies that the device is bootable. The order attribute determines the order in which devices will be tried during boot sequence. The per-device boot elements cannot be used together with general boot elements in BIOS bootloader section. (Since 1.0.1)
redirfilter
The redirfilter element is used for creating the filter rule to filter out certain devices from redirection. It uses the sub-element <usbdev> to define each filter rule. The class attribute is the USB Class code, for example, 0x08 represents mass storage devices. The USB device can be addressed by vendor / product id using the vendor and product attributes. version is the bcdDevice value of the USB device, such as 1.00, 1.10 and 2.00. These four attributes are optional and -1 can be used to allow any value for them. The allow attribute is mandatory: 'yes' means allow, 'no' means deny.

Smartcard devices

A virtual smartcard device can be supplied to the guest via the smartcard element. A USB smartcard reader device on the host cannot be used on a guest with simple device passthrough, since it will then not be available on the host, possibly locking the host computer when it is "removed". Therefore, some hypervisors provide a specialized virtual device that can present a smartcard interface to the guest, with several modes for describing how credentials are obtained from the host or even from a channel created to a third-party smartcard provider. Since 0.8.8

  ...
  <devices>
    <smartcard mode='host'/>
    <smartcard mode='host-certificates'>
      <certificate>cert1</certificate>
      <certificate>cert2</certificate>
      <certificate>cert3</certificate>
      <database>/etc/pki/nssdb/</database>
    </smartcard>
    <smartcard mode='passthrough' type='tcp'>
      <source mode='bind' host='127.0.0.1' service='2001'/>
      <protocol type='raw'/>
      <address type='ccid' controller='0' slot='0'/>
    </smartcard>
    <smartcard mode='passthrough' type='spicevmc'/>
  </devices>
  ...

The <smartcard> element has a mandatory attribute mode. The following modes are supported; in each mode, the guest sees a device on its USB bus that behaves like a physical USB CCID (Chip/Smart Card Interface Device) card.

mode='host'
The simplest operation, where the hypervisor relays all requests from the guest into direct access to the host's smartcard via NSS. No other attributes or sub-elements are required. See below about the use of an optional <address> sub-element.
mode='host-certificates'
Rather than requiring a smartcard to be plugged into the host, it is possible to provide three NSS certificate names residing in a database on the host. These certificates can be generated via the command certutil -d /etc/pki/nssdb -x -t CT,CT,CT -S -s CN=cert1 -n cert1, and the resulting three certificate names must be supplied as the content of each of three <certificate> sub-elements. An additional sub-element <database> can specify the absolute path to an alternate directory (matching the -d option of the certutil command when creating the certificates); if not present, it defaults to /etc/pki/nssdb.
mode='passthrough'
Rather than having the hypervisor directly communicate with the host, it is possible to tunnel all requests through a secondary character device to a third-party provider (which may in turn be talking to a smartcard or using three certificate files). In this mode of operation, an additional attribute type is required, matching one of the supported serial device types, to describe the host side of the tunnel; type='tcp' or type='spicevmc' (which uses the smartcard channel of a SPICE graphics device) are typical. Further sub-elements, such as <source>, may be required according to the given type, although a <target> sub-element is not required (since the consumer of the character device is the hypervisor itself, rather than a device visible in the guest).

Each mode supports an optional sub-element <address>, which fine-tunes the correlation between the smartcard and a ccid bus controller, documented above. For now, qemu only supports at most one smartcard, with an address of bus=0 slot=0.

Network interfaces

  ...
  <devices>
    <interface type='bridge'>
      <source bridge='xenbr0'/>
      <mac address='00:16:3e:5d:c7:9e'/>
      <script path='vif-bridge'/>
      <boot order='1'/>
      <rom bar='off'/>
    </interface>
  </devices>
  ...

There are several possibilities for specifying a network interface visible to the guest. Each subsection below provides more details about common setup options. Additionally, each <interface> element has an optional <address> sub-element that can tie the interface to a particular pci slot, with attribute type='pci' as documented above.

Virtual network

This is the recommended config for general guest connectivity on hosts with dynamic / wireless networking configs (or multi-host environments where the host hardware details are described separately in a <network> definition Since 0.9.4).

Provides a connection whose details are described by the named network definition. Depending on the virtual network's "forward mode" configuration, the network may be totally isolated (no <forward> element given), NAT'ing to an explicit network device or to the default route (<forward mode='nat'>), routed with no NAT (<forward mode='route'/>), or connected directly to one of the host's network interfaces (via macvtap) or bridge devices (<forward mode='bridge|private|vepa|passthrough'/> Since 0.9.4).

For networks with a forward mode of bridge, private, vepa, and passthrough, it is assumed that the host has any necessary DNS and DHCP services already set up outside the scope of libvirt. In the case of isolated, nat, and routed networks, DHCP and DNS are provided on the virtual network by libvirt, and the IP range can be determined by examining the virtual network config with 'virsh net-dumpxml [networkname]'. There is one virtual network called 'default' set up out of the box which does NAT'ing to the default route and has an IP range of 192.168.122.0/255.255.255.0. Each guest will have an associated tun device created with a name of vnetN, which can also be overridden with the <target> element (see overriding the target element).

When the source of an interface is a network, a portgroup can be specified along with the name of the network; one network may have multiple portgroups defined, with each portgroup containing slightly different configuration information for different classes of network connections. Since 0.9.4.

Also, similar to direct network connections (described below), a connection of type network may specify a virtualport element, with configuration data to be forwarded to a vepa (802.1Qbg) or 802.1Qbh compliant switch (Since 0.8.2), or to an Open vSwitch virtual switch (Since 0.9.11).

Since the actual type of switch may vary depending on the configuration in the <network> on the host, it is acceptable to omit the virtualport type attribute, and specify attributes from multiple different virtualport types (and also to leave out certain attributes); at domain startup time, a complete <virtualport> element will be constructed by merging together the type and attributes defined in the network and the portgroup referenced by the interface. The newly-constructed virtualport is a combination of them; attributes from a lower-priority virtualport cannot change those defined in a higher-priority one. The interface takes the highest priority, the portgroup the lowest. (Since 0.10.0). For example, in order to work properly with both an 802.1Qbh switch and an Open vSwitch switch, you may choose to specify no type, but both a profileid (in case the switch is 802.1Qbh) and an interfaceid (in case the switch is Open vSwitch) (you may also omit the other attributes, such as managerid, typeid, or profileid, to be filled in from the network's <virtualport>). If you want to limit a guest to connecting only to certain types of switches, you can specify the virtualport type, but still omit some/all of the parameters - in this case if the host's network has a different type of virtualport, connection of the interface will fail.

  ...
  <devices>
    <interface type='network'>
      <source network='default'/>
    </interface>
    ...
    <interface type='network'>
      <source network='default' portgroup='engineering'/>
      <target dev='vnet7'/>
      <mac address="00:11:22:33:44:55"/>
      <virtualport>
        <parameters instanceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f'/>
      </virtualport>

    </interface>
  </devices>
  ...
Bridge to LAN

This is the recommended config for general guest connectivity on hosts with static wired networking configs.

Provides a bridge from the VM directly to the LAN. This assumes there is a bridge device on the host which has one or more of the host's physical NICs enslaved. The guest VM will have an associated tun device created with a name of vnetN, which can also be overridden with the <target> element (see overriding the target element). The tun device will be enslaved to the bridge. The IP range / network configuration is whatever is used on the LAN. This provides the guest VM full incoming & outgoing net access just like a physical machine.

On Linux systems, the bridge device is normally a standard Linux host bridge. On hosts that support Open vSwitch, it is also possible to connect to an Open vSwitch bridge device by adding a <virtualport type='openvswitch'/> to the interface definition. (Since 0.9.11). The Open vSwitch type virtualport accepts two parameters in its <parameters> element - an interfaceid which is a standard uuid used to uniquely identify this particular interface to Open vSwitch (if you do not specify one, a random interfaceid will be generated for you when you first define the interface), and an optional profileid which is sent to Open vSwitch as the interface's "port-profile".

  ...
  <devices>
    ...
    <interface type='bridge'>
      <source bridge='br0'/>
    </interface>
    <interface type='bridge'>
      <source bridge='br1'/>
      <target dev='vnet7'/>
      <mac address="00:11:22:33:44:55"/>
    </interface>
    <interface type='bridge'>
      <source bridge='ovsbr'/>
      <virtualport type='openvswitch'>
        <parameters profileid='menial' interfaceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f'/>
      </virtualport>
    </interface>
    ...
  </devices>
  ...
Userspace SLIRP stack

Provides a virtual LAN with NAT to the outside world. The virtual network has DHCP & DNS services and will give the guest VM addresses starting from 10.0.2.15. The default router will be 10.0.2.2 and the DNS server will be 10.0.2.3. This networking is the only option for unprivileged users who need their VMs to have outgoing access.

  ...
  <devices>
    <interface type='user'/>
    ...
    <interface type='user'>
      <mac address="00:11:22:33:44:55"/>
    </interface>
  </devices>
  ...
Generic ethernet connection

Provides a means for the administrator to execute an arbitrary script to connect the guest's network to the LAN. The guest will have a tun device created with a name of vnetN, which can also be overridden with the <target> element. After creating the tun device a shell script will be run which is expected to do whatever host network integration is required. By default this script is called /etc/qemu-ifup but can be overridden.

  ...
  <devices>
    <interface type='ethernet'/>
    ...
    <interface type='ethernet'>
      <target dev='vnet7'/>
      <script path='/etc/qemu-ifup-mynet'/>
    </interface>
  </devices>
  ...
Direct attachment to physical interface

Provides direct attachment of the virtual machine's NIC to the given physical interface of the host. Since 0.7.7 (QEMU and KVM only)
This setup requires the Linux macvtap driver to be available. (Since Linux 2.6.34.) One of the modes 'vepa' ('Virtual Ethernet Port Aggregator'), 'bridge' or 'private' can be chosen for the operation mode of the macvtap device, 'vepa' being the default mode. The individual modes cause the delivery of packets to behave as follows:

vepa
All VMs' packets are sent to the external bridge. Packets whose destination is a VM on the same host as where the packet originates from are sent back to the host by the VEPA capable bridge (today's bridges are typically not VEPA capable).
bridge
Packets whose destination is on the same host as where they originate from are directly delivered to the target macvtap device. Both origin and destination devices need to be in bridge mode for direct delivery. If either one of them is in vepa mode, a VEPA capable bridge is required.
private
All packets are sent to the external bridge and will only be delivered to a target VM on the same host if they are sent through an external router or gateway and that device sends them back to the host. This procedure is followed if either the source or destination device is in private mode.
passthrough
This feature attaches a virtual function of a SRIOV capable NIC directly to a VM without losing the migration capability. All packets are sent to the VF/IF of the configured network device. Depending on the capabilities of the device additional prerequisites or limitations may apply; for example, on Linux this requires kernel 2.6.38 or newer. Since 0.9.2
  ...
  <devices>
    ...
    <interface type='direct'>
      <source dev='eth0' mode='vepa'/>
    </interface>
  </devices>
  ...

The network access of direct attached virtual machines can be managed by the hardware switch to which the physical interface of the host machine is connected.

The interface can have additional parameters as shown below, if the switch is conforming to the IEEE 802.1Qbg standard. The parameters of the virtualport element are documented in more detail in the IEEE 802.1Qbg standard. The values are network specific and should be provided by the network administrator. In 802.1Qbg terms, the Virtual Station Interface (VSI) represents the virtual interface of a virtual machine. Since 0.8.2

Please note that IEEE 802.1Qbg requires a non-zero value for the VLAN ID.

managerid
The VSI Manager ID identifies the database containing the VSI type and instance definitions. This is an integer value and the value 0 is reserved.
typeid
The VSI Type ID identifies a VSI type characterizing the network access. VSI types are typically managed by network administrator. This is an integer value.
typeidversion
The VSI Type Version allows multiple versions of a VSI Type. This is an integer value.
instanceid
The VSI Instance ID Identifier is generated when a VSI instance (i.e. a virtual interface of a virtual machine) is created. This is a globally unique identifier.
  ...
  <devices>
    ...
    <interface type='direct'>
      <source dev='eth0.2' mode='vepa'/>
      <virtualport type="802.1Qbg">
        <parameters managerid="11" typeid="1193047" typeidversion="2" instanceid="09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f"/>
      </virtualport>
    </interface>
  </devices>
  ...

The interface can have additional parameters as shown below if the switch is conforming to the IEEE 802.1Qbh standard. The values are network specific and should be provided by the network administrator. Since 0.8.2

profileid
The profile ID contains the name of the port profile that is to be applied to this interface. This name is resolved by the port profile database into the network parameters from the port profile, and those network parameters will be applied to this interface.
  ...
  <devices>
    ...
    <interface type='direct'>
      <source dev='eth0' mode='private'/>
      <virtualport type='802.1Qbh'>
        <parameters profileid='finance'/>
      </virtualport>
    </interface>
  </devices>
  ...
  
PCI Passthrough

A PCI network device (specified by the <source> element) is directly assigned to the guest using generic device passthrough, after first optionally setting the device's MAC address to the configured value, and associating the device with an 802.1Qbh capable switch using an optionally specified <virtualport> element (see the examples of virtualport given above for type='direct' network devices). Note that - due to limitations in standard single-port PCI ethernet card driver design - only SR-IOV (Single Root I/O Virtualization) virtual function (VF) devices can be assigned in this manner; to assign a standard single-port PCI or PCIe ethernet card to a guest, use the traditional <hostdev> device definition instead. Since 0.9.11

To use VFIO device assignment rather than traditional/legacy KVM device assignment (VFIO is a new method of device assignment that is compatible with UEFI Secure Boot), a type='hostdev' interface can have an optional driver sub-element with a name attribute set to "vfio". To use legacy KVM device assignment you can set name to "kvm" (or simply omit the <driver> element, since "kvm" is currently the default). Since 1.0.5 (QEMU and KVM only, requires kernel 3.6 or newer)

Note that this "intelligent passthrough" of network devices is very similar to the functionality of a standard <hostdev> device, the difference being that this method allows specifying a MAC address and <virtualport> for the passed-through device. If these capabilities are not required, if you have a standard single-port PCI, PCIe, or USB network card that doesn't support SR-IOV (and hence would anyway lose the configured MAC address during reset after being assigned to the guest domain), or if you are using a version of libvirt older than 0.9.11, you should use standard <hostdev> to assign the device to the guest instead of <interface type='hostdev'/>.

  ...
  <devices>
    <interface type='hostdev'>
      <driver name='vfio'/>
      <source>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
      </source>
      <mac address='52:54:00:6d:90:02'/>
      <virtualport type='802.1Qbh'>
        <parameters profileid='finance'/>
      </virtualport>
    </interface>
  </devices>
  ...
Multicast tunnel

A multicast group is set up to represent a virtual network. Any VMs whose network devices are in the same multicast group can talk to each other even across hosts. This mode is also available to unprivileged users. There is no default DNS or DHCP support and no outgoing network access. To provide outgoing network access, one of the VMs should have a 2nd NIC which is connected to one of the first 4 network types and do the appropriate routing. The multicast protocol is compatible with that used by user mode linux guests too. The source address used must be from the multicast address block.

  ...
  <devices>
    <interface type='mcast'>
      <mac address='52:54:00:6d:90:01'/>
      <source address='230.0.0.1' port='5558'/>
    </interface>
  </devices>
  ...
TCP tunnel

A TCP client/server architecture provides a virtual network. One VM provides the server end of the network; all other VMs are configured as clients. All network traffic is routed between the VMs via the server. This mode is also available to unprivileged users. There is no default DNS or DHCP support and no outgoing network access. To provide outgoing network access, one of the VMs should have a 2nd NIC which is connected to one of the first 4 network types and do the appropriate routing.

  ...
  <devices>
    <interface type='server'>
      <mac address='52:54:00:22:c9:42'/>
      <source address='192.168.0.1' port='5558'/>
    </interface>
    ...
    <interface type='client'>
      <mac address='52:54:00:8b:c9:51'/>
      <source address='192.168.0.1' port='5558'/>
    </interface>
  </devices>
  ...
Setting the NIC model
  ...
  <devices>
    <interface type='network'>
      <source network='default'/>
      <target dev='vnet1'/>
      <model type='ne2k_pci'/>
    </interface>
  </devices>
  ...

For hypervisors which support this, you can set the model of emulated network interface card.

The values for type aren't defined specifically by libvirt, but by what the underlying hypervisor supports (if any). For QEMU and KVM you can get a list of supported models with these commands:

qemu -net nic,model=? /dev/null
qemu-kvm -net nic,model=? /dev/null

Typical values for QEMU and KVM include: ne2k_isa i82551 i82557b i82559er ne2k_pci pcnet rtl8139 e1000 virtio

Setting NIC driver-specific options
  ...
  <devices>
    <interface type='network'>
      <source network='default'/>
      <target dev='vnet1'/>
      <model type='virtio'/>
      <driver name='vhost' txmode='iothread' ioeventfd='on' event_idx='off' queues='5'/>
    </interface>
  </devices>
  ...

Some NICs may have tunable driver-specific options. These are set as attributes of the driver sub-element of the interface definition. Currently the following attributes are available for the "virtio" NIC driver:

name
The optional name attribute forces which type of backend driver to use. The value can be either 'qemu' (a user-space backend) or 'vhost' (a kernel backend, which requires the vhost module to be provided by the kernel); an attempt to require the vhost driver without kernel support will be rejected. If this attribute is not present, then the domain defaults to 'vhost' if present, but silently falls back to 'qemu' without error. Since 0.8.8 (QEMU and KVM only)
For interfaces of type='hostdev' (PCI passthrough devices) the name attribute can optionally be set to "vfio" or "kvm". "vfio" tells libvirt to use VFIO device assignment rather than traditional KVM device assignment (VFIO is a new method of device assignment that is compatible with UEFI Secure Boot), and "kvm" tells libvirt to use the legacy device assignment performed directly by the kvm kernel module (the default is currently "kvm", but is subject to change). Since 1.0.5 (QEMU and KVM only, requires kernel 3.6 or newer)
txmode
The txmode attribute specifies how to handle transmission of packets when the transmit buffer is full. The value can be either 'iothread' or 'timer'. Since 0.8.8 (QEMU and KVM only)

If set to 'iothread', packet tx is all done in an iothread in the bottom half of the driver (this option translates into adding "tx=bh" to the qemu commandline -device virtio-net-pci option).

If set to 'timer', tx work is done in qemu, and if there is more tx data than can be sent at the present time, a timer is set before qemu moves on to do other things; when the timer fires, another attempt is made to send more data.

The resulting difference, according to the qemu developer who added the option is: "bh makes tx more asynchronous and reduces latency, but potentially causes more processor bandwidth contention since the cpu doing the tx isn't necessarily the cpu where the guest generated the packets."

In general you should leave this option alone, unless you are very certain you know what you are doing.
ioeventfd
This optional attribute allows users to set domain I/O asynchronous handling for the interface device. The default is left to the discretion of the hypervisor. Accepted values are "on" and "off". Enabling this allows qemu to execute the VM while a separate thread handles I/O. Typically guests experiencing high system CPU utilization during I/O will benefit from this. On the other hand, on an overloaded host it could increase guest I/O latency. Since 0.9.3 (QEMU and KVM only)

In general you should leave this option alone, unless you are very certain you know what you are doing.
event_idx
The event_idx attribute controls some aspects of device event processing. The value can be either 'on' or 'off' - if it is on, it will reduce the number of interrupts and exits for the guest. The default is determined by QEMU; usually if the feature is supported, default is on. In case there is a situation where this behavior is suboptimal, this attribute provides a way to force the feature off. Since 0.9.5 (QEMU and KVM only)

In general you should leave this option alone, unless you are very certain you know what you are doing.
queues
The optional queues attribute controls the number of queues to be used for the Multiqueue virtio-net feature. If the interface has <model type='virtio'/>, multiple packet processing queues can be created; each queue will potentially be handled by a different processor, resulting in much higher throughput. Since 1.0.6 (QEMU and KVM only)
Overriding the target element
  ...
  <devices>
    <interface type='network'>
      <source network='default'/>
      <target dev='vnet1'/>
    </interface>
  </devices>
  ...

If no target is specified, certain hypervisors will automatically generate a name for the created tun device. This name can be manually specified, however the name must not start with either 'vnet' or 'vif', which are prefixes reserved by libvirt and certain hypervisors. Manually specified targets using these prefixes will be ignored.

Specifying boot order
  ...
  <devices>
    <interface type='network'>
      <source network='default'/>
      <target dev='vnet1'/>
      <boot order='1'/>
    </interface>
  </devices>
  ...

For hypervisors which support this, you can set a specific NIC to be used for network boot. The order attribute determines the order in which devices will be tried during boot sequence. The per-device boot elements cannot be used together with general boot elements in BIOS bootloader section. Since 0.8.8

Interface ROM BIOS configuration
  ...
  <devices>
    <interface type='network'>
      <source network='default'/>
      <target dev='vnet1'/>
      <rom bar='on' file='/etc/fake/boot.bin'/>
    </interface>
  </devices>
  ...

For hypervisors which support this, you can change how a PCI Network device's ROM is presented to the guest. The bar attribute can be set to "on" or "off", and determines whether or not the device's ROM will be visible in the guest's memory map. (In PCI documentation, the "rombar" setting controls the presence of the Base Address Register for the ROM). If no rom bar is specified, the qemu default will be used (older versions of qemu used a default of "off", while newer qemus have a default of "on"). The optional file attribute is used to point to a binary file to be presented to the guest as the device's ROM BIOS. This can be useful to provide an alternative boot ROM for a network device. Since 0.9.10 (QEMU and KVM only).

Quality of service
  ...
  <devices>
    <interface type='network'>
      <source network='default'/>
      <target dev='vnet0'/>
      <bandwidth>
        <inbound average='1000' peak='5000' floor='200' burst='1024'/>
        <outbound average='128' peak='256' burst='256'/>
      </bandwidth>
    </interface>
  </devices>
  ...

This part of the interface XML provides support for setting quality of service. Incoming and outgoing traffic can be shaped independently. The bandwidth element and its child elements are described in the QoS section of the Network XML.

Setting VLAN tag (on supported network types only)
  ...
  <devices>
    <interface type='bridge'>
      <vlan>
        <tag id='42'/>
      </vlan>
      <source bridge='ovsbr0'/>
      <virtualport type='openvswitch'>
        <parameters interfaceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f'/>
      </virtualport>
    </interface>
    <interface type='bridge'>
      <vlan trunk='yes'>
        <tag id='42'/>
        <tag id='123' nativeMode='untagged'/>
      </vlan>
      ...
    </interface>
  </devices>
  ...

If (and only if) the network connection used by the guest supports vlan tagging transparent to the guest, an optional <vlan> element can specify one or more vlan tags to apply to the guest's network traffic. Since 0.10.0 (openvswitch and type='hostdev' SR-IOV interfaces do support transparent vlan tagging of guest traffic; everything else, including standard linux bridges and libvirt's own virtual networks, does not support it. 802.1Qbh (vn-link) and 802.1Qbg (VEPA) switches provide their own way (outside of libvirt) to tag guest traffic onto specific vlans.) To allow for specification of multiple tags (in the case of vlan trunking), a subelement, <tag>, specifies which vlan tag to use (for example, <tag id='42'/>). If an interface has more than one <tag> element defined, it is assumed that the user wants to do VLAN trunking using all the specified tags. In the case that vlan trunking with a single tag is desired, the optional attribute trunk='yes' can be added to the top-level vlan element.

For network connections using openvswitch it is possible to configure the 'native-tagged' and 'native-untagged' vlan modes Since 1.1.0. This uses the optional nativeMode attribute on the <tag> element: nativeMode may be set to 'tagged' or 'untagged'. The id attribute of the element sets the native vlan.

Modifying virtual link state
  ...
  <devices>
    <interface type='network'>
      <source network='default'/>
      <target dev='vnet0'/>
      <link state='down'/>
    </interface>
  </devices>
  ...

This element provides a means of setting the state of the virtual network link. Possible values for the attribute state are up and down. If down is specified as the value, the interface behaves as if it had the network cable disconnected. Default behavior if this element is unspecified is to have the link state up. Since 0.9.5

Input devices

Input devices allow interaction with the graphical framebuffer in the guest virtual machine. When enabling the framebuffer, an input device is automatically provided. It may be possible to add additional devices explicitly, for example, to provide a graphics tablet for absolute cursor movement.

  ...
  <devices>
    <input type='mouse' bus='usb'/>
    <input type='keyboard' bus='usb'/>
  </devices>
  ...
input
The input element has one mandatory attribute, the type whose value can be 'mouse', 'tablet' or (since 1.2.2) 'keyboard'. The tablet provides absolute cursor movement, while the mouse uses relative movement. The optional bus attribute can be used to refine the exact device type. It takes values "xen" (paravirtualized), "ps2" and "usb".

The input element has an optional sub-element <address> which can tie the device to a particular PCI slot, documented above.

Hub devices

A hub is a device that expands a single port into several so that there are more ports available to connect devices to a host system.

  ...
  <devices>
    <hub type='usb'/>
  </devices>
  ...
hub
The hub element has one mandatory attribute, the type whose value can only be 'usb'.

The hub element has an optional sub-element <address> with type='usb' which can tie the device to a particular controller, documented above.
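
As an illustrative sketch (the bus and port values here are assumptions chosen for the example), a hub tied to a specific controller port might be written as:

  ...
  <devices>
    <hub type='usb'>
      <address type='usb' bus='0' port='1'/>
    </hub>
  </devices>
  ...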

Graphical framebuffers

A graphics device allows for graphical interaction with the guest OS. A guest will typically have either a framebuffer or a text console configured to allow interaction with the admin.

  ...
  <devices>
    <graphics type='sdl' display=':0.0'/>
    <graphics type='vnc' port='5904' sharePolicy='allow-exclusive'>
      <listen type='address' address='1.2.3.4'/>
    </graphics>
    <graphics type='rdp' autoport='yes' multiUser='yes' />
    <graphics type='desktop' fullscreen='yes'/>
    <graphics type='spice'>
      <listen type='network' network='rednet'/>
    </graphics>
  </devices>
  ...
graphics
The graphics element has a mandatory type attribute which takes the value "sdl", "vnc", "spice", "rdp" or "desktop":
"sdl"
This displays a window on the host desktop, it can take 3 optional arguments: a display attribute for the display to use, an xauth attribute for the authentication identifier, and an optional fullscreen attribute accepting values 'yes' or 'no'.
"vnc"
Starts a VNC server. The port attribute specifies the TCP port number (with -1 as legacy syntax indicating that it should be auto-allocated). The autoport attribute is the new preferred syntax for indicating autoallocation of the TCP port to use. The listen attribute is an IP address for the server to listen on. The passwd attribute provides a VNC password in clear text. The keymap attribute specifies the keymap to use. It is possible to set a limit on the validity of the password by giving a timestamp passwdValidTo='2010-04-09T15:51:00' assumed to be in UTC. The connected attribute allows control of a connected client during password changes. VNC accepts only the keep value. since 0.9.3 NB, this may not be supported by all hypervisors.
The optional sharePolicy attribute specifies the VNC server display sharing policy. "allow-exclusive" allows clients to ask for exclusive access by dropping other connections. Connecting multiple clients in parallel requires all clients asking for a shared session (vncviewer: -Shared switch). This is the default value. "force-shared" disables exclusive client access; every connection has to specify the -Shared switch for vncviewer. "ignore" welcomes every connection unconditionally. since 1.0.6

Rather than using listen/port, QEMU supports a socket attribute for listening on a unix domain socket path. Since 0.8.8 For VNC WebSocket functionality, the websocket attribute may be used to specify the port to listen on (with -1 meaning auto-allocation and autoport having no effect due to security reasons). Since 1.0.6
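
For illustration (the socket path and the WebSocket port are example values, not defaults), the two variants described above might look like:

  ...
  <devices>
    <!-- listen on a unix domain socket instead of a TCP port -->
    <graphics type='vnc' socket='/var/lib/libvirt/qemu/guest-vnc.sock'/>
  </devices>
  ...
  ...
  <devices>
    <!-- auto-allocated TCP port plus an explicit WebSocket port -->
    <graphics type='vnc' port='-1' autoport='yes' websocket='5700'/>
  </devices>
  ...
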
"spice"

Starts a SPICE server. The port attribute specifies the TCP port number (with -1 as legacy syntax indicating that it should be auto-allocated), while tlsPort gives an alternative secure port number. The autoport attribute is the new preferred syntax for indicating autoallocation of needed port numbers. The listen attribute is an IP address for the server to listen on. The passwd attribute provides a SPICE password in clear text. The keymap attribute specifies the keymap to use. It is possible to set a limit on the validity of the password by giving a timestamp passwdValidTo='2010-04-09T15:51:00' assumed to be in UTC. The connected attribute allows control of a connected client during password changes. SPICE accepts keep to keep the client connected, disconnect to disconnect the client and fail to fail changing the password. Since 0.9.3 NB, this may not be supported by all hypervisors. "spice" since 0.8.6. The defaultMode attribute sets the default channel security policy; valid values are secure, insecure and the default any (which is secure if possible, but falls back to insecure rather than erroring out if no secure path is available). "defaultMode" since 0.9.12.

When SPICE has both a normal and TLS secured TCP port configured, it can be desirable to restrict what channels can be run on each port. This is achieved by adding one or more <channel> elements inside the main <graphics> element. Valid channel names include main, display, inputs, cursor, playback, record (all since 0.8.6); smartcard (since 0.8.8); and usbredir (since 0.9.12).

  <graphics type='spice' port='-1' tlsPort='-1' autoport='yes'>
    <channel name='main' mode='secure'/>
    <channel name='record' mode='insecure'/>
    <image compression='auto_glz'/>
    <streaming mode='filter'/>
    <clipboard copypaste='no'/>
    <mouse mode='client'/>
    <filetransfer enable='no'/>
  </graphics>

Spice supports variable compression settings for audio, images and streaming, since 0.9.1. These settings are accessible via the compression attribute in all following elements: image to set image compression (accepts auto_glz, auto_lz, quic, glz, lz, off), jpeg for JPEG compression for images over wan (accepts auto, never, always), zlib for configuring wan image compression (accepts auto, never, always) and playback for enabling audio stream compression (accepts on or off).

Streaming mode is set by the streaming element, setting its mode attribute to one of filter, all or off, since 0.9.2.

Copy & Paste functionality (via Spice agent) is set by the clipboard element. It is enabled by default, and can be disabled by setting the copypaste property to no, since 0.9.3.

Mouse mode is set by the mouse element, setting its mode attribute to one of server or client, since 0.9.11. If no mode is specified, the qemu default will be used (client mode).

File transfer functionality (via Spice agent) is set using the filetransfer element. It is enabled by default, and can be disabled by setting the enable property to no, since 1.2.2.

"rdp"
Starts an RDP server. The port attribute specifies the TCP port number (with -1 as legacy syntax indicating that it should be auto-allocated). The autoport attribute is the new preferred syntax for indicating autoallocation of the TCP port to use. The multiUser attribute is a boolean deciding whether multiple simultaneous connections to the VM are permitted. The replaceUser attribute is a boolean deciding whether the existing connection must be dropped and a new connection must be established by the VRDP server, when a new client connects in single connection mode.
"desktop"
This value is reserved for VirtualBox domains for the moment. It displays a window on the host desktop, similarly to "sdl", but using the VirtualBox viewer. Just like "sdl", it accepts the optional attributes display and fullscreen.

Rather than putting the address information used to set up the listening socket for graphics types vnc and spice in the <graphics> listen attribute, a separate subelement of <graphics>, called <listen>, can be specified (see the examples above), since 0.9.4. <listen> accepts the following attributes:

type
Set to either address or network. This tells whether this listen element is specifying the address to be used directly, or by naming a network (which will then be used to determine an appropriate address for listening).
address
If type='address', the address attribute will contain either an IP address or hostname (which will be resolved to an IP address via a DNS query) to listen on. In the "live" XML of a running domain, this attribute will be set to the IP address used for listening, even if type='network'.
network
If type='network', the network attribute will contain the name of a network in libvirt's list of configured networks. The named network configuration will be examined to determine an appropriate listen address. For example, if the network has an IPv4 address in its configuration (e.g. if it has a forward type of route, nat, or no forward type (isolated)), the first IPv4 address listed in the network's configuration will be used. If the network is describing a host bridge, the first IPv4 address associated with that bridge device will be used, and if the network is describing one of the 'direct' (macvtap) modes, the first IPv4 address of the first forward dev will be used.

Video devices

A video device.

  ...
  <devices>
    <video>
      <model type='vga' vram='8192' heads='1'>
        <acceleration accel3d='yes' accel2d='yes'/>
      </model>
    </video>
  </devices>
  ...
video
The video element is the container for describing video devices. For backwards compatibility, if no video is set but there is a graphics element in the domain XML, then libvirt will add a default video according to the guest type. For a guest of type "kvm", the default video is: type with value "cirrus", vram with value "9216", and heads with value "1". By default, the first video device in the domain XML is the primary one, but the optional attribute primary (since 1.0.2) with value 'yes' can be used to mark the primary in cases of multiple video devices. The non-primary devices must be of type "qxl". The optional attribute ram (since 1.0.2) is allowed for "qxl" type only and specifies the size of the primary bar, while vram specifies the secondary bar size. If "ram" or "vram" are not supplied a default value is used.
model
The model element has a mandatory type attribute which takes the value "vga", "cirrus", "vmvga", "xen", "vbox", or "qxl" (since 0.8.6) depending on the hypervisor features available. You can also provide the amount of video memory in kibibytes (blocks of 1024 bytes) using vram and the number of screens with heads.
acceleration
Configure whether acceleration should be enabled (if supported) using the accel3d and accel2d attributes in the acceleration element.
address
The optional address sub-element can be used to tie the video device to a particular PCI slot.
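
As a sketch of a multiple video device configuration (the ram, vram and heads values are illustrative, and primary is shown here as an attribute of the model element), the primary device is marked explicitly while the additional device uses the qxl model:

  ...
  <devices>
    <video>
      <model type='qxl' ram='65536' vram='65536' heads='1' primary='yes'/>
    </video>
    <video>
      <model type='qxl' ram='65536' vram='65536' heads='1'/>
    </video>
  </devices>
  ...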

Consoles, serial, parallel & channel devices

A character device provides a way to interact with the virtual machine. Paravirtualized consoles, serial ports, parallel ports and channels are all classed as character devices and so represented using the same syntax.

  ...
  <devices>
    <parallel type='pty'>
      <source path='/dev/pts/2'/>
      <target port='0'/>
    </parallel>
    <serial type='pty'>
      <source path='/dev/pts/3'/>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <source path='/dev/pts/4'/>
      <target port='0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/tmp/guestfwd'/>
      <target type='guestfwd' address='10.0.2.1' port='4600'/>
    </channel>
  </devices>
  ...

In each of these directives, the top-level element name (parallel, serial, console, channel) describes how the device is presented to the guest. The guest interface is configured by the target element.

The interface presented to the host is given in the type attribute of the top-level element. The host interface is configured by the source element.

The source element may contain an optional seclabel to override the way that labelling is done on the socket path. If this element is not present, the security label is inherited from the per-domain setting.

Each character device element has an optional sub-element <address> which can tie the device to a particular controller or PCI slot.

Guest interface

A character device presents itself to the guest as one of the following types.

Parallel port
  ...
  <devices>
    <parallel type='pty'>
      <source path='/dev/pts/2'/>
      <target port='0'/>
    </parallel>
  </devices>
  ...

target can have a port attribute, which specifies the port number. Ports are numbered starting from 0. There are usually 0, 1 or 2 parallel ports.

Serial port
  ...
  <devices>
    <serial type='pty'>
      <source path='/dev/pts/3'/>
      <target port='0'/>
    </serial>
  </devices>
  ...

target can have a port attribute, which specifies the port number. Ports are numbered starting from 0. There are usually 0, 1 or 2 serial ports. There is also an optional type attribute since 1.0.2 which has two choices for its value, one is isa-serial, the other is usb-serial. If type is missing, isa-serial will be used by default. For usb-serial an optional sub-element <address> with type='usb' can tie the device to a particular controller, documented above.
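
For example, a minimal sketch of a usb-serial port tied to a particular USB controller (the bus and port values are assumptions chosen for the example):

  ...
  <devices>
    <serial type='pty'>
      <target type='usb-serial' port='0'/>
      <address type='usb' bus='0' port='2'/>
    </serial>
  </devices>
  ...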

Console

The console element is used to represent interactive consoles. Depending on the type of guest in use, the consoles might be paravirtualized devices, or they might be a clone of a serial device, according to the following rules:

A virtio console device is exposed in the guest as /dev/hvc[0-7] (for more information, see http://fedoraproject.org/wiki/Features/VirtioSerial) Since 0.8.3

  ...
  <devices>
    <console type='pty'>
      <source path='/dev/pts/4'/>
      <target port='0'/>
    </console>

    <!-- KVM virtio console -->
    <console type='pty'>
      <source path='/dev/pts/5'/>
      <target type='virtio' port='0'/>
    </console>
  </devices>
  ...
  ...
  <devices>
    <!-- KVM s390 sclp console -->
    <console type='pty'>
      <source path='/dev/pts/1'/>
      <target type='sclp' port='0'/>
    </console>
  </devices>
  ...

If the console is presented as a serial port, the target element has the same attributes as for a serial port. There is usually only 1 console.

Channel

This represents a private communication channel between the host and the guest.

  ...
  <devices>
    <channel type='unix'>
      <source mode='bind' path='/tmp/guestfwd'/>
      <target type='guestfwd' address='10.0.2.1' port='4600'/>
    </channel>

    <!-- KVM virtio channel -->
    <channel type='pty'>
      <target type='virtio' name='arbitrary.virtio.serial.port.name'/>
    </channel>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/f16x86_64.agent'/>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
    </channel>
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0'/>
    </channel>
  </devices>
  ...

This can be implemented in a variety of ways. The specific type of channel is given in the type attribute of the target element. Different channel types have different target attributes.

guestfwd
TCP traffic sent by the guest to a given IP address and port is forwarded to the channel device on the host. The target element must have address and port attributes. Since 0.7.3
virtio
Paravirtualized virtio channel. The channel is exposed in the guest under /dev/vport*, and if the optional element name is specified, /dev/virtio-ports/$name (for more info, please see http://fedoraproject.org/wiki/Features/VirtioSerial). The optional element address can tie the channel to a particular type='virtio-serial' controller, documented above. With qemu, if name is "org.qemu.guest_agent.0", then libvirt can interact with a guest agent installed in the guest, for actions such as guest shutdown or file system quiescing. Since 0.7.7, guest agent interaction since 0.9.10 Moreover, since 1.0.6 it is possible to have the source path auto-generated for virtio unix channels. This is very useful in the case of a qemu guest agent, where users don't usually care about the source path since it is libvirt that talks to the guest agent. In case users want to utilize this feature, they should leave the <source> element out (see the sketch after this list).
spicevmc
Paravirtualized SPICE channel. The domain must also have a SPICE server as a graphics device, at which point the host piggy-backs messages across the main channel. The target element must be present, with attribute type='virtio'; an optional attribute name controls how the guest will have access to the channel, and defaults to name='com.redhat.spice.0'. The optional address element can tie the channel to a particular type='virtio-serial' controller. Since 0.8.8
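
To illustrate the auto-generated source path mentioned for virtio channels above, a QEMU guest agent channel can simply omit the <source> element (a minimal sketch):

  ...
  <devices>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
    </channel>
  </devices>
  ...
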
Host interface

A character device presents itself to the host as one of the following types.

Domain logfile

This disables all input on the character device, and sends output into the virtual machine's logfile

  ...
  <devices>
    <console type='stdio'>
      <target port='1'/>
    </console>
  </devices>
  ...
Device logfile

A file is opened and all data sent to the character device is written to the file.

  ...
  <devices>
    <serial type="file">
      <source path="/var/log/vm/vm-serial.log"/>
      <target port="1"/>
    </serial>
  </devices>
  ...
Virtual console

Connects the character device to the graphical framebuffer in a virtual console. This is typically accessed via a special hotkey sequence such as "ctrl+alt+3"

  ...
  <devices>
    <serial type='vc'>
      <target port="1"/>
    </serial>
  </devices>
  ...
Null device

Connects the character device to the void. No data is ever provided to the input. All data written is discarded.

  ...
  <devices>
    <serial type='null'>
      <target port="1"/>
    </serial>
  </devices>
  ...
Pseudo TTY

A Pseudo TTY is allocated using /dev/ptmx. A suitable client such as 'virsh console' can connect to interact with the serial port locally.

  ...
  <devices>
    <serial type="pty">
      <source path="/dev/pts/3"/>
      <target port="1"/>
    </serial>
  </devices>
  ...

NB, as a special case, if <console type='pty'> is used, then the TTY path is also duplicated as an attribute tty='/dev/pts/3' on the top level <console> tag. This provides compatibility with existing syntax for <console> tags.
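
As a sketch of what the resulting live XML may look like in that case (the pty path is an example value):

  ...
  <devices>
    <console type='pty' tty='/dev/pts/3'>
      <source path='/dev/pts/3'/>
      <target port='0'/>
    </console>
  </devices>
  ...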

Host device proxy

The character device is passed through to the underlying physical character device. The device types must match, eg the emulated serial port should only be connected to a host serial port - don't connect a serial port to a parallel port.

  ...
  <devices>
    <serial type="dev">
      <source path="/dev/ttyS0"/>
      <target port="1"/>
    </serial>
  </devices>
  ...
Named pipe

The character device writes output to a named pipe. See pipe(7) for more info.

  ...
  <devices>
    <serial type="pipe">
      <source path="/tmp/mypipe"/>
      <target port="1"/>
    </serial>
  </devices>
  ...
TCP client/server

The character device acts as a TCP client connecting to a remote server.

  ...
  <devices>
    <serial type="tcp">
      <source mode="connect" host="0.0.0.0" service="2445"/>
      <protocol type="raw"/>
      <target port="1"/>
    </serial>
  </devices>
   ...

Or as a TCP server waiting for a client connection.

  ...
  <devices>
    <serial type="tcp">
      <source mode="bind" host="127.0.0.1" service="2445"/>
      <protocol type="raw"/>
      <target port="1"/>
    </serial>
  </devices>
  ...

Alternatively you can use telnet instead of raw TCP. Since 0.8.5 you can also use telnets (secure telnet) and tls.

  ...
  <devices>
    <serial type="tcp">
      <source mode="connect" host="0.0.0.0" service="2445"/>
      <protocol type="telnet"/>
      <target port="1"/>
    </serial>
    ...
    <serial type="tcp">
      <source mode="bind" host="127.0.0.1" service="2445"/>
      <protocol type="telnet"/>
      <target port="1"/>
    </serial>
  </devices>
  ...
UDP network console

The character device acts as a UDP netconsole service, sending and receiving packets. This is a lossy service.

  ...
  <devices>
    <serial type="udp">
      <source mode="bind" host="0.0.0.0" service="2445"/>
      <source mode="connect" host="0.0.0.0" service="2445"/>
      <target port="1"/>
    </serial>
  </devices>
  ...
UNIX domain socket client/server

The character device acts as a UNIX domain socket server, accepting connections from local clients.

  ...
  <devices>
    <serial type="unix">
      <source mode="bind" path="/tmp/foo"/>
      <target port="1"/>
    </serial>
  </devices>
  ...
Spice channel

The character device is accessible through a SPICE connection under the channel name specified in the channel attribute. Since 1.2.2

  ...
  <devices>
    <serial type="spiceport">
      <source channel="org.qemu.console.serial.0"/>
      <target port="1"/>
    </serial>
  </devices>
  ...
Nmdm device

The nmdm device driver, available on FreeBSD, provides two tty devices connected together by a virtual null modem cable. Since 1.2.4

  ...
  <devices>
    <serial type="nmdm">
      <source master="/dev/nmdm0A" slave="/dev/nmdm0B"/>
    </serial>
  </devices>
  ...

The source element has these attributes:

master
Master device of the pair, that is passed to the hypervisor.
slave
Slave device of the pair, that is passed to the clients for connection to the guest console.

Sound devices

A virtual sound card can be attached to the host via the sound element. Since 0.4.3

  ...
  <devices>
    <sound model='es1370'/>
  </devices>
  ...
sound
The sound element has one mandatory attribute, model, which specifies what real sound device is emulated. Valid values are specific to the underlying hypervisor, though typical choices are 'es1370', 'sb16', 'ac97', and 'ich6' ( 'ac97' only since 0.6.0, 'ich6' only since 0.8.8)

Since 0.9.13, a sound element with ich6 model can have optional sub-elements <codec> to attach various audio codecs to the audio device. If not specified, a default codec will be attached to allow playback and recording. Valid values are 'duplex' (advertise a line-in and a line-out) and 'micro' (advertise a speaker and a microphone).

  ...
  <devices>
    <sound model='ich6'>
      <codec type='micro'/>
    </sound>
  </devices>
  ...

Each sound element has an optional sub-element <address> which can tie the device to a particular PCI slot, documented above.

Watchdog device

A virtual hardware watchdog device can be added to the guest via the watchdog element. Since 0.7.3, QEMU and KVM only

The watchdog device requires an additional driver and management daemon in the guest. Just enabling the watchdog in the libvirt configuration does not do anything useful on its own.

Currently libvirt does not support notification when the watchdog fires. This feature is planned for a future version of libvirt.

  ...
  <devices>
    <watchdog model='i6300esb'/>
  </devices>
  ...
  ...
  <devices>
    <watchdog model='i6300esb' action='poweroff'/>
  </devices>
</domain>
model

The required model attribute specifies what real watchdog device is emulated. Valid values are specific to the underlying hypervisor.

QEMU and KVM support:

  • 'i6300esb' — the recommended device, emulating a PCI Intel 6300ESB
  • 'ib700' — emulating an ISA iBase IB700
action

The optional action attribute describes what action to take when the watchdog expires. Valid values are specific to the underlying hypervisor.

QEMU and KVM support:

  • 'reset' — default, forcefully reset the guest
  • 'shutdown' — gracefully shutdown the guest (not recommended)
  • 'poweroff' — forcefully power off the guest
  • 'pause' — pause the guest
  • 'none' — do nothing
  • 'dump' — automatically dump the guest Since 0.8.7

Note 1: the 'shutdown' action requires that the guest is responsive to ACPI signals. In the sort of situations where the watchdog has expired, guests are usually unable to respond to ACPI signals. Therefore using 'shutdown' is not recommended.

Note 2: the directory to save dump files can be configured by auto_dump_path in file /etc/libvirt/qemu.conf.

Memory balloon device

A virtual memory balloon device is added to all Xen and KVM/QEMU guests. It will be seen as a memballoon element. It will be automatically added when appropriate, so there is no need to explicitly add this element in the guest XML unless a specific PCI slot needs to be assigned. Since 0.8.3, Xen, QEMU and KVM only. Additionally, since 0.8.4, if the memballoon device needs to be explicitly disabled, model='none' may be used.

Example: automatically added device with KVM

  ...
  <devices>
    <memballoon model='virtio'/>
  </devices>
  ...

Example: manually added device with static PCI slot 2 requested

  ...
  <devices>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
      <stats period='10'/>
    </memballoon>
  </devices>
</domain>
model

The required model attribute specifies what type of balloon device is provided. Valid values are specific to the virtualization platform:

  • 'virtio' — default with QEMU/KVM
  • 'xen' — default with Xen
period

The optional period allows the QEMU virtio memory balloon driver to provide statistics through the virsh dommemstat [domain] command. By default, collection is not enabled. To enable it, use the virsh dommemstat [domain] --period [number] command or the virsh edit command to add the option to the XML definition. virsh dommemstat will accept the options --live, --current, or --config. If an option is not provided, the change for a running domain will only be made to the active guest. If the QEMU driver is not at the right revision, the attempt to set the period will fail. Since 1.1.1, requires QEMU 1.5

Random number generator device

The virtual random number generator device allows the host to pass through entropy to guest operating systems. Since 1.0.3

Example: usage of the RNG device:

  ...
  <devices>
    <rng model='virtio'>
      <rate period="2000" bytes="1234"/>
      <backend model='random'>/dev/random</backend>
      <!-- OR -->
      <backend model='egd' type='udp'>
        <source mode='bind' service='1234'/>
        <source mode='connect' host='1.2.3.4' service='1234'/>
      </backend>
    </rng>
  </devices>
  ...
model

The required model attribute specifies what type of RNG device is provided. Valid values are specific to the virtualization platform:

  • 'virtio' — supported by qemu and virtio-rng kernel module
rate

The optional rate element allows limiting the rate at which entropy can be consumed from the source. The mandatory attribute bytes specifies how many bytes are permitted to be consumed per period. An optional period attribute specifies the duration of a period in milliseconds; if omitted, the period is taken as 1000 milliseconds (1 second). Since 1.0.4

backend

The backend element specifies the source of entropy to be used for the domain. The source model is configured using the model attribute. Supported source models are:

  • 'random' — /dev/random (default) or /dev/hwrng device as source (for now, no other sources are permitted)
  • 'egd' — an EGD protocol backend
backend model='random'

This backend type expects a non-blocking character device as input. The only accepted paths are /dev/random and /dev/hwrng. The file name is specified as contents of the backend element. When no file name is specified the hypervisor default is used.
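
For example (a minimal sketch), to use the host's hardware RNG device rather than /dev/random:

  ...
  <devices>
    <rng model='virtio'>
      <backend model='random'>/dev/hwrng</backend>
    </rng>
  </devices>
  ...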

backend model='egd'

This backend connects to a source using the EGD protocol. The source is specified as a character device. Refer to character device host interface for more information.

TPM device

The TPM device enables a QEMU guest to have access to TPM functionality.

The TPM passthrough device type provides access to the host's TPM for one QEMU guest. No other software may be using the TPM device, typically /dev/tpm0, at the time the QEMU guest is started. 'passthrough' since 1.0.5

Example: usage of the TPM passthrough device

  ...
  <devices>
    <tpm model='tpm-tis'>
      <backend type='passthrough'>
        <device path='/dev/tpm0'/>
      </backend>
    </tpm>
  </devices>
  ...
model

The model attribute specifies what device model QEMU provides to the guest. If no model name is provided, tpm-tis will automatically be chosen.

backend

The backend element specifies the type of TPM device. The following types are supported:

  • 'passthrough' — use the host's TPM device.
backend type='passthrough'

This backend type requires exclusive access to a TPM device on the host. An example for such a device is /dev/tpm0. The file name is specified as the path attribute of the device element. If no file name is specified then /dev/tpm0 is automatically used.

NVRAM device

An nvram device is always added to pSeries guests on PPC64, and its address is allowed to be changed. The element nvram (only valid for pSeries guests, since 1.0.5) is provided to enable the address setting.

Example: usage of NVRAM configuration

  ...
  <devices>
    <nvram>
      <address type='spapr-vio' reg='0x3000'/>
    </nvram>
  </devices>
  ...
spapr-vio

VIO device address type, only valid for PPC64.

reg

Device address

panic device

The panic device enables libvirt to receive a panic notification from a QEMU guest. Since 1.2.1, QEMU and KVM only

Example: usage of panic configuration

  ...
  <devices>
    <panic>
      <address type='isa' iobase='0x505'/>
    </panic>
  </devices>
  ...
address

The address of the panic device. The default ioport is 0x505. Most users don't need to specify an address.

Security label

The seclabel element allows control over the operation of the security drivers. There are three basic modes of operation, 'dynamic' where libvirt automatically generates a unique security label, 'static' where the application/administrator chooses the labels, or 'none' where confinement is disabled. With dynamic label generation, libvirt will always automatically relabel any resources associated with the virtual machine. With static label assignment, by default, the administrator or application must ensure labels are set correctly on any resources, however, automatic relabeling can be enabled if desired. 'dynamic' since 0.6.1, 'static' since 0.6.2, and 'none' since 0.9.10.

If more than one security driver is used by libvirt, multiple seclabel tags can be used, one for each driver, and the security driver referenced by each tag can be defined using the attribute model.

Valid input XML configurations for the top-level security label are:

  <seclabel type='dynamic' model='selinux'/>

  <seclabel type='dynamic' model='selinux'>
    <baselabel>system_u:system_r:my_svirt_t:s0</baselabel>
  </seclabel>

  <seclabel type='static' model='selinux' relabel='no'>
    <label>system_u:system_r:svirt_t:s0:c392,c662</label>
  </seclabel>

  <seclabel type='static' model='selinux' relabel='yes'>
    <label>system_u:system_r:svirt_t:s0:c392,c662</label>
  </seclabel>

  <seclabel type='none'/>
    

If no 'type' attribute is provided in the input XML, then the security driver default setting will be used, which may be either 'none' or 'dynamic'. If a 'baselabel' is set but no 'type' is set, then the type is presumed to be 'dynamic'

When viewing the XML for a running guest with automatic resource relabeling active, an additional XML element, imagelabel, will be included. This is an output-only element, so will be ignored in user supplied XML documents

type
Either static, dynamic or none to determine whether libvirt automatically generates a unique security label or not.
model
A valid security model name, matching the currently activated security model
relabel
Either yes or no. This must always be yes if dynamic label assignment is used. With static label assignment it will default to no.
label
If static labelling is used, this must specify the full security label to assign to the virtual domain. The format of the content depends on the security driver in use:
  • SELinux: a SELinux context.
  • AppArmor: an AppArmor profile.
  • DAC: owner and group separated by colon. They can be defined either as user/group names or as uid/gid. The driver will first try to parse these values as names, but a leading plus sign can be used to force the driver to parse them as uid or gid.
baselabel
If dynamic labelling is used, this can optionally be used to specify the base security label that will be used to generate the actual label. The format of the content depends on the security driver in use. The SELinux driver uses only the type field of the baselabel in the generated label. Other fields are inherited from the parent process when using SELinux baselabels. (The example above demonstrates the use of my_svirt_t as the value for the type field.)
imagelabel
This is an output only element, which shows the security label used on resources associated with the virtual domain. The format of the content depends on the security driver in use

When relabeling is in effect, it is also possible to fine-tune the labeling done for specific source file names, by either disabling the labeling (useful if the file lives on NFS or other file system that lacks security labeling) or requesting an alternate label (useful when a management application creates a special label to allow sharing of some, but not all, resources between domains), since 0.9.9. When a seclabel element is attached to a specific path rather than the top-level domain assignment, only the attribute relabel or the sub-element label are supported. Additionally, since 1.1.2, an output-only element labelskip will be present for active domains on disks where labeling was skipped due to the image being on a file system that lacks security labeling.
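
As an illustrative sketch only (the disk definition and image path are assumptions, and the exact placement of the per-path seclabel should be checked against the documentation for the device in question), disabling relabeling for a disk image stored on NFS might look like attaching a seclabel to the disk's source:

  ...
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/nfs/images/guest.img'>
        <seclabel relabel='no'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
  ...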

Example configs

Example configurations for each driver are provided on the driver-specific pages listed below