Xen 4 on Debian Squeeze

Xen on Testing/Squeeze and on Unstable/Sid as Dom0, to create a multitude of DomUs



aptitude -P install xen-hypervisor-4.0-amd64 linux-image-xen-amd64


aptitude -P install xen-hypervisor-4.0-i386 linux-image-xen-686

To get Xen HVM support, install the qemu device model package (see the Xen 4.0 wiki):

apt-get install xen-qemu-dm-4.0

This step is mandatory; if you skip it, you will get error messages like this:

WARNING!  Can't find hypervisor information in sysfs!
Error: Unable to connect to xend: No such file or directory. Is xend running?

Debian Squeeze and Sid use Grub 2, and the defaults are wrong for Xen. The Xen hypervisor (and not just a Xen-ready kernel!) should be the first entry, so do this:

mv -i /etc/grub.d/10_linux /etc/grub.d/50_linux

Then, disable the OS prober, so that you don't get boot entries for each virtual machine you install on a volume group. Note that if you run a multi-boot machine with, for example, Windows, this will also remove its entries, which might not be what you want.
echo "" >> /etc/default/grub
echo "# Disable OS prober to prevent virtual machines on logical volumes from appearing in the boot menu." >> /etc/default/grub
echo "GRUB_DISABLE_OS_PROBER=true" >> /etc/default/grub
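The appends above run unconditionally, so running them twice duplicates the setting. A slightly more careful sketch only appends when the option is missing; it is written against a temporary file here so it is safe to try anywhere (point CONF at /etc/default/grub on a real system, and run update-grub afterwards to regenerate the menu):

```shell
# Idempotent sketch: append GRUB_DISABLE_OS_PROBER=true only if absent.
# CONF is a temporary file here for safety; use /etc/default/grub for real.
CONF=$(mktemp)
for run in 1 2; do    # the second pass shows nothing gets duplicated
    if ! grep -q '^GRUB_DISABLE_OS_PROBER=' "$CONF"; then
        {
            echo ""
            echo "# Disable OS prober so VMs on logical volumes stay out of the boot menu."
            echo "GRUB_DISABLE_OS_PROBER=true"
        } >> "$CONF"
    fi
done
COUNT=$(grep -c '^GRUB_DISABLE_OS_PROBER=true' "$CONF")
echo "$COUNT"    # prints 1, not 2
rm -f "$CONF"
# on the real file, finish with: update-grub
```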

By default, Xen tries to save the state of the VMs on host shutdown. This sometimes causes problems, and it is also cleaner to simply have the VMs shut down when the host shuts down. If you prefer that, set the appropriate parameters in /etc/default/xendomains so the VMs get shut down normally.
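A sketch of the relevant settings in /etc/default/xendomains; these two variables are the usual knobs, shown with the values that disable save/restore (verify the names against the comments in your own copy of the file):

```
# /etc/default/xendomains
# Don't save domain state on host shutdown, and don't try to restore any:
XENDOMAINS_SAVE=""
XENDOMAINS_RESTORE=false
```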


In /etc/xen/xend-config.sxp, enable the network bridge by uncommenting the line already provided for it. (You may check the XenNetworking page in the Xen wiki.)

(network-script 'network-bridge antispoof=yes')

The antispoof=yes option activates Xen's firewall to prevent a VM from using an IP address it is not allowed to use (for example, if a domU took over the gateway's IP, it could seriously break your network; this prevents that). With antispoof enabled, you will need to specify the IP of each domU in the vif statement of its configuration.
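For example, a vif line in a domU config file pinning the guest to one address might look like this (the IP address and bridge name are placeholders, not values from this page):

```
# In the domU's .cfg file: with antispoof, list the address the guest may use.
vif = ['ip=192.168.1.2, bridge=eth0']
```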

This config file also has options to set the memory and CPU usage of your dom0, which you may want to change.

If you want, you can also use xen-tools for setting up a domU (install it with "aptitude install xen-tools"). Note that the dtc-xen package offers the same kind of functionality as xen-tools (e.g. easy setup of VMs). You can use dtc-xen for that alone if you disable its SOAP daemon (update-rc.d -f dtc-xen remove). DTC-Xen also offers installation of CentOS VMs using yum, which might be handy as well.

Then, to configure xen-tools, edit /etc/xen-tools/xen-tools.conf, which contains the default values that the xen-create-image script will use. Here are some real-life examples of parameters that may need to be changed:

# Virtual machine disks are created as logical volumes in volume group 'universe' (hint: LVM storage is much faster than file)
lvm = universe

install-method = debootstrap

size   = 50Gb     # Disk image size.
memory = 512Mb    # Memory size.
swap   = 2Gb      # Swap size.
fs     = ext3     # Use the EXT3 filesystem for the disk image.
dist   = `xt-guess-suite-and-mirror --suite` # Default distribution to install.

# Default gateway and netmask for new VMs
gateway    = x.x.x.x
netmask    =

# When creating an image, interactively setup root password
passwd = 1

# Prevents new VMs using some generic mirror, but actually uses the one from the Dom0.
mirror = `xt-guess-suite-and-mirror --mirror`

mirror_maverick = http://nl.archive.ubuntu.com/ubuntu/

# Ext3 gets some unusual options by default, like noatime. If you want to change that, set this to 'defaults':
ext3_options     = defaults

# Let xen-create-image use pygrub, so that the grub from the VM is used, which means you no longer need to store kernels outside the VMs. Keeps things very flexible.
pygrub = 1

Now you should reboot. After that, you can create virtual machines with this command:

xen-create-image --hostname <hostname> --ip <ip> --scsi --vcpus 2 --pygrub --dist <lenny|maverick|whatever>

The --scsi option makes sure the VM uses normal SCSI disk names like sda. When creating a Ubuntu Maverick image, for instance, it won't boot without this option, because the default is xvda. xvda is used to make it clear that it is a virtualized disk, but a non-Xen kernel, like a stock pv_ops one in Ubuntu, doesn't know what those are (see the notes below about the xen-blkfront driver for this, though). You can also set scsi=1 in /etc/xen-tools/xen-tools.conf to make this the default.

Once the guest is installed simply boot it using:

xm create -c xm-debian.cfg
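Once a guest is running, the usual xm subcommands manage it (the hostname placeholder is whatever you passed to xen-create-image):

```
xm list                  # show running domains
xm console <hostname>    # attach to the guest's console (Ctrl-] detaches)
xm shutdown <hostname>   # clean shutdown; 'xm destroy' is the hard power-off
```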


Kernel versions

The new 2.6.32 kernel images have paravirt_ops-based Xen dom0 and domU support. When you create an image for Ubuntu Maverick, whose kernel includes pv_ops, it will therefore not use a Xen kernel but the stock Ubuntu one, as that is capable of running on the Xen hypervisor.

For those who want to test the 2.6.32 kernel domU on an earlier dom0, you have to make sure that the xen-blkfront domU driver is loaded, and can find the root and other disk partitions. This is no longer the case if you still use the deprecated hda* or sda* device names in domU .cfg files. Switch to xvda* devices, which also work with 2.6.18 and 2.6.26 dom0 kernels.
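In a domU .cfg file the switch looks like this (the volume group name 'universe' is reused from the xen-tools example above; adjust to your setup):

```
# Deprecated device names (hda*/sda*) can leave xen-blkfront without a root disk:
#   disk = ['phy:/dev/universe/guest-disk,sda1,w']
# Use xvda* instead; this also works with 2.6.18 and 2.6.26 dom0 kernels:
disk = ['phy:/dev/universe/guest-disk,xvda1,w']
root = '/dev/xvda1 ro'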

There are also these backward-compatible options:


Use DebianLenny's 2.6.26, which has forward-ported Xen 2.6.18 dom0 kernel code

Use custom 2.6.30 kernels with forward-ported Xen 2.6.18 dom0 kernel code, see

Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-ShareAlike 3.0 License