Automating Xen Virtual Machine Deployment


by Kris Buytaert

Initially published October 2005 for Linux-Kongress, Hamburg; updated March 2006 for the UKUUG LISA Conference 2006, Durham, UK

Table of Contents
Abstract
Why Virtualisation Matters
Bootstrapping an Infrastructure, the Hybrid way
Xen
Why Regular installations don't work
Deploying Virtual Machines
Automating installs with Xen
Alternatives
Conclusions
About the Author
References

Abstract

While consolidating physical to virtual machines using Xen, we want to be able to deploy and manage virtual machines in the same way we manage and deploy physical machines. For operators and support people there should be no difference between virtual and physical installations.

Integrating virtual machines with the rest of the infrastructure should have a low impact on the existing infrastructure. Typically, virtual machine vendors have their own tools to deploy and manage virtual machines. Apart from the vendor lock-in to that specific virtual machine platform, it requires the administrators to learn yet another platform that they need to understand and manage, something we want to prevent.

This paper discusses how we integrated SystemImager with Xen, hence creating a totally open source deployment framework for the popular open source virtual machine monitor. We will document the development of our tools and go into more depth on other infrastructure related issues when using Xen.

System imaging environments in combination with virtual machines can also be used to ensure safe production deployments. By saving your current production image before updating to your new production image, you have a highly reliable contingency mechanism. If the new production environment is found to be flawed, simply roll back to the last production image on the virtual machines with a simple update command!

Xen has become one of the most popular virtualisation platforms over the last year. Although it is not such a young project, it is now rapidly gaining acceptance in the corporate world as a valuable alternative to VMWare.


Why Virtualisation Matters

Over the past couple of months we ran into two different projects that initially seemed to require different approaches; eventually, however, we ended up with one solution for both. Both problems are related to the deployment of multiple machines, whether physical or virtual. Our first problem dealt with creating a test environment for large scale system installs; the second was how to deploy consolidated environments. The solutions to both problems eventually turned out to be very similar.

Implementing a Mass System Install environment is a challenging task for every system administrator and infrastructure architect.

The first challenge we had to deal with was creating a test platform for large scale system installs using SystemImager. Testing large scale installs requires a similar environment with a sufficiently large number of machines that are representative of the actual roll-out. In most organizations there is no budget for a test platform covering all the different types of machines that can be deployed.

In order to make testing of installs and upgrades of such an environment possible, whether it be a high performance cluster, a telco infrastructure or a desktop infrastructure, we started looking at a platform where we could easily deploy and redeploy multiple machines with little or no investment and with no interruption of the already existing infrastructure. Therefore we started integrating a frequently used system install tool, SystemImager, with a virtualisation platform. While we initially started using Qemu, it quickly became clear that we could get much further, much faster with Xen.

On the other hand we have virtual machine platforms such as VMWare, Xen, Qemu and others that are used more and more to consolidate multiple physical machines onto virtual machines, hence needing fewer physical machines.

As we are using Xen to consolidate multiple machines, we wanted to keep using the same environment we use for our physical machines to automate the installation of our virtual machines as well.

While we were consolidating some of the machines in a similar environment, we wanted to be able to deploy and manage a virtual machine in the same way we manage and deploy physical machines. For operators and support people there should be no difference between virtual and physical installations. But we went further: our infrastructure can even be rebuilt from scratch, or better, from CVS, which we will detail later.

During the process of developing both environments we also benefited from several useful side effects.

System imaging environments in combination with virtual machines can also be used to ensure safe production deployments. By saving your current production image before updating to your new production image, you have a highly reliable contingency mechanism. If the new production environment is found to be flawed, simply roll back to the last production image on the virtual machines with a simple update command!
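
With SystemImager such a roll-back can be a single update run against the previously saved image. The sketch below assumes an si_updateclient invocation with illustrative server and image names:

# roll a virtual machine back to the last known good production image
si_updateclient --server imageserver --image webserver-prod-20060301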

During our projects we initially used Qemu and later moved to Xen. We integrated SystemImager with both platforms, and the general approach stayed the same. We will document the Xen developments and go into more depth on other infrastructure related issues when using the Xen virtual machine monitor.

Apart from our specific needs regarding virtualisation, people use virtual machines in order to migrate services from different physical machines to different virtual machines, rather than to one big physical machine. They have a variety of reasons for doing so: most often the majority of their machines are just idling, and while disk, memory and CPU are cheap these days, electricity and floor space aren't. Often machines are idle for about 90% of the time, only doing some work at certain intervals, or they are simply over-scaled by default. With constant mergers and acquisitions, similar services in different departments often see low usage. Therefore people want to run as many services as possible on one physical machine. The reason for actually virtualising those services, and not just throwing them all together on one big fat application server, is most often security related. Users might be on different networks, or users of one department should under no circumstances be able to see data from other departments. Isolating their data on a different virtual machine saves on hardware but still gives them that comfortable feeling of security.

So rather than buying a new server for each service that is being added to the enterprise, people now reuse the same hardware over and over again while deploying those services on virtual machine instances. As less hardware has to be bought, more budget stays available for making the environment redundant.

As virtual machines can be relocated more easily to different physical machines, one can move a service to another server when its primary server has to be taken down for maintenance, hence avoiding actual service downtime. One can even dynamically re-balance the workload when the load on certain physical servers rises.

Integrating virtual machines with the rest of the infrastructure should have a low impact on the existing infrastructure. Some virtual machine vendors have their own tools to deploy and manage virtual machines. Apart from the vendor lock-in to that specific virtual machine platform, it requires the administrators to learn yet another platform that they need to understand and manage, something we wanted to prevent.


Bootstrapping an Infrastructure, the Hybrid way

Numerous articles and papers have already been written on the topic of automating large scale system installs; the most referred to is probably the infrastructures.org paper. Key items to remember from those papers are, first, that one should not look at a couple of machines as just a couple of machines but as a whole infrastructure, and second, that when administrating a number of machines one should try to automate as much as possible in order to have an environment that is consistent, easy to maintain and where one can reproduce the work that has been done to create the infrastructure without too much hassle. A machine that is part of an infrastructure should be able to pass "the 10th floor test", a term first used by Steve Traugott from www.infrastructures.org, which refers to the ability to take a random machine in your infrastructure, drop it from the 10th floor, and still be able to restore your infrastructure to a working state within 5-10 minutes.

When looking at bootstrapping an infrastructure, people opt either for taking or creating full images of an existing environment, or for "scripting" automated installations. Both have their advantages and disadvantages. When we look at the advantages of pure imaging, people are mostly concerned about the speed of an installation: copying an identical image to a system is faster than first copying packages with metadata to a system and then installing them. People argue that since less network bandwidth is used, installations are more economical, and when using multicast even more bandwidth is saved. Of course, in today's network economy bandwidth isn't that much of a real problem anymore. Pure imaging environments are mostly used with a really homogeneous hardware environment where all the hardware is identical; opponents of imaging claim that the moment you want to add different hardware your image is obsolete and you have to start maintaining different images. In an imaging environment people typically also don't do incremental upgrades, but rebuild the system from scratch. When images are being used, people also tend to take less interest in making clean packages of the software they install.

When installing, packaging software is one of the most important priorities, as for each file one needs to be able to identify which package it belongs to so that an upgrade path can be defined. This results in a much more fine-grained installation that also suits heterogeneous environments, both regarding different hardware platforms and regarding different services on different machines. The most mentioned argument for automating installations is probably the advanced methods for automated hardware detection, which means that one isn't hardware dependent anymore.

Of course, everything isn't black and white. Tools such as SystemImager take away a couple of the de facto issues with imaging by adding a tool such as SystemConfigurator after the copying of the image. Traditional images tended to be huge file-system images, whereas now a file-based tree is being transferred.

As both options have their benefits, a "new" trend is integrating both. Basic images are created (generated) from a defined list of packages. On a per machine or machine group basis, the difference between the base image and the required services is listed in a list of extra packages that will be installed after the installation of the image. This way one only needs to maintain one image and a limited list of packages, so there is no need to keep multiple images around, and one only needs to update the service specific package lists when upgrades arise, as typically basic required packages such as ssh, glib, the kernel etc. reside in the centrally managed base image. Thus one gets the benefits of imaging without having to worry about the issues of imaging, and for added modularity one gets the benefits of installing.

"SystemImager is software that automates Linux installs, software distribution, and production deployment. SystemImager makes it easy to do automated installs (clones), software distribution, content or data distribution, configuration changes, and operating system updates to your network of Linux machines. You can even update from one Linux release version to another! "

We opted for SystemImager because it can be used for different Linux distributions, whether based on RPM or DEB; SystemImager doesn't care about the differences. Had we chosen FAI or a Kickstart based platform, we would have been locked to a certain distribution. SystemImager gives us the freedom to migrate to a different distribution with no more effort than upgrading to a new image version.

However, we didn't stop at just using SystemImager; we built a hybrid environment around SystemImager that uses the best of both worlds, installing and imaging.

We create a basic image using mksiimage; that basic image is then used for all machines. After the installation of the image we have a machine specific set of packages that needs to be installed on top of this basic image. These packages are stored in a central repository, which can be an apt, yum or other repository. Other tools such as current from tigris.org also make this possible. The list of extra packages is maintained in CVS together with the extra overrides required for SystemImager. This way we can reproduce the whole configuration of a machine from three parts: the basic image, the list of extra packages that are installed from a repository, and the overrides.
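
As an illustration, the host specific part can be as small as a list of package names that a post-install step feeds to the package manager on the freshly imaged system. The sketch below follows the /etc/sis/extrapackages convention used in our scripts later in this paper; the package names are purely examples:

# /etc/sis/extrapackages is a plain list of extra packages, e.g.:
#   apache2
#   rsync
# after the image is in place, install everything on that list from the repository
for package in `cat /etc/sis/extrapackages`; do apt-get install --assume-yes $package; done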

Using a package repository also means that you create a package based upgrade path: when the need arises to upgrade a package, apt-get or yum will automatically also resolve the dependencies for your new package.

This way we took our first step into upgrades and change management integrated with the rest of the infrastructure.

Configuration management, on the other hand, typically tends to be done with Isconf, Cfengine or, more recently, similar tools such as Puppet. We are using Cfengine to centrally manage configuration files and redistribute them amongst groups of servers in a secure way.


Xen

Then came the virtual machines :). As mentioned before, one of the problems we had was testing the whole bootstrap procedure. We were sure the hardware part was working, but we needed a platform where we could test our work with the repositories, the configuration management environment and so on.

Initially we were using Qemu to accomplish this goal, but as our needs grew into both a higher level of automation and real production environments that needed to be consolidated we moved to Xen.

Xen is a virtual machine monitor for x86 that supports execution of multiple guest operating systems with unprecedented levels of performance and resource isolation. Xen is Open Source software, released under the terms of the GNU General Public License.

Xen has become one of the most popular virtualisation platforms during the last six months. Although it is not such a young project, it is now gaining acceptance in the corporate world as a valuable alternative to VMWare.

Adding Xen to your machine changes it from an ordinary x86 machine into a totally new platform. It's not an x86 anymore, it's a Xen architecture now. All the operating systems that you want to run on your machine won't work anymore if they only know about x86; they need to know about Xen. Of course the Xen and x86 architectures are really similar, so for the end user and the applications that run on a platform ported to Xen there is almost no difference.

When Xen is activated it will also need to boot its first virtual machine, called Domain0. Domain0 has more privileges than the other virtual machines and is typically only used for managing the other (less privileged) virtual machines. Domain0 is also responsible for managing the hardware. Porting a platform to Xen changes almost nothing in the drivers, which means that most drivers supported in traditional Linux kernels are also supported in Xen.

Within Domain0 the "xend" daemon handles the management of the virtual machines and can be controlled via the "xm" command line utility.
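
A few of the most commonly used xm subcommands give a feel for the day-to-day operation of a Xen host (the domain name is the one from the example configuration below):

xm list                 # show all running domains and their resources
xm create -c Subian-1A  # start the domain defined in /etc/xen/Subian-1A and attach to its console
xm console Subian-1A    # attach to the console of an already running domain
xm shutdown Subian-1A   # cleanly shut a domain down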

From there we can start to create other Virtual Machines aka Domains.

All of the Xen configuration details live in the /etc/xen/ directory. Per virtual machine a config file (in our case named after the hostname) is created that contains the specific configuration for that virtual machine.

An example config file looks like this:

Subian-Host:/etc/xen # cat Subian-1A
kernel = "/boot/vmlinuz-2.4.29-xen0"
memory = 64
name = "Subian-1A"
nics = 2
vif = ['ip = "172.16.33.160", bridge=xen-br0',
        'ip = "172.16.41.160", bridge=xen-br1']


disk = ['phy:vm_volumes/root-Subian-1A,sda1,w'
        ,'phy:vm_volumes/tmp-Subian-1A,sda3,w'
        ,'phy:vm_volumes/var-Subian-1A,sda4,w'
        ,'phy:vm_volumes/usrlocal-Subian-1A,sda5,w'
        ,'phy:vm_volumes/swap-Subian-1A,sda2,w'
        ]
root = "/dev/sda1 rw"
Here we define the kernel (yes, you can also use a domain0 kernel for other domains). We give the virtual machine instance 64MB of memory and a domain name. We define it to have two different network interfaces, each with an IP address and a dedicated bridge. We also define which LVM volumes (see later) we will export to the virtual machine, how the disks will be named in the virtual machine, and which one will be the root volume.

Booting such a virtual machine is as easy as running "xm create -c $filename" which will also directly connect you to the Virtual Machine's console where you can follow the output of the boot process.

There are two init scripts that are important and should be started at boot time:

/etc/init.d/xend 
/etc/init.d/xendomains
The xendomains script takes care of automatically booting all the domains listed in /etc/xen/auto/.
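
In practice /etc/xen/auto/ typically just contains symlinks to the config files of the domains that should come up at boot time, e.g.:

ln -s /etc/xen/Subian-1A /etc/xen/auto/Subian-1A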


Why Regular installations don't work

A typical Xen installation is done by copying an existing installation into a chrooted environment. One doesn't boot from an install CD when installing a virtual machine in Xen, as install CDs are all built for x86 based platforms and not for Xen.

There are different ways of installing a distribution into a chrooted environment. Debian has a tool called debootstrap, Fedora users can use

yum --installroot=/path/ -y groupinstall Base

Mandrake users can use

urpmi --root=/path basesystem ssh-server

and users of recent Suse distributions even have an option in Yast where they can choose to install a chroot specifically for Xen. All of these installations are mostly started from an up and running system.
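
For completeness, the Debian equivalent would look something like the following; the suite and mirror are just examples:

# bootstrap a minimal Debian system into a chroot (suite and mirror are illustrative)
debootstrap sarge /path/to/chroot http://ftp.debian.org/debian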

This copying of an environment into a chroot looked extremely similar to the way SystemImager does its installations. The only problem was to find a medium from which the virtual machine could boot at install time.

Regular automated installs depend on an existing infrastructure such as booting over network or another shared medium.

Apart from the fact that not all virtual machines support virtual network booting, we would also like to be able to fully automate the installation, hence requiring no manual intervention.

We would also like to integrate the bootstrapping of the physical machines hosting virtual machines with the installation of the virtual machines themselves.

While experimenting with Qemu we had built a SystemImager autoinstall CD that booted within a Qemu environment. The changes required to that kernel and initrd were minimal; however, building an ISO that supported Xen seemed to be a much bigger effort, so we chose another alternative.


Deploying Virtual Machines

When consolidating to or deploying virtual machines one runs into different types of issues:

From where does one boot? Does one boot at all while installing the environment?
Virtual sparse disks or real disks?
How do you copy or replicate disks?
What about different geometries or types of disks (SCSI vs IDE) in the real environment?

Initial work had been done to create a virtual environment with automated installs in Qemu; during our first tests with Qemu we had already created a SystemImager autoinstall CD with a customised kernel, one that could be booted from Qemu and that supported the hardware Qemu provided. This approach obviously doesn't scale, as it requires custom images to be made per environment (regarding image server configuration etc.), so we abandoned it quickly.

Let's take a step back and look at how SystemImager does an installation. SystemImager boots a kernel from the network and does the initial, minimal hardware detection in order to be able to connect to the network from Linux and to access the disk. It boots into Brian's Own Embedded Linux (BOEL) and then downloads an install script from the image server based on its hostname.

This script will start partitioning tools to create the required partition tables and appropriate file-systems. It mounts these newly created filesystems on /a/, after which it will use rsync to actually transfer the system image.

The final step is to run SystemConfigurator chrooted in /a/ to set up the machine specific configuration such as network, kernel and boot configuration. SystemImager introduced the concept of overrides: it assumes that you are installing X similar machines that might have different but similar configs, and with these overrides you can override the default configs with the host specific ones.
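
Condensed, and with illustrative device and image names, that standard flow boils down to something like this:

# sketch of the standard SystemImager autoinstall flow (device names are illustrative)
# 1. partition the disk and create the filesystems
mke2fs -j /dev/sda1
# 2. mount the target filesystem(s) under /a/
mount /dev/sda1 /a
# 3. transfer the image and the host specific overrides with rsync
rsync -av --numeric-ids $IMAGESERVER::$IMAGENAME/ /a/
rsync -av --numeric-ids $IMAGESERVER::overrides/$HOSTNAME/ /a/
# 4. run systemconfigurator chrooted to set up network, kernel and bootloader
chroot /a/ systemconfigurator --configsi
# 5. reboot into the freshly installed system
reboot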

The server will then reboot into its freshly installed operating system and will be ready for operation.

So with this in mind, do we really want to install our software on a loopback device created on a sparse file, residing on a huge filesystem next to several other virtual images, waiting to be flooded? We already know that copying these kinds of files will be slow and that there is no easy way to replicate, grow or shrink them. When thinking about these kinds of problems you realise LVM is one of the best ways to solve them. LVM gives you the opportunity to manage growing volumes over different physical disks. It gives you the opportunity to take snapshots of those volumes (easy for backups) and it also solves the issue of the maximum of 15 partitions on a SCSI disk.

When using multiple virtual machines on a machine, the problem is no longer how to manage 5 partitions on a disk: you are managing such a set of partitions for each virtual machine, and you need to be able to add virtual machines, and therefore partitions, on the fly. Thus you want to use LVM.
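
As an illustration of the snapshot feature mentioned above, backing up a virtual machine's root volume could look like the sketch below; the volume names follow the naming convention introduced in the next section and the sizes are illustrative:

# create a copy-on-write snapshot of a virtual machine's root volume
lvcreate --size 256M --snapshot --name root-Subian02-snap /dev/vm_volumes/root-Subian02
# mount it read-only, back it up, then throw the snapshot away
mount -o ro /dev/vm_volumes/root-Subian02-snap /mnt/snap
tar czf /backup/root-Subian02.tar.gz -C /mnt/snap .
umount /mnt/snap
lvremove -f /dev/vm_volumes/root-Subian02-snap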


Automating installs with Xen

Xen can use the physical disk/partitions to install the different virtual machines or use Shared Storage.

Since in this case you can easily run into the 15 partition limit, we are forced to use LVM. We use naming conventions to make clear which volume is supposed to be used for what; this also makes the setup scriptable. E.g. /dev/vm_volumes/root-Subian02 will be the root partition for the host named Subian02, and the names /dev/vm_volumes/swap-Subian02 and /dev/vm_volumes/usr-Subian02 should be self-explanatory regarding their usage.

We created tools that automatically create modified autoinstall scripts in order to integrate with LVM. We use the same images for physical machines and virtual machines. Only the kernels are Xen specific.

These tools base themselves on the virtual machine description files that are commonly located in /etc/xen/ in order to determine the partitioning schemas.
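
A minimal sketch of that idea, assuming the config files use the phy: disk syntax shown earlier, can extract the exported volumes with nothing more than grep and sed:

# list the LVM volumes a Xen config file exports (assumes the phy: syntax used above)
CONFIG=/etc/xen/Subian-1A
grep 'phy:' $CONFIG | sed -e 's/.*phy:\([^,]*\),.*/\1/'
# prints e.g. vm_volumes/root-Subian-1A, vm_volumes/tmp-Subian-1A, ...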

Initially we were using a two-phase virtual machine installation.

After a normal bootstrap of the Guest Operating System, grub is configured to

default 1
# The splash image (this line will be empty if nothing was found)

fallback 0


# kernel 0
title sis-2.4.27
        kernel (hd0,0)/vmlinuz-2.4.27-3 ro root=/dev/sda2 showopts ide=nodma apm=off acpi=off vga=normal
        initrd (hd0,0)/sc-initrd-2.4.27-3.gz

# kernel 1
title Xen-2.4.30
        kernel /boot/xen.gz dom0_mem=131072
        module /boot/vmlinuz-2.4.30-xen0 root=/dev/sda2 ro
        module /boot/initrd-2.4.30-xen0
boot kernel 1 (aka the Xen kernel) by default and the sis-2.4.27 kernel as fallback. As there is no initrd-2.4.30-xen0 yet, the Xen kernel won't boot, and thus we fall back to the 2.4.27-3 kernel.

The host's post-install scripts create a symlink in rc3.d to xenstrap. Xenstrap is the script that will trigger the installation of the actual virtual machines. It first makes sure that there are no known LVM volumes available anymore by scratching the LVM partition and reformatting it as a plain filesystem. The second step is to define /dev/sda9 as an LVM physical volume and add it to the vm_volumes volume group (virtual machine volumes).

echo y | mkreiserfs /dev/sda9
pvcreate /dev/sda9
vgcreate vm_volumes /dev/sda9
From there it loops the create_vhost script over the preprovisioned list of vhosts in /etc/xen/auto.
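
The loop itself can be as simple as the sketch below, assuming create_vhost is on the path and that the config files in /etc/xen/auto/ are named after the hosts, as in our setup:

# install every preprovisioned virtual host listed in /etc/xen/auto/
for cfg in /etc/xen/auto/*; do
    create_vhost $(basename $cfg)
done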

After the actual provisioning of the vhosts, the appropriate initrd is created so that the Xen kernel can boot correctly. In order to prevent this from happening again, the symlink to the xenstrap script is removed, and the machine is then rebooted.

As LVM is supported in more recent versions of SystemImager, we could move the creation of the LVM volumes into the initial bootstrap phase and skip the intermediate boot. This means that in the post-install sections we can now create the additional logical volumes and the accompanying virtual machines. Once these instances have been installed we can reboot the machine directly into a Xen enabled environment.

Upon rebooting, the machine thus comes up with a working Xen kernel and xendomains automatically boots the appropriate virtual machines.

The create_vhost script is actually almost identical to the master autoinstall scripts from SystemImager, apart from the fact that we replaced parted with lvcreate and that we use /vhosts/$VHOSTNAME rather than /a/ as the chrooted environment. Using and creating a per-$VHOSTNAME tree means that we can always do maintenance on a chrooted environment while that virtual machine is down. Scripts mount the appropriate partitions in their chrooted environment, so no confusion about which virtual machine is mounted where is possible. A shortened example of such a generated script is below:

#!/bin/bash
# create_vhost: create the LVM volumes for a virtual machine and install
# the base image plus the host specific extras into its chroot
VHOSTNAME=$1
IMAGESERVER=$IMAGESERVER
IMAGENAME=$VHOSTNAME
HOSTNAME=$1

# create and mount the root volume for this virtual machine
mkdir -p /vhosts/$VHOSTNAME
lvcreate -L768 -nroot-$VHOSTNAME vm_volumes
mke2fs -j /dev/vm_volumes/root-$VHOSTNAME
mount /dev/vm_volumes/root-$VHOSTNAME /vhosts/$VHOSTNAME

# create and mount the var volume
lvcreate -L512 -nvar-$VHOSTNAME vm_volumes
mke2fs  -j /dev/vm_volumes/var-$VHOSTNAME
mkdir -p /vhosts/$VHOSTNAME/var/log
mount /dev/vm_volumes/var-$VHOSTNAME /vhosts/$VHOSTNAME/var/log

# create the swap volume
lvcreate -L128 -nswap-$VHOSTNAME vm_volumes
mkswap /dev/vm_volumes/swap-$VHOSTNAME

# transfer the base image and the host specific overrides from the image server
rsync -av --exclude=lost+found/ --numeric-ids $IMAGESERVER::sis-BASE/ /vhosts/$VHOSTNAME/
rsync -av --numeric-ids $IMAGESERVER::overrides/$VHOSTNAME/ /vhosts/$VHOSTNAME/

# install the extra, host specific packages from the repository
extrapackages="/vhosts/$VHOSTNAME/etc/sis/extrapackages"
chroot /vhosts/$VHOSTNAME/ apt-get update
chroot /vhosts/$VHOSTNAME/ apt-get upgrade --assume-yes

for package in `cat $extrapackages`; do chroot /vhosts/$VHOSTNAME/ apt-get install $package --assume-yes; done

# run systemconfigurator chrooted to apply the host specific configuration
chroot /vhosts/$VHOSTNAME/ systemconfigurator --configsi --excludesto=/etc/systemimager/systemconfig.local.exclude --stdin <<EOL

[NETWORK]
HOSTNAME = $HOSTNAME
GATEWAY = $GATEWAY

EOL
umount /vhosts/$VHOSTNAME/var/log
umount /vhosts/$VHOSTNAME/
The above script takes a hostname as a parameter and creates the environment for a virtual machine. It can still be modified for different host types based on the partitioning information.

An image on a physical machine does not differ from an image in a virtual machine; only minor configuration details are different. We don't run hardware dependent daemons such as gpm, and the inittabs are also modified not to use ttyS0 as console. But those changes are minor and are also often required between different physical machines.

Overrides that might need changes are e.g. the inittab, if you are using a serial console on your physical machines that you don't have on your virtual machines. Specific network related configs might also need changes: VLAN configurations, for example, might require a lower MTU than you are used to in order to be able to tunnel the packets through your host machine, and you also want to have a close look at bonding configurations: do you want to bridge a bonded interface, or bond a bridged interface? There is, however, no need to modify the fstab compared to a real machine, as the mapping in the Xen configuration takes care of that. Package installation on a physical or virtual machine instance is also identical. We do, however, disable hardware detection in the virtual machines, and most of our servers tend not to run X.
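
As an illustration, the inittab override for a virtual machine can simply swap the serial console getty used on the physical hosts for a getty on the first virtual terminal; the exact getty lines below are illustrative:

# physical machines: getty on the serial console
# S0:12345:respawn:/sbin/agetty -L 9600 ttyS0 vt102
# virtual machines: the override ships a getty on tty1 instead
1:2345:respawn:/sbin/getty 38400 tty1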

As LVM was not supported in the SystemImager stable series we used to reboot the machine into an LVM aware kernel before we could create the LVM volumes.

Bootstrapping an extra virtual machine is as easy as running create_vhost HOSTNAME as root. The script will prepare the LVM partitions and download the appropriate packages. One can then either manually create the virtual machine or wait until the next reboot.
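
For a hypothetical new guest named Subian-2B that looks like this, assuming its config file already exists in /etc/xen/:

# provision the volumes and the chroot, then start the domain by hand
create_vhost Subian-2B
xm create -c /etc/xen/Subian-2B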

As mentioned before most of our installations are generated from scratch. This means that we build the image from packages using mksiimage from the SystemImager suite.

But we could also use the standard tools described above to create the chroots and use the same technique to create multiple identical virtual machines quickly.

We are in the process of generating the Xen config files, the create_vhost scripts and the fstab files from the same source template that mksiimage uses.

Of course, installing a golden client image is no different from installing a generated image. This means that consolidating a machine that is currently running on physical hardware to a virtual machine takes no more time than creating a SystemImager copy of it using the si_prepareclient and si_getimage tools and redeploying it on either a physical or virtual machine.
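
A minimal golden client capture, with illustrative host and image names, would look something like this:

# on the golden client: prepare the machine so the image server can pull an image from it
si_prepareclient --server imageserver
# on the image server: pull the image from the golden client
si_getimage --golden-client goldenhost --image webserver-image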


Alternatives

When looking at similar work regarding automated virtual machine deployments, we run into two types.

On one side we have the platform and distribution specific alternatives, such as using the preseeding environment in the Debian installer, or using FAI, which would have led us to a lock-in on Debian in this case. Other alternatives also tend to be very distro specific.

On the other side, as described in "Using the Xen Hypervisor to Supercharge OS Deployment", people can build on shared disks within an infrastructure, exporting them read-only to the other virtual machines. Enhancing copy-on-write or snapshotting functionality is indeed an interesting method for using less disk space compared to using separate volumes for each disk.

As we wanted to integrate our current infrastructure with our virtual machines, such an approach would mean that we would also have to use a network filesystem to share these disks among the physical machines.

In Enterprise IT environments it is also common practice to have a Test, Development, Staging and Production environment. These multiple environments are often on different versions of an operating system. When using a partly shared disk infrastructure this might lead to more complexity than actually wanted.

We feel that this approach is beneficial when deploying Virtual Machines in an isolated environment, the lack of integration with the already existing infrastructure however makes it yet another platform to maintain.


Conclusions

In this paper we showed the readers how to set up a virtual machine in the same way as the rest of their infrastructure, how to save on hardware and time and keep their infrastructure managed and sane.

We explained the concept of hybrid installations, a combination of the best of both imaging and installing, how to apply it to both physical and virtual machine deployments, and how to pass the "10th floor test" as a nice extra while using this structured approach.


About the Author

Kris Buytaert


References

Bootstrapping an Infrastructure, Steve Traugott (www.infrastructures.org)

Preseeding Debian GNU/Linux for automated installations, Philip Hands (hands.com), UKUUG Linux Technical Conference

FAI, the Fully Automated Installation, Thomas Lange

SystemImager Suite, Brian Finley et al.

Xen 3.0 and the Art of Virtualisation, Ian Pratt, Proceedings of the Ottawa Linux Symposium 2005

Using the Xen Hypervisor to Supercharge OS Deployment, Mike D. Day, Proceedings of the Ottawa Linux Symposium