Virtualisation is a hot buzzword in the IT industry right now, with major players including Microsoft and IBM making multi-million pound investments in the technology. In essence, the idea of virtualisation is that you allow one server, with one set of hardware, to masquerade as a number of separate servers with 'virtual hardware', each of which can run its own operating system and set of applications. As you might imagine, the details of this technology are somewhat complex and its potential uses are myriad, but we'll return to those points later. What you might find more surprising is that the history of this idea goes right back to the beginnings of modern computing - all that's happened is that it has become a hot topic again, boosted of course by the latest hardware.
The first mention of virtualisation has been traced back to a paper published in 1959 that set out to address the problem of a computer mounting peripheral devices at the same time as a programmer was working on the machine. The original concept, which emerged out of the scientific community, was called "time sharing". It was proposed as a means of getting the vastly expensive computers of the day to run more efficiently by allowing multiple users to input data simultaneously. The reason that this eventually morphed into the modern concept of virtualisation is that the processes set in motion by the various users had to be run separately from one another.
The first genuine time-sharing systems were built in 1961 at MIT and christened CTSS (Compatible Time Sharing System). Having set up the first instance of the emerging technology, academics at MIT and elsewhere began to put pressure on commercial vendors to produce improved versions for wider use. As a direct result of this pressure, IBM produced an experimental product called CP-40 - time-sharing software that made the first successful use of virtualised hardware. This evolved through a number of increasingly successful iterations until other companies started to muscle in on the technology, leading to the crowded marketplace that can be seen today.
What Is Virtualisation?
As noted above, the end goal of virtualisation is to have a single piece of hardware running a number of discrete instances of operating systems. To do this, a piece of software called a Virtual Machine Monitor, or hypervisor, is installed on the server.
Hypervisors come in two flavours. The first (Type 1) runs directly on the server hardware itself and provides the capability to run instances of operating systems above that. This is a three-layer model: the hardware forms the base layer, the hypervisor sits above that, and the operating systems occupy the top layer.
Type 2 hypervisors are designed to run within an initial operating system (OS) installed on the box, forming a four-layer model: the hardware, then the initial OS, then the hypervisor, followed by the virtualised operating systems. Type 1 is usually the preferred approach because, with one fewer layer between the hardware and the guest operating systems, it is the more efficient of the two. However, the overheads for developing and deploying Type 2 systems are lower, and so they are sometimes used in situations where cost factors outweigh the need for efficiency.
The number of different operating systems supported by a hypervisor varies. Some, such as industry leader VMware, will support almost any operating system. Others are limited to running instances of Linux and/or Solaris. Again, it is usual to find that the scope of supported OSs is dependent on the price of your virtualisation software. It's important to realise that a virtual machine can run a variety of different operating systems on a single server, so long as each OS is supported. You could, for example, have a single server running Windows, Red Hat Linux, Debian and Solaris, all as independent entities.
Approaches to Virtualisation
There are various broad methods adopted by virtualisation software to achieve the desired goal. Each has various advantages and disadvantages that need to be understood in order to pick the best solution. In reality only two of the paths taken toward virtualisation actually achieve the aim covered by this article: running multiple operating systems at near full speed on a single server.
The first of these is called 'native' or 'full' virtualisation. In this scenario the software attempts to simulate enough of the essential hardware of the server to enable multiple OSs to run, providing they are all designed to run on the same style of processor. Given the dominance of the Intel x86 chipset in the current market, this means that software is available that can run almost any of the commonly installed operating systems on a single server. The popular commercial package VMware was created on this basis.
Unfortunately for advocates of full virtualisation, the very thing that gives strength to this approach - the ubiquity of the x86 processor architecture - is also its downfall. The basis of this architecture has been around for over 20 years, and it was never designed to be virtualisation-friendly. Until very recently it still wasn't, meaning that you can have trouble trying to run full virtualisation packages on anything except the latest hardware. If you're worried, check with your hardware vendor that what you're ordering implements IVT (Intel Virtualization Technology, sometimes called Vanderpool) for Intel chips, or AMD-V (AMD Virtualization, sometimes called Pacifica) for AMD chips.
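If the machine is already running Linux, you can also check the processor flags yourself rather than relying on the vendor: Intel advertises hardware virtualisation support with the `vmx` flag in /proc/cpuinfo, and AMD with `svm`. The sketch below wraps that check in a small helper function of our own invention (`check_virt` is not part of any standard tool):

```shell
# check_virt: report hardware virtualisation support, given a cpuinfo flags line.
# (A hypothetical helper for illustration; 'vmx' and 'svm' are the real cpuinfo
# markers for Intel VT and AMD-V respectively.)
check_virt() {
  case " $1 " in
    *" vmx "*) echo "Intel VT (Vanderpool) supported" ;;
    *" svm "*) echo "AMD-V (Pacifica) supported" ;;
    *)         echo "no hardware virtualisation support" ;;
  esac
}

# On a live Linux machine you would feed it the real flags line:
#   check_virt "$(grep -m1 '^flags' /proc/cpuinfo)"
check_virt "fpu tsc msr sse sse2"   # flags from an older chip: no vmx or svm
# -> no hardware virtualisation support
```

Note that the flag only shows what the silicon offers; the feature must also be enabled in the BIOS.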
The other technique is called paravirtualisation. This does not attempt to simulate hardware but instead offers a set of functions that the operating system can use to do the jobs normally associated with interacting with hardware. In order to do this, the operating systems running on the virtual machine need to be modified to work with the virtualisation software - vendors usually make patches available for this purpose. Virtualisation software that takes this approach includes Xen and the current version of Sun's virtualisation software, Logical Domains. There are some suggestions that paravirtualisation is a dead-end technology: partly because the x86 compatibility issues that made it necessary in the first place have now been solved, and partly because the ever-increasing computing power available makes native virtualisation faster and faster.
The chief differentiator between these techniques is one of speed versus convenience. Paravirtualisation is faster because the virtual machine is doing a relatively simple job compared with simulating hardware, as in full virtualisation. This leaves more server resources free to run the operating systems. However, there is clearly an issue raised by the need to modify the operating system software to run under paravirtualisation - not least that the OS vendor may refuse to allow the changes. Many paravirtualisation packages cannot run Windows for this very reason.
Applications and Advantages
By now you may well be asking why you'd want to go to all this trouble just to make different operating systems run on the same server when you've already got different servers doing the job perfectly well right now. It's a good question and it's about time we addressed it.
Probably the primary and most obvious reason is one of resources. VMware, one of the leading suppliers of virtualisation software, estimates that, on average, modern servers run at around 5-10% of capacity at any one time. If that's correct then it means that you could effectively replace ten of your existing servers with a single server running virtualisation software. The potential savings in terms of space, power and time spent working on server infrastructure should be obvious. Even if your servers are running at a higher capacity, or you need to save capacity in case of a sudden spike in demand, you can still make significant savings by switching to virtual machines.
Another potential application is the ability to run legacy software. Some older software is simply not designed to run on modern operating systems or even on modern hardware architecture. By using virtualisation the problem is easily solved and in addition you're not committed to using an entire server to run a single old and most probably resource-light application.
There are a large number of other uses for this technology when it comes to modern software production environments. This may be of no immediate concern to you but you may find that by encouraging your IT departments to experiment with virtualisation they can drastically improve the service they offer - as well as earning the undying gratitude of techies who are usually champing at the bit to try out the latest ideas in computing!
We've started to make use of virtualisation at UKOLN, currently as a test to see how we can set it up and what it can do for us. What follows is a case study of our experiences: we hope you find it useful.
Server Virtualisation at UKOLN
At any one time UKOLN staff are engaged in a number of different projects, each with its own specific technical software requirements. It isn't always convenient (particularly if the project requires a significant amount of software development, or (re)configuration of existing code) to install packages onto existing servers.
In the past UKOLN had one Sun development server, called scoop, which struggled to serve everyone's needs. Software was installed by one person, reconfigured, tested and changed; then new software would be added to it, and so on. Coupled with the increase in the number and variety of new projects proposed, this machine rapidly became very unwieldy to administer. (Not to mention the security implications - such as external user accounts remaining on the machine, the chance that ports might be inadvertently left open once a project was completed, and so on.)
So an alternative approach was needed to provide this service to staff engaged in this kind of work.
One solution might be to buy a new dedicated machine for each project that needed it, with a vanilla set-up that could be reconfigured and worked on without any concern that such work might affect other projects running on that machine. There are some problems with doing this: for example, some projects can be fairly short-lived (a couple of years), and once the project has finished we can be left with unwanted hardware to dispose of. There are also concerns in respect of power usage and physical storage space if we simply continued to add new servers to the UKOLN rack. It's clearly a wasteful approach, and can be complicated for the systems team to support.
So long as the virtualisation technology is sound and our single hosting machine is robust and fast enough, it could be entirely suitable to use a smaller number of 'virtual' servers rather than many more physical servers - and entirely supportive of green initiatives taking place at UKOLN. I think this could be a more efficient and streamlined approach than maintaining many physical machines, so it seemed an ideal process to investigate.
After considering the options we chose to take a look at Xen.
The Current Choice
Xen has many of the features that we might need. It has a central administration area and supports Linux and Windows servers. It has the ability to create a virtual 'version' of an already existing server, so there is the potential to move current servers over to run as Xen virtual servers and then decommission the original computers. It also provides the capacity to back up and recover the servers running on it, apparently quite simply. Although these features are offered by other virtualisation software, Xen has some particular advantages for us: it offers a free version, it runs on Linux (which means it could fit into our current server set-up quite easily, as UKOLN servers predominantly run Slackware Linux) and there is a good, active online user community. Running an open source solution is quite an attractive option in our view, supportive as it is of our inclination to encourage this kind of R&D, as well as possibly providing opportunities to participate in some way in the Xen project.
As a trial I wanted to try creating four different virtual servers. One would be a standard Linux server, maybe running no more than Apache and serving some Web pages. Another would be Linux which we could configure, test and break if need be, to see how robust the Xen software is. The third would be a Windows 2003 server, and the fourth would be a server copied from one of our existing machines, to see how easy (or not) it would be to migrate existing servers and services to a Xen virtual server, thereby allowing us to decommission that machine.
A quick search for Xen turns up a lot of information. Xen produces three different versions of its software. We are only interested at the moment in Xen Express, which supports four different virtual machines on one box. The attraction here is the option to upgrade to the other versions pretty easily if need be, and it might give us a chance to tweak the code to customise it to our needs.
Xen Express is available as a single download. You download the CD image from the Xen Web site, boot from it, and install it cleanly onto your server; it overwrites the disk with its own cut-down version of Linux. You then install an Administrator Console onto another machine and administer your virtual machines from there.
We had an unused Slackware Linux server waiting to go for this, and originally I just tried to download the open source package of Xen (which you install onto your server yourself, like any other application) and install it onto the available server. This seemed sensible given that our other servers run Slackware Linux. However, I had problems getting it to run and configuring it, and so eventually decided to scrap the Slackware install, download the CD and install from there. When I installed it I got an error saying that the hardware didn't support virtualisation - did I want to proceed? This seemed pretty final.
There are hardware issues that should be considered when running a virtual platform. Our Sun Fire V20z server has two Opteron 250 processors. These processors do not fully support the form of virtualisation that Xen uses. This means that although we may be able to get some Linux virtual servers running, we would not be able to run Windows servers on this machine. There is more information on Opteron processors and virtualisation available. Xen also maintains a wiki to discuss these kinds of issues (and more), where hardware compatibility information can also be found.
Other than that the installation was completely painless. You configure your server as before with an IP address, and hostname, get it connected to the network and reboot. You now have a server primed to add some virtual services.
The Xen set-up is divided into two parts: the Xen server host (the machine onto which you just installed Xen) and the Administrator Console. The Administrator Console is a remote machine from which you create, edit, remove, etc. your virtual servers.
Once you have created a virtual server (installed it, given it a name and IP) you can connect to it through the Admin console (or ssh to it from elsewhere) to work on it. The Admin Console then becomes your main access to the host server and the virtual servers on it.
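Behind the graphical Administrator Console, a Xen 3.x host can also be driven from the command line in the control domain (Domain-0) using the classic `xm` tool. The session below is a sketch only: `xm` is stubbed out with a shell function so the example can run anywhere, and the guest name (`debian1`) and config file path are hypothetical.

```shell
# Stub out 'xm' so this sketch runs on any machine; on a real Xen host the
# genuine xm binary in Domain-0 would be used instead.
xm() { echo "would run: xm $*"; }

xm list                           # list Domain-0 and any running guest domains
xm create /etc/xen/debian1.cfg    # boot a guest from its config file (hypothetical path)
xm console debian1                # attach to the guest's text console
xm shutdown debian1               # cleanly shut the guest down
```

In practice the Administrator Console wraps operations like these in a point-and-click interface, but the command line can be handy for scripting.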
Since our hardware doesn't support virtualisation, I could only install the supported versions of Linux as virtual servers; it wouldn't run Windows. This means I couldn't proceed with my original plan of creating four servers on this machine. So checking the list of operating systems that Xen supports is a compulsory stop in any plan to try this approach out. I tried experimenting with others: I created a new VM (virtual machine) and tried installing Fedora and Slackware Linux, both of which failed to install. I also attempted to install Windows XP; this failed as expected.
The Xen distribution comes with templates to install Debian Linux and Red Hat Enterprise Linux. I quite effortlessly created two virtual machines on the server with the Debian images. I then gave these machines IP addresses and hostnames and added users to allow access to log on. To date they appear to work fine.
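For readers curious what defines such a guest under the hood, a Xen paravirtualised guest is described by a small configuration file with Python-like syntax. The fragment below is purely illustrative: every name, path and address in it is invented, and a template-created guest like ours would have the equivalent settings generated automatically.

```
# /etc/xen/debian1.cfg - hypothetical Xen guest configuration.
# All names, paths and addresses here are invented for illustration.
name   = "debian1"
memory = 256                                   # MB of RAM allocated to the guest
kernel = "/boot/vmlinuz-2.6-xen"               # paravirtualised guest kernel
disk   = ["file:/var/xen/debian1.img,xvda,w"]  # file-backed virtual disk, writable
vif    = ["ip=192.168.1.101"]                  # one virtual network interface
root   = "/dev/xvda ro"
```

Keeping each guest's definition in a single small file like this is part of what makes backing up and recreating virtual servers so straightforward.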
Much remains to be done. I have yet to try creating a virtual server from an existing server, and it's obviously vital to set this up on a machine with processors that will support Windows virtualisation. But we now have two extra servers at UKOLN, waiting to be used and tested.
Although our testing of this technology is far from complete, I think that there may be a lot of mileage in UKOLN pursuing Xen as our virtualisation solution - certainly for development and testing machines, although Xen boasts a number of production servers already in place (for well-known organisations) running on Xen hosts.
I am a bit disappointed in the current OS support, but perhaps I am expecting too much from free software! I hope this will not impede our use of it. In addition, Xen employs paravirtualisation rather than full virtualisation. There is some argument that paravirtualisation is not the way forward (particularly when taking into account advances in hardware technology), so if we choose to use Xen this may need to be revisited.
I think the technology of virtualisation appears to have considerable potential for UKOLN; Xen seems a good starting point, with some easily available tools and support.
Consolidating servers onto one piece of hardware seems very sensible (so long as backups are reliable and the machines run quickly enough), but it may be that the commercial VMware might prove a preferable solution for us.
We hope to write further articles about this and how we fare with virtualisation.
- C. Strachey, "Time Sharing in Large Fast Computers," Proceedings of the International Conference on Information Processing, UNESCO, June 1959, paper B. 2. 19.
- F. J. Corbató, M. Merwin-Daggett, and R. C. Daley, "An Experimental Time-sharing System," Proc. Spring Joint Computer Conference (AFIPS) 21, pp. 335-344 (1962)
- IBM Systems Virtualization, IBM Corporation, Version 2 Release 1 (2005) available at
- VMware http://www.vmware.com/
- Lawton, K. P., "Running multiple operating systems concurrently on an IA32 PC using virtualization techniques", Floobydust, Nov. (1999).
- A. Whitaker, M. Shaw, and S. D. Gribble, "Denali: Lightweight Virtual Machines for Distributed and Networked Applications", Univ. of Washington Technical Report 02-02-01, (2002)
- Xen http://www.xensource.com/
- Virtualization Blog - Paravirtualization is a Dead-End Approach. Posted by Tim Walsh, 7 July 2006
- ZDNet.com: At the Whiteboard: What is virtualization? Dan Chu, 2005 http://news.zdnet.com/html/z/wb/6058678.html
- Xen documentation http://download.esd.licensetech.com/xensource/xe310r1/pro/UserGuide-3.1.0.pdf
and the original home of the Xen research team: University of Cambridge Computer Laboratory - Xen virtual machine monitor
- XenSource Products http://www.xensource.com/products/
- XenSource: Delivering the Power of Xen http://www.xensource.com/
- Introducing AMD Virtualization http://www.amd.com/us-en/Processors/ProductInformation/0,,30_118_8796_14287,00.html
- HVM Compatible Processors-Xen Wiki http://wiki.xensource.com/xenwiki/HVM_Compatible_Processors
- XenSource Knowledge Base : What versions of Linux can be run as XenVMs on a XenServer Host?
- XenSource Knowledge Base: What versions of Windows can be run as XenVMs on a XenServer Host?
- Useful software downloads and Xen discussion available at: http://jailtime.org/