The Virtual Machine Advantage

By Ian Murphy

It could save your business money and your data in case of disaster. Ian Murphy explains how to get started with virtualisation.

Reduce the costs of power and cooling; improve the use of assets; do more with less hardware. These have become the underlying goals of IT departments over the past few years, and one of the primary technologies for achieving these goals is virtualisation. So what is virtualisation and why is it so important for businesses?

Virtualisation is a range of technologies that allow you to run operating systems and applications in their own virtual machine (VM). Each VM is a separate environment, so any problems with an application are contained within it, leaving other VMs completely unaffected. This containment means you can deploy multiple VMs on the same physical box and so reduce the number of computers your business needs. This obviously reduces your hardware expenditure, as well as the space needed in the datacentre, and generates more value for money from the hardware you own.

Another benefit of VMs that’s often overlooked is that they consist of only a few files. Provided you shut down the applications and the VM properly, you can copy these files to another machine or back them up to tape quickly and easily. This provides a simpler, more robust and reliable backup and disaster recovery option than conventional backup approaches.
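If you want to script that file-copy approach, a minimal sketch in Python might look like the following. It assumes the VM has been shut down cleanly and that all of its files sit in a single folder; the paths are illustrative and will vary with your virtualisation product.

```python
import shutil
from pathlib import Path

# Illustrative paths only -- adjust for your own VM product and folder layout.
VM_FOLDER = Path("D:/VirtualMachines/accounts-server")   # folder holding the VM's files
BACKUP_ROOT = Path("//backup-server/vm-archive")          # network share or external drive

def backup_vm(vm_folder: Path, backup_root: Path) -> Path:
    """Copy a cleanly shut-down VM's folder to a backup location."""
    destination = backup_root / vm_folder.name
    # copytree copies the configuration and virtual disk files in one pass;
    # the VM must not be running, or the disk files may be inconsistent.
    shutil.copytree(vm_folder, destination, dirs_exist_ok=True)
    return destination

if __name__ == "__main__":
    print(f"VM copied to {backup_vm(VM_FOLDER, BACKUP_ROOT)}")
```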

We’ll explain the different types of virtualisation and discuss their real-world pros and cons, to help you decide whether virtualisation will suit your organisation.

Hardware virtualisation
There are two primary approaches to machine virtualisation. The first is hardware virtualisation, where the VM sits almost directly on the physical hardware, and the second is native virtualisation, in which the VM sits on a host operating system. Irrespective of which solution you choose, a VM contains its own operating system along with the application.

With hardware virtualisation, little more than a thin layer of software, known as a hypervisor, sits on top of the physical hardware. This is accessed by the VMs, which have full access to the underlying hardware and its resources. Whenever a VM needs a change of resources (such as more memory or access to another processor), this can be done in real time without having to stop and start the VM.

The key advantage here is flexibility, and the fact that each VM has access to the native hardware, which ensures no resources are wasted by a host operating system hogging them for itself. On the flip side, the hardware must be supported by the hypervisor – not only is this restrictive, but it also makes the underlying computer more expensive. Thankfully, the situation is improving as more vendors enter the market.

IBM and VMware are the two biggest vendors in this space, but Microsoft is working on its own, long-awaited hypervisor solution. All three vendors see virtualisation as being essential to the next wave of hardware and operating systems. They aren’t alone. Intel and AMD are both enhancing support for virtualisation in forthcoming processors.
Virtualisation allows IT managers to map resources effectively.
Native virtualisation
The most common form of virtualisation in use today is native virtualisation. It relies on two things: a host operating system and an environment into which VMs can be placed.

With native virtualisation, VMs are created and managed through a console interface, but if you need to adjust the resources used by a VM, it can take some effort. First, you have to make resources available, either by shutting down another VM or by taking resources from it; that VM then has to be restarted before its resources are put back into the general pool. The resources can then be added to the VM that needs them, but it too needs to be shut down and restarted. This isn’t as time-consuming as it sounds, but it does take planning and can affect the uptime of servers and applications.
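The stop, reallocate and restart sequence can be made concrete with a short sketch. The VirtualMachine class below is a stand-in for a hypothetical management API rather than any vendor’s real interface; the point is simply that both VMs incur a restart, which is why reallocation needs planning.

```python
from dataclasses import dataclass

@dataclass
class VirtualMachine:
    """Stand-in for a hypothetical VM management API (illustrative only)."""
    name: str
    memory_mb: int
    running: bool = True

    def stop(self) -> None:
        print(f"Stopping {self.name}")
        self.running = False

    def start(self) -> None:
        print(f"Starting {self.name} with {self.memory_mb}MB")
        self.running = True

def move_memory(donor: VirtualMachine, target: VirtualMachine, amount_mb: int) -> None:
    """Take memory from one VM and give it to another; both incur a restart."""
    # 1. Free the resources: the donor VM must come down first.
    donor.stop()
    donor.memory_mb -= amount_mb
    donor.start()
    # 2. Hand them to the VM that needs them -- another stop/start cycle.
    target.stop()
    target.memory_mb += amount_mb
    target.start()

if __name__ == "__main__":
    move_memory(VirtualMachine("file-server", 4096),
                VirtualMachine("mail-server", 2048), 1024)
```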

One common problem with the use of native virtualisation is that there’s a limit to the size of VMs. This limit is often imposed by the virtualisation software, but if you intend to make copies of your VMs for disaster recovery and backup, you also don’t want to exceed the size of the media on which you’ll make those copies. This means a good VM contains only the operating system and application, with the data stored elsewhere.
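As a quick way to check whether a VM will fit on your chosen backup media, a sketch like this one totals the size of every file in the VM’s folder and compares it with a DVD’s capacity. The path is illustrative, and the capacities are the nominal figures for single- and dual-layer discs.

```python
from pathlib import Path

# Nominal capacities of common removable media, in bytes.
SINGLE_LAYER_DVD = 4.7 * 10**9
DUAL_LAYER_DVD = 8.5 * 10**9

def vm_size_bytes(vm_folder: Path) -> int:
    """Total size of every file in the VM's folder."""
    return sum(f.stat().st_size for f in vm_folder.rglob("*") if f.is_file())

def fits_on(vm_folder: Path, capacity_bytes: float) -> bool:
    """True if the whole VM folder fits on media of the given capacity."""
    return vm_size_bytes(vm_folder) <= capacity_bytes

if __name__ == "__main__":
    vm = Path("D:/VirtualMachines/accounts-server")   # illustrative path
    print(f"VM size: {vm_size_bytes(vm) / 10**9:.1f}GB, "
          f"fits on dual-layer DVD: {fits_on(vm, DUAL_LAYER_DVD)}")
```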

VMware and Microsoft dominate the native virtualisation market (although there are plenty of other players covering Linux and other operating systems). Which is better? Much depends on the operating systems you’re running. VMware has more experience in virtualisation and has dominated the market for some time by hosting a wide range of operating systems, while Microsoft has, naturally, focused on its own products. While that’s changing, businesses with a mixed environment still tend towards VMware.

Microsoft is gradually opening up to hosting other operating systems and, by allowing those who use Windows Server Enterprise Edition and Virtual Server Enterprise to install multiple copies without purchasing additional licences, is offering increased value for money.

VMware has stolen a march on Microsoft by creating a third-party software market in fully functional VMs. Vendors load their software into a VM and distribute it to customers, who need only license it, or use it on a time-restricted basis as a complete trial environment.

This is a perfect solution for software vendors and customers alike. It allows the software to be correctly pre-installed and configured before shipping. Customers are isolated from the problems of machine requirements or the complexity of some installations, while technical support teams no longer need to attend a course to get the software running and then spend time trying to tune it.

Microsoft is fighting back, using its Virtual Server technology to give customers and developers access to its software. Customers can get fully configured trial versions, while the developer community gets access to early versions of server products and operating systems preconfigured in a VM. This means evaluation is simpler, nothing is left on your servers to cause problems later, and you don’t need a pile of high-spec machines just to test products.

Virtualising your datacentre
The three biggest uses of VMs in the datacentre are server consolidation (bringing multiple VMs onto the same physical hardware), disaster recovery, and creating test machines for patches and new software before deployment. In each of these cases, the speed with which a VM can be provisioned and the fact that it can be moved between physical machines make VMs efficient and very cost effective.

One of the most talked-about uses of virtualisation has been for datacentre power and cost reduction. It might seem efficient to wring every last drop of performance from a server, but that isn’t always the case: beyond a certain point, the more you stress a server, the less efficient it becomes in terms of power and cooling. The solution is to set a workload limit for servers and then move VMs around to keep the load balanced across hardware. This reduces hotspots and the need for excessive cooling, and cuts the amount of power drawn by servers. Blade-server vendors such as HP and IBM see this as the next phase of the datacentre.
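A toy sketch of that balancing rule: given a utilisation ceiling, place a VM on the least-loaded host that can still accept its workload. The host names, loads and 70 per cent limit are invented for illustration; in practice the vendors’ own management tools make these decisions.

```python
# Keep every host below an illustrative 70% utilisation ceiling.
WORKLOAD_LIMIT = 0.70

# Invented host names and current utilisation figures.
hosts = {"blade-01": 0.65, "blade-02": 0.40, "blade-03": 0.55}

def place_vm(host_loads: dict[str, float], vm_load: float,
             limit: float = WORKLOAD_LIMIT):
    """Return the least-loaded host that stays under the limit, or None."""
    candidates = {h: load for h, load in host_loads.items()
                  if load + vm_load <= limit}
    return min(candidates, key=candidates.get) if candidates else None

if __name__ == "__main__":
    print(place_vm(hosts, vm_load=0.20))   # -> "blade-02"
```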

Disaster recovery and backup become much easier with VMs too. A simple copy command to another machine, or a write to a dual-layer DVD, should be enough to copy most VMs swiftly without the need for potentially troublesome backup software.

When something happens to the hardware, you simply copy the VM onto the backup machine, adjust the memory settings if necessary, and you’re up and running with almost no downtime. With traditional backup methods, you’d need to find an appropriate tape drive, load the tapes and hope they can be read.
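The “adjust the memory settings” step can itself be scripted. The sketch below assumes a VMware-style .vmx configuration file, where the memsize entry sets the memory allocated to the VM; other products keep the equivalent setting in their own configuration files, and the paths here are purely illustrative.

```python
import re
from pathlib import Path

def set_vm_memory(vmx_file: Path, memory_mb: int) -> None:
    """Rewrite the memsize entry so the restored VM fits the backup host's RAM."""
    text = vmx_file.read_text()
    text = re.sub(r'memsize\s*=\s*"\d+"', f'memsize = "{memory_mb}"', text)
    vmx_file.write_text(text)

if __name__ == "__main__":
    # Illustrative path to a VM that has just been copied onto the standby machine.
    set_vm_memory(Path("E:/restored-vms/accounts-server/accounts-server.vmx"), 1024)
```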

The same is true of data. Rather than relying on the disaster-recovery site to load data, using a SAN allows virtual datacentres to be kept constantly updated with the latest data. Should any single datacentre become unavailable, users still have up-to-date data to hand – no waiting for tapes to be loaded and data to be restored. This also means VMs can be moved to different locations easily and quickly, without fear for their data, which remains available through virtual drives.

Calculating the savings is difficult, but simply keeping a large pool of machines updated or maintaining a room at a hosting company can easily exceed $200,000 per annum. Short-term leases, purchase of commodity servers and reloading VMs can slash that cost.

It isn’t only the datacentre where VMs are being deployed. The use of VMs for deploying secure applications to key users or to allow developers to test software has become quite popular. Giving selected users new versions of software to test in a VM allows for a wider test audience with no risk to core corporate data.

Virtualisation also helps to overcome the nightmare of supporting business-critical legacy applications. As companies upgrade their software, they often find there are older operating systems and applications they can’t do without. Upgrading the hardware isn’t a solution, because newer hardware doesn’t have the drivers required for old operating systems. This creates the need to constantly maintain older hardware just for these applications and hope it doesn’t break down.

Installing the operating system and applications into a VM allows them to be migrated to a new generation of hardware without loss of access or data. This extends the life of critical applications without increasing ongoing maintenance costs.
Linux and Windows can run side by side on VMware.


Right for every business?
It might look like moving everything to a VM is a must for any IT environment. Be careful: it isn’t. Traditional VMs have their own interface, use macros for common key combinations and require user training.

Application virtualisation gets away from many of the problems of using VMs for end-user applications, but it’s a technology in its infancy. Implementation takes time and thought, and you still need to plan your licensing to ensure you don’t run more copies than you have licences. Get it right, however, and virtualising your software assets provides resilience, makes backups simpler and gives applications greater longevity across multiple generations of hardware.
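Keeping running copies within your licence count can be as simple as comparing two tallies. The sketch below is a minimal illustration with invented application names and figures; in practice the running-copy numbers would come from your virtualisation platform’s own usage reports.

```python
from collections import Counter

# Invented figures for illustration: licences owned per application.
licences_owned = {"Visio": 25, "Project": 10, "AutoCAD": 5}

# Running copies as reported by the platform -- hard-coded here as an example.
running_copies = ["Visio"] * 27 + ["Project"] * 8 + ["AutoCAD"] * 5

def over_deployed(owned: dict[str, int], running: list[str]) -> dict[str, int]:
    """Return applications with more running copies than licences owned."""
    counts = Counter(running)
    return {app: counts[app] - owned.get(app, 0)
            for app in counts if counts[app] > owned.get(app, 0)}

if __name__ == "__main__":
    for app, excess in over_deployed(licences_owned, running_copies).items():
        print(f"{app}: {excess} more copies running than licences owned")
```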

Virtual machines are managed through a browser in Microsoft Virtual Server 2005.


Application virtualisation
The concept of virtual applications has been given a boost with Microsoft’s recent acquisition of Softricity. The company’s SoftGrid works by breaking down applications into packages that don’t need to be installed on users’ desktops or servers.

Take Microsoft Visio, for example. The full installation needs more than 150MB of hard disk space, yet the actual amount of code required to run the application is just over 18MB. With SoftGrid, it’s possible to plug a computer into your network and deliver that 18MB to the user across the LAN, or even over the internet to a home worker. Because the package is lightweight, it can also be distributed to branch offices and streamed from local servers.

This saves IT departments time and money, and also helps with software management – SoftGrid allows IT managers to patch software when necessary and to keep tabs on how many copies of each program are running across the company. For organisations with a mobile workforce or a significant investment in home workers, this approach lets the IT department keep control of its software licences.

Microsoft claims that, unlike thin-client solutions such as Terminal Services and Citrix, SoftGrid can support several hundred users per processor. At the time of writing, we were unable to validate this claim, but we’ll be reviewing SoftGrid over the coming months as Microsoft makes it more generally available.

Software vendor Altiris has its own approach to running multiple versions of an application on the same machine: Virtual Software Packages (VSPs). Applications are installed into a VSP, where they’re independent of the OS and other applications. VSPs are also lightweight because, unlike a full VM, they don’t need to contain an operating system.

There are a couple of things that make VSPs very interesting. The first is that they can only be used if they’re turned on, so on computers shared by different users, applications can be kept “hidden” from people who don’t need access. A second advantage is that you can have multiple versions of an application installed on a computer. This means users can have two versions of an application open side by side while they become comfortable with how the upgraded version works. Technical support can then “turn off” the older version when necessary.

This leads to a third advantage. Software migration is fraught with difficulties, and when an upgrade fails, the uninstall program can cause even more problems. Contained within a VSP, an application causes no clashes and no incompatibilities with other installed software, and when you tire of it, you just turn it off.

Altiris allows a machine to run multiple versions of an application.

