The Virtual Machine Advantage

Native virtualisation
The most common form of virtualisation in use today is native virtualisation. It relies on two things: a host operating system and an environment into which VMs can be placed.

With native virtualisation, VMs are created and managed through a console interface, but adjusting the resources assigned to a VM takes some effort. First, you have to free the resources up, either by shutting down another VM or by taking them from one: that VM must be shut down, reconfigured and restarted before the freed resources return to the general pool. The resources can then be added to the VM that needs them, but it too has to be shut down and restarted. This isn't as time-consuming as it sounds, but it does take planning and can affect the uptime of servers and applications.
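
To make that workflow concrete, here's a minimal sketch in Python of the reconfiguration step, assuming VMware-style .vmx configuration files (memsize is a standard key in that format, but the paths and memory figures here are hypothetical):

```python
# Sketch: reassign memory between two VMs by editing the memsize
# entry in their .vmx files. Both VMs must be powered off before
# the edit and restarted afterwards for the change to take effect.
import re

def set_vm_memory(vmx_path, megabytes):
    """Rewrite the memsize entry in a .vmx file (VM must be powered off)."""
    with open(vmx_path) as f:
        config = f.read()
    config = re.sub(r'memsize\s*=\s*"\d+"',
                    f'memsize = "{megabytes}"', config)
    with open(vmx_path, "w") as f:
        f.write(config)

# Hypothetical layout: take 512MB from the file server VM and
# give it to the database VM, then restart both via the console.
set_vm_memory("/vms/fileserver/fileserver.vmx", 1024)  # down from 1536
set_vm_memory("/vms/database/database.vmx", 2048)      # up from 1536
```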

One common problem with native virtualisation is that there's a limit to the size of the VMs. This limit is often imposed by the virtualisation software itself, but if you intend to make copies of your VMs for disaster recovery and backup, you also don't want to exceed the size of the media onto which you'll copy them. This means a good VM contains only the operating system and application, with the data stored elsewhere.

VMware and Microsoft dominate the native virtualisation market (although there are plenty of other players covering Linux and other operating systems). Which is better? Much depends on the operating systems you're running. VMware has more experience in virtualisation and has led the market for some time by hosting a wide range of guest operating systems, while Microsoft has, naturally, focused on its own products. That's changing, but organisations with a mixed environment still tend towards VMware.

Microsoft is gradually opening up to hosting other operating systems and, by allowing those who use Windows Server Enterprise Edition and Virtual Server Enterprise to install multiple copies without purchasing additional licences, is offering increased value for money.

VMware has stolen a march on Microsoft by creating a third-party software market in fully functional VMs. Vendors load their software into a VM and then distribute it to customers, who simply license it or use it on a time-restricted basis as a complete trial environment.

This is a perfect solution for software vendors and customers alike. It allows the software to be correctly pre-installed and configured before shipping. Customers are isolated from the problems of machine requirements or the complexity of some installations, while technical support teams no longer need to attend a course to get the software running and then spend time trying to tune it.

Microsoft is fighting back, using its own Virtual Server technology to give customers and developers access to its software. Customers can get fully configured trial versions, while the developer community gets early versions of server products and operating systems preconfigured in a VM. This makes evaluation simpler: nothing is left on your servers to cause problems later, and you don't need a pile of high-spec machines just to test products.

Virtualising your datacentre
The three biggest uses of VMs in the datacentre are server consolidation (bringing multiple VMs onto the same physical hardware), disaster recovery, and creating test machines for trying out patches and new software before deployment. In each case, the speed with which a VM can be provisioned, and the fact that it can be moved between physical machines, make VMs efficient and very cost effective.

One of the most talked-about uses of virtualisation has been for datacentre power and cost reduction. It might seem that wringing every last drop of performance from a server is efficient, but that isn't always the case: beyond a certain point, the harder you stress a server, the less efficient it becomes in terms of power and cooling. The solution is to set a workload limit for each server and then move VMs around to keep the load balanced across the hardware. This reduces hotspots, cuts the need for excessive cooling and lowers the amount of power drawn by the servers. Blade-server vendors such as HP and IBM see this as the next phase of the datacentre.
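
As a rough sketch of that balancing heuristic, the following assumes hypothetical host names, per-VM CPU loads and a 70 per cent workload ceiling; real placement engines also weigh memory, I/O and affinity rules:

```python
# Sketch: pick a VM to migrate away from any host that exceeds a
# workload limit, targeting the least-loaded host that can absorb
# it without itself becoming a hotspot. CPU load only.
LIMIT = 0.70  # assumed workload ceiling per host

hosts = {  # hypothetical CPU loads per host, per VM
    "blade1": {"web": 0.40, "mail": 0.45},   # 0.85 total: too hot
    "blade2": {"crm": 0.30},
    "blade3": {"test": 0.10},
}

def total(host):
    return sum(hosts[host].values())

for host in list(hosts):
    while total(host) > LIMIT:
        # Move the smallest VM first: the cheapest migration.
        vm, load = min(hosts[host].items(), key=lambda kv: kv[1])
        target = min((h for h in hosts if h != host), key=total)
        if total(target) + load > LIMIT:
            break  # nowhere to put it without creating a new hotspot
        del hosts[host][vm]
        hosts[target][vm] = load
        print(f"migrate {vm} ({load:.0%}) from {host} to {target}")
```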

Disaster recovery and backup become much easier with VMs too. A simple copy command to another machine, or a write to a dual-layer DVD, is enough to back up most VMs without the need for potentially troublesome backup software.
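
A minimal sketch of such a backup, assuming each VM's files sit in a single directory and the backup server is mounted locally (all paths are hypothetical):

```python
# Sketch: copy a VM's files to a backup host, first checking the
# image would also fit on dual-layer DVD media. The VM should be
# shut down before copying so the files are consistent.
import os, shutil

DVD_DL_BYTES = 8_540_000_000  # nominal dual-layer DVD capacity

def backup_vm(vm_dir, dest_dir):
    size = sum(os.path.getsize(os.path.join(root, name))
               for root, _, names in os.walk(vm_dir)
               for name in names)
    if size > DVD_DL_BYTES:
        raise ValueError(f"VM is {size / 1e9:.1f}GB: too big for one disc")
    shutil.copytree(vm_dir, os.path.join(dest_dir, os.path.basename(vm_dir)))

backup_vm("/vms/database", "/mnt/backup-server/vms")
```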

When something happens to the hardware, you simply copy the VM onto the backup machine, adjust the memory settings if necessary, and you’re up and running with almost no downtime. With traditional backup methods, you’d need to find an appropriate tape drive, load the tapes and hope they can be read.

The same is true of data. Rather than relying on the disaster-recovery site to load data, using a SAN allows virtual datacentres to be constantly updated with the latest data. Should any single datacentre become unavailable, users still have up-to-date data to hand – no waiting for tapes to be loaded and data to be restored. VMs can therefore be moved between locations easily and quickly, without fear for their data, which is always available through virtual drives.

Calculating the savings is difficult, but the cost of simply keeping a large pool of machines updated, or of maintaining a room at a hosting company, can easily exceed $200,000 per annum. Short-term leases, the purchase of commodity servers and reloading VMs can slash that cost.

It isn't only in the datacentre that VMs are being deployed. Using VMs to deliver secure applications to key users, or to let developers test software, has become quite popular. Giving selected users new versions of software to test in a VM allows for a wider test audience with no risk to core corporate data.

Virtualisation also helps to overcome the nightmare of supporting business-critical legacy applications. As companies upgrade their software, they often find there are older operating systems and applications they can't do without. Upgrading the hardware isn't a solution, because old operating systems rarely have drivers for newer hardware. The result is a need to constantly maintain ageing hardware just for these applications and hope it doesn't break down.

Installing the operating system and applications into a VM allows them to be migrated to a new generation of hardware without loss of access or data. This extends the life of critical applications without increasing ongoing maintenance costs.
Linux and Windows can run side by side on VMware.

