
Virtual Riders - Briefings

Virtualisation
This document is organised so that you can first acquaint yourself with the theory and practice of virtualisation, and then look at how to plan, install and develop a virtualisation environment. This can involve many different operating systems and environments.

What is Virtualisation?
Virtualisation is simply the ability to run multiple operating systems within a single computer environment such as a desktop computer, laptop or even server. It is made up of a base hardware and software installation, called the host system, plus one or more virtual hardware and software layers. The guest operating systems do not “know” that the virtual layer even exists; as far as each guest is concerned, it is running on real hardware.

Virtualisation affects hardware as much as it does software. The basic requirements of a virtualisation system are slightly higher than those of a normal computer system, mainly in increased RAM (Random Access Memory, the system memory) and increased hard disk space (storage). This extra requirement is technically referred to as the system overhead.
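
A rough way to picture the overhead is to add up the memory the host needs for itself plus what each planned guest will be allocated. The figures in this sketch are illustrative assumptions, not vendor requirements:

    # Rough sizing sketch for a virtualisation host.
    # All figures are illustrative assumptions, not vendor requirements.

    HOST_OS_RAM_MB = 1024        # assumed RAM reserved for the host OS itself

    guests = {
        "XP workstation": 256,   # assumed guest RAM allocations, in MB
        "File server": 512,
        "Mail server": 1024,
    }

    total_guest_ram = sum(guests.values())
    print(f"Guest RAM total: {total_guest_ram} MB")
    print(f"Minimum host RAM: {HOST_OS_RAM_MB + total_guest_ram} MB")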

In most modern systems the hardware is already sufficient for most virtualisation requirements, so there is normally little or no need to change the current specification.

In a normal computer system/workstation there is a system drive (hard disk) and memory allocated to running the operating system (for example, Windows or Linux). Most workstations today offer roughly three times the power their end user actually requires: even under high load (many processes running at once during a period of heavy use), a single processor typically draws on only around 45% of the workstation's usable resources. Very few applications can drive a workstation to use all of its resources; when a system does run out of capacity, the cause is normally misconfiguration or poor file housekeeping.

On a multiple-core system, such as a dual-core or quad-core Intel processor, the spare capacity is considerably higher, because most computer processes can only use a single core. This means that a single core can perform all the required functions adequately, leaving considerable processing capacity free.

A small group of functions are not suited to virtualisation. These include games and video editing, which place heavy demands on the graphics adapter; this can affect how well virtualisation works.

How does virtualisation work?
Virtualisation works by configuring a host operating system with a specific piece of software that allows guest operating systems to be installed on top of it. These are called the system layers.

The host system works independently of the guest systems, and guest systems also work independently of each other. This can be beneficial in multi-network environments or in systems that need to fulfil different tasks.

The layers are the key to the virtualisation system. Hardware and software layers make up a virtualised system and form the basis for all further systems to be incorporated.

Virtualisation: A New Breed of Servers.
Creating servers in a standalone environment can be seriously complex and expensive. Once you have created various workstation environments, you are ready to explore servers.

The initial objective of virtualisation was to create multiple sustainable environments on a single workstation. It has now achieved its own niche in the world of servers.

One can see the development of virtualisation as a natural progression in server deployment.

When servers were first deployed they had a single function, as a server was only powerful enough to operate one function reliably. These were called dedicated servers. This practice allowed organisations and corporate giants to allocate specific tasks to a single server environment, for example email, backup, domain control and web services. This was desirable because each server could be configured for its exact task, allowing it to perform within a reliable environment. If the email server failed, the rest of the services kept working; if more resources were needed for a specific function, that single server environment could be upgraded, so a costly upgrade applied to only one element of the server setup. This kept server groups within budget, but with so many machines running, power consumption was very high, making the approach costly overall.

A new way of thinking was to deploy a single server that could perform many roles. As technology improved, so did the use of the multi-role server. With ever larger amounts of storage space and memory, and with processing power venturing into multi-core technology, servers became powerhouses with immense possibilities, allowing email servers, file servers and domain controllers to function on a single machine. With cut-down server operating systems bundled into a single product for small organisations, costs were considerably reduced.

These dedicated compound servers, or Small Business Server products, changed the way organisations attached their services to a single serving product, and within this approach lie the first examples of the thinking behind virtualisation.

Low setup cost and low overall power consumption, with one server serving many functions, is the natural conclusion of this process, and it is exactly what the original virtualisation objective was about.

Virtual servers allow us to return to dedicated single-function servers, but within a single computer environment.

Previously, in most cases, if one function of the server failed within a Small Business Server (SBS), it would cause all the other functions of the server to fail as well. With dedicated single-function servers, only the role that server is undertaking breaks, while the other servers continue to work; you therefore only need to restore the single server and its function.

Dedicated devices or servers are widely regarded as best practice in any server environment today. This approach allows an organisation to remove server roles that would not be utilised, cutting down on potential security or system risks and even failure.

Virtual servers also have a very powerful advantage in that they can migrate the serving environments across hardware very quickly and efficiently. Hardware can break down from time to time, and in a server environment this can have a catastrophic impact on productivity.
If a server fails, user data, the ability to log on to workstations, email and data shares all cease to be available. This can cause an organisation to stop dead; in the past, even updates from Microsoft have crippled organisations. Placing your server functions in individual virtual servers removes many of these issues. Virtualisation addresses a large proportion of organisational server problems: software faults cause a high percentage of server failures, but hardware failure can cause catastrophic downtime. Many servers have components that are bespoke to the manufacturer; HP, Compaq and even Intel servers have parts that may be days away, leaving the server offline until they arrive.

Additionally, hardware is a hard area to get right in server systems, but with the help of virtualisation it is becoming less of a problem. Virtualised systems have a high degree of interchangeability with other servers because they have less hardware dependency. This hardware independence has changed the way we think about a server group and even how we deploy our server systems.

The server model
A server normally consists of many parts, with both the hardware and the software split into groups. Software, as already stated, can be split into groups, for example server operating systems dealing with functions such as mail, storage, backup and internet access. Hardware can also be looked at in parts: storage groups such as hard disks and hard disk arrays (many hard drives built into groups of disks that either function as one or back each other up in the event of failure), multi-core processors and banks of memory, network adapters, and external storage devices such as tape backup and digital stores.

Virtualisation looks at all the parts of a server and reduces each one from a complicated, driver-assisted component managed through the operating system to its core and basic function. If it is a hard disk, the virtual layer does not care about make or model, only about viable size and storage capacity. If it is a network adapter, its function is to communicate, and what matters is the speed at which it can do so rather than the make or model of the adapter itself.
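
As a loose illustration of this abstraction, the sketch below models virtual devices purely by capability. The class and field names are invented for the example and are not part of any virtualisation product:

    from dataclasses import dataclass

    # Illustrative only: a virtual layer describes devices by capability,
    # not by manufacturer or model.

    @dataclass
    class VirtualDisk:
        capacity_gb: int        # all the guest needs to know about a disk

    @dataclass
    class VirtualNetworkAdapter:
        speed_mbps: int         # communication speed, not make or model

    vm_hardware = [VirtualDisk(capacity_gb=20),
                   VirtualNetworkAdapter(speed_mbps=100)]

    for device in vm_hardware:
        print(device)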

Hardware virtualisation is a valuable addition to any computing environment, but it is revolutionary for servers and for uptime (the time during which services are available to users, normally expressed as a percentage).
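
To make the percentage concrete, uptime is simply available time divided by total time. A quick worked example with an invented outage figure:

    # Uptime expressed as a percentage; the outage figure is invented.
    HOURS_IN_YEAR = 365 * 24            # 8,760 hours
    downtime_hours = 18                 # assumed total outage over the year

    uptime = 100 * (HOURS_IN_YEAR - downtime_hours) / HOURS_IN_YEAR
    print(f"Uptime: {uptime:.2f}%")     # about 99.79%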

So, the way in which a server normally functions has dramatically changed. With virtualisation, a server can simply be moved from one machine to another and set up in minutes, rather than being locked to the machine it is running on. The whole software environment (settings, user data and even hardware settings) is contained within a storage file and a settings file. These can simply be moved from one server to another and the whole environment moves with them; the end user would not even notice the difference. Storage can now exist on a different server or storage group, and dedicated servers can be built with migration in mind. This is a bonus when operating a server group and a considerable advance in the reliability and dependability of storage and serving environments.
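
In Virtual PC terms, the storage file is the virtual hard disk (.vhd) and the settings file is the machine configuration (.vmc). A minimal sketch of such a move, assuming hypothetical paths and a virtual machine that has been shut down first:

    import shutil
    from pathlib import Path

    # Hypothetical locations; the VM must be shut down before copying.
    source = Path(r"D:\VirtualMachines\TestBed")
    target = Path(r"\\standby-server\VirtualMachines\TestBed")  # assumed share

    target.mkdir(parents=True, exist_ok=True)
    for name in ("Test bed.vmc", "Test bed.vhd"):  # settings + storage files
        shutil.copy2(source / name, target / name)
        print(f"Copied {name} to {target}")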

In extreme cases even a workstation or group of basic machines can be used as a standby server so that the end users can carry on working while the failure is dealt with.

The switch to virtualisation is as significant as the move from valve technology to the microprocessor. The impact on the end user is negligible, but in the server room it is a tidal wave of fresh and inspiring ideas, bringing a more steadfast working environment.
The possibilities are ever wider: virtualisation is the future, and the future is here.

Getting started with virtualisation.
Developing the right system for your organisation can be difficult to establish, but it will be rewarding in productivity and very cost effective. To begin, we need to lay out the basics of what we are trying to do.

Both workstation virtualisation and server virtualisation will be considered, with more focus on the server element.
Perhaps surprisingly, both start in the same place; only in the long-term use of the systems do their paths diverge.

Creating a virtual machine.
For these examples we are going to use Microsoft Virtual PC 2007.

Workstations and servers are created in the same way, using this piece of software from Microsoft.

We download the setup file and install it onto our system; after a few minutes it will be installed and ready to use. (If you are using Windows 7 you will need Virtual PC 2007 SP1 and will also need to remove the virtualisation updates.) You will receive a compatibility warning if you have installed Windows Virtual PC instead. If you get this warning and you want to use Virtual PC 2007 rather than Windows Virtual PC, you will need to uninstall Windows Virtual PC.

Note – it will not work if you just uncheck Windows Virtual PC under the Windows Features dialog.  You need to actually uninstall the Windows Virtual PC update.

Virtual operating systems require most of the things that the host operating system requires. If the guest is from Microsoft it will require licensing, and that licence will be unique: a guest is an operating system in its own right, and the same applies to any software installed on it, such as Office or antivirus software. In most cases the virtual system is the one with all the features installed, while the host underneath is normally bare; the host system is only there to act as a base for the virtual systems and should be used only for this purpose.

Once the software is installed, simply launch Virtual PC and it will open a window ready for you to start building your machine.
  • Click New and select Create a virtual machine.
  • Click Next.
  • Name the virtual machine “Test bed”.
  • Click Next.
  • Allocate the amount of RAM (Random Access Memory) you wish the virtual machine to use (this can be adjusted at any time).
  • Select A new virtual hard disk.
  • Select where the virtual hard disk is to be created and stored. (The type of hard disk created is called dynamic. This is the standard type for virtual hard disks (VHDs) and will expand on demand up to a maximum of 130,557 MB, or about 130 GB; see the sketch after this list. The starting maximum size is up to you; the default is 65,536 MB. For XP, set it to about 20,000 MB, or 20 GB.)
  • Click Next.
  • Click Finish, and you should see a new virtual machine on the left of the window.
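
As noted in the disk step above, a dynamic VHD occupies very little space on the host at first and only grows as the guest writes data, up to the configured maximum. A toy model of that behaviour (the write figures are invented):

    # Toy model of a dynamically expanding virtual hard disk: the host-side
    # file grows with guest writes, capped at the configured maximum.
    MAX_SIZE_MB = 20_000                 # about 20 GB, as suggested for XP

    host_file_mb = 0                     # dynamic VHDs start near zero
    for write_mb in (1_500, 4_000, 16_000):  # invented guest write bursts
        host_file_mb = min(host_file_mb + write_mb, MAX_SIZE_MB)
        print(f"Guest wrote {write_mb} MB; host file is now {host_file_mb} MB")
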
Highlight any virtual machine and, as long as it is not running, you can adjust its settings by clicking the Settings button. For this example we are going to install Windows XP, and we will need at least 256 MB of RAM available for the system to be usable (a way of checking what the host can spare is sketched after this list).
  • Click the Settings button and select Memory; on the right-hand side, adjust the amount of memory with the slider.
  • Then click OK.
  • If you double-click the virtual machine, it should start!
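
Before committing 256 MB or more to a guest, it is worth checking what the host actually has free. A minimal sketch using the third-party psutil package (an assumption on our part, installed with pip install psutil; it is not part of Virtual PC or Windows):

    import psutil  # third-party: pip install psutil

    GUEST_RAM_MB = 256                   # planned allocation for the XP guest
    HOST_HEADROOM_MB = 512               # assumed safety margin for the host

    available_mb = psutil.virtual_memory().available // (1024 * 1024)

    if available_mb >= GUEST_RAM_MB + HOST_HEADROOM_MB:
        print(f"{available_mb} MB free: safe to allocate {GUEST_RAM_MB} MB")
    else:
        print(f"Only {available_mb} MB free: reduce the guest allocation")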

To load an operating system from CD-ROM, select CD on the top menu and choose the physical drive that your disc is in. Once you have done this, select Reset from the Action menu; the virtual machine will then start the installation. Treat the installation as though it were being carried out on another physical machine.

If you are virtualising in Windows 7 you can use Windows 7 XP Mode (contact Virtual Riders for details of how to do this), and you can build your base server installations this way. Once you have built your virtual machine, simply double-click it to run it, then click anywhere inside its window to capture the mouse.

To add extra functionality, click Action and then install or update Virtual Machine Additions.

Please note that USB is not virtualised in this version; USB is only accessible through Windows 7 XP Mode.

You can add various operating system installations, including Linux distributions, for example.

Multiple operating systems can be used on a single machine and the benefits can be easily explored. For the technically minded, setting up a virtual PC is not complex; however, a number of issues can arise. It has been observed that Virtual PC 2007 does not support DVD writing or USB. While these limitations are sure to be overcome in time, it is worthwhile checking them against your specific needs before taking the plunge.



Virtual PC 2007:

Creating Virtual Machines with Microsoft Virtual PC 2007
http://www.petri.co.il/virtual_create_virtual_machines_virtual_pc_2007.htm

Installing a new OS on a new VM with Microsoft Virtual PC 2007

Windows Virtual PC (don’t use for setting up a virtual server at the moment)
http://www.microsoft.com/windows/virtual-pc/