A Review of Virtualization
INTRODUCTION
Virtualizing a computing system's physical resources to achieve improved sharing and utilization has been well established for decades.1 Full virtualization of all system resources, including processors, memory, and I/O devices, makes it possible to run multiple operating systems on a single physical platform.
In a non-virtualized system, a single OS controls all hardware platform resources. A virtualized system includes a new layer of software, the virtual machine monitor (VMM). The VMM's principal role is to arbitrate accesses to the underlying physical host platform's resources so that multiple operating systems (which are guests of the VMM) can share them. The VMM presents to each guest OS a set of virtual platform interfaces that constitute a virtual machine (VM).
Once confined to specialized, proprietary, high-end server and mainframe systems, virtualization is now becoming more broadly available and is supported in off-the-shelf systems based on Intel architecture (IA) hardware. This development is due in part to the steady performance improvements of IA-based systems, which mitigate traditional virtualization performance overheads. Other factors include creative new software approaches that address the difficulties inherent in IA virtualization2-4 and the emergence of novel applications for virtualization in both industry and academia.
VIRTUALIZATION
Virtualization is technology that allows you to create multiple simulated environments or dedicated resources from a single physical hardware system. Software called a hypervisor connects directly to that hardware and allows you to split one system into separate, distinct, and secure environments known as virtual machines (VMs). These VMs rely on the hypervisor's ability to separate the machine's resources from the hardware and distribute them appropriately. Virtualization helps you get the most value from previous hardware investments.
The physical hardware, equipped with a hypervisor, is called the host, while the many VMs that use its resources are guests. These guests treat computing resources, like CPU, memory, and storage, as a pool of resources that can easily be relocated. Operators can control virtual instances of CPU, memory, storage, and other resources, so guests receive the resources they need when they need them.
HOW VIRTUALIZATION WORKS
Software called a hypervisor separates the physical resources from the virtual environments, the things that need those resources. Hypervisors can sit on top of an operating system (like on a laptop) or be installed directly onto hardware (like a server), which is how most enterprises virtualize. Hypervisors take your physical resources and divide them up so that virtual environments can use them.
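To make that dividing-up concrete, here is a minimal sketch, assuming nothing beyond the Python standard library. The Hypervisor class is invented for this example, not a real API: it hands out slices of a fixed physical pool to guests and refuses any request the remaining pool cannot cover.

```python
# Illustrative sketch only: a toy "hypervisor" that hands out slices of a
# fixed physical pool to guests. Class and method names are invented for
# this example; real hypervisors (KVM, Xen, ESXi) are far more dynamic.

class Hypervisor:
    def __init__(self, total_cpus: int, total_mem_gb: int):
        self.free_cpus = total_cpus      # physical CPUs not yet assigned
        self.free_mem_gb = total_mem_gb  # physical memory not yet assigned
        self.guests = {}                 # guest name -> (cpus, mem_gb)

    def create_vm(self, name: str, cpus: int, mem_gb: int) -> bool:
        # Refuse the request if the physical pool cannot cover it.
        if cpus > self.free_cpus or mem_gb > self.free_mem_gb:
            return False
        self.free_cpus -= cpus
        self.free_mem_gb -= mem_gb
        self.guests[name] = (cpus, mem_gb)
        return True

host = Hypervisor(total_cpus=16, total_mem_gb=64)
print(host.create_vm("web", cpus=4, mem_gb=8))    # True: pool can cover it
print(host.create_vm("db", cpus=8, mem_gb=32))    # True
print(host.create_vm("batch", cpus=8, mem_gb=8))  # False: only 4 CPUs left
```

Real hypervisors typically overcommit rather than hard-partition, time-slicing virtual CPUs onto physical ones on demand, but the bookkeeping idea is the same.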

Resources are partitioned as needed from the physical environment to the many virtual environments. Users interact with and run computations within the virtual environment (typically called a guest machine or virtual machine). The virtual machine functions as a single data file. And like any digital file, it can be moved from one computer to another, opened in either one, and be expected to work the same.
When the virtual environment is running and a user or program issues an instruction that requires additional resources from the physical environment, the hypervisor relays the request to the physical system and caches the changes, which all happens at close to native speed (particularly if the request is sent through an open source hypervisor based on KVM, the Kernel-based Virtual Machine).
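For KVM specifically, management tools usually talk to the hypervisor through the libvirt API. The sketch below assumes the libvirt-python bindings are installed, a local qemu:///system hypervisor is running with at least one guest, and the script has sufficient privileges; it simply connects and lists each guest with its allocation.

```python
# Sketch: query a local KVM hypervisor through libvirt.
# Assumes the libvirt-python package and a running libvirtd with
# at least one KVM guest; run with sufficient privileges.
import libvirt

conn = libvirt.open("qemu:///system")  # connect to the local KVM/QEMU hypervisor
try:
    for dom in conn.listAllDomains():
        # info() returns [state, maxMem(KiB), mem(KiB), nrVirtCpu, cpuTime(ns)]
        state, max_mem_kib, mem_kib, vcpus, cpu_time = dom.info()
        print(f"{dom.name()}: {vcpus} vCPUs, {mem_kib // 1024} MiB RAM")
finally:
    conn.close()
```

The same API can define, start, and migrate guests; virsh, libvirt's command-line tool, wraps these calls.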
TYPES OF VIRTUALIZATION
Data virtualization

Data that's spread all over can be consolidated into a single source. Data virtualization allows companies to treat data as a dynamic supply, providing processing capabilities that can bring together data from multiple sources, easily accommodate new data sources, and transform data according to user needs. Data virtualization tools sit in front of multiple data sources and allow them to be treated as a single source, delivering the needed data, in the required form, at the right time, to any application or user.
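As a rough sketch of the pattern (every class name here is invented for illustration; real data virtualization products add query pushdown, caching, and governance), the layer is essentially a facade that answers one query by consulting several heterogeneous back ends:

```python
# Toy sketch of the data virtualization pattern: one query interface in
# front of several heterogeneous sources. All classes here are invented.
import csv, io

class CsvSource:
    def __init__(self, text: str):
        self.rows = list(csv.DictReader(io.StringIO(text)))
    def query(self, **filters):
        return [r for r in self.rows
                if all(r.get(k) == v for k, v in filters.items())]

class DictSource:
    def __init__(self, rows):
        self.rows = rows
    def query(self, **filters):
        return [r for r in self.rows
                if all(r.get(k) == v for k, v in filters.items())]

class VirtualDataLayer:
    """Presents many sources as a single queryable source."""
    def __init__(self, *sources):
        self.sources = sources
    def query(self, **filters):
        results = []
        for src in self.sources:  # fan the query out to every back end
            results.extend(src.query(**filters))
        return results

crm = CsvSource("customer,region\nacme,EU\nglobex,US\n")
erp = DictSource([{"customer": "initech", "region": "EU"}])
layer = VirtualDataLayer(crm, erp)
print(layer.query(region="EU"))  # rows from both sources, one interface
```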
Desktop virtualization

Easily confused with operating system virtualization, which allows you to deploy multiple operating systems on a single machine, desktop virtualization allows a central administrator (or automated administration tool) to deploy simulated desktop environments to hundreds of physical machines at once. Unlike traditional desktop environments that are physically installed, configured, and updated on each machine, desktop virtualization allows admins to perform mass configurations, updates, and security checks on all virtual desktops.
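In practice that means one script can touch an entire desktop pool. The sketch below is hypothetical: VdiClient and its methods stand in for whatever management API a real VDI platform exposes, but the shape of a mass update looks like this.

```python
# Hypothetical sketch of a mass update across virtual desktops.
# VdiClient and its methods are invented placeholders, not a vendor API;
# real VDI platforms expose comparable management calls.

class VdiClient:
    def __init__(self, pool_name: str):
        self.pool_name = pool_name

    def list_desktops(self) -> list[str]:
        # Placeholder: a real client would query the VDI controller.
        return [f"{self.pool_name}-{i:03d}" for i in range(1, 4)]

    def apply_update(self, desktop: str, package: str) -> None:
        # Placeholder for pushing an update into one virtual desktop.
        print(f"updating {desktop}: {package}")

client = VdiClient("finance-pool")
# One loop reconfigures the whole pool; no machine-by-machine visits.
for desktop in client.list_desktops():
    client.apply_update(desktop, "browser-security-patch")
```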
Server virtualization

Servers are computers designed to process a high volume of specific tasks really well so other computers, like laptops and desktops, can do a variety of other tasks. Virtualizing a server lets it do more of those specific functions and involves partitioning it so that its components can be used to serve multiple functions.
Operating system virtualization

Operating system virtualization happens at the kernel, the central task manager of an operating system (see the sketch after the list below). It's a useful way to run Linux and Windows environments side by side. Enterprises can also push virtual operating systems to computers, which:
- Reduces bulk hardware costs, since the computers don't require such high out-of-the-box capabilities.
- Increases security, since all virtual instances can be monitored and isolated.
- Limits time spent on IT services like software updates.
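Here is a minimal sketch of what "happening at the kernel" means on Linux, assuming Python 3.12+ (for os.unshare) and root privileges: a child process enters a private UTS namespace, one of the kernel primitives underneath OS-level virtualization, and changes its hostname without affecting the host.

```python
# Sketch: a kernel primitive behind OS-level virtualization.
# Requires Linux, Python 3.12+ (os.unshare), and root privileges.
import os
import socket

print("host sees hostname:", socket.gethostname())

pid = os.fork()
if pid == 0:
    # Child: enter a private UTS namespace. Hostname changes made here
    # are invisible to the host; this is the isolation mechanism that
    # container runtimes build on.
    os.unshare(os.CLONE_NEWUTS)
    socket.sethostname("virtual-guest")
    print("guest sees hostname:", socket.gethostname())
    os._exit(0)

os.waitpid(pid, 0)
print("host still sees:", socket.gethostname())
```

Container runtimes combine several such namespaces (PID, mount, network) with cgroups to build a full OS-level virtual instance.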
Network functions virtualization

Network functions virtualization (NFV) separates a network's key functions (like directory services, file sharing, and IP configuration) so they can be distributed among environments. Once software functions are independent of the physical machines they once lived on, specific functions can be packaged together into a new network and assigned to an environment. Virtualizing networks reduces the number of physical components, like switches, routers, servers, cables, and hubs, that are needed to create multiple, independent networks, and it's particularly popular in the telecommunications industry.
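As a toy model of the idea (all function names and addresses here are invented for illustration), once network functions are just software, a "network" reduces to an ordered chain of functions that can be packaged and assigned to any environment:

```python
# Toy sketch of network functions virtualization: network functions as
# plain software components chained together. All names are invented.

def firewall(packet: dict) -> dict | None:
    # Drop traffic to a blocked port; pass everything else through.
    return None if packet["port"] == 23 else packet

def nat(packet: dict) -> dict:
    # Rewrite the private source address to a public one.
    packet["src"] = "203.0.113.7"
    return packet

# A "virtual network" is now just an ordered chain of functions that can
# be packaged and assigned to any environment; no dedicated boxes needed.
service_chain = [firewall, nat]

def process(packet: dict):
    for fn in service_chain:
        packet = fn(packet)
        if packet is None:
            return None  # dropped by an earlier function in the chain
    return packet

print(process({"src": "10.0.0.5", "port": 443}))  # passes, gets NATed
print(process({"src": "10.0.0.5", "port": 23}))   # dropped by firewall
```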
REFERENCES
- P. Barham et al., "Xen and the Art of Virtualization," Proc. 19th ACM Symp. Operating Systems Principles, ACM Press, 2003, pp. 164-177.
- T.C. Bressoud and F.B. Schneider, "Hypervisor-Based Fault Tolerance," Proc. 15th ACM Symp. Operating Systems Principles, ACM Press, 1995, pp. 1-11.
- G.W. Dunlap et al., "ReVirt: Enabling Intrusion Analysis through Virtual-Machine Logging and Replay," Proc. 5th Symp. Operating Systems Design and Implementation, Usenix, 2002, pp. 211-224.
- Red Hat, "What Is Virtualization?"; https://www.redhat.com/en/topics/virtualization/what-is-virtualization.
- Intel Corp., "Intel Virtualization Technology Specification for the Intel Itanium Architecture"; www.intel.com/technology/vt/.