Hyper-V and VMware vSphere Architectures: Pros and Cons
Both VMware and Microsoft have been in the server virtualization market for years: VMware for more than a decade now, while Microsoft is a relative newcomer.
Before proposing a virtualization solution to customers or employees, or deploying one in a production environment, IT professionals and organizations need to understand the differences between the Microsoft Hyper-V and VMware vSphere architectures, as well as the advantages and disadvantages each technology offers.
There are a number of important components to consider when choosing between VMware vSphere and Microsoft Hyper-V, but from an architectural standpoint, the following play the biggest role in choosing the right server virtualization product:
- Device Driver Location in the architecture
- Controlling Layer components
- Hypervisor Layer components
In general, virtualization vendors refer to three types of virtualization architecture:
- Type 2 VMM
- Type 1 VMM
- Hybrid VMM
Explaining all three types of virtualization architecture is beyond the scope of this article; we'll focus primarily on the Type 1 VMM, which is what both Microsoft Hyper-V and VMware vSphere use to implement their server virtualization technologies.
Type 1 VMM can be further divided into two subcategories: the Monolithic Hypervisor Design and the Microkernelized Hypervisor Design. Both designs have three layers in which the different components of the virtualization product operate.
The lowest layer is called the "Hardware Layer" and is virtualized by the "Hypervisor Layer" running directly on top of it. The top layer is called the "Controlling Layer." Its overall objective is to manage the components running in that layer and to provide the components virtual machines need to communicate with the "Hypervisor Layer."
Note: The "Hypervisor Layer" is sometimes referred to as the "VMM Layer" or "VM Kernel Layer."
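The three-layer stack described above can be sketched as a small conceptual model. This is purely illustrative Python: the layer names come from the article, but the function and data structure are hypothetical, not an API of either product.

```python
from typing import Optional

# Conceptual model of the Type 1 VMM stack, listed bottom to top.
# Illustrative only; not an API of Hyper-V or vSphere.
TYPE1_VMM_LAYERS = [
    "Hardware Layer",     # physical CPUs, memory, NICs, storage
    "Hypervisor Layer",   # runs directly on the hardware (a.k.a. "VMM Layer")
    "Controlling Layer",  # manages VMs and their path to the hypervisor
]

def layer_below(layer: str) -> Optional[str]:
    """Return the layer that the given layer sits directly on top of."""
    i = TYPE1_VMM_LAYERS.index(layer)
    return TYPE1_VMM_LAYERS[i - 1] if i > 0 else None

# The hypervisor runs directly on the hardware, with no OS in between.
print(layer_below("Hypervisor Layer"))  # prints "Hardware Layer"
```

Asking `layer_below("Hardware Layer")` returns `None`, which captures the defining property of a Type 1 VMM: nothing sits beneath the hardware, and the hypervisor runs on it directly rather than on a host operating system.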
Microkernelized Hypervisor Design
The Microkernelized Hypervisor Design is used by Microsoft Hyper-V. This design does not require the device drivers to be part of the Hypervisor layer — the device drivers operate independently and run in the "Controlling Layer" as shown in the image below:
The Microkernelized Hypervisor Design provides the following advantages:
- Device drivers for each device do not need to be incorporated into the "Hypervisor Layer" or VMM kernel.
- Since Microsoft does not provide Application Programming Interfaces (APIs) to access the "Hypervisor Layer," the attack surface is minimized: no one can inject foreign code into the "Hypervisor Layer."
- Device drivers do not need to be hypervisor-aware, so a wide range of devices can be used with the Microkernelized Hypervisor Design.
- There is no need to shut down the "Hypervisor Layer" to add device drivers. They can be installed in the operating system running in the "Controlling Layer" (Windows Server 2008, Windows Server 2008 R2, or Windows Server 2012) and used by the virtual machines to access the hardware in the "Hardware Layer."
- "Hypervisor Layer" has less overhead for maintaining and managing the Device Drivers.
- The Microkernelized Hypervisor Design allows you to install other server roles in the "Controlling Layer" alongside the server virtualization role (Hyper-V).
- Less initialization time is required. The Microsoft hypervisor code is only about 600 KB in size, so the "Hypervisor Layer" takes little time to initialize its components.
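The driver-placement difference that several of these advantages rest on can be shown with a short sketch. This is a conceptual illustration in Python with hypothetical names; it models only the idea that in the microkernelized design a new driver lands in the "Controlling Layer," leaving the "Hypervisor Layer" untouched, whereas a monolithic design would put it in the hypervisor itself.

```python
# Illustrative sketch of where a new device driver is installed in each
# Type 1 VMM design. All names are hypothetical, for explanation only.

def install_driver(design, stack, driver):
    """Place a driver into the layer the given design requires; returns a new stack."""
    stack = {layer: list(drivers) for layer, drivers in stack.items()}
    if design == "monolithic":
        # Monolithic design: drivers are part of the hypervisor itself,
        # so adding one means changing the hypervisor.
        stack["Hypervisor Layer"].append(driver)
    elif design == "microkernelized":
        # Microkernelized design (Hyper-V): drivers run in the OS in the
        # "Controlling Layer"; the hypervisor is not modified at all.
        stack["Controlling Layer"].append(driver)
    else:
        raise ValueError("unknown design: " + design)
    return stack

empty = {"Hypervisor Layer": [], "Controlling Layer": []}
micro = install_driver("microkernelized", empty, "nic-driver")
assert micro["Hypervisor Layer"] == []                # hypervisor untouched, so no
assert micro["Controlling Layer"] == ["nic-driver"]   # hypervisor downtime is needed
```

Running the same call with `"monolithic"` instead places the driver in the "Hypervisor Layer," which is exactly why that design requires hypervisor-aware drivers and a larger hypervisor attack surface.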