What is the term for the hardware used to allow multiple operating systems to run concurrently in a virtual network?

An Introduction to Virtualization

In Virtualization for Security, 2009

Frequently Asked Questions

Q:

What is virtual machine technology used for?

A:

Virtual machine technology serves a variety of purposes. Because multiple operating systems can run on one computer, it enables hardware consolidation, simplified system recovery, and the re-hosting of earlier applications. One key application for virtual machine technology is cross-platform integration. Other key applications include server consolidation, the automation and consolidation of development and testing environments, the re-hosting of earlier versions of applications, simplified system recovery environments, and software demonstrations.

Q:

How does virtualization address a CIO's pain points?

A:

IT organizations need to control costs, improve quality, reduce risks, and increase business agility, all of which are critical to a business's success. With virtualization, lower costs and improved business agility are no longer trade-offs: by enabling IT resources to be pooled and shared, virtualization lets IT organizations reduce costs and improve overall IT performance.

Q:

What is the status of virtualization standards?

A:

True open standards for getting all the layers talking and working together aren't ready yet, let alone giving users interoperable choices between competitive vendors. Users are forced to rely on de facto standards at this time. For instance, users can deploy two different virtualization products within one environment, especially if each provides the ability to import virtual machines from the other. But that is about as far as interoperability currently extends.

Q:

When is a product not really virtualization but something else?

A:

Application vendors have been known to overuse the term and label their products “virtualization ready.” But by definition, the application should not be able to tell whether it is on a virtualized platform or not. Some vendors also label their isolation tools as virtualization. Isolating an application means its files are installed but are redirected or shielded from the operating system. That is not the same as true virtualization, which lets you change any underlying component, even network and operating system settings, without having to tweak the application.

Q:

What is the ideal way to deploy virtualization?

A:

Although enterprises gain incremental benefits from applying virtualization in one area, they gain much more by using it across every tier of the IT infrastructure. For example, when server virtualization is deployed with network and storage virtualization, the entire infrastructure becomes more flexible, making it capable of dynamically adapting to various business needs and demands.

Q:

What are some of the issues to watch out for?

A:

Companies beginning to deploy virtualization technologies should watch out for, among other things: software costs/licensing for proliferating virtual machines, capacity planning, training, unrealistically high consolidation expectations, and upfront hardware investment. Sufficient planning upfront is also important to avoid issues that can cause unplanned outages affecting a large number of critical business applications and processes.


URL: https://www.sciencedirect.com/science/article/pii/B9781597493055000013

Software, Operating Systems, and Enterprise Applications

Bill Holtsnider, Brian D. Jaffe, in IT Manager's Handbook (Third Edition), 2012

Multiple Operating Systems

It's common now to have multiple OSes in one company, despite corporate efforts to keep everyone on one platform. A 2011 InformationWeek survey showed that 85 percent of IT organizations officially support more than one OS (with the average supporting three), and 65 percent of IT organizations allow one or more additional OSes that are not officially supported. A big part of this has been the growth of mobile devices with their own operating systems. Users sometimes go their own way. Supporting three platforms in one environment, of course, can be a difficult task. Here are some ways of dealing with mixed environments.

Operating System Emulators

A number of tools and products have been developed to allow applications written for one operating system to run on another. This can help eliminate the need for users to have two different workstations. While emulators are very convenient, they can create additional challenges for support. Just as there are nuances and idiosyncrasies with applications and operating systems (versions, components, settings, hardware, etc.), an emulator can complicate things further by adding another layer into the mix. For example, when an application isn't working right, is it because of the application, the core operating system, or the emulator? Perhaps it's how these are configured or how they interact with each other.

Virtual Machines

Virtualization allows you to take a single physical device (e.g., one workstation or server) and run multiple instances of operating systems. Each of these instances looks and operates as its own device, but because they coexist on a single physical device, they are considered to be virtual machines. Even if one of the instances should crash, the remaining operating system instances will continue to run. With virtual machine technology, all instances of the various operating systems run simultaneously, and switching among them is fast and easy.

Uses

The virtual machine offerings designed for clients are convenient for a number of scenarios. If your environment has to support multiple operating systems, virtual machine software can be used by:

Your Help Desk and support staff so that they can use a single piece of hardware to run different operating systems comparable to what different users have. In this way, the support staff can replicate the user's environment when providing support.

End users during operating system migration. If a particular business application isn't yet supported by more current versions of an operating system, it can challenge efforts to move the whole organization to the later OS version. By using virtual machine software, all users can be migrated to the new OS, and legacy applications can be run on an instance of an older OS.

Trainers, whose training room PCs can run various instances of operating systems, each configured similarly to various user scenarios. This allows the trainers to easily provide training that replicates the various user environments.

Software developers and testers, who can use virtual machine technology to easily test their applications against varying workstation configurations and environments.

Virtual machine offerings for servers also have distinct benefits and are quickly being embraced by organizations as a way to efficiently consolidate servers. By consolidating servers, the investment in hardware can be reduced. But perhaps one of the most significant advantages is that consolidation requires less physical space in the data center, along with less cooling and electricity. Virtual machines also let IT quickly create new environments for testing and development without having to buy new hardware. Server environments that are virtualized can be moved easily and quickly to different hardware, which greatly simplifies the effort of optimizing resources and moving applications to more current hardware.

Also, legacy applications can be run on virtual servers without having to set aside dedicated hardware environments for them. It is important to note that with the growing popularity of virtualization, many software vendors have included specific references to this technology in their licensing agreements so that if you have multiple copies of their software running on virtualized servers, you need to be sure that each instance is licensed properly. The most popular solutions for virtualization are VMware from EMC and Hyper-V from Microsoft.
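
To make the consolidation arithmetic concrete, here is a minimal Python sketch, not taken from the book, that estimates how many virtualization hosts a set of physical server workloads could be packed onto with a first-fit-decreasing heuristic. The host capacities and workload sizes are invented for illustration.

```python
# Illustrative consolidation estimate: pack existing server workloads onto
# fewer virtualization hosts with a first-fit-decreasing heuristic.
# Host capacity and workload figures below are hypothetical examples.

from dataclasses import dataclass, field

@dataclass
class Host:
    cpu_free: float                          # vCPUs still available on this host
    ram_free: float                          # GB of RAM still available on this host
    vms: list = field(default_factory=list)

def consolidate(workloads, host_cpu=32.0, host_ram=256.0):
    """Pack (name, vcpus, ram_gb) workloads onto as few hosts as the heuristic finds."""
    hosts = []
    # Place the largest workloads first to reduce fragmentation.
    for name, vcpus, ram in sorted(workloads, key=lambda w: (w[1], w[2]), reverse=True):
        target = next((h for h in hosts
                       if h.cpu_free >= vcpus and h.ram_free >= ram), None)
        if target is None:
            target = Host(cpu_free=host_cpu, ram_free=host_ram)
            hosts.append(target)
        target.cpu_free -= vcpus
        target.ram_free -= ram
        target.vms.append(name)
    return hosts

if __name__ == "__main__":
    # Twelve lightly loaded physical servers, each becoming one VM after P2V.
    servers = [(f"srv-{i:02d}", 4, 16) for i in range(12)]
    hosts = consolidate(servers)
    print(f"{len(servers)} physical servers -> {len(hosts)} virtualization hosts")
```

A real sizing exercise would also account for peak versus average utilization, failover headroom, and the licensing considerations noted above.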

Virtual Desktop Infrastructure (VDI)

An offshoot of virtual machines is the virtual desktop infrastructure (VDI). With VDI, end users are given “thin” clients (e.g., low-end workstations, netbooks, and even tablet devices), which they use to access a virtual workstation environment hosted on a server farm in the data center. The real horsepower resides in the VM sessions, which have the tools and software the users need. The benefit of VDI is that the user devices are inexpensive, with little configuration and software associated with them, almost like the dumb terminals of mainframe days, and the VMs are more easily managed because they are centralized. Users have an identical experience, with access to their files and programs, regardless of where they are or which device is in front of them. The number of VMs can be easily scaled to adjust to a changing workforce so that the environment and investment are optimized.

However, so far the ROI on VDI has been somewhat illusory. Case studies have shown that the upfront costs can be very high and that there have been problems with user acceptance, performance, and software application compatibility in virtual environments.
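
To see why the ROI picture depends so heavily on the planning horizon, here is a back-of-the-envelope Python sketch; every cost figure in it is a hypothetical placeholder, not a number from the book or from any case study.

```python
# Back-of-the-envelope comparison of cumulative cost: VDI vs. traditional desktops.
# Every figure below is a hypothetical placeholder; substitute real quotes.

def cumulative_cost(upfront_per_user, yearly_per_user, users, years):
    """Total cost over the horizon: one-time spend plus recurring support."""
    return users * (upfront_per_user + yearly_per_user * years)

if __name__ == "__main__":
    USERS, YEARS = 500, 5

    # Traditional desktops: cheaper up front, costlier to support each year.
    traditional = cumulative_cost(upfront_per_user=900, yearly_per_user=400,
                                  users=USERS, years=YEARS)

    # VDI: thin client plus a per-seat share of servers/storage/licenses up front,
    # lower per-seat support afterwards.
    vdi = cumulative_cost(upfront_per_user=300 + 1100,
                          yearly_per_user=250, users=USERS, years=YEARS)

    print(f"Traditional desktops over {YEARS} years: ${traditional:,}")
    print(f"VDI over {YEARS} years:                  ${vdi:,}")
```

With these made-up figures, VDI only pulls ahead after several years, which is one way the expected ROI can look illusory when upfront costs are underestimated or the planning horizon is short.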


URL: https://www.sciencedirect.com/science/article/pii/B9780124159495000053

Configuring Windows Server Hyper-V and Virtual Machines

Tony Piltzecker, Brien Posey, in The Best Damn Windows Server 2008 Book Period (Second Edition), 2008

Understanding Virtualization

At times, you will need a method to hide the physical characteristics of the host system from the way other systems, applications, and end users interact with it. For example, a single server, operating system, application, or storage device may need to appear to function as multiple logical resources. You may also want to make multiple physical resources (such as storage devices or servers) appear as a single logical resource. This method is known as virtualization, and it is achieved in Windows Server 2008 by installing Hyper-V and creating virtual machines.

Virtualization is a broad term that covers many different uses. The trouble is understanding the different types of virtualization, what each type offers, and how each can help accomplish your individual goals. There are currently three major types of virtualization available for use with Windows Server 2008 with the Hyper-V role installed. Here is a simple explanation of each:

Server Virtualization is when the physical server hardware is separated from the guest operating systems (servers).

Network Virtualization involves creating the illusion that all resources on the network are part of the user's desktop computer, by localizing network applications in a seamless fashion through virtualization.

Storage Virtualization involves hiding disk storage complexities by creating a Storage Area Network, which is responsible for redirecting storage requests from end users through virtualization.

Hyper-V allows virtual machine technology to be applied to both server and client hardware. It enables multiple operating systems to run concurrently on a single machine. Hyper-V, specifically, is a key feature of Windows Server 2008. Until the release of Windows Server 2008, many x86-based environments seeking a Microsoft solution achieved virtualization via Virtual PC 2007 and Virtual Server 2005.

News and Noteworthy…

Understanding Virtual PC and Server

Before Windows Server 2008, there were other options for virtualization and emulation. Two of the most common that you should be aware of are Virtual PC and Virtual Server 2005. Both are virtualization suites for the PC and emulation suites for use on Mac OS X. They allow the cross-platform use of a variety of PC applications and operating systems. Virtual Server 2005 and Virtual PC 2007 mimic a standard Intel Pentium 4 processor with an Intel 440BX chipset, so they can be used to emulate certain PC applications on a PowerPC-based Mac. Virtual PC 2007 is a more recent release that allows the emulation and virtualization of Windows Vista. However, issues can arise when trying to install uncommon operating systems that have not been specifically targeted in the development of Virtual PC or Virtual Server.

Virtual PC for the Mac uses dynamic recompilation to translate the x86 code used by a standard PC into the equivalent PowerPC code used by a Mac. Virtual PC for Windows uses a similar kind of dynamic recompilation, but instead translates kernel mode and real mode x86 code into x86 user mode code; the original user mode and virtual 8086 mode code runs natively, allowing fluidity.

Both Virtual PC and Virtual Server were useful in the development of the virtualization trend but received some complaints. Both fell short in specific areas of compatibility, especially with regard to Linux and uncommon applications. Another drawback is that Virtual Server 2005 and Virtual PC 2007 can run on hosts with x64 processors but cannot run guests that require an x64 processor running a 64-bit OS. Both products are still commonly used but have been slowly declining in popularity due to free virtualization software such as VMware and Xen. Windows Server 2008 is Microsoft's answer to this, offering more options than its free competitors via the new features available through Hyper-V.

Virtualization creates virtual machines, each of which is capable of running a different operating system on a single physical machine, with the virtual machines running without interference behind their own partitions. This can be used to simulate multiple native environments and provide most of the benefits that would normally require multiple machines and deployments. Virtualization is a growing trend that can greatly reduce deployment costs. It allows a single machine to run multiple operating systems simultaneously while letting you dynamically add physical and virtual resources to those machines as needed. For example, you can install multiple virtualized operating systems on the host computer, and each virtualized operating system will run in isolation from the other operating systems installed.

News and Noteworthy…

Expanding the Limits of Hyper-V Virtualization

As a stand-alone virtualization suite installed on Windows Server 2008, Hyper-V has many features that surpass previous virtualization software packages. Hyper-V allows for greater application compatibility than previous software of its kind. Because of this, Hyper-V can utilize and take advantage of other tools and applications to create an even more versatile and dynamic experience. This greatly increases the options for administrators to customize their virtual network to their company's specific needs.

There are many scenarios that test the limits of Hyper-V and that Microsoft is only beginning to speculate on. As an example of the options available with Hyper-V, consider this scenario. Many of you may be familiar with Virtual PC 2007 and use it as a solution for running a virtualized Windows 98 operating system. You may find it interesting to know that, although this scenario is not supported by Microsoft using Hyper-V, it is possible to run Windows Server 2008 with Hyper-V, install Windows Server 2003 in a virtual machine, and then install Virtual PC 2007 on that virtual machine and use it to run Windows 98. Both Virtual PC and Virtual Server work in this scenario and perform better than expected. Microsoft plans to expand its support of Hyper-V to eventually cover more of these unique scenarios.

Although virtualization has existed in many incarnations, it is not a catch-all fix for all networking needs. There are still many precautions required when working with Hyper-V or any virtualization suite. For example, if the machine acting as the host for your virtual network is not part of a failover group, or you do not have a backup method such as a hot spare, you could find yourself with a total network outage if that computer fails. There will always be a need for redundancy of some kind, if for no other reason than to be thorough and prove due diligence. Also note that Hyper-V has heavier system requirements, such as enterprise-class hardware, and training needs that must also be factored into your company's cost considerations.

The potential of virtual machine technology is only beginning to be harnessed, and the technology can be used for a wide range of purposes. It enables hardware consolidation, because multiple operating systems can run on one computer. Key applications for virtual machine technology include cross-platform integration as well as the following:

Server consolidation Consolidating many physical servers into fewer servers by using host virtual machines. Virtual machine technology can be used to run a single server with multiple operating systems on it side by side, replacing the multiple servers that were once required because their applications needed different operating systems. The physical servers are transformed into virtual machine “guests” that occupy a virtual machine host system. This is also called Physical-to-Virtual (P2V) transformation.

Disaster recovery Physical production servers can use virtual machines as “hot standby” environments. Virtual machines can provide backup images that boot into live virtual machines, which allows for application portability and flexibility across hardware platforms and changes the traditional “backup and restore” mentality. These VMs are then capable of taking over the workload of a production server experiencing an outage (a minimal failover sketch follows this list).

Consolidation for testing and development environments Virtual machines can be used to test early development versions of applications without fear of destabilizing the system for other users. Because each virtual machine is its own isolated testing environment, it reduces the risks incurred by developers. Virtual machines can be used to compare different versions of applications for different operating systems and to test those applications in a variety of configurations. Kernel development and operating system test courses can greatly benefit from hardware virtualization, which can give root access to a virtual machine and its guest operating system.

Upgrading to a dynamic datacenter Dynamic IT environments are now achievable through virtualization. Hyper-V, along with systems management solutions, enables you to troubleshoot problems more effectively and creates an IT management solution that is efficient and self-managing.
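
To picture the disaster recovery item above, here is a minimal, purely conceptual Python sketch of a hot-standby failover decision. The health check, VM boot, and traffic redirection callables are hypothetical stand-ins supplied by the caller; they are not Hyper-V or load-balancer APIs.

```python
# Conceptual hot-standby failover for the disaster recovery scenario above.
# The health check, VM boot, and traffic redirection callables are hypothetical
# stand-ins, not Hyper-V or load-balancer APIs.

from typing import Callable

def failover_if_down(production: str,
                     standby_image: str,
                     is_healthy: Callable[[str], bool],
                     boot_standby_vm: Callable[[str], str],
                     redirect_traffic: Callable[[str, str], None]) -> bool:
    """Boot the standby VM and redirect traffic if the production host is down."""
    if is_healthy(production):
        return False
    standby = boot_standby_vm(standby_image)
    redirect_traffic(production, standby)
    return True

if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end; a real check would probe the host.
    performed = failover_if_down(
        production="erp-prod-01",
        standby_image="erp-prod-01-backup.vhd",        # hypothetical backup image name
        is_healthy=lambda host: False,                 # simulate an outage
        boot_standby_vm=lambda image: "erp-standby-01",
        redirect_traffic=lambda old, new: print(f"traffic: {old} -> {new}"),
    )
    print("failover performed" if performed else "production healthy")
```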

In past Windows versions there were specific limitations on the virtual machines that could be created. In Windows Server 2008 with Hyper-V installed, each virtual operating system is a full operating system with access to applications. You can also run non-Microsoft operating systems in the virtualized environment of Windows Server 2008.


URL: https://www.sciencedirect.com/science/article/pii/B9781597492737000100

Cloud Computing and the Forensic Challenges

Diane Barrett, Gregory Kipper, in Virtualization and Forensics, 2010

Cloud Computing, Virtualization, and Security

Cloud services at the infrastructure, platform, or software level often go hand-in-hand with virtualization in order to create economies of scale. However, combining these technologies does create some additional security concerns. We have discussed several types of virtualization so far in this book, but for this section, we will be focusing on the virtualized OS and some of the specific security risks associated with it, which are compartmentalization, network security, and centralization. We'll look at each in turn:

Compartmentalization: specifically, if VM technology is being used in cloud services, it is very important to ensure that appropriate compartmentalization of virtual environments is happening and that the VM systems themselves are being hardened.

Network security: since VMs communicate over virtualized hardware that simulates a network environment, standard network security controls cannot see this traffic and cannot perform their monitoring functions. Network security needs to take on a new form in the virtual environment.

Centralization: while centralizing services and storage has security benefits to be sure, in this instance, there is now the problem of centralizing risk that increases the consequences of a breach. Another concern with this kind of centralization is the different levels of sensitivity and security that each of the VMs may need. In cloud environments, typically the lowest common level of security is the standard in a multitenant environment, which can quickly lead to insufficient security for VMs requiring more.

Some questions to consider when looking at virtualization from a cloud provider include the following:

What types of virtualization do they use, if any?

What security controls are in place on the VM beyond just hypervisor isolation?

What security controls are in place external to the VM?

What is the integrity of the VM image being offered by the cloud provider?

What reporting mechanisms are in place providing evidence of isolation?


URL: https://www.sciencedirect.com/science/article/pii/B9781597495578000102

Cloud Computing Architecture

Rajkumar Buyya, ... S. Thamarai Selvi, in Mastering Cloud Computing, 2013

4.2.1 Architecture

It is possible to organize all the concrete realizations of cloud computing into a layered view covering the entire stack (see Figure 4.1), from hardware appliances to software systems. Cloud resources are harnessed to offer “computing horsepower” required for providing services. Often, this layer is implemented using a datacenter in which hundreds and thousands of nodes are stacked together. Cloud infrastructure can be heterogeneous in nature because a variety of resources, such as clusters and even networked PCs, can be used to build it. Moreover, database systems and other storage services can also be part of the infrastructure.


Figure 4.1. The cloud computing architecture.

The physical infrastructure is managed by the core middleware, the objectives of which are to provide an appropriate runtime environment for applications and to best utilize resources. At the bottom of the stack, virtualization technologies are used to guarantee runtime environment customization, application isolation, sandboxing, and quality of service. Hardware virtualization is most commonly used at this level. Hypervisors manage the pool of resources and expose the distributed infrastructure as a collection of virtual machines. By using virtual machine technology it is possible to finely partition the hardware resources such as CPU and memory and to virtualize specific devices, thus meeting the requirements of users and applications. This solution is generally paired with storage and network virtualization strategies, which allow the infrastructure to be completely virtualized and controlled. According to the specific service offered to end users, other virtualization techniques can be used; for example, programming-level virtualization helps in creating a portable runtime environment where applications can be run and controlled. This scenario generally implies that applications hosted in the cloud be developed with a specific technology or a programming language, such as Java, .NET, or Python. In this case, the user does not have to build its system from bare metal. Infrastructure management is the key function of core middleware, which supports capabilities such as negotiation of the quality of service, admission control, execution management and monitoring, accounting, and billing.
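
As a rough illustration of how a hypervisor-level layer might partition CPU and memory and enforce admission control, consider the following Python sketch. The class, the capacities, and the request sizes are invented for the example and do not correspond to any particular hypervisor's interface.

```python
# Toy model of hypervisor-level partitioning with admission control: a VM
# request is admitted only if the pool still has the CPU and memory it asks for.
# Capacities and requests below are invented for illustration.

from dataclasses import dataclass

@dataclass
class HostPool:
    cpu_total: float      # total vCPUs in the pool
    mem_total_gb: float   # total memory in GB
    cpu_used: float = 0.0
    mem_used_gb: float = 0.0

    def admit(self, name: str, vcpus: float, mem_gb: float) -> bool:
        """Admission control: reserve the requested slice only if it fits."""
        if self.cpu_used + vcpus > self.cpu_total:
            return False
        if self.mem_used_gb + mem_gb > self.mem_total_gb:
            return False
        self.cpu_used += vcpus
        self.mem_used_gb += mem_gb
        print(f"admitted {name}: {vcpus} vCPU, {mem_gb} GB")
        return True

if __name__ == "__main__":
    pool = HostPool(cpu_total=64, mem_total_gb=512)
    requests = [("web-tier", 8, 32), ("db-tier", 16, 128),
                ("analytics", 48, 256)]          # deliberately too large to fit
    for name, vcpus, mem in requests:
        if not pool.admit(name, vcpus, mem):
            print(f"rejected {name}: insufficient capacity")
```

Real core middleware layers layer QoS negotiation, monitoring, accounting, and billing on top of this basic admit/reject decision.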

The combination of cloud hosting platforms and resources is generally classified as an Infrastructure-as-a-Service (IaaS) solution. We can organize the different examples of IaaS into two categories: some provide both the management layer and the physical infrastructure; others provide only the management layer (IaaS (M)). In the second case, the management layer is often integrated with other IaaS solutions that provide the physical infrastructure, adding value to them.

IaaS solutions are suitable for designing the system infrastructure but provide limited services for building applications. Such services are provided by cloud programming environments and tools, which form a new layer offering users a development platform for applications. The range of tools includes Web-based interfaces, command-line tools, and frameworks for concurrent and distributed programming. In this scenario, users develop their applications specifically for the cloud by using the API exposed at the user-level middleware. For this reason, this approach is also known as Platform-as-a-Service (PaaS), because the service offered to the user is a development platform rather than an infrastructure. PaaS solutions generally include the infrastructure as well, bundled as part of the service provided to users. In the case of Pure PaaS, only the user-level middleware is offered, and it has to be complemented with a virtual or physical infrastructure.

The top layer of the reference model depicted in Figure 4.1 contains services delivered at the application level. These are mostly referred to as Software-as-a-Service (SaaS). In most cases these are Web-based applications that rely on the cloud to provide service to end users. The horsepower of the cloud provided by IaaS and PaaS solutions allows independent software vendors to deliver their application services over the Internet. Other applications belonging to this layer are those that strongly leverage the Internet for their core functionalities that rely on the cloud to sustain a larger number of users; this is the case of gaming portals and, in general, social networking websites.

As a vision, any service offered in the cloud computing style should be able to adaptively change and expose an autonomic behavior, in particular for its availability and performance. As a reference model, it is then expected to have an adaptive management layer in charge of elastically scaling on demand. SaaS implementations should feature such behavior automatically, whereas PaaS and IaaS generally provide this functionality as a part of the API exposed to users.

The reference model described in Figure 4.1 also introduces the concept of everything as a Service (XaaS). This is one of the most important elements of cloud computing: Cloud services from different providers can be combined to provide a completely integrated solution covering all the computing stack of a system. IaaS providers can offer the bare metal in terms of virtual machines where PaaS solutions are deployed. When there is no need for a PaaS layer, it is possible to directly customize the virtual infrastructure with the software stack needed to run applications. This is the case of virtual Web farms: a distributed system composed of Web servers, database servers, and load balancers on top of which prepackaged software is installed to run Web applications. This possibility has made cloud computing an interesting option for reducing startups’ capital investment in IT, allowing them to quickly commercialize their ideas and grow their infrastructure according to their revenues.

Table 4.1 summarizes the characteristics of the three major categories used to classify cloud computing solutions. In the following section, we briefly discuss these characteristics along with some references to practical implementations.

Table 4.1. Cloud Computing Services Classification

Category: SaaS
Characteristics: Customers are provided with applications that are accessible anytime and from anywhere.
Product Type: Web applications and services (Web 2.0)
Vendors and Products: SalesForce.com (CRM); Clarizen.com (project management); Google Apps

Category: PaaS
Characteristics: Customers are provided with a platform for developing applications hosted in the cloud.
Product Type: Programming APIs and frameworks; deployment systems
Vendors and Products: Google AppEngine; Microsoft Azure; Manjrasoft Aneka; Data Synapse

Category: IaaS/HaaS
Characteristics: Customers are provided with virtualized hardware and storage on top of which they can build their infrastructure.
Product Type: Virtual machine management infrastructure; storage management; network management
Vendors and Products: Amazon EC2 and S3; GoGrid; Nirvanix


URL: https://www.sciencedirect.com/science/article/pii/B9780124114548000048

A survey on resource allocation in high performance distributed computing systems

Hameed Hussain, ... Ammar Rayes, in Parallel Computing, 2013

2.4.9 VM migration

VM technology has emerged as a building block of data centers, as it provides isolation, consolidation, and migration of workloads. The purpose of migrating VMs is to improve the performance, fault tolerance, and manageability of systems over the cloud. Moreover, in large-scale systems, VM migration can also be used to balance load by migrating workload from overloaded or overheated systems to underutilized systems. Some hypervisors, such as VMware [89] and Xen, provide “live” migration, in which the OS continues to run while the migration is performed. VM migration is an important aspect of the cloud for achieving high performance and fault tolerance.
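
The load-balancing use of migration described above can be sketched as a simple greedy policy: repeatedly move the busiest VM from the most loaded host to the least loaded one while doing so lowers the peak load. The Python sketch below is a conceptual illustration with invented utilization numbers, not a description of VMware or Xen live migration internals.

```python
# Greedy sketch of migration-based load balancing: move the busiest VM off the
# most loaded host while doing so lowers the peak load. Host names and CPU
# shares are invented; real schedulers also weigh migration cost and memory.

def rebalance(hosts: dict[str, dict[str, float]], threshold: float = 0.80):
    """hosts maps host -> {vm: cpu share}; returns the list of migrations made."""
    migrations = []
    while True:
        load = {h: sum(vms.values()) for h, vms in hosts.items()}
        hot = max(load, key=load.get)
        cold = min(load, key=load.get)
        if load[hot] <= threshold or hot == cold:
            break
        vm = max(hosts[hot], key=hosts[hot].get)   # busiest VM on the hot host
        size = hosts[hot][vm]
        if load[cold] + size >= load[hot]:         # migration would not lower the peak
            break
        hosts[cold][vm] = hosts[hot].pop(vm)       # "migrate" the VM
        migrations.append((vm, hot, cold))
    return migrations

if __name__ == "__main__":
    cluster = {
        "host-a": {"vm1": 0.40, "vm2": 0.35, "vm3": 0.30},   # overloaded host
        "host-b": {"vm4": 0.20},
        "host-c": {"vm5": 0.10},
    }
    for vm, src, dst in rebalance(cluster):
        print(f"migrate {vm}: {src} -> {dst}")
```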


URL: https://www.sciencedirect.com/science/article/pii/S016781911300121X

On cloud security attacks: A taxonomy and intrusion detection and prevention as a service

Salman Iqbal, ... Kim-Kwang Raymond Choo, in Journal of Network and Computer Applications, 2016

2.1 Levels of attacks in CC

The following section defines the different types of attacks which occur in a CC environment. Each of these attacks is described as follows:

2.1.1 VM-to-VM attacks

Virtual machines can be considered containers that hold applications and guest operating systems. Cloud providers use hypervisor and VM technologies in multi-tenant cloud environments, and these technologies contain potential vulnerabilities. CC based on VM technology relies on hypervisors such as VMware vSphere, Microsoft Virtual PC, Xen, etc. Attacks occur due to the vulnerabilities in these technologies (Sabahi, 2011).

2.1.2 Client-to-client attacks

A client attacks other clients' machines by exploiting vulnerabilities in client applications that run on a malicious server. Because several VMs share one physical server, one malicious VM can infect all the other VMs working on the same physical machine, as illustrated in Fig. 1. Here, an attack on one client VM escapes to other clients' VMs that are hosted on the same physical machine. As a result, the entire virtualized environment can become compromised: malicious clients can escape the hypervisor and access the VM environment, gain the administrative privileges of the virtualized environment, and thus access all the VMs. Hence, client-to-client attacks are a major security risk to the virtualized environment (Sabahi, 2011).


Fig. 1. Client to client attacks.

2.1.3 Guest-to-guest attack

Securing the host machine from attacks is an important factor, because if an attacker gains administrative access to the hardware, the attacker can most probably break into the VMs. This scenario is called a guest-to-guest attack and is illustrated in Fig. 2. As a result, attackers can hop from one VM to another because the underlying security framework is compromised (Reuben, 2007).


Fig. 2. VM to VM/ Guest to guest attacks.


URL: https://www.sciencedirect.com/science/article/pii/S1084804516301771

Resource provisioning in edge/fog computing: A Comprehensive and Systematic Review

Ali Shakarami, ... Mohammad Faraji-Mehmandar, in Journal of Systems Architecture, 2022

2.4.3 Granularity

Generally, granularity can be defined as the degree to which the system is broken down into small parts. In the provisioning process, it refers to the size of the resource to be provisioned; in this respect, granularity is considered at the level of a virtual machine, a container, or an application. Virtual machine technology has been recognized as a near-perfect solution for many problems, such as service deployment in distributed systems and specifically in edge/fog environments. However, due to the high degree of coherency and dependency of virtual machines on the host operating system, container-based services have been proposed. These services offer improved flexibility in delivering micro-service structures and more efficient resource consumption compared with virtual machines. Despite its advantages, container technology suffers from some critical weaknesses, including the inability to run at bare-metal speed, its complexity, and the lack of a comprehensive platform for running all applications [24].

Application provisioning and task provisioning are the primary, and still underutilized, methods applied by researchers. The mentioned applications and tasks can be run on virtual machines, containers, or physical machines to process users' requests efficiently. Since these applications and tasks run on remote servers located in distributed systems, real-time processing of requests from life-critical applications is challenging in such systems. Application provisioning and task provisioning are techniques that alleviate some of these challenges by provisioning applications and tasks so that they are available during request fluctuations [25].


URL: https://www.sciencedirect.com/science/article/pii/S1383762121002526

A comprehensive survey of hardware-assisted security: From the edge to the cloud

Luigi Coppolino, ... Luigi Romano, in Internet of Things, 2019

6.3 Issues

Over the years, vulnerabilities in Intel VT and AMD SVM have been demonstrated. On one hand, HW-assisted VM isolation can ensure protection against a set of rootkits. On the other hand, new advanced rootkits such as BluePill [28] and Vitriol were specifically designed for HW-assisted virtualization. BluePill relies on AMD SVM technology and is installed without modifications to the BIOS or other boot sectors. BluePill manipulates kernel mode memory paging and the VMRUN and related SVM instructions that control the interaction between the hypervisor and guest. This permits undetected, on-the-fly placement of the host operating system in its own secure virtual machine, allowing complete control of the system, including manipulation by other malware. Attacks like BluePill and Vitriol motivated the creation of new mechanisms for hypervisor detection on Intel VT and AMD SVM, such as signature-based and behavior-based detection techniques.

Moreover, Wojtczuk and Rutkowska [29] demonstrated that Intel VT-d can be attacked when Interrupt Remapping is disabled. The attacks they describe work by forcing the corresponding device to generate a so-called Message Signaled Interrupt (MSI), i.e., an in-band mechanism for interrupt signaling. What makes the MSI-based attacks especially interesting is that in most cases it is possible to mount such an attack without cooperating hardware (a malicious device), using an entirely innocent and regular device. An additional attack on the IOMMU came from Morgan et al. [30], who showed that it can be violated by exploiting a weakness in the typical design of Intel VT-d and AMD-Vi. The weakness is related to the configuration tables of the IOMMU, which are initialized in a DRAM region that is not protected from DMA accesses. A malicious peripheral may exploit this weakness to modify these tables in memory just before the hardware setup of the IOMMU.


URL: https://www.sciencedirect.com/science/article/pii/S2542660519300101

Federated cloud resource management: Review and discussion

Misbah Liaqat, ... Rana Liaqat Ali, in Journal of Network and Computer Applications, 2017

1 Introduction

Cloud computing (Armbrust et al., 2010; Buyya et al., 1969; Nurmi et al., 2009; Vaquero et al., 2008), a user-centric computational model, is a flexible paradigm for deploying and sharing distributed services and resources under the pay-per-use model. With virtual machine (VM) technology (Smith and Nair, 2005) and data centers (DCs), computational resources, such as memory, central processing unit (CPU), and storage, are dynamically reassembled and partitioned to meet the specific requirements of end users. The cloud not only provides flexible, platform-independent access to resources and information anywhere and anytime but also changes the way applications are designed, deployed, built, expanded, and run (Cao et al., 2009). The growing demand for cloud services presents considerable challenges for cloud providers in meeting the requirements and satisfaction of end users (Talia, 2012).

Research by Bakshi (2009) and Bernstein et al. (2009) showed that the trend in cloud computing will shift from single providers to federated clouds, which are expected to include numerous distributed public and private cloud platforms (Lopez-Rodriguez and Hernandez-Tejera, 2011). Federated clouds enable public and private clouds to share their resources with each other to scale up their resource pools at peak times. The promises of nearly infinite computing power, concomitant storage, and economies of scale can only truly be achieved by cloud federation. The reason behind cloud federation is the finite physical resources in the resource pool of a single provider.

In the federated cloud environment, a cloud provider acts as both infrastructure provider and consumer. The egocentric and rational behavior of federation members focuses on maximizing their revenue and resource utilization by serving as many consumers as possible. Here, consumers can be either federation members who rent resources from one another or regular cloud users. The role of efficient resource management is prominent in such a scenario to guarantee the service request of both the federation members and direct consumers. The resource management functions in the federated cloud environment ensure the objective of federation members and the aggregate utility of the federation, which is necessary for the continuation of the federation.

This study focuses on the resource management functions of the federated cloud, in which an individual cloud provider provides and consumes Infrastructure as a Service (IaaS) to and from other federation members. The terms inter-cloud, federated cloud, and multi-cloud are used interchangeably in this article for the federation of cloud providers. Three surveys (Grozev and Buyya, 2014; Petcu, 2014; Toosi et al., 2014) were previously published on cloud federation. However, those surveys outlined the taxonomies, terminologies, definitions, and challenges for inter-cloud systems. This survey differs from them in the sense that we focus only on the resource management aspects of inter-cloud computing and study the proposed solutions for inter-cloud resource management issues. Our contributions are as follows: (1) The federated resource management mechanism is classified into resource pricing, resource discovery, resource selection, resource monitoring, resource allocation, and disaster management. (2) Rigorous works on resource pricing, resource monitoring, resource discovery, resource selection, resource allocation, and disaster management are reviewed, and the operation and drawbacks of each mechanism are presented with a comparative analysis in terms of different performance metrics for each considered class. (3) Open challenges in each considered class of federated resource management are indicated. (4) Novice researchers are encouraged to work on the identified research problems.

The remainder of this article is divided into three sections. Section 2 presents a general background of cloud computing, federated cloud computing environments, and resource management in federated cloud environments. Section 3 provides the classification of resource management functions along with the discussion of the state-of-the-art literature. Section 4 concludes the paper.


URL: https://www.sciencedirect.com/science/article/pii/S1084804516302387

What is the name of the component that lets you run multiple operating systems on one physical server?

A hypervisor, or VMM (virtual machine monitor), is a layer that exists between the operating system and the hardware. It provides the services and features necessary for multiple operating systems to run smoothly.

What are the 3 types of virtualization?

There are three main types of server virtualization: full virtualization, para-virtualization, and OS-level virtualization.

Which types of computers run multiple operating systems?

Unix, VMS and mainframe operating systems, such as MVS, are examples of multiuser operating systems.

What is virtualization and hypervisor?

A hypervisor, also known as a virtual machine monitor or VMM, is software that creates and runs virtual machines (VMs). A hypervisor allows one host computer to support multiple guest VMs by virtually sharing its resources, such as memory and processing.