Virtualization

Virtualization refers to partitioning one physical computer into multiple logical computers. Each logical computer can run a different operating system, and applications can run in separate spaces without affecting one another, significantly improving the computer's efficiency.
Effect

Virtualization is a broad term referring to computing components that run on a virtual rather than a physical basis. It is a solution for simplifying management and optimizing resources. Consider an office building whose floors have no fixed walls: occupants can build self-contained office spaces at the same cost, saving money and maximizing space utilization. In the IT field, this idea of re-planning limited, fixed resources according to different needs to achieve maximum utilization is called virtualization technology.

Virtualization technology can expand the capacity of hardware and simplify software reconfiguration. CPU virtualization lets a single CPU simulate multiple CPUs running in parallel, allowing one platform to run multiple operating systems at the same time; applications run in separate spaces without affecting each other, significantly improving the efficiency of the computer.

Virtualization technology is completely different from multitasking and from Hyper-Threading. Multitasking means multiple programs running in parallel within one operating system. With virtualization, multiple operating systems run simultaneously, each with its own running programs, and each operating system runs on a virtual CPU or virtual host. Hyper-Threading merely lets a single CPU simulate two CPUs to balance program performance; the two simulated CPUs cannot be separated and can only work together.

Hardware virtualization technology also differs from software such as VMware Workstation that achieves virtualization purely in software; it is a major technological advance, both in reducing the overhead of software virtual machines and in supporting a wider range of operating systems.

There are many definitions of virtualization; some of them are given below.

"Virtualization is the process of representing computer resources in a way that users and applications can easily benefit from them, rather than representing them based on their implementation, geographic location, or physical packaging." In other words, it provides a logical view of data, computing power, storage resources, and other resources, rather than physical views.” — Jo nathan Eunice, Illuminata Inc.

"Virtualization is the process of presenting a logical grouping (or subset) of computing resources so that they can be accessed in ways that give benefits over the original configuration. This new virtual view of the resources is not restricted by the implementation, geographic location, or physical configuration of the underlying resources." -- Wikipedia

"Virtualization: Provides a common set of abstract interfaces to a set of similar resources, hiding the differences between attributes and operations, and allowing resources to be viewed and maintained in a common way." -- Open Grid Services Architecture Glossary of Terms.
Purpose

The main purpose of virtualization is to simplify the IT infrastructure. It simplifies both access to resources and their management.

A consumer can be an end user or an application that accesses resources or interacts with them. A resource is an implementation that provides a function, accepting input and producing output through a standard interface. Resources can be hardware, such as servers, disks, networks, or instruments; or they can be software, such as Web services.

Operating systems commonly supported by virtualization include Windows and Linux. [1]

Consumers access resources through the standard interfaces the virtual resources support. Using standard interfaces, disruption to consumers can be minimized when the IT infrastructure changes. For example, end users can keep working as before because the way they interact with the virtual resources has not changed; even if the underlying physical resources or implementations have changed, they are not affected. Likewise, applications need not be upgraded or patched, because the standard interface has not changed.

The overall management of the IT infrastructure can also be simplified, because virtualization reduces the coupling between consumers and resources: consumers do not depend on specific implementations of resources. With this loose coupling, administrators can manage the IT infrastructure while ensuring that management has minimal impact on consumers. Management operations can be done manually, semi-automatically, or automatically, driven by service-level agreements (SLAs).

On this basis, grid computing can make extensive use of virtualization technology. Grid computing can virtualize your IT infrastructure. It handles the sharing and management of IT infrastructure, dynamically provides resources that meet user and application needs, and provides simplified access to infrastructure.
Software Description

It seems that, like all disruptive technologies, server virtualization first appeared quietly, then suddenly burst out, and was finally recognized for energy-saving server consolidation. Today, many companies use virtualization to increase the utilization of hardware resources, for disaster recovery, and to improve office automation. This article describes how to eliminate physical hardware limitations from several perspectives: server, storage, application, and desktop virtualization.

With virtualization technology, users can dynamically enable virtual servers (also called virtual machines); each virtual machine lets its operating system (and any applications running on it) treat the virtual machine as if it were actual hardware. Running multiple virtual machines also exploits the full computing power of physical servers and responds quickly to the changing demands of the data center.

Virtualization is not a new concept. As early as the 1970s, mainframes were running multiple operating system instances at the same time, each independent of the others. Only recently, though, have advances in hardware and software made virtualization possible on industry-standard x86 servers.

In fact, today's data center managers face a wide variety of virtualization solutions, some proprietary and some open source. In general, each is based on one of three basic technologies; which works best depends on the specific workload to be virtualized and the priority business objectives.

Full virtualization

The most popular virtualization method uses software called a hypervisor to create an abstraction layer between the virtual servers and the underlying hardware. VMware and Microsoft Virtual PC are two commercial products representing this approach, while the Kernel-based Virtual Machine (KVM) is an open-source offering for Linux systems.

The hypervisor traps CPU instructions and mediates access to hardware controllers and peripherals. As a result, full virtualization allows almost any operating system to be installed on a virtual server without modification; the guest does not know it is running in a virtualized environment. The main drawback is the processor overhead the hypervisor introduces.

In a fully virtualized environment, the hypervisor runs on the bare hardware and acts as the host operating system; the virtual servers it manages run guest operating systems (guest OS).
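
To make the hypervisor/guest split concrete, here is a minimal sketch (an illustration under stated assumptions, not any vendor's method) that talks to the KVM interface mentioned above, assuming a Linux host with /dev/kvm available. A real VMM such as QEMU would additionally map guest memory with KVM_SET_USER_MEMORY_REGION and drive the vCPU in a run loop.

```c
/* Minimal sketch of the Linux KVM ioctl interface. Creates an empty VM
 * and one vCPU, illustrating how a hypervisor component talks to the
 * kernel's virtualization layer. Error handling is abbreviated. */
#include <fcntl.h>
#include <linux/kvm.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void) {
    int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
    if (kvm < 0) { perror("open /dev/kvm"); return 1; }

    /* The running kernel's API version must match our headers. */
    int version = ioctl(kvm, KVM_GET_API_VERSION, 0);
    if (version != KVM_API_VERSION) {
        fprintf(stderr, "KVM API version mismatch: %d\n", version);
        return 1;
    }

    int vm = ioctl(kvm, KVM_CREATE_VM, 0);      /* one VM ...            */
    int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);   /* ... with one virtual CPU */
    if (vm < 0 || vcpu < 0) { perror("ioctl"); return 1; }

    printf("created VM fd=%d, vCPU fd=%d\n", vm, vcpu);
    close(vcpu); close(vm); close(kvm);
    return 0;
}
```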

IBM also has its own virtualization product, z/VM.

Paravirtualization

Full virtualization is a processor-intensive technology because it requires the hypervisor to manage each virtual server and keep the servers independent of one another. One way to ease this burden is to modify the guest operating system so that it is aware it is running in a virtual environment and can cooperate with the hypervisor. This approach is called paravirtualization.

Xen is an example of open-source paravirtualization technology. Before an operating system can run as a virtual server on the Xen hypervisor, it must undergo some changes at the kernel level. Xen is therefore suitable for BSD, Linux, Solaris, and other open-source operating systems, but not for virtualizing proprietary operating systems such as Windows, which cannot be modified.

The advantage of paravirtualization is high performance: a paravirtualized server cooperating with the hypervisor is nearly as responsive as an unvirtualized server. Its benefits over full virtualization are so compelling that both Microsoft and VMware are developing the technology to complement their products.

OS-level virtualization

Another way to implement virtualization is to add virtual-server functionality at the operating system level. Solaris Containers are one example; Virtuozzo/OpenVZ is a comparable solution for Linux.

With OS-level virtualization there is no separate hypervisor layer. Instead, the host operating system itself allocates hardware resources among the virtual servers and keeps them independent of one another. One notable difference: with OS-level virtualization, all virtual servers must run the same operating system (although each instance has its own applications and user accounts).

Although OS-level virtualization offers less flexibility, it delivers near-native performance. And because the architecture uses a single, standard operating system across all virtual servers, it is easier to manage than a heterogeneous environment.
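
As an illustration of the mechanism behind OS-level virtualization on Linux, the toy sketch below uses kernel namespaces, the building blocks underlying container products such as OpenVZ (the sketch itself is not OpenVZ). The child process receives its own hostname and PID numbering while sharing the host kernel, which is exactly the "same OS, isolated instances" property described above. It must be run as root.

```c
/* Minimal Linux-namespace sketch of OS-level virtualization: the child
 * gets a private hostname and PID numbering but shares the host kernel,
 * just as all virtual servers share one OS in this model. Run as root. */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static char child_stack[1024 * 1024];

static int child(void *arg) {
    (void)arg;
    sethostname("vserver1", 8);   /* visible only inside the UTS namespace */
    printf("child: pid=%d (PID 1 in its own namespace)\n", (int)getpid());
    return 0;
}

int main(void) {
    pid_t pid = clone(child, child_stack + sizeof(child_stack),
                      CLONE_NEWUTS | CLONE_NEWPID | SIGCHLD, NULL);
    if (pid < 0) { perror("clone (requires root)"); return 1; }
    waitpid(pid, NULL, 0);
    return 0;
}
```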

Desktop virtualization

Server virtualization mainly targets servers; the form of virtualization closest to users is desktop virtualization. Its main function is to store and manage distributed desktop environments centrally: centralized delivery, centralized updates, and centralized management. Desktop virtualization makes desktop management simple, with no need to maintain or update each terminal individually. Terminal data can be stored centrally in the central machine room, so security is much higher than with traditional desktops. Desktop virtualization allows one person to have multiple desktop environments, or one desktop environment to serve multiple people, saving licenses. Desktop virtualization also depends on server virtualization: without server virtualization, its benefits are lost entirely, and a great deal of management capital is wasted besides.
Hardware-assisted virtualization

Unlike the mainframe, PC hardware was not designed with virtualization in mind, so until recently virtualization was handled entirely in software. With their latest generations of x86 processors, AMD and Intel have for the first time added virtualization support at the CPU level.

Unfortunately, the two companies developed their technologies independently, which means their code is not compatible. Still, hardware virtualization support relieves the hypervisor of some extremely heavy management work. Besides improving performance, it allows operating systems, including Windows, to run unmodified in a paravirtualized environment.

CPU-level virtualization does not work automatically: virtualization software must be written specifically to support it. Because its advantages are so attractive, however, a steady stream of virtualization software supporting it can be expected.
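
As a hedged sketch of how software can probe for this CPU-level support: on x86, Intel VT-x is advertised as the VMX bit of CPUID leaf 1, and AMD-V as the SVM bit of extended leaf 0x80000001. (Firmware may still disable the feature even when the bit is set, so a real VMM performs further checks.)

```c
/* Detect hardware virtualization support on x86 via CPUID (GCC/Clang).
 * VT-x: CPUID.1:ECX bit 5 (VMX).  AMD-V: CPUID.0x80000001:ECX bit 2 (SVM).
 * A set bit means the CPU has the feature; the BIOS may still disable it. */
#include <cpuid.h>
#include <stdio.h>

int main(void) {
    unsigned int eax, ebx, ecx, edx;

    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 5)))
        puts("Intel VT-x (VMX) supported");

    if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 2)))
        puts("AMD-V (SVM) supported");

    return 0;
}
```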
Vendors

As virtualization becomes more widely deployed, here is a brief analysis of the strengths and weaknesses of several major virtualization vendors.

Citrix: Citrix has grown very quickly in the past two years, benefiting from the rise of cloud computing. It has three major products: XenServer for server virtualization (inexpensive, with average management tools), XenApp for application virtualization, and XenDesktop for desktop virtualization. In the latter two areas Citrix is by far the most mature vendor, and many enterprise VDI solutions combine XenDesktop with XenApp.

IBM: At its virtualization technology conference in November 2007, IBM proposed the concept of a "new generation of virtualization." To date, however, successful cases are rare, and failures such as that of the China Shenhua branch in the Yulin region of Shaanxi Province have been reported. Still, the author believes IBM virtualization has two advantages: a rich product line with good compatibility across IBM's own brands, and strong R&D that can deliver comprehensive consulting programs. The cost is too high for many customers, though; compatibility with third-party products is poor, and operation and maintenance are complicated, making it a double-edged sword for enterprises. Moreover, what IBM calls virtualization is server virtualization, not virtualization in the full sense.

VMware: As the industry's leading virtualization vendor, VMware has long been recognized for ease of use and manageability. Limited by its architecture, however, VMware mainly addresses x86 servers rather than IT infrastructure virtualization as a whole. It is also purely a software provider, without the installed user base of vendors like IBM and Microsoft. VMware therefore faces many challengers, including Microsoft, XenSource (acquired by Citrix), Parallels, and IBM, and it is hard to say whether its road to virtualization will remain smooth.

Microsoft: In 2008, with the official launch of Microsoft Virtualization, Microsoft assembled a complete product line spanning desktop, server, application, and presentation virtualization. At that point its all-out virtualization strategy fully surfaced. In Microsoft's view, virtualization is not simply server consolidation and data-center cost reduction; it also means helping IT departments maximize ROI and reduce costs across the enterprise while enhancing business continuity. That is why Microsoft has developed a range of products supporting the entire physical and virtual infrastructure.

Moreover, with the rapid development of virtualization over the past two years, the technology has moved beyond the LAN and extended across the WAN. Resellers of the major vendors increasingly focus on analyzing customers' actual needs for virtualization solutions, and are therefore no longer limited to carrying a single vendor's products.
Evaluation

Each virtualization method has its own advantages, and the right choice depends on the user's specific situation. A group of servers all running the same operating system, for example, is an ideal candidate for consolidation through OS-level virtualization.

Paravirtualization offers the best of both worlds, and its benefits are even more pronounced when deployed on processors that support hardware virtualization: it provides good performance along with the ability to run multiple heterogeneous guest operating systems.

Full virtualization suffers the largest performance penalty of the three approaches, but it completely isolates the guest operating systems from each other and from the host operating system. It is ideal for software quality assurance and testing, and it supports the widest range of guest operating systems.

Fully virtualized solutions also offer other unique features. For example, they can take a "snapshot" of a virtual server, preserving its state to help with disaster recovery, and such virtual server images can be used to configure new server instances quickly. A growing number of software companies even offer evaluation versions of their products as downloadable, pre-packaged virtual server images.
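
Programmatically, snapshots like these are usually taken through a management API. The sketch below uses the libvirt C API; the connection URI and the domain name "guest1" are assumptions for illustration, and the program must be linked with -lvirt.

```c
/* Take a snapshot of a virtual server via libvirt.
 * Assumes a local QEMU/KVM host and an existing domain named "guest1". */
#include <libvirt/libvirt.h>
#include <stdio.h>

int main(void) {
    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn) { fprintf(stderr, "failed to connect\n"); return 1; }

    virDomainPtr dom = virDomainLookupByName(conn, "guest1");
    if (!dom) { fprintf(stderr, "domain not found\n"); virConnectClose(conn); return 1; }

    /* Snapshot description; name and description are illustrative. */
    const char *xml =
        "<domainsnapshot>"
        "  <name>before-upgrade</name>"
        "  <description>state preserved for disaster recovery</description>"
        "</domainsnapshot>";

    virDomainSnapshotPtr snap = virDomainSnapshotCreateXML(dom, xml, 0);
    if (snap) { puts("snapshot created"); virDomainSnapshotFree(snap); }

    virDomainFree(dom);
    virConnectClose(conn);
    return 0;
}
```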

Just like physical servers, virtual servers need ongoing support and maintenance. The growing popularity of server virtualization has created a thriving market for third-party tools, from physical-to-virtual migration utilities to system-wide virtualization management consoles, all designed to simplify the migration of traditional IT environments to efficient, cost-effective virtual ones.
Maintenance

A very important aspect of any virtualized environment is managing and maintaining what becomes a dynamic and complex IT infrastructure. These management tasks are supported by patterns and technologies implemented through software and tools, which in combination enable the following functions (a monitoring sketch follows the list):

- Provide a single, secure interface for administrative access to all resources in the IT infrastructure, allowing administrators to diagnose, configure, and modify every resource.
- Discover and maintain a catalog of available resources.
- Monitor resources and record their health status; when a condition reaches an established upper limit, fire a trigger that performs the corresponding action, which may mean notifying an administrator to respond manually or responding automatically.
- Provision or reclaim resources according to usage, availability, and service-level requirements; provisioning can be done manually, semi-automatically, or automatically based on established policies.
- Collect and retain resource usage and monitoring information, and provide appropriate reporting, such as resource-consumption records for end-user and application SLAs.
- Provide security mechanisms that complement end-user or application security.
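
As a toy illustration of the threshold-and-trigger monitoring in the list above, the sketch below reads the Linux 1-minute load average and fires a "trigger" when it crosses an assumed upper limit; a real management tool would notify an administrator or respond automatically according to policy.

```c
/* Toy resource monitor: trigger when the 1-minute load average crosses
 * an established upper limit (Linux: first field of /proc/loadavg). */
#include <stdio.h>

int main(void) {
    const double limit = 4.0;   /* assumed upper limit for illustration */
    double load1;

    FILE *f = fopen("/proc/loadavg", "r");
    if (!f) { perror("open /proc/loadavg"); return 1; }
    if (fscanf(f, "%lf", &load1) != 1) { fclose(f); return 1; }
    fclose(f);

    if (load1 > limit)
        printf("TRIGGER: load %.2f exceeds limit %.2f\n", load1, limit);
    else
        printf("OK: load %.2f within limit %.2f\n", load1, limit);
    return 0;
}
```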

Virtualization can be implemented in many ways. It is not a single entity but a collection of patterns and technologies that provide the functionality needed to represent resources logically and to present them to consumers through a standard interface. These patterns recur across the various forms of virtualization introduced earlier.

Here are some of the patterns and techniques that are often used to implement virtualization:

Multiple logical representations of a single resource

This pattern is one of the most widely used forms of virtualization. It involves only one physical resource, but presents to consumers a logical representation that appears to contain multiple resources. Each consumer interacts with the virtual resource as if it were the only consumer, unaware that it is sharing the resource with others.

Single logical representation of multiple resources

This pattern combines multiple resources and represents them as a single logical resource with a single interface. It is very useful when building powerful, feature-rich virtual resources from multiple less powerful ones. Storage virtualization is one example. On the server side, clustering technology can provide the illusion that consumers interact with only one system (the head node), when in fact it may contain many processors or nodes. Indeed, this is what a grid does from the perspective of the IT infrastructure.
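
A minimal sketch of this aggregation pattern (all names hypothetical): two small backing stores are presented to consumers as one logical volume behind a single read interface, which is storage virtualization in miniature.

```c
/* "Single logical representation of multiple resources": two backing
 * stores presented as one logical volume. A read at a logical offset is
 * transparently directed to whichever physical store holds that byte. */
#include <stdio.h>
#include <string.h>

#define STORE_SIZE 8

struct logical_volume {
    char store_a[STORE_SIZE];   /* physical resource 1 */
    char store_b[STORE_SIZE];   /* physical resource 2 */
};

/* Consumers see one 16-byte volume and never name the underlying stores. */
static char lv_read(const struct logical_volume *lv, int offset) {
    return offset < STORE_SIZE ? lv->store_a[offset]
                               : lv->store_b[offset - STORE_SIZE];
}

int main(void) {
    struct logical_volume lv;
    memcpy(lv.store_a, "ABCDEFGH", STORE_SIZE);
    memcpy(lv.store_b, "IJKLMNOP", STORE_SIZE);

    /* offset 10 lands in store_b; the consumer neither knows nor cares */
    printf("byte at logical offset 10: %c\n", lv_read(&lv, 10));
    return 0;
}
```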

Provide a single logical representation between multiple resources

This pattern represents a virtual resource as one of multiple available resources. The virtual resource selects a physical implementation based on specified criteria, such as resource utilization, response time, or proximity. Although this pattern is very similar to the previous one, there are subtle differences. First, each physical resource is a complete copy; the copies are not aggregated behind the logical presentation layer. Second, each physical resource can provide all of the functionality the logical representation needs, rather than only part of it as in the previous pattern. A common example is load balancing with application containers: when submitting a request or transaction, the consumer does not care which of the several container-hosted copies of the application services it; the consumer just wants the request or transaction processed.
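
A minimal sketch of this selection pattern (names and utilization figures hypothetical): a single logical entry point routes each request to whichever complete replica currently has the lowest utilization, one of the selection criteria named above.

```c
/* "Single logical representation between multiple resources": one entry
 * point selects among complete, equivalent replicas by utilization. */
#include <stdio.h>

struct replica {
    const char *name;
    double utilization;   /* 0.0 .. 1.0 */
};

/* The virtual resource: picks the least-utilized physical replica. */
static const struct replica *select_replica(const struct replica *r, int n) {
    const struct replica *best = &r[0];
    for (int i = 1; i < n; i++)
        if (r[i].utilization < best->utilization)
            best = &r[i];
    return best;
}

int main(void) {
    struct replica pool[] = {
        {"app-copy-1", 0.82}, {"app-copy-2", 0.35}, {"app-copy-3", 0.57},
    };
    const struct replica *target = select_replica(pool, 3);
    printf("routing request to %s (utilization %.0f%%)\n",
           target->name, target->utilization * 100);
    return 0;
}
```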

Single logical representation of a single resource

This is a simple pattern for representing a single resource as if it were something else. A common example is web-enabling an enterprise back-end application. Rather than modifying the back-end application, we create a front end that presents a web interface mapped onto the application's interface. This pattern allows basic functionality to be reused with minimal modification of the back-end application (or none at all), and the same pattern can be used to build services from components that cannot be modified.

Composite or layered virtualization

This pattern is a combination of one or more of the patterns just introduced, which together use physical resources to provide a rich set of functions. Information virtualization is a good example: it provides the underlying functions needed to manage global naming and referencing of resources, metadata about how to process and use information, and the operations that process that information. The Open Grid Services Architecture (OGSA), for example, and grid computing components generally, are in fact virtualization applied at different levels.
Considerations

1. Usage habits and experience: Managing a large number of scattered PCs centrally, using centralized computing combined with virtualization, does give our company a good solution. But computing resources are indispensable wherever they live: removing them from the terminal does not shrink them, it only moves them to the server. Suppose each of today's desktops is equivalent to a dual-core 2.0 GHz CPU with 2 GB of memory. To move the accustomed user experience onto the server, 100 concurrent users require the server side to provide 100 × 2 cores × 2.0 GHz = 400 GHz of computing power and 100 × 2 GB = 200 GB of memory (redundancy not counted; the arithmetic is written out in the sketch after this list). Now suppose the company has 500 users, or 1,000. In practice we cannot provide that much computing power, so on this front the user experience must be sacrificed. Second, every user must download its operating system image from the server, so the bandwidth pressure is enormous, and the more users there are, the more pronounced this becomes; business managers need to weigh this factor as well. And if the user experience is worse than before, will that hinder the IT staff's ability to advance the project?

2. Equipment and software compatibility: We are accustomed to plug-and-play peripherals. Will the new system disrupt daily work habits, and if so, how can that be avoided? Will the workload of our IT managers increase or decrease, and will efficiency rise or fall? These questions require specific consideration by business managers.

3. Cost: A company weighs the input-output ratio of every penny spent; a management tool is worth promoting only if, at a minimum, it can return more than double the benefit over time. The input costs of a centralized virtualization solution include virtualization software licenses, licensed operating systems, licensed office software, thin-client purchases, network equipment replacement, new storage devices, and server-cluster hardware and software, and sometimes network re-engineering as well. Because the new technology raises the technical bar for administrators, there will be training expenses for technicians and possibly new staffing costs. With more equipment come machine-room renovation costs and higher electricity consumption. After summing these costs and computing the per-seat cost of the transformation, one must also consider whether future upgrades of the virtualization platform will remain affordable and compatible with internal system upgrades. Only then can a complete financial plan covering the whole project's expenses and upgrades be developed, and such a plan requires coordination among all departments of the group company.

4. Multimedia and large-program experience: Some design departments and related teams need to run large design applications. Because GPU virtualization technology is not yet mature, virtualization cannot be applied in this scenario.

5. Software and hardware architecture changes: With new systems and new applications, are we prepared for changes in the management architecture and in team organization? Finally, before recommending that the business adopt virtualization products under the cloud computing model, the plan must ask: are we ready?
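
For reference, the back-of-envelope sizing from point 1 can be written out directly. This sketch simply multiplies out the article's assumed per-desktop figures (dual-core 2.0 GHz, 2 GB of RAM), with no redundancy factored in.

```c
/* Back-of-envelope server sizing for desktop virtualization, using the
 * article's assumptions: each desktop equals a dual-core 2.0 GHz CPU
 * with 2 GB of RAM; redundancy is not included. */
#include <stdio.h>

int main(void) {
    const double cores_per_user = 2.0;
    const double ghz_per_core   = 2.0;
    const double gb_per_user    = 2.0;
    const int counts[] = {100, 500, 1000};

    for (int i = 0; i < 3; i++) {
        int users = counts[i];
        printf("%4d users -> %6.0f GHz aggregate compute, %5.0f GB RAM\n",
               users,
               users * cores_per_user * ghz_per_core,
               users * gb_per_user);
    }
    return 0;   /* 100 users -> 400 GHz and 200 GB, as in the text */
}
```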

Software solution

Pure software virtualization solutions have many limitations. In many cases, the "guest" operating system must communicate with the hardware through a Virtual Machine Monitor (VMM).


The Virtual Machine Monitor (VMM) communicates with the hardware and mediates access for all virtual machines on the system. (Note that most processor and memory accesses bypass the VMM; the VMM is involved only in certain events, such as page faults.) In a pure software virtualization solution, the VMM occupies the place in the software stack traditionally held by the operating system, and the operating system in turn occupies the place traditionally held by applications. This extra layer requires binary translation to simulate the hardware environment, providing an interface to physical resources such as processors, memory, storage, graphics cards, and network cards. The translation inevitably increases system complexity. In addition, guest operating system support is limited by the capabilities of the virtual machine environment, which can hinder the deployment of specific technologies, such as 64-bit guest operating systems. The added complexity of the software stack also makes these environments difficult to manage, increasing the difficulty of ensuring system reliability and security.

Hardware solution

CPU virtualization technology is a hardware solution. A CPU that supports virtualization provides a specially optimized instruction set to control the virtualization process; through these instructions, the VMM can improve performance dramatically compared with software-only virtualization.

Virtualization technology provides chip-level capabilities that work with compatible VMM software. Because the virtualization hardware provides a new architecture that lets the guest operating system run directly on it, it eliminates the need for binary translation, reduces the associated performance overhead, and greatly simplifies VMM design, allowing VMMs to be written to common standards and to be more powerful. In addition, pure software VMMs have lacked support for 64-bit guest operating systems, a shortcoming that has grown increasingly serious as 64-bit processors have become widespread. Besides supporting a wide range of traditional operating systems, CPU virtualization technology also supports 64-bit guest operating systems.

Virtualization technology is a complete solution requiring support from the CPU, motherboard chipset, BIOS, and software, such as the VMM or, in some cases, the operating system itself. Even when only the CPU supports virtualization, a system running VMM software will perform better than one with no virtualization support at all.

The two CPU giants, Intel and AMD, are each trying to take the lead in virtualization, with AMD's technology a few months behind Intel's. Intel has been adding Intel Virtualization Technology (Intel VT) to its processor lines since late 2005. Released products with Intel VT include, on the desktop, the Pentium 4 6x2 series, the Pentium D 9x0 series, and the Pentium EE 9xx series, as well as some Core Duo and Core Solo models; and, on the server/workstation side, the Xeon LV series, Xeon 5000 series, Xeon 5100 series, Xeon MP 7000 series, and Itanium 2 9000 series. Most of Intel's next-generation mainstream processors, including the Merom-core mobile processors, Conroe-core desktop processors, Woodcrest-core server processors, and the Montecito-core Itanium 2 high-end server processors, will also support Intel VT.

AMD has likewise released a series of processors supporting AMD Virtualization (AMD-V), including the Socket S1 Turion 64 X2 series and the Socket AM2 Athlon 64 X2 and Athlon 64 FX series. Most of AMD's next-generation mainstream processors, including the upcoming Socket F Opteron, will support AMD-V.
