====== OpenVZ vs KVM ======
Choosing between OpenVZ and KVM is a decision that must be made based on your needs. Neither is outright better than the other, but one may be preferable depending on your application.
===== OpenVZ =====

OpenVZ is an OS level virtualization technology. This means the OS is partitioned into compartments with resources assigned to each. In OpenVZ there are two types of resources, dedicated and burst. A dedicated resource is one the VPS is guaranteed to get if requested; these are "guaranteed" resources. A burst resource is capacity temporarily borrowed from idle neighbors: it may be available when other VPSes on the node are not using their full allocations, but it is never guaranteed.
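From inside an OpenVZ container you can inspect these limits yourself through the beancounters interface. A minimal sketch (the column names are standard; the rows shown are purely illustrative):
<code bash>
# Per-resource usage and limits for this container; "barrier" is the
# soft (guaranteed) limit, "limit" the hard (burst) ceiling, and
# "failcnt" counts allocation requests that were denied:
cat /proc/user_beancounters
#   uid  resource     held  maxheld  barrier  limit  failcnt
#   101: kmemsize     ...   ...      ...      ...    0
#        privvmpages  ...   ...      ...      ...    0
</code>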
+ | |||
+ | With providers other than EidolonHost, | ||
+ | |||
+ | As it is an OS level virtualization, | ||
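You can see the shared kernel from inside a container (a sketch; the exact error text varies by OpenVZ version):
<code bash>
# The container reports the host node's kernel version, not its own:
uname -r

# Loading a kernel module from inside a container is denied:
modprobe dummy
# modprobe: ERROR: could not insert 'dummy': Operation not permitted
</code>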
+ | |||
+ | ===== KVM ===== | ||
KVM is a hardware virtualization technology. This means the main OS simulates hardware for another OS to run on top of it. It also acts as a hypervisor, managing and fairly distributing shared resources like disk and network IO and CPU time. A KVM VPS has no burst resources; everything is either dedicated or shared. This means a VPS's RAM allocation is 100% owned by that VPS: it does not and cannot loan RAM out, and it is very difficult to overcommit. The same is true for disk space. The downside is that if the limit is hit, the VPS must either swap, incurring a major performance penalty, or start killing its processes. Unlike OpenVZ, KVM VPSes cannot get a temporary reprieve by borrowing from their peers, as their dedicated resources are completely isolated.
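From inside the VPS you can watch for both symptoms described above (a sketch):
<code bash>
# How much of the dedicated RAM and swap is currently in use:
free -h

# Whether the kernel's out-of-memory killer has already struck:
dmesg | grep -i "out of memory"
</code>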
+ | |||
+ | Because KVM simulates hardware, you can run whatever kernel you like on it (within limits). This means the KVM is not limited to whichever linux kernel is installed in the root node and can run most x86 operating systems like a BSD or even Windows. Having a fully independent kernel means the VPS can make kernel modifications or load its own modules. This may be important because there are some more obscure features that OpenVZ does not support. It also adds the complexity of maintaining a complete operating system and all the pitfalls thereof. This is in contrast to OpenVZ which is very resiliant since it is merely allocating resources from the already running kernel to the VPS. This is not to say OpenVZ has no maintenance required, just that it has less that a person managing the VPS is responsible for. | ||
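On a KVM VPS the guest kernel is entirely yours to manage, just as on a dedicated machine. A sketch assuming a Debian guest (package names differ by distribution):
<code bash>
# The guest runs its own kernel, independent of the host node:
uname -r

# Install and boot a different kernel (Debian example):
apt install linux-image-amd64
reboot

# Load whichever modules your workload needs:
modprobe nbd
lsmod | grep nbd
</code>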
+ | |||
+ | ===== Which should I get? ===== | ||
+ | Both OpenVZ and KVM are mature technologies with advantages and disadvantages to each. Selecting the appropriate technology at the outset may save you significant future headache. To that end, please review the following list to see where you may fall. | ||
+ | |||
+ | ==== OpenVZ: ==== | ||
+ | |||
+ | * Only intend to run userspace applications in linux for example LAMP/LNMP stack webhosting | ||
+ | * Typically better performance per dollar with a smaller disk and memory footprint for equivalent solutions. | ||
+ | * Lower management complexity for VPS users | ||
+ | |||
==== KVM: ====
  * You intend to run Windows Server or an OS other than Linux.
  * Your solution requires custom kernel modifications or loading your own kernel modules.
  * You need advanced netfilter firewall features, for example ipset or nfnetlink (an exceptional case, as most iptables features are supported on OpenVZ); see the sketch after this list.
  * SELinux, within the VPS only; it does not prevent inspection from the parent hardware node.
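As an example of the netfilter point above, an ipset-based blocklist like the following generally needs KVM (or a dedicated server), since OpenVZ containers typically lack the required kernel support. A sketch; the set name and address are placeholders:
<code bash>
# Create a kernel-side set of addresses and drop traffic from it;
# this relies on the ipset/nfnetlink kernel modules:
ipset create blocklist hash:ip
ipset add blocklist 203.0.113.5
iptables -I INPUT -m set --match-set blocklist src -j DROP
</code>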
+ | |||
+ | ==== Neither (need a dedicated server): ==== | ||
+ | * Full disk encryption/ | ||
+ | * Specific hardware, for example a gpu for bitcoin mining | ||
+ | * Heavy IO loads for extended periods of time. | ||
+ | |||
+ | ===== Things known not to work on OpenVZ: ===== | ||
+ | * netfilter' | ||
+ | * netfilter' | ||
+ | * netfilter' | ||
+ | * cachefs (potentially in post-2.6.19 kernels?) | ||
+ | * selinux | ||
+ | * cifs filesystem | ||
+ | * file acls (setfacl/ | ||
+ | * loopback mount (mount -o loop) |
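The last two items are easy to verify from a shell if you are unsure whether your workload depends on them (a sketch; file names are placeholders and the exact error messages vary):
<code bash>
# File ACLs: on OpenVZ this typically fails with "Operation not supported":
touch testfile
setfacl -m u:nobody:r testfile && getfacl testfile

# Loopback mount: typically fails on OpenVZ because no loop devices
# are available inside the container:
dd if=/dev/zero of=disk.img bs=1M count=16
mkfs.ext4 -F disk.img
mkdir -p /mnt/loop
mount -o loop disk.img /mnt/loop
</code>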