Installation Considerations

 

This guide gives you an idea of the prerequisites and installation considerations for the host system. It is targeted at production systems, so some of the information provided here may or may not be important to your setup; judge for yourself what applies to your environment.


Hardware Compatibility

The Linux-VServer kernel runs on many platforms, including those listed below:

  • alpha
  • arm
  • ia64
  • m68k
  • mips
  • ppc
  • ppc64
  • s390
  • sparc
  • sparc64
  • x86
  • x86_64

See Tested Configurations for details.

Hardware Availability

The availability of the host system is more critical than that of a typical server. Since it runs multiple Virtual Private Servers, each providing a number of critical services, an outage of the host system may be very costly: it can be as disastrous as the simultaneous outage of several servers running critical services.

Discussing all aspects of high availability is beyond the scope of this document; however, the following guidelines should be followed to keep the host system clean and secure:

  • Use RAID storage for guest filesystems.
  • Do not run software on the host system. Instead, create guest systems where you can host the necessary services. The only service needed on the host system is probably sshd (see the sketch after this list).
  • Do not create users on the host system. You can create as many users as you need in any guest system.
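
A minimal sketch of how you might spot-check the "only sshd" rule above, assuming a Linux host with /proc mounted: it lists every TCP port in LISTEN state by reading /proc/net/tcp, so anything beyond sshd's port stands out. Mapping ports back to daemons is left to the administrator; this script is not part of the Linux-VServer tools.

#!/usr/bin/env python3
"""Spot-check which TCP ports are in LISTEN state on the host.

A clean Linux-VServer host should show little more than sshd.
Reads IPv4 sockets from /proc/net/tcp; adapt for /proc/net/tcp6 if needed.
"""

def listening_ports(proc_file="/proc/net/tcp"):
    ports = set()
    with open(proc_file) as f:
        next(f)  # skip the header line
        for line in f:
            fields = line.split()
            local_addr, state = fields[1], fields[3]
            if state == "0A":  # 0A is TCP_LISTEN
                port_hex = local_addr.split(":")[1]
                ports.add(int(port_hex, 16))
    return sorted(ports)

if __name__ == "__main__":
    for port in listening_ports():
        note = "(probably sshd)" if port == 22 else "(check which daemon owns this)"
        print("listening on TCP port %d %s" % (port, note))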

Hardware Requirements

The exact hardware configuration depends on how many Virtual Private Servers you are going to run on the computer and what load these VPSs are going to produce. Thus, in order to choose the right configuration, you should follow the recommendations below:

CPUs
The more Virtual Private Servers you plan to run simultaneously, the more CPUs you need.
Memory
The more memory you have, the more Virtual Private Servers you can run. The exact figure depends on the number and nature of the applications you plan to run in your Virtual Private Servers. However, on average, at least 1 GB of RAM is recommended for every 20-30 Virtual Private Servers.
Disk space
Each Virtual Private Server occupies 10-500 MB of hard disk space for system files (depending on the use of Unification), in addition to the user data inside the Virtual Private Server (for example, web site content). Take this into account when planning disk partitioning and the number of Virtual Private Servers to run.
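
As a back-of-the-envelope illustration, the sketch below turns the rules of thumb quoted above into rough RAM and disk estimates. The per-guest figures are only the ranges stated in this section, not measurements, and the guest count and per-guest user data in the example are purely illustrative.

def estimate_resources(num_guests, user_data_mb_per_guest=0):
    """Rough sizing: at least 1 GB of RAM per 20-30 guests, and
    10-500 MB of system files per guest (depending on Unification)
    plus whatever user data each guest holds."""
    ram_gb = (num_guests / 30.0, num_guests / 20.0)           # optimistic .. conservative
    disk_mb = (num_guests * (10 + user_data_mb_per_guest),    # unified, small guests
               num_guests * (500 + user_data_mb_per_guest))   # non-unified, full guests
    return ram_gb, disk_mb

if __name__ == "__main__":
    (ram_lo, ram_hi), (disk_lo, disk_hi) = estimate_resources(25, user_data_mb_per_guest=2000)
    print("RAM:  roughly %.1f-%.1f GB" % (ram_lo, ram_hi))
    print("Disk: roughly %.0f-%.0f GB" % (disk_lo / 1024.0, disk_hi / 1024.0))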

Choose Your Distribution

There are many different Linux distributions. A distribution is the compiled Linux source code, usually combined with extra features and software. Some distributions are available for download at no charge, while others are available at affordable prices on CD-ROM from Linux retailers worldwide.

Each distribution has its own purpose, and a number of factors should go into deciding which distribution is best for each user. Some distributions are better suited to home users, others are excellent for commercial settings. Some are better suited for Intel or Macintosh PCs, others are excellent for use on high-performance computers.

Any current Linux distribution most likely contains the software needed to do the job, including the kernel and drivers, libraries, utilities and application programs. Still, one of the most common questions people ask is "which distribution should I get?" This question is often answered by an assortment of people, each proclaiming that their favorite distribution is better than all the rest.

Most people probably favor the first distribution they successfully installed, or, if they had problems with it, the next distribution they installed that addressed those problems, and so on.

It is beyond the scope of this document to discuss the features of all the distributions out there, but nearly all of them should work with Linux-VServer, so it is up to you to decide which distribution fits your requirements. For an overview of available distributions, check out DistroWatch.

Choose Your Kernel Version

Versioning explained

The Linux-VServer project maintains several branches of the kernel patch. Since version 1.00, the versioning is similar to the kernel versioning scheme. Even-numbered releases (a.X.z with even X) are stable, reasonably well tested and not expected to change feature-wise. Odd-numbered releases (a.Y.z with odd Y) are development releases. The last digit/number (z) is a subversion identifier. Experimental versions and release candidates might add a fourth identifier to that scheme.

Basically the stable and development releases should be similar in functionality, but the development releases will include features and enhancements not yet present in the stable branch. Once those features mature (and get well tested), they will be incorporated into the stable branch. Check out the Feature Matrix for a comparison.

For example, the first stable release (1.00) uses two system calls, as the previous releases did. However, the vserver system calls were changed in the first development release (1.1.0): Linus assigned the vserver project a single system call, so a System Call Switch was implemented. Running a development release usually requires using recent (latest) tools from the util-vserver development branch.

1.X.z and 1.Y.z releases are for the 2.4 kernels, while 1.9.x (now obsolete) and 2.X.y releases are for the 2.6 series.
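
As a quick illustration of the scheme described above, the hedged sketch below classifies a version string as stable or development and maps it to its kernel series. The parsing is an assumption for illustration, not something provided by the Linux-VServer tools, and the version strings in the example are taken from the text.

def classify_vserver_version(version):
    """Classify a Linux-VServer patch version (1.00 or later) by the
    scheme described above: a.X.z with even X is stable, odd X is a
    development release; 1.X.z targets 2.4 kernels, while 1.9.x and
    2.X.y target the 2.6 series."""
    parts = version.split(".")
    major, minor = int(parts[0]), int(parts[1])
    branch = "stable" if minor % 2 == 0 else "development"
    if major >= 2 or (major == 1 and minor == 9):
        series = "2.6 kernel series"
    else:
        series = "2.4 kernel series"
    return "%s release for the %s" % (branch, series)

for v in ("1.00", "1.1.0", "1.2.10", "1.9.2", "2.0"):
    print(v, "->", classify_vserver_version(v))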

All downloads are available in the Downloads section. Also take a look at the ChangeLogs.

Disk Partitioning

Since each guest is a separate root filesystem somewhere in the host system's filesystem hierarchy, it is advisable to create a partitioning scheme that fits your needs. Discussing all possible disk configurations is beyond the scope of this document; however, you should take the following guidelines into account while setting up disk space for your guest systems:

  • Generally, one big partition for all guest systems should suffice (a per-guest usage check is sketched after this list).
  • If you don't want to use Disk Limits, you can use one partition per guest system to limit its available disk space. However, this will in most cases prevent an easy enlargement of the disk space later on.
  • If you want to use Quota inside your guest system, you have to use separate partitions, one per guest system. This will probably change in the future.
  • You should guard against hard disk failure, e.g. use RAID systems, make regular backups, etc.
  • For more flexible space management, consider using a volume management solution such as lvm, lvm2 or evms.
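
If you do go with one big partition, a hedged sketch like the one below can give a rough per-guest usage overview. It assumes guest root filesystems live in per-guest directories under a single path (commonly /vservers with util-vserver; the path here is an assumption, adjust it to your layout), and note that with Unification hard-linked files are counted once per guest, so the totals overstate real disk consumption.

#!/usr/bin/env python3
"""Rough per-guest disk usage when all guests share one partition."""
import os

GUEST_ROOT = "/vservers"  # assumption: adjust to where your guest roots live

def du_bytes(path):
    total = 0
    for dirpath, dirnames, filenames in os.walk(path):
        for name in filenames:
            try:
                total += os.lstat(os.path.join(dirpath, name)).st_size
            except OSError:
                pass  # file vanished or is unreadable; skip it
    return total

if __name__ == "__main__":
    for entry in sorted(os.scandir(GUEST_ROOT), key=lambda e: e.name):
        if entry.is_dir(follow_symlinks=False):
            print("%-20s %10.1f MiB" % (entry.name, du_bytes(entry.path) / 2.0 ** 20))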

Networking

As the host and guest systems share the same physical network connection, it is advisable to have as few network daemons as possible using the same port on both the host and a guest system. If, as with ssh, it is unavoidable for both daemons to share a port, make sure the daemon on the host only listens on the host's IP address; the guest systems are isolated by default.
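
For sshd, this means setting the ListenAddress directive in sshd_config to the host's own address instead of the default of binding to all addresses. The sketch below is a minimal, hedged check of that: it only scans the configuration file for uncommented ListenAddress lines (the path used is the usual OpenSSH default and an assumption here) and warns when sshd would still bind to every address, including the IPs meant for guests.

#!/usr/bin/env python3
"""Warn if the host's sshd is not pinned to a specific address."""

SSHD_CONFIG = "/etc/ssh/sshd_config"  # usual default path; adjust if needed

def listen_addresses(path=SSHD_CONFIG):
    addrs = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            # an uncommented "ListenAddress <addr>" line (the keyword is case-insensitive)
            if len(parts) >= 2 and parts[0].lower() == "listenaddress":
                addrs.append(parts[1])
    return addrs

if __name__ == "__main__":
    addrs = listen_addresses()
    if not addrs or any(a in ("0.0.0.0", "::") for a in addrs):
        print("WARNING: sshd listens on all addresses; set ListenAddress")
        print("to the host's own IP so the guest IPs stay free for guest daemons.")
    else:
        print("sshd is restricted to: " + ", ".join(addrs))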

Final Notes

If you think the information provided in this document is too vague, feel free to contact the Linux-VServer community and ask for help on your specific setup.
