Memory Management

Basic intro to virtual memory

When folks talk about memory, they usually mean the expensive modules they buy and plug into their (typically x86-based) machines. The operating system, in our case Linux, manages that physical memory according to the mechanisms of the architecture (here x86). Most architectures in use today provide virtual memory: a linear address space, in units of pages, where each page may be backed by a real page in memory or simply not be there.

The operating system knows about several types of such pages. There are file-system-backed pages, which can hold a data file or an executable (such a file will usually need several pages), and, an important detail, these pages do not necessarily have to be in RAM. Then there are anonymous pages, used by applications for their (usually read-write) data; if swapping is enabled, those pages can also be written out to a swap area.

In addition to those types, there are a number of special pages and special ways of handling them. One example is the so-called zero page, which is (at least) a page containing nothing but read-only zeros. It is typically used when you request a memory area from the memory management system (mm): the read-only property causes a trap (page fault) as soon as you write to the page, at which point it is replaced by actual memory. Something similar happens with shared memory pages, which are marked read-only and copied on write.

Pages that get swapped out (to swap space) are not freed immediately; they are kept in the swap cache. The same goes for file caches (the inode cache): the pages are marked as 'unused' but are not freed until somebody needs a page.
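
The demand-faulting behaviour described above can be observed from userspace. Below is a minimal sketch (assuming Linux with /proc mounted; the 64MB size is arbitrary) that maps an anonymous region, reads it, then writes it, printing the VSZ and RSS reported by /proc/self/statm at each step: the mapping enlarges the address space immediately, while resident memory only grows substantially once the pages are actually written.

/* Sketch: watch anonymous memory being demand-faulted in
   (assumes Linux with /proc mounted; sizes are arbitrary). */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Print VSZ and RSS (in pages) as reported by /proc/self/statm. */
static void print_mem(const char *label)
{
    long size, resident;
    FILE *f = fopen("/proc/self/statm", "r");
    if (!f || fscanf(f, "%ld %ld", &size, &resident) != 2)
        exit(1);
    fclose(f);
    printf("%-12s VSZ=%ld pages, RSS=%ld pages\n", label, size, resident);
}

int main(void)
{
    size_t len = 64 * 1024 * 1024;      /* 64 MB, anonymous and private */

    print_mem("start:");
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return 1;
    print_mem("after mmap:");           /* VSZ grew, RSS barely moved */

    volatile char sum = 0;
    for (size_t i = 0; i < len; i += 4096)
        sum += p[i];                    /* reads can be served by the zero page */
    print_mem("after read:");

    memset(p, 1, len);                  /* writes fault in real page frames */
    print_mem("after write:");
    (void)sum;
    return 0;
}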

Now, on x86, the total addressable space is 4GB, and this is also the maximum virtual address space an application (or the kernel) can see. To simplify the transition from userspace to kernelspace, the address space is divided between kernel and userspace.

The application's point of view

Q. OK, I think I understand all that. But I'm not sure about it from the other perspective, i.e. what happens when I run a program. So, say I run something - editing a text file, for example (something simple to start with) - then that will use some pages to keep track of what I am doing, correct?

That's actually quite complex, so maybe let's start with something like 'true' first. Let's further assume the binary (program) is not dynamically linked but static, fits into 4k, and has no data section (usually not true, but it keeps things simple). When it is executed, the kernel 'maps' the executable into memory, creates a userspace task with a stack page, and starts executing the just-mapped memory. The file is read into real memory (i.e. RAM), marked read-only but executable, and added to the inode cache; the virtual address will be some fixed address coded into the executable, and the program will call the kernel via syscalls, requesting things like exit().
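
These mappings are visible for every task. The following sketch (assuming Linux with /proc mounted) simply prints its own mapping table; the text segment, heap, stack and any mapped libraries each show up as their own address range, with permissions and, for file-backed mappings, the backing file.

/* Sketch: dump this task's virtual memory mappings
   (assumes Linux with /proc mounted). */
#include <stdio.h>

int main(void)
{
    char line[512];
    FILE *f = fopen("/proc/self/maps", "r");
    if (!f) {
        perror("fopen");
        return 1;
    }
    /* Each line is one mapping: start-end, permissions, offset, device,
       inode and (for file-backed mappings) the path of the file. */
    while (fgets(line, sizeof(line), f))
        fputs(line, stdout);
    fclose(f);
    return 0;
}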

When you execute it a second time, the file is already in the inode cache, so all that happens is a new mapping into the virtual address space of that task. Now, I already mentioned that the address space is shared by userspace and kernel. Typically, the split is 3/1 - userspace gets 3GB of "space" and the kernel only 1GB; there are split patches and recent changes to mainline which allow other splits too, such as 2/2 or 1/3. In any case, the userspace part is allocated first, starting at 0, and (normally) the kernel starts at 0xc0000000 - which is 3GB.

Now usually this leads to the question: but what if I have only 1GB of RAM? The answer is simple: it will still be 3GB of userspace and 1GB of kernel space. As mentioned before, the virtual space does not have to be backed by any real RAM. You could, for example, fill the entire 3GB space with mappings of the zero page, using only a single 4k area of RAM. This also means that a physical address can be mapped at different virtual addresses, and of course several times - even within the same task's address space. Also, the 3GB (or actually 4GB) address space is per process, so processes do not have to share that space in any way.
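
The point that one physical page can appear at several virtual addresses, even within a single task, can be demonstrated with a shared mapping. This is only an illustrative sketch (assuming Linux; it uses an unlinked temporary file as the shared backing object): the same file page is mapped twice, and a write through one mapping is visible through the other.

/* Sketch: map the same page at two different virtual addresses in one task
   (assumes Linux; a temporary file serves as the shared backing object). */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    FILE *tmp = tmpfile();                 /* anonymous temporary file */
    if (!tmp || ftruncate(fileno(tmp), 4096) != 0) {
        perror("tmpfile/ftruncate");
        return 1;
    }
    int fd = fileno(tmp);

    /* Two independent mappings of the same file page. */
    char *a = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    char *b = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (a == MAP_FAILED || b == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    strcpy(a, "hello");                    /* write through the first mapping */
    printf("a=%p b=%p  b reads: \"%s\"\n", (void *)a, (void *)b, b);
    return 0;
}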

Now to get back to the editor example: this will cause

  • the executable pages to be mapped somewhere,
  • the file you 'open' to be mapped too (which will be done via the inode cache).

Then it will require stack and data pages to do the actual work (editing), and when it writes the file back, those pages become buffer caches for write-back I/O, which in turn may update the inode cache once the data is written. If, for some reason, the editor is very large (i.e. has many executable pages), it may happen, when you are low on physical RAM (or the swap system is tuned to do optimistic swap-out), that some pages of your editor which are not currently used are simply dropped, and some of the data pages (editor memory) are swapped out.
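
The file-mapping and write-back part of that story can be sketched with mmap and msync. This is just an illustration (assuming Linux; "data.txt" is an invented example file): the mapped file page lives in the inode/page cache, the store dirties it, and msync asks for it to be written back to the file.

/* Sketch: edit a file through the page cache and force write-back
   (assumes Linux; "data.txt" is just an example file name). */
#define _DEFAULT_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.txt", O_RDWR | O_CREAT, 0644);
    if (fd < 0 || ftruncate(fd, 4096) != 0) {
        perror("open/ftruncate");
        return 1;
    }

    /* The file's page enters the inode (page) cache and is mapped here. */
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    strcpy(p, "edited in memory\n");   /* dirties the cached page */
    msync(p, 4096, MS_SYNC);           /* request write-back to the file */

    munmap(p, 4096);
    close(fd);
    return 0;
}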

So, as you can see, the relation between processes and physical RAM is not straightforward and simple :)


The size of the word depends on the architecture

Q. This may be stupid, but why isn't the virtual memory size equivalent to the available physical memory plus swap space? It seems it is larger than the hardware can handle (unless you have 4 gigs of physical memory). But I guess when you say 3GB of virtual memory, it's more of a page of pointers or something, rather than actual memory.

Yes, that is right, but let me answer that with another question: why is an int 32 bits and not just ld(N) bits (the binary logarithm of the value)? That is, to represent the value 20 you need just 5 bits (10100 in binary), so why 'waste' 32 bits for it? IMHO the answer is simple and straightforward: the hardware has to have certain limits; for the int this is 32 bits, and for the address space it is 4GB on x86.

Q. Is that due to the instruction set on the CPU itself?

Well, it is actually the 32-bit address space and the Memory Management Unit (MMU): 2^32 = 4294967296 = 4GB. x86_64 has a much larger address space (as it is 64-bit based), and the MMU there usually handles at least 48 bits.
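
In other words, the size of the address space follows from the width of the addresses the hardware handles, not from the RAM you happen to have installed. A trivial sketch of the arithmetic:

/* Sketch: the address-space size is fixed by the address width, not by RAM. */
#include <stdio.h>

int main(void)
{
    printf("pointer width here : %zu bits\n", sizeof(void *) * 8);
    printf("2^32 bytes = %llu (4GB, the x86 limit)\n", 1ULL << 32);
    printf("2^48 bytes = %llu (256TB, a common x86_64 MMU limit)\n", 1ULL << 48);
    return 0;
}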

Q. So if there's more than 4 gigs, does that mean it is or isn't used?

As I said before, a mapping is required between virtual addresses and physical RAM. Without 'dirty' tricks (read: PAE), 4GB is the maximum on x86; beyond that you need the dreaded HIGHMEM support (which already is a special case below 4GB). Accessing memory above that limit comes at a cost, so it's slower. Even with 4GB of RAM, the kernel can only address 1GB (in the default split) directly. The thing here is that changing the mapping from virtual to physical memory is expensive, and a kernel which can address 1GB directly has to reserve a certain area for mapping the remaining 3GB (on a 4GB system) in and out. This "mapping window", where the "high" memory pages are mapped in and out, is called high mem; with the default 3/1 split you can directly reach roughly 970MB of memory. Even with 2GB of RAM this does not change - only enabling the dynamic mapping (highmem) gives you access to the rest.

Q. You said the virtual memory isn't backed by RAM, but is it backed by anything?

Sometimes; it depends on the mapping.

Applying this to vservers

Q. So how does this all map onto VSZ and RSS (in vserver-stat) or VIRT/RES/SHR in top/vtop stats?

Good questions, and they are easily answered for a single task:

  • the VSZ for a task is the number of pages which have a mapping
  • the RSS (resident set size) is the number of pages which are currently in RAM (physical memory)
  • shared is memory that is mapped by two (or more) applications
VSZ=VIRT = number of pages currently mapped
RSS=RES  = number of pages currently in RAM
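
Those two numbers can be read straight out of /proc for any task. Here is a small sketch (assuming Linux with /proc mounted) that prints its own values; VmSize and VmRSS are the kernel's names for VSZ/VIRT and RSS/RES.

/* Sketch: print this task's VSZ and RSS as the kernel reports them
   (assumes Linux with /proc mounted). */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[256];
    FILE *f = fopen("/proc/self/status", "r");
    if (!f) {
        perror("fopen");
        return 1;
    }
    while (fgets(line, sizeof(line), f)) {
        /* VmSize: total mapped address space; VmRSS: pages currently in RAM */
        if (strncmp(line, "VmSize:", 7) == 0 || strncmp(line, "VmRSS:", 6) == 0)
            fputs(line, stdout);
    }
    fclose(f);
    return 0;
}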

This accounting becomes more problematic if you want to do it for, let's say, two processes. First, what do you do about the address space? Look for identical mappings and count them only once, take the maximum of both, or just add them up? And it's even more complicated for the RSS, because we can have shared RAM (e.g. inode caches and executables), we can have shared but copy-on-write pages, and we can also have purely anonymous pages that belong to only a single task. Well, actually shared memory can belong to no task at all, but that would complicate things for now, so let's say a single task.

Linux-VServer tries to be as unintrusive as possible here, and of course we try to keep it simple and efficient too. So what we mainly do is account the allocations and deallocations of those pages per context, which gives values (and, where limits are set, limits) that might not map directly to physical RAM (or to swap space, which we haven't even mentioned yet). We decided to 'simply add up' the address space of all tasks and call that VM/AS.

VM/AS = virtual memory pages (total) in a context
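
To illustrate the 'simply add up' approach from userspace: the sketch below (assuming Linux; the PIDs given on the command line merely stand in for "the tasks of a context", and this is not how Linux-VServer implements its counters internally) sums the VmSize of each listed task, which is in spirit what the VM/AS value represents. Run it as, for example, ./sum_as 1234 5678.

/* Sketch: add up the mapped address space of several tasks, the way the
   VM/AS counter adds up per-task address space inside a context
   (assumes Linux; the PID list on the command line stands in for a context). */
#include <stdio.h>

int main(int argc, char **argv)
{
    long long total_kb = 0;

    for (int i = 1; i < argc; i++) {
        char path[64], line[256];
        snprintf(path, sizeof(path), "/proc/%s/status", argv[i]);
        FILE *f = fopen(path, "r");
        if (!f) {
            perror(path);
            continue;
        }
        while (fgets(line, sizeof(line), f)) {
            long kb;
            if (sscanf(line, "VmSize: %ld kB", &kb) == 1) {
                total_kb += kb;      /* this task's mapped address space */
                break;
            }
        }
        fclose(f);
    }
    printf("summed address space: %lld kB\n", total_kb);
    return 0;
}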

Swap

We also decided not to account shared pages specially, as [OVZ] does; instead we simply add them up in separate counters. One accounting/limit that is currently missing is swap space, because accounting swap space properly would require a 'tag' on each memory page to record which context it belongs to, which is something I don't want to do without good reason, as there are

  • many pages, and
  • this stuff is really performance critical!