The Unix and Internet Fundamentals HOWTO: How does my computer keep processes from stepping on each other?

7. How does my computer keep processes from stepping on each other?

The kernel's scheduler takes care of dividing processes in time. Your operating system also has to divide them in space, so that processes don't step on each other's working memory. The things your operating system does to solve this problem are called memory management.

Each process in your zoo needs its own area of core memory, as a place to run its code from and keep variables and results in. You can think of this area as consisting of a read-only code segment (containing the process's instructions) and a writable data segment (containing all the process's variable storage). The data segment is truly unique to each process, but if two processes are running the same code Unix automatically arranges for them to share a single code segment as an efficiency measure.
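If you want to see this split for yourself, here's a little C program (my own sketch, not part of the HOWTO proper; the exact addresses vary with the system and compiler). It prints the address of a function and the address of a writable global variable; on a typical Unix the first lands in the read-only code (text) segment and the second in the data segment:

    #include <stdio.h>

    int counter = 42;                /* lives in the writable data segment */

    void work(void) { counter++; }   /* its instructions live in the read-only code segment */

    int main(void)
    {
        printf("code (text) address: %p\n", (void *) work);
        printf("data segment address: %p\n", (void *) &counter);
        return 0;
    }

If two copies of this program run at once, the kernel maps the same read-only code pages into both processes; only the data pages are private to each one.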

Efficiency is important, because core memory is expensive. Sometimes you don't have enough to hold the entirety of all the programs the machine is running, especially if you are using a large program like an X server. To get around this, Unix uses a strategy called virtual memory. It doesn't try to hold all the code and data for a process in core. Instead, it keeps around only a relatively small working set; the rest of the process's state is left in a special swap space area on your hard disk.
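If you're curious how much swap space your machine has configured and how much of it is in use, here's a sketch that asks the kernel directly (this assumes a Linux system, where the sysinfo(2) call and <sys/sysinfo.h> exist; other Unixes have their own interfaces, and commands like top or swapon -s will tell you the same thing):

    #include <stdio.h>
    #include <sys/sysinfo.h>

    int main(void)
    {
        struct sysinfo si;

        if (sysinfo(&si) != 0) {
            perror("sysinfo");
            return 1;
        }
        /* the counts sysinfo returns are in units of mem_unit bytes */
        unsigned long long unit = si.mem_unit;
        printf("core (RAM) total: %llu MB\n", si.totalram  * unit / (1024 * 1024));
        printf("swap total:       %llu MB\n", si.totalswap * unit / (1024 * 1024));
        printf("swap free:        %llu MB\n", si.freeswap  * unit / (1024 * 1024));
        return 0;
    }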

As the process runs, Unix tries to anticipate how the working set will change and have only the pieces that are needed in core. Doing this effectively is both complicated and tricky, so I won't try to describe it all here -- but it depends on the fact that code and data references tend to happen in clusters, with each new one likely to refer to somewhere close to an old one. So if Unix keeps around the code or data most frequently (or most recently) used, it will usually succeed in saving time.
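Here's a sketch that shows that locality at work (again my own illustration; the exact numbers depend on your hardware, your compiler, and what else the machine is doing). Both timed loops read the same big array, but the first walks it in the order it's laid out in memory while the second jumps around, so the first makes far better use of the small set of pages and cache lines the machine keeps close at hand:

    #include <stdio.h>
    #include <time.h>

    #define N 4096

    static int grid[N][N];                   /* about 64MB -- big enough to matter */

    int main(void)
    {
        long sum = 0;
        clock_t t;

        for (int i = 0; i < N; i++)          /* fill it first, so every page really exists */
            for (int j = 0; j < N; j++)
                grid[i][j] = i + j;

        t = clock();
        for (int i = 0; i < N; i++)          /* row-major: consecutive addresses, good locality */
            for (int j = 0; j < N; j++)
                sum += grid[i][j];
        printf("row-major:    %.0f ms\n", (clock() - t) * 1000.0 / CLOCKS_PER_SEC);

        t = clock();
        for (int j = 0; j < N; j++)          /* column-major: a big jump on every step, poor locality */
            for (int i = 0; i < N; i++)
                sum += grid[i][j];
        printf("column-major: %.0f ms\n", (clock() - t) * 1000.0 / CLOCKS_PER_SEC);

        return (int)(sum & 1);               /* keeps the compiler from throwing the loops away */
    }

On most machines the second loop is several times slower than the first, even though both do exactly the same amount of arithmetic.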

Note that in the past, that "Sometimes" two paragraphs ago was an "Almost always" -- the size of core was typically small relative to the size of running programs, so swapping was frequent. Memory is far less expensive nowadays, and even low-end machines have quite a lot of it. On modern single-user machines with 64MB of core and up, it's possible to run X and a typical mix of jobs without ever swapping.

Even in this happy situation, the part of the operating system called the memory manager still has important work to do. It has to make sure that programs can only alter their own data segments -- that is, prevent erroneous or malicious code in one program from garbaging the data in another. To do this, it keeps a table of data and code segments. The table is updated whenever a process either requests more memory or releases memory (the latter usually when it exits).
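That "requests more memory" step is something you can watch from user space. On most Unixes the traditional low-level interface is brk/sbrk (nowadays usually hidden behind malloc(3), which may also use mmap), and it moves the top end of the data segment, a point called the "break". Here's a sketch, assuming the classic sbrk(2) call that Linux and the BSDs still provide:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        void *before = sbrk(0);          /* where the data segment currently ends */
        void *block  = sbrk(65536);      /* ask the kernel for another 64KB */
        void *after  = sbrk(0);

        if (block == (void *) -1) {
            perror("sbrk");
            return 1;
        }
        printf("break before: %p\n", before);
        printf("break after:  %p\n", after);   /* 64KB higher -- the kernel's table has been updated */
        return 0;
    }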

This table is used to pass commands to a specialized part of the underlying hardware called an MMU or memory management unit. Modern processor chips have MMUs built right onto them. The MMU has the special ability to put fences around areas of memory, so an out-of-bounds reference will be refused and will cause a special interrupt to be raised.
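You can watch one of those fences go up using mprotect(2), which asks the kernel -- and through it the MMU -- to change the permissions on a page of memory. In this sketch (assuming a Linux or BSD system with anonymous mmap) the program writes to a page, fences it off with PROT_NONE, and then tries to write to it again; the second write is refused and the kernel turns the MMU's interrupt into a SIGSEGV that kills the process:

    #define _DEFAULT_SOURCE              /* for MAP_ANONYMOUS on glibc */
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        long pagesize = sysconf(_SC_PAGESIZE);

        /* grab one page of memory, initially readable and writable */
        char *page = mmap(NULL, pagesize, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        page[0] = 'x';                           /* fine: the page is writable */

        if (mprotect(page, pagesize, PROT_NONE) != 0) {   /* put up the fence */
            perror("mprotect");
            return 1;
        }

        printf("about to touch the fenced page...\n");
        page[0] = 'y';                           /* refused by the MMU; we die with SIGSEGV here */

        printf("never reached\n");
        return 0;
    }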

If you ever see a Unix message that says "Segmentation fault", "core dumped" or something similar, this is exactly what has happened; an attempt by the running program to access memory outside its segment has raised a fatal interrupt. This indicates a bug in the program code; the core dump it leaves behind is diagnostic information intended to help a programmer track it down.
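Here's about the smallest program that will make it happen (deliberately, in this case):

    /* crash.c -- deliberately step outside our segments */
    int main(void)
    {
        int *p = 0;    /* address 0 isn't in our code, data, or stack segments */
        *p = 42;       /* the MMU refuses the write; the kernel sends us SIGSEGV */
        return 0;
    }

Compile it with debugging symbols (cc -g crash.c -o crash), make sure your shell allows core dumps (on many systems that's "ulimit -c unlimited"), and run it; you should see something like "Segmentation fault (core dumped)". Loading the program and the core file into a debugger such as gdb will point straight at the offending line -- which is exactly what that diagnostic information is for. The exact wording of the message and the name of the core file vary from system to system.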

