
Re: [MiNT] Virtual Memory



Hi,

On Saturday 12 January 2013, Helmut Karlowski wrote:
> Eero Tamminen, 12.01.2013 11:16:51:
> > If the swap algorithm isn't good, it can interact badly with memory
> > mapping: you can enter swapping hell when some program processes
> > large files, even if it doesn't dirty the memory.
> 
> Wouldn't it make sense to add non-blocking disk I/O first, so as not
> to suffer from too frequent pausing?

That's not the problem.  The problem is demand paging filling the page
cache (e.g. by locate's file indexing) and pushing other programs that
aren't going through as much memory (at the moment) out to swap.  When
you then try to use the machine, all the interactive programs are in
swap and everything is really slow.
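
(Just as an illustration of the other side of this, not a suggestion for
MiNT's VM: on Linux an interactive program that absolutely must stay
responsive can pin its pages into RAM with mlockall(), so page-cache
pressure cannot push it to swap.  A minimal sketch, assuming the process
has the needed privileges / RLIMIT_MEMLOCK:)

/* Sketch: keep this process resident so streaming programs filling the
 * page cache cannot push it to swap.  Linux/POSIX; needs suitable
 * privileges or a large enough RLIMIT_MEMLOCK.
 */
#include <stdio.h>
#include <sys/mman.h>

static int pin_into_ram(void)
{
    /* lock current pages and everything mapped later (heap growth,
     * future thread stacks, mmap'ed files) */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
        perror("mlockall");
        return -1;
    }
    return 0;
}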

Earlier, Linux was quite bad with the kind of programs that traverse
large files from start to end (scanning them, showing video, playing a
large number of songs etc., but not really needing a page again after
going through its data once), but that was fixed somewhere after 2.32
if I remember right.  Such file caching is a (batch) speed
optimization, not a latency optimization.  Atari systems should be
balanced more towards low latency than fast throughput.
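
(For reference, a well behaved streaming program can also help with this
itself.  A minimal Linux/POSIX sketch using posix_fadvise(); this is not
anything FreeMiNT currently offers as far as I know, and scan_file is
just a name made up for the example:)

/* Sketch: read a large file once without leaving its pages in the
 * page cache. */
#define _POSIX_C_SOURCE 200112L
#include <fcntl.h>
#include <unistd.h>

int scan_file(const char *path)
{
    char buf[64 * 1024];
    ssize_t n;
    off_t done = 0;
    int fd = open(path, O_RDONLY);

    if (fd < 0)
        return -1;

    /* we'll go through the file from start to end exactly once */
    posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);

    while ((n = read(fd, buf, sizeof(buf))) > 0) {
        /* ... process buf ... */
        done += n;
        /* drop the pages we're already done with from the cache */
        posix_fadvise(fd, 0, done, POSIX_FADV_DONTNEED);
    }
    close(fd);
    return 0;
}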


Swapping hell is the case where you have several processes competing
for memory, and the device slows down because all a process has time
to do during its time slice is get some of its page(s) swapped into
RAM, without having time to execute many instructions.

The only way to recover from a state like this is to kill or suspend
a lot of processes.  Killing just enough so that a normal number of
them keeps running often isn't enough once the system has passed the
"swapping hell" threshold.  And because the system is *really* slow,
doing that suspending / killing is hard (that's why many systems try
to prevent at least accidental forkbombs etc).
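
(The usual Unix guard against accidental forkbombs is a per-user
process limit.  A hedged sketch with setrlimit(), assuming RLIMIT_NPROC
is available as it is on Linux; limit_processes is just an example
name:)

/* Sketch: cap how many processes this user can have, so an accidental
 * forkbomb hits the limit instead of dragging everything into swap. */
#include <sys/resource.h>

int limit_processes(rlim_t max_procs)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NPROC, &rl) != 0)
        return -1;

    if (max_procs > rl.rlim_max)
        max_procs = rl.rlim_max;   /* cannot raise the hard limit */
    rl.rlim_cur = max_procs;       /* fork() fails with EAGAIN past this */

    return setrlimit(RLIMIT_NPROC, &rl);
}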


	- Eero

PS. One use for zero pages is thread stacks.  Because they're fixed
size (even on Linux, only the main thread stack grows on demand) and
one cannot necessarily predict how much of the stack is going to be
used, they're typically very large on Linux; currently the default on
32-bit machines is 8MB per thread...  Earlier it was 2MB.
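
(If one knows how little stack a worker thread actually needs, the
default can be overridden per thread.  A small pthreads sketch; the
worker() function and the 64kB figure are just example values:)

/* Sketch: create a thread with a small explicit stack instead of the
 * 8MB (earlier 2MB) distribution default. */
#include <limits.h>
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    (void)arg;
    /* ... work known to need very little stack ... */
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_attr_t attr;
    size_t size = 64 * 1024;

    if (size < PTHREAD_STACK_MIN)
        size = PTHREAD_STACK_MIN;

    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, size);

    if (pthread_create(&tid, &attr, worker, NULL) != 0) {
        fprintf(stderr, "pthread_create failed\n");
        return 1;
    }
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}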

(At work we were once wondering how a device with 128MB of RAM could
be producing gigabyte-sized core dumps.  It turned out that a process
had leaked threads: it had marked them as joinable, although they
exited without the process ever joining them.  As a result, it
eventually ran out of address space and crashed with a core dump.)
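
(The fix is simply to match how the threads exit with how they're
created: either join every joinable thread, or create them detached.
A sketch of the detached variant, so nothing is left behind for a join
that never happens; worker() is a placeholder:)

/* Sketch: create worker threads detached, so their stack and
 * bookkeeping are released as soon as they exit and no pthread_join()
 * is required. */
#include <pthread.h>

static void *worker(void *arg)
{
    (void)arg;
    return NULL;
}

int start_detached_worker(void)
{
    pthread_t tid;
    pthread_attr_t attr;
    int ret;

    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
    ret = pthread_create(&tid, &attr, worker, NULL);
    pthread_attr_destroy(&attr);
    return ret;   /* 0 on success, error code otherwise */
}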