
Re: [MiNT] virtual memory



Hi Frank :).

>Hello!
>
>> > And at last the tables can be swappable too.
>> 
>> This won't work, especially for global memory (which is where I started
>> running into problems).
>
>I don't see the problem. Can you explain it to me, please?

See my windy explanation below.

>> Each process needs to see the global memory in
>> its MMU table.
>
>Yes.
>
>> Unless we want to restrict the size of virtual memory (I
>> don't), that means that each process needs a fully populated global
>> table.  -That- table will take 2,163,200 bytes.
>
>No, here I disagree. What about this idea: the table is only as big as
>it needs to be. Global memory is mapped into all MMU tables at the same
>place. For that, it would be very easy to have one root-level pointer
>always point to the global memory table. The global memory table is
>shared by all processes. And if it runs out of space, another root-level
>pointer is used for a second global table.

That's not a bad idea, but I'm kind of concerned about finding
free pages when the MMU table needs to grow.  The MMU table should
ideally be contiguous memory for easier swap algorithms, and if the
pages are fragmented all over creation, it may become difficult.

Let me chew on that for a while, though.  The memory savings may
be worth it, and I may be able to come up with a decent algorithm
even if the pages are spread out.
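
Just to be sure we're picturing the same thing, here is roughly how I
read your idea.  This is only a sketch in C - the names, table size and
descriptor bits are made up for illustration, not the real MiNT
structures:

#define ROOT_ENTRIES 128

typedef unsigned long mmu_desc;    /* one MMU table descriptor */

extern mmu_desc *global_table;     /* the one table shared by everyone */

void init_process_root(mmu_desc *root)
{
    int i;

    /* private slots start out invalid */
    for (i = 0; i < ROOT_ENTRIES - 1; i++)
        root[i] = 0;

    /* the last root level pointer of every process points at the
       same global table, so a global mapping is entered once and
       every process sees it (the "valid" bit here is illustrative) */
    root[ROOT_ENTRIES - 1] = (mmu_desc) global_table | 2;
}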

>> Maybe I misspoke.  I am not rewriting the entire MMU table on a context
>> switch.  I am changing the bits in the level 3 tables associated with up to two
>> processes - the one being switched out and the one being switched in.
>> Entries for processes not involved in the context switch will not have
>> their entries modified.
>
>And are they visible to the new process in this case? I can't see how
>you will make the table consistent per process without overlapping or
>collisions.
>
>Or is your approach that we have one 4 GB virtual address space and all
>applications use the same space? Application 1 uses 0 - 32 MB, app 2
>33 - 64 MB, and so on?

You're getting warmer :).

Perhaps I should go through the thought process that brought me to
where I am.  I have a bad habit of skipping steps when explaining
things (especially via email).  Just ask Dan on PPP STiK.

My goals were simple - eliminate memory fragmentation and not
break anything. Memory protection currently provides
paged memory, so the hard part was done.

When memory becomes fragmented in a paged environment, it means
that you may have enough pages in total to run an application, but
not enough of the pages are in consecutive order.  But the MMU can
make them -look- like they are in consecutive order (as you probably
know).  But then the question hit me: so -where- do I put these pages?
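
As an aside, making the pages look consecutive is just a matter of
filling in page descriptors.  Roughly like this, where map_page() is a
hypothetical helper that enters a single descriptor (the real MMU code
is more involved):

#define PAGE_SIZE 8192L

extern void map_page(unsigned long virt, unsigned long phys);

void map_contiguous(unsigned long virt_base,
                    unsigned long *phys_pages, int npages)
{
    int i;

    /* npages scattered physical pages become one consecutive
       virtual run starting at virt_base */
    for (i = 0; i < npages; i++)
        map_page(virt_base + i * PAGE_SIZE, phys_pages[i]);
}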

Several solutions came to mind.  The first was to use the address range
currently occupied by physical memory.  But you can see that doesn't
fix anything; it just shifts the problem around.  The address ranges
would soon become as fragmented as the memory was.

Second, I thought of giving each process its own context (which is what
I think you were thinking of).  But, because of all the pointer passing
that goes on, it would be quite difficult to set up each process with
its own private 4 GB context and not break anything.  It would also
require too much work-around code.

Finally, I decided to use unoccupied address ranges in a single MMU table.
The address ranges are not being used for anything.  And when I unmap the
free TT RAM pages, it increases the amount of unoccupied address space.
It also fixes the issue of shared memory.  Everyone uses the same MMU
table, so sharing is easy, whether through shared libraries or through
setting a program's memory flags.

With this algorithm, I need to track two things (see the sketch after
the list):

1)  A page list of available (free) pages.  These are in no way tied
    to an address range.  This free list is tracked by physical address
    (as it needs to be for setting up the MMU).  This is done for all of
    TT RAM, but not ST RAM.
2)  A list of free memory (actually now -address-) regions.  The current
    plan is to track these in 256K chunks.  These address ranges are
    currently unused and will be used to hold mappings to the TT RAM.
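
Roughly, as C structures.  Again just a sketch; the names are mine, not
anything from the sources:

#define PAGE_SIZE   8192L           /* 8K pages                      */
#define REGION_UNIT (256L * 1024L)  /* address space in 256K chunks  */

struct free_page {                  /* 1) free TT RAM pages, kept    */
    unsigned long phys;             /*    by physical address        */
    struct free_page *next;
};

struct addr_region {                /* 2) free address ranges, in    */
    unsigned long start;            /*    multiples of 256K          */
    unsigned long len;
    struct addr_region *next;
};

static struct free_page   *free_pages;
static struct addr_region *free_regions;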

When a process needs a memory region for whatever reason, a free -address-
region of the appropriate size is found in the same manner as is done
today for memory regions.  Once the address region is found, it may have
to be split up, as is done today.  No problem, same algorithm.

Pages are then allocated to the address region.  Since the pages are 8K
but the address regions are 256K, there may be an invalid portion
at the end of the address region.  But that won't hurt anything.

This region is then mapped into a process's context as it is done
today.  The process sees the memory just as it would before.
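
Putting the allocation path together, it would look something like this.
find_region(), split_region(), pop_free_page() and map_page() are
hypothetical stand-ins for the existing region code and the MMU setup,
and error handling (an empty free list, say) is omitted:

#define PAGE_SIZE   8192L
#define REGION_UNIT (256L * 1024L)

extern unsigned long find_region(unsigned long len);  /* first fit   */
extern void split_region(unsigned long base, unsigned long len);
extern unsigned long pop_free_page(void);             /* phys addr   */
extern void map_page(unsigned long virt, unsigned long phys);

unsigned long valloc_region(unsigned long size)
{
    unsigned long base, off;

    /* address regions come in 256K units, so round up */
    unsigned long want = (size + REGION_UNIT - 1) & ~(REGION_UNIT - 1);

    base = find_region(want);       /* same search as today */
    if (base == 0)
        return 0;
    split_region(base, want);       /* same split as today  */

    /* back only 'size' bytes with 8K pages; the tail of the last
       256K unit simply stays invalid, which won't hurt anything */
    for (off = 0; off < size; off += PAGE_SIZE)
        map_page(base + off, pop_free_page());

    return base;
}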

From this, freeing a region is pretty simple.  Free the address region like
a memory region, then put the pages back on the free list.
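
In code, roughly, with unmap_page(), push_free_page() and free_region()
again standing in for the real thing:

#define PAGE_SIZE 8192L

extern unsigned long unmap_page(unsigned long virt);  /* returns phys */
extern void push_free_page(unsigned long phys);
extern void free_region(unsigned long base);  /* as for memory regions */

void vfree_region(unsigned long base, unsigned long size)
{
    unsigned long off;

    /* the pages go back on the free list ... */
    for (off = 0; off < size; off += PAGE_SIZE)
        push_free_page(unmap_page(base + off));

    /* ... and the address region is freed like a memory region */
    free_region(base);
}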

But why is one table now best?  If you take this to the extreme limit,
you'll need a full MMU table (minus the ST RAM mirror at the end of memory).
Even if we use your idea above to add onto the table dynamically, a
long-lived system (up for years) may see address regions fragment over time.
This becomes even more apparent as we go into virtual memory.

Also, in the case where each process has its own copy of the MMU
table, if a single process was created that required an addition
to the MMU table, it would require -all- processes to add that entry
into their MMU tables.  That could get ugly, as the MMU table would have
to grow for all processes.

Therefore, I chose a single (now possibly growing :) MMU table.  It will
take time to swap the table, but given the default single timeslice is
40 ms, I don't think the swap time will be too big of an issue, especially
in the final 256K-block implementation.  In addition, you'll have nearly
4 GB to play with when I'm done.

Note that there is a second table for "OS Special" processes.  This
will be as it is today - all memory is seen by OS special processes.

I hope this answers your questions and addresses your concerns.  If
not, I'll be here :).

Michael White (michael@fastlane.net)