A recent commit from Matthew Dillon enables the use of at least a terabyte of swap space. Is there anyone who can actually use that much yet? Swap is traditionally 2x available memory, so that would imply 500 gigabytes of RAM. I don’t think that’s even workable, though you’d be able to build up a heck of an MFS.
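Working the rule of thumb backwards, a quick sketch (the 2x multiplier is just the traditional rule mentioned above, not anything DragonFly requires):

```python
# Back-of-the-envelope: how much RAM does a full terabyte of swap
# imply under the traditional "swap = 2x RAM" rule of thumb?

TIB = 1024 ** 4
GIB = 1024 ** 3

swap_bytes = 1 * TIB      # the newly addressable swap ceiling
multiplier = 2            # traditional swap-to-RAM ratio (assumption)

ram_bytes = swap_bytes // multiplier
print(f"RAM implied by {swap_bytes // TIB} TiB of swap: "
      f"{ram_bytes // GIB} GiB")
```

In binary units that works out to 512 GiB of RAM, or roughly the 500 gigabytes mentioned above.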
8 Replies to “Who can actually use this?”
FB-DIMMs are up to 16GB now. It’s not unimaginable!
OK, maybe today it seems like overkill. But in a very few years it will be a reality. Even for today’s use, though, imagine that you have a data structure (like a B-tree variant) implemented as a main-memory-only structure. It is not impossible to need to index data of this size (and even to compromise by spilling into swap space).
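A rough sketch of why such an index gets big; all the numbers here (entry count, key size, node overhead) are illustrative assumptions, not anything from the comment:

```python
# Estimate the footprint of a main-memory index over a large dataset.
# Every figure below is an assumption for illustration only.

GIB = 1024 ** 3

entries = 10_000_000_000   # ten billion keys, say
key_bytes = 16             # assumed key size
ptr_bytes = 8              # 64-bit value pointer
overhead = 1.5             # assumed B-tree node/fill-factor overhead

index_bytes = entries * (key_bytes + ptr_bytes) * overhead
print(f"estimated index size: {index_bytes / GIB:.0f} GiB")
```

On these assumed numbers the index alone runs to a few hundred gibibytes, which is the regime where a terabyte of swap stops sounding absurd.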
>Swap is traditionally 2x available memory
It’s a tradition, and it’s nonsense.
>But in a very few years time it will be a reality.
Why? Because you aren’t able to administer your machine correctly?
I said “traditionally” because I know it’s no longer as accurate a rule as it used to be, but I haven’t yet heard a rule to replace it.
Searching around the web in places like the FreeBSD Handbook or HP-UX documentation shows calls for 2 to 4 times as much swap as available memory, so perhaps that rule of thumb still holds.
Right now we can only use it by having a lot of processes that each consume a lot of memory. Wait for amd64, and one process will be able to use it all :)
>>But in a very few years time it will be a reality.
>Why? Because you aren’t able to administer your machine correctly?
Because I may need more RAM than the machine can provide. Rare exceptions do happen.
When I first looked at the patch, it seemed like just a clean little change that added the benefit of a larger addressable swap space.
However, I was wondering: if you were clustering systems that can migrate processes between machines, a shared swap space might seem a prudent step, since a process could be paged out on one machine only to be paged in on another following migration. I am not sure you would migrate a process that had not been swapped out, as you would want the advantage of locality; but if it had been, it seems like a likely candidate for migration.
The amount of space required could be the maximum combined RAM times some multiple, but if there are no criteria for how many machines might be part of the cluster from one moment to the next, you would probably go for as much space as can be spared for swapping.
It would not work quite like normal swap, and there are plenty of cases I haven’t considered; you can probably tell the migration mechanism is still an unknown to me, but the basic idea *seems* right. I don’t know, it was just a thought, and I might be quite wrong.