[Info-vax] OpenVMS I64 V8.1 "Evaluation Release"?
Johnny Billquist
bqt at softjar.se
Thu Mar 22 17:23:45 EDT 2012
On 2012-03-22 21.51, glen herrmannsfeldt wrote:
> Johnny Billquist<bqt at softjar.se> wrote:
>
> (snip)
>>> Well, if you put it that way, IA32 has a 45 bit virtual address
>>> space, which should have been plenty big enough. That is, 16 bit
>>> segment selectors minus the local/global bit and ring bits,
>>> and 32 bit offsets.
>
>> I don't know exactly how the virtual addresses look on the IA32 so I
>> can't make more explicit comments. But if it actually forms a 45-bit
>> virtual address space, then sure. But it depends on how the virtual
>> address is calculated. Maybe someone can make a more accurate comment,
>> if we want to pursue that.
>
> IA32 still has the segment selector system that was added with
> the 80286. While there were many complaints about the size of 64K
> segments, that isn't so much of a problem at 4GB. A task can
> have up to 8192 segments, each up to 4GB.
Yes. But the question is if the CPU forms a 45-bit address that it then
sends to the MMU. If it's really just a 32-bit address, formed by
combining the segment selector (16 bits) with a 32-bit offset and then
truncating the result to 32 bits before it is fed to the address
translator in the MMU, then it doesn't matter that you have segments,
and so on. That is what my comment was about, and I don't know how the
IA32 actually does it, so I can't say much more.
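If it works the way I suspect (the selector picks a descriptor, the
32-bit base from that descriptor is added to the 32-bit offset, and the
32-bit result is what goes into paging), then it would look roughly
like this little C sketch. Descriptor layout, limit and ring checks are
all left out, so treat it as illustrative only:

#include <stdint.h>

/* Rough model of protected-mode translation. The point is the width:
 * whatever the base and offset are, the linear address handed to the
 * paging unit is only 32 bits. */
struct descriptor {
    uint32_t base;   /* 32-bit segment base from the descriptor table */
    uint32_t limit;  /* segment limit, ignored in this sketch */
};

uint32_t linear_address(const struct descriptor *table,
                        uint16_t selector, uint32_t offset)
{
    uint16_t index = selector >> 3;     /* drop the TI and RPL bits */
    /* uint32_t arithmetic: the sum wraps (is truncated) to 32 bits */
    return table[index].base + offset;
}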
As an example, only slightly related, the PDP-11 has an MMU that forms
a 22-bit address by taking the 16 bits of the MMU PAR register as the
high part, and adding the low 13 bits of the virtual address. This
means you can get a physical address that would seem to be larger than
what can be addressed by 22 bits. But that doesn't happen. The address
is truncated to 22 bits in the end anyway.
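Something like this, just to illustrate the truncation (a simplified
sketch; the real MMU does the addition in terms of 64-byte blocks,
which comes out to the same thing):

#include <stdint.h>

/* Simplified PDP-11 (22-bit) address formation: the 16-bit PAR is a
 * base in 64-byte units, and the low 13 bits of the virtual address
 * are added on top. The sum can nominally carry past bit 21, but the
 * bus only has 22 bits, so the result gets truncated. */
uint32_t pdp11_phys(uint16_t par, uint16_t vaddr)
{
    uint32_t phys = ((uint32_t)par << 6) + (vaddr & 0x1FFF);
    return phys & 0x3FFFFF;   /* keep only the low 22 bits */
}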
> (snip, I wrote)
>>> Having a large virtual address space is nice, but you can't
>>> practically run programs using (not just allocating, but actually
>>> referencing) 8, 16, or 32 times the physical address space.
>
>> You perhaps can't use all of it at the same time, for various reasons.
>> But you might definitely want to spread your usage out over a larger
>> address space than 32 bits allows.
>
> Maybe 2 or 3 times, but not 16 or 32. Note that disks haven't
> gotten faster nearly as fast as processors, especially latency.
Way more than 2 or 3 times. My typical example is a process with
multiple threads. Each thread needs its own stack. But how much space
would you want to allocate for the stack of each thread?
Code nowadays (especially with things like Java) really likes to have a
lot of stack, so a typical thread implementation should allocate
several megabytes, minimum, per thread. So, for each thread created,
you'll probably want to set aside another 8 or 16 megabytes, minimum,
for the stack. Almost all of it will not be used or mapped at all, but
it might become so over time.
I've seen programs with several hundred threads. Feel free to start
thinking of how much memory that will take.
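Just to put rough numbers on it, here is a sketch (the 8 MB per stack
and the 500 threads are illustrative figures, not measurements from any
particular program):

#include <pthread.h>
#include <stdio.h>

#define NTHREADS   500                    /* "several hundred" threads */
#define STACK_SIZE (8UL * 1024 * 1024)    /* 8 MB reserved per stack */

static void *worker(void *arg)
{
    return arg;   /* a real thread would do actual work here */
}

int main(void)
{
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, STACK_SIZE);

    for (int i = 0; i < NTHREADS; i++) {
        pthread_t t;
        if (pthread_create(&t, &attr, worker, NULL) == 0)
            pthread_detach(t);
    }

    /* 500 threads times 8 MB is about 4 GB of virtual address space
     * reserved for stacks alone (the whole 32-bit space), even though
     * only a page or two per stack is ever actually touched. */
    printf("reserved for stacks: %lu MB\n",
           (unsigned long)NTHREADS * STACK_SIZE / (1024 * 1024));
    return 0;
}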
Also, if you load dynamically linked libraries, they need to be placed
somewhere in your virtual memory space. But you do not want them to
accidentally be placed somewhere that limits other structures'
potential growth either, so you really want to locate them somewhere
way out of the way of the rest of your code. Where do you place them?
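On Linux, for example, you can see where the loader actually put them
by dumping the process's own map (Linux-specific, just a quick
illustration):

#include <stdio.h>

/* Print this process's memory map (Linux-specific). Shared libraries
 * typically show up far away from the program text and the heap, so
 * that neither limits the other's room to grow. */
int main(void)
{
    FILE *f = fopen("/proc/self/maps", "r");
    if (f == NULL)
        return 1;
    int c;
    while ((c = getc(f)) != EOF)
        putchar(c);
    fclose(f);
    return 0;
}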
So, most of the time you are not really interested in using all of that
space, but the ability to spread your usage out over a large area is
invaluable. But when you get into several hundred threads, you will
also be using quite a lot of memory.
> If you do something, such as matrix inversion, that makes many
> passes through a large array you quickly find that you can't
> do it if it is bigger than physical memory.
Right. But most people actually don't do this most of the time. They do
run a web browser, though...
>>> The rule for many years, and maybe still not so far off, is that
>>> the swap space should be twice the physical memory size. (Also,
>>> that was when memory was allocated out of backing store. Most now
>>> don't require that.)
>
>> That has not been true for over 10 years on any system. It's actually a
>> remnant from when memory was managed in a different way in Unix, and
>> originally the rule was that you needed 3 times physical memory in swap.
>
> Ones I worked with, it was usually 2, but 3 probably also would have
> been fine.
Like I said. It evolved over time.
>> The reason for the rule, if you want to know, was that way back,
>> physical memory was handled somewhat similar to cache, and swap was
>> regarded as "memory". So, when a program started, it was allocated room
>> in swap. If swap was full, the program could not run. And when running,
>> pages from swap was read into physical memory as needed. (And paged out
>> again if needed.)
>
> The first system that I remember this on was OS/2, I believe 1.2
> but maybe not until 2.0. If you ran a program from floppy, it required
> that the swap space exist, as you might take the floppy out.
Unix used to always require swap. It was defined in the configuration
file for the kernel build, and was not optional.
> Well, using the executable file as backing store for itself is a
> slightly different question, but for many years they didn't even
> do that. Allocating in swap avoids the potential deadlock when the
> system finds no available page frames on the swap device, and needs
> to page something out. It made the OS simpler, at a cost in swap space.
> (And the ability to sell more swap storage.)
Right. Paging from the executable itself is also later/newer. In fact,
back when this was originally defined/designed, paging didn't even
exist in Unix. It was swapping only. Running a program meant allocating
swap, copying the program in there as well as into RAM, and then
starting it running.
>> This should make it pretty obvious that you needed more swap than
>> physical memory, by some margin, or you could start observing effects
>> like a program not being able to run because there was no memory, but
>> you could at the same time see that there was plenty of free physical
>> memory. A very silly situation.
>
> When main memory was much smaller, that was much less likely to
> be true, but yes.
Which is when this was defined/designed, and the rule was true.
>> No system today works that way. You allocate memory, and it can be in
>> either swap, or physical memory. You do not *have* to have space
>> allocated in swap to be able to run. You don't even need to have any
>> swap at all today.
>
> Reminds me of wondering if any processors could run entirely off
> built-in cache, with no external memory at all.
There are such processors, yes.
Off the top of my head, some variants of both the PDP-10 and PDP-11
could do this.
I'm sure there are others.
>>> If you consider that there are other things (like the OS, other
>>> programs and disk buffers) using physical memory, you really
>>> won't want a single program to use more than 4GB virtual on
>>> a machine with 4GB real memory. Without virtual memory, you
>>> would probably be limited to about 3GB on a 4GB machine.
>
>> It's called paging, and every "modern" OS does it, all the time,
>> for all programs. Not a single program you are running today are
>> all in memory at the same time. Only parts of it is.
>
> As I noted above, it isn't hard to write a program, working with
> a large matrix, which does pretty much require it all in memory.
That means you'll get lots of page faults, and your program will not
run very fast, but it will run, and it will produce the expected
result.
> With matrix operations, they tend to run sequentially through
> blocks of memory. I once wrote a very large finite-state
> automaton that pretty much went randomly through about 1GB of
> memory. No hope at all to swap.
It will be paging all the time. This is a classic in computer science,
as an easy way to grind a system down to almost a standstill: it spends
nearly all its time paging and does very little work. But it will work,
and it will produce results (eventually).
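A sketch of that classic demonstration (the sizes are illustrative; the
effect only shows up when the array is bigger than physical memory):

#include <stdio.h>
#include <stdlib.h>

#define SIZE (1024UL * 1024 * 1024)   /* about 1 GB */

int main(void)
{
    unsigned char *buf = calloc(SIZE, 1);
    if (buf == NULL)
        return 1;

    /* Touch the array at random. Writes force every touched page to be
     * materialized, so if the array is bigger than physical memory the
     * machine ends up paging almost constantly. The loop still finishes
     * and produces its result, eventually. */
    for (unsigned long i = 0; i < 100000000UL; i++) {
        unsigned long idx = (((unsigned long)rand() << 16) ^
                             (unsigned long)rand()) % SIZE;
        buf[idx]++;
    }

    printf("done, buf[0] = %d\n", buf[0]);
    free(buf);
    return 0;
}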
>> So, even if you are running a program that is 4 GB in size,
>> it will not be using 4 GB of physical memory at any time.
>
> Maybe for the programs you write...
How about just about any program, including your matrix inversion.
And even more so on VMS, where quotas will stop you from ever getting
close to that amount of memory. You'll get your working set, and then
you'll page. Unix is a bit more "liberal", but it too will not give you
all that memory, ever...
Johnny