[Info-vax] Current VMS engineering quality, was: Re: What's VMS up to these

glen herrmannsfeldt gah at ugcs.caltech.edu
Mon Mar 19 20:07:56 EDT 2012


Johnny Billquist <bqt at softjar.se> wrote:
 (snip)

>>> However, there still seem to be some debate about the real vs. double
>>> floating point types as related to the PDP-11 architecture.

(snip, then I wrote)
>> Fortran pretty much requires a single and double precision floating
>> point format. One or both can be done in software, but both should
>> be there. Hardware targeting Fortran usually supports both, so it
>> shouldn't be surprising that C also supports them.

>> Though from the beginning C tried to be a systems programming
>> language, and not a scientific programming language, one might
>> argue that they weren't needed.

(snip)
> Yes. But the PDP-11-centric detail of this story is why all constants 
> and all computations in C are (or were) always done in double, even if 
> you have simple floats. The reason is that the FPP of the PDP-11 
> has a mode bit, which determines in which precision computations are 
> done, so it's either all single, or all double. 

Hmm. I don't know the FPP well enough to say. I had always thought
the reason was that C was not intended for number crunching, but that
floating point is still useful to have. If you assume floating point
isn't a big part of most programs, so that doing it all in double only
makes them a little slower, then that seems reasonable.

> Thus, using simple real in C just saves memory, but costs in 
> conversions everywhere, and speed wise you'll always be 
> doing double computations anyway, even if all you have are singles.

Well, the cost does depend on the processor, but most of the time
the single <--> double conversion cost is fairly low. (Fixed point
to/from floating point is not always so easy, though.) 

But yes, in K&R C all floating point arithmetic was done in double
precision, just as everything smaller than int is converted to int
before any operation.
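
A rough sketch of what that promotion rule means in practice (the
example is mine, not from any particular compiler):

#include <stdio.h>

int main(void)
{
    float a = 1.0e-8f, b = 1.0e-8f;   /* the f suffix is ANSI C */

    /* Under K&R rules a * b is evaluated in double and only narrowed
       back to float on assignment; C89 and later allow the compiler
       to do the multiply in single precision instead. */
    float prod = a * b;

    /* The integer analogue: char operands are promoted to int, so
       the addition itself is done in int, not char. */
    char c1 = 100, c2 = 100;
    int sum = c1 + c2;   /* 200 */

    printf("%g %d\n", (double)prod, sum);
    return 0;
}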

A large fraction of numerical algorithms do everything in double
precision, anyway. 

> And that was apparently specified in the standard for C for a long 
> while, but I think not anymore.

Well, it isn't so far off. Constants still default to double, with
a trailing f suffix for single precision, added in ANSI C (C89). So,
if you have any constants in the expression, it is mostly double
anyway. It might be that they changed it a little more in C99 or C11.
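
For instance (my own illustration of the C89-and-later rules):

#include <stdio.h>

int main(void)
{
    float  f = 0.1f;   /* 0.1f is a float constant (f suffix, C89) */
    double d = 0.1;    /* unsuffixed 0.1 is a double constant      */

    /* Mixing f with the double constant 0.1 converts f to double,
       so the addition is done in double precision. */
    double sum = f + 0.1;

    printf("%.17g\n%.17g\n%.17g\n", (double)f, d, sum);
    return 0;
}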

> So, support for single precision in C was basically just that you 
> could store them, but they were converted to doubles everywhere 
> when used. Always.

It simplifies the compiler a lot, and also the library. (Only one
of each library routine like sqrt() or sin(), and not two or more.)
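
In classic C that meant, for example (illustration mine; if I remember
right, the single precision sqrtf() only became standard with C99):

#include <math.h>
#include <stdio.h>

int main(void)
{
    float x = 2.0f;

    /* The argument is widened to double, sqrt() works in double,
       and the result is narrowed back to float on assignment. */
    float r = (float)sqrt(x);

    printf("%g\n", (double)r);
    return 0;
}
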
Also, around the time ANSI C was being worked on, the 8087 and 80287
came out, which internally do all arithmetic in extended precision
(80 bits total, with a 64 bit significand).

Also, for S/360, an early C target after the PDP-11, floating point
registers are double precision, though you can do single precision
arithmetic on them. The single precision multiply instruction always
generates a double precision product. It takes two extra instructions
and one extra register to zero out the low bits of the product, as
many compilers for other languages do.

In the vacuum tube computer days, double precision was a lot slower,
but later on not so much. Independent of any mode bit, doing everything
in double seems to me to make a lot of sense.

One of the easiest mistakes to make in Fortran is to use single
precision constants in double precision expressions. On a binary
machine, the constant 0.1 becomes a single precision approximation
of 0.1, which is then widened to double where needed; it is not a
double precision approximation of 0.1. C avoids that problem.
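
The same effect can be spelled out in C, where the double constant is
available directly (example mine):

#include <stdio.h>

int main(void)
{
    /* What the Fortran rule gives you: a single precision
       approximation of 0.1, widened to double afterwards. */
    float  f  = 0.1f;
    double d1 = f;

    /* What C gives you: 0.1 is a double constant from the start. */
    double d2 = 0.1;

    printf("%.17g\n%.17g\n", d1, d2);   /* the two values differ */
    return 0;
}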

-- glen



