[Info-vax] Roadmap
John Reagan
xyzzy1959 at gmail.com
Sun Jan 6 14:53:26 EST 2019
On Saturday, January 5, 2019 at 8:24:44 PM UTC-5, Dave Froble wrote:
> On 1/5/2019 1:48 PM, John Reagan wrote:
> > On Saturday, January 5, 2019 at 2:15:15 AM UTC-5, Dave Froble wrote:
> >> On 1/4/2019 10:40 PM, John Reagan wrote:
> >>> D_float?!? Ugh. The DtoT sequence is the longest. Depending on your floating operations, it could be 10x slower than just using native T_float.
> >>>
> >>> Why use D? Do you have binary data you are using?
> >>>
> >>
> >> We use Basic ...
> >>
> >> OPTION SIZE = ( INTEGER WORD , REAL DOUBLE )
> >>
> >> As far as I know, the above used D_FLOAT.
> >>
> >> We have historical data files, with data from VAX, Alpha, and itanic.
> >> We never have done any conversions, and so I'm assuming that we're still
> >> using D_FLOAT, or, we've been screwed up for years ....
> >>
> >> --
> >> David Froble Tel: 724-529-0450
> >> Dave Froble Enterprises, Inc. E-Mail: davef at tsoft-inc.com
> >> DFE Ultralights, Inc.
> >> 170 Grimplin Road
> >> Vanderbilt, PA 15486
> >
> > Ok, so I just learned something else that I think BASIC does wrong.
> >
> > The default for /REAL_SIZE is essentially "give me the best single-precision format for this platform". F on VAX and Alpha, S on Itanium.
> >
> > The confusion is that the qualifier is used to provide BOTH the size of a real (single precision vs double precision) AND the format.
> >
> > I assumed that if you explicitly say /REAL_SIZE=SINGLE, it would mean the same as the default, namely I want REALs to be single-precision, compiler's choice on best format.
> >
> > I also assumed that if you explicitly say /REAL_SIZE=DOUBLE (or the equivalent OPTION SIZE directive), you would get "I want REALs to be double-precision, compiler's choice on the best format".
> >
> > Alas, nope.
> >
> > The HELP file was the first hint that I was wrong.
> >
> > The format of the REAL_SIZE qualifier is as follows:
> >
> > /REAL_SIZE={SINGLE}
> >            {DOUBLE}
> >            {GFLOAT}
> >            {SFLOAT}
> >            {TFLOAT}
> >            {XFLOAT}
> >
> > I said to myself, "where are the explicit FFLOAT or DFLOAT keywords?"
>
> Probably would be good to have keywords for all supported real data
> types. Defaults can be so annoying.
>
> > So Dave's "OPTION SIZE = ( INTEGER WORD , REAL DOUBLE )", which was originally used to say "I want extra precision on my REALs", locks you into Dfloat on Itanium.
>
> There was a time in the past that D_FLOAT was the only 8 byte real.
> Then things got "interesting".
>
> > What a horrible design decision. So people who didn't say anything on Itanium get fast Sfloat, but the moment you want more precision and blindly say "REAL DOUBLE", you get your extra precision (but no extra range) and a performance hit.
> >
> > Another item for my "time machine" list.
> >
>
> Surprised ??
>
> VAX Basic used D_FLOAT for 8 byte reals. As you may have noticed, Basic
> hasn't gotten a lot of attention through the years. Part of that is
> that the architecture changed, but the floating point types did not. Not
> really a bad thing, since older data files, historical stuff, are in the
> formats available when the data was written.
>
> If users were to embrace different floating point format(s), there would
> be rather significant work in going back to convert data fields in
> historical data files. Frankly, that would be a lot of work, for no
> return value.
>
> I don't know when /REAL_SIZE was introduced. For lo these many years,
> I've been using REAL DOUBLE. Did I mention I don't get out much, or
> read release notes?
>
> :-)
>
> Horror story: 2011 files with one format, 2012 files with another, 2013
> files with another, ..............
>
> Maybe the D_FLOAT isn't so bad, if you're not doing a lot of work with it.
>
All the compilers got attention when moving to GEM/Alpha. We had joint meetings with people from all the frontends. I thought the plan was "F/D/H" on VAX and "F/G/X" on Alpha as the defaults. BASIC incorrectly pushed D_float back on you on Alpha instead of G.
As already pointed out, unless you are dealing with existing binary data, the vast majority of programs won't notice the "D" to "G" difference or even the "F/S" and "G/T" difference. D on Alpha is slightly worse than G but not by much. You don't get full D since there aren't enough mantissa bits.
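The tradeoff described above (D gives extra precision but no extra range, and you don't get full D on Alpha) falls straight out of the bit layouts. Here is a minimal sketch comparing the formats mentioned in this thread; the bit widths come from the published VAX and IEEE 754 format definitions, not from this thread itself:

```python
import math

# (exponent bits, effective mantissa bits including the hidden bit)
formats = {
    "F_float (VAX single)":  (8, 24),
    "D_float (VAX double)":  (8, 56),   # same 8-bit exponent as F
    "G_float (VAX double)":  (11, 53),
    "S_float (IEEE single)": (8, 24),
    "T_float (IEEE double)": (11, 53),
}

for name, (exp_bits, man_bits) in formats.items():
    digits = man_bits * math.log10(2)   # approximate decimal digits
    print(f"{name}: exponent {exp_bits} bits, mantissa {man_bits} bits, "
          f"~{digits:.1f} decimal digits")

# D keeps F's 8-bit exponent: ~16.9 decimal digits of precision, but
# the same ~1e38 range as F ("extra precision but no extra range").
# G and T carry only 53 mantissa bits, so a full-precision D value
# cannot be held exactly in either -- which is why D on Alpha is
# slightly worse than G: there aren't enough mantissa bits.
```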