[Info-vax] Roadmap

Dave Froble davef at tsoft-inc.com
Tue Jan 8 03:20:25 EST 2019


On 1/5/2019 1:48 PM, John Reagan wrote:
> On Saturday, January 5, 2019 at 2:15:15 AM UTC-5, Dave Froble wrote:
>> On 1/4/2019 10:40 PM, John Reagan wrote:
>>> D_float?!?  Ugh. The DtoT sequence is the longest.  Depending on your floating operations, it could be 10x slower than just using native T_float.
>>>
>>> Why use D?  Do you have binary data you are using?
>>>
>>
>> We use Basic ...
>>
>> OPTION SIZE = ( INTEGER WORD , REAL DOUBLE )
>>
>> As far as I know, the above used D_FLOAT.
>>
>> We have historical data files, with data from VAX, Alpha, and itanic.
>> We never have done any conversions, and so I'm assuming that we're still
>> using D_FLOAT, or, we've been screwed up for years ....
>>
>> --
>> David Froble                       Tel: 724-529-0450
>> Dave Froble Enterprises, Inc.      E-Mail: davef at tsoft-inc.com
>> DFE Ultralights, Inc.
>> 170 Grimplin Road
>> Vanderbilt, PA  15486
>
> Ok, so I just learned something else that I think BASIC does wrong.
>
> The default for /REAL_SIZE is essentially "give me the best single-precision format for this platform".  F on VAX and Alpha, S on Itanium.
>
> The confusion is that the qualifier is used to provide BOTH the size of a real (single precision vs double precision) AND the format.
>
> I assumed that if you explicitly say /REAL_SIZE=SINGLE, it would mean the same as the default, namely I want REALs to be single-precision, compiler's choice on best format.
>
> I also assumed that if you explicitly say /REAL_SIZE=DOUBLE (or the equivalent OPTION SIZE directive), you would get "I want REALs to be double-precision, compiler's choice on the best format".
>
> Alas, nope.
>
> The HELP file was the first hint that I was wrong.
>
>        The format of the REAL_SIZE qualifier is as follows:
>
>           /REAL_SIZE={SINGLE}
>                      {DOUBLE}
>                      {GFLOAT}
>                      {SFLOAT}
>                      {TFLOAT}
>                      {XFLOAT}

From the latest Basic help:

DATA_TYPES

   REAL

      Floating-point values are stored using a signed exponent and a binary
      fraction.  BASIC allows six floating-point formats:  single, double,
      gfloat, sfloat, tfloat, and xfloat.  These formats correspond to the
      SINGLE, DOUBLE, GFLOAT, SFLOAT, TFLOAT, and XFLOAT keywords.

      Keyword             Range                                  Precision

      SINGLE (32-bit)    .29 * 10^-38 to 1.7 * 10^38              6 digits
      DOUBLE (64-bit)    .29 * 10^-38 to 1.7 * 10^38             16 digits
      GFLOAT (64-bit)    .56 * 10^-308 to .90 * 10^308           15 digits
      SFLOAT (32-bit)   1.18 * 10^-38 to 3.40 * 10^38             6 digits
      TFLOAT (64-bit)   2.23 * 10^-308 to 1.80 * 10^308          15 digits
      XFLOAT (128-bit)  6.48 * 10^-4966 to 1.19 * 10^4932        33 digits

      In declarative statements, the REAL keyword specifies floating-point ...

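As a side note on the precision column: SFLOAT and TFLOAT are the IEEE single and double formats, so the 6-digit vs 15-digit difference can be demonstrated on any IEEE machine. A minimal sketch in Python, forcing a value through the 32-bit layout with struct:

```python
import struct

value = 1.0 / 3.0  # not exactly representable in any binary format

# Round-trip through IEEE single precision (the S_FLOAT layout):
# only about 6-7 significant decimal digits survive.
single = struct.unpack('<f', struct.pack('<f', value))[0]

# Python floats are already IEEE double precision (the T_FLOAT layout),
# good for about 15-16 significant decimal digits.
print(f"single: {single:.17f}")   # ~ 0.33333334...
print(f"double: {value:.17f}")    # ~ 0.33333333333333333...
```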
First, I've never used SINGLE, don't see any point.

My understanding is that on VAX, D_FLOAT is implemented in HW, at least 
on some models.

And G_FLOAT on Alpha is implemented in HW, at least on some models.

What is the best format to use on VAX, Alpha, itanic, and soon x86?

John, I seem to recall that you indicated T_FLOAT would be the best to 
use on itanic.

Is X_FLOAT slower?  Basic is stuck with 32-bit addresses, but I didn't 
think that had anything to do with the size of data types.
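For what it's worth, address width and data size are independent: a 64-bit or 128-bit REAL occupies the same number of bytes under 32-bit addressing as under 64-bit. A quick illustration via ctypes, checking the C type sizes on whatever machine runs it (pointer size will vary by platform; the float sizes won't):

```python
import ctypes

# Data sizes are fixed by the format, not by the address width:
# a 64-bit float is 8 bytes whether pointers are 32 or 64 bits.
print("pointer bytes:", ctypes.sizeof(ctypes.c_void_p))
print("float   bytes:", ctypes.sizeof(ctypes.c_float))
print("double  bytes:", ctypes.sizeof(ctypes.c_double))
```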

Me thinks that it's time to do a bit of testing, to determine whether 
something other than D_FLOAT can be used in programs for better performance.
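Testing aside, moving off D_FLOAT would eventually mean converting those historical data files, and the layout is simple enough to decode by hand: four little-endian 16-bit words, sign in bit 15 of the first word, an 8-bit excess-128 exponent, and 55 fraction bits (most significant bits in the first word) with a hidden leading bit giving a mantissa in [0.5, 1.0). A sketch in Python (the function name is mine; reserved-operand handling is only stubbed):

```python
import struct

def dfloat_to_float(raw: bytes) -> float:
    """Decode an 8-byte VAX D_FLOAT into a Python float (sketch)."""
    w0, w1, w2, w3 = struct.unpack('<4H', raw)
    sign = -1.0 if (w0 & 0x8000) else 1.0
    exp = (w0 >> 7) & 0xFF
    # 55 fraction bits: 7 from word 0, then 16 each from words 1-3,
    # in decreasing significance.
    frac = ((w0 & 0x7F) << 48) | (w1 << 32) | (w2 << 16) | w3
    if exp == 0:
        # Exponent 0 with sign clear is true zero; with sign set it is
        # a reserved operand (would fault on a real VAX).
        return 0.0 if sign > 0 else float('nan')
    return sign * (0.5 + frac / (1 << 56)) * 2.0 ** (exp - 128)

# D_FLOAT 1.0 is exponent 129 with a zero fraction: bytes 80 40 00 ...
print(dfloat_to_float(bytes([0x80, 0x40, 0, 0, 0, 0, 0, 0])))  # 1.0
```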

-- 
David Froble                       Tel: 724-529-0450
Dave Froble Enterprises, Inc.      E-Mail: davef at tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA  15486


