[Info-vax] ADA and VMS (was Safer programming languages)
Stephen Hoffman
seaohveh at hoffmanlabs.invalid
Wed Nov 17 12:17:56 EST 2021
On 2021-11-16 14:47:33 +0000, Arne Vajhøj said:
> On 11/16/2021 7:15 AM, Bill Gunshannon wrote:
>>
>> Thus the reason we have so much bloatware today. If the program runs
>> badly, throw more cores at it.
Can't say I see much wrong with more cores and more resources, and with
big.LITTLE and other heterogeneous processor designs.
>> When I first started with programming we cared about programming and
>> efficiency.
Because you had constraints that required it: very limited and slow hardware.
>> We profiled our programs in order to find the bad parts and we fixed them.
That still happens. But again, you had limited and slow hardware, and
comparatively weak tooling, and you had to look at adds and multiplies.
Now, not so much.
>> It is sad that efficiency is no longer considered important to software
>> development today.
A full VAX-11/780 server configuration cost around a hundred thousand
dollars US, back in the mid 1980s. That's closer to a quarter-million
2021 dollars. Which buys a lot of computer.
A Mac mini M1 costs about a hundredth of that 1980s price, is massively
faster, and is far more capable. The whole Mac mini is dwarfed by the
VAX's LSI-11 console computer alone, and might well consume less power
than did the RX01 console floppy.
Do I need to optimize adds and multiplies on an M1? Probably not. And
if I do, I'm probably looking at using SIMD or such via the Accelerate
framework.
Do Accelerate and other frameworks add bloat? Absolutely. But hopefully
they avoid costs and hassles when porting across hardware, and the
costs of adjusting existing code when newer hardware becomes available.
https://developer.apple.com/documentation/accelerate
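For the curious, a minimal sketch of that shift in Swift, using the
vDSP vector-math API from Accelerate. The arrays and the sizes here are
arbitrary and illustrative, not from any particular app:

    import Accelerate

    // Arbitrary example data; a real workload would be feeding in
    // millions of measured or computed values.
    let a: [Float] = (0..<1_000_000).map { Float($0) }
    let b: [Float] = (0..<1_000_000).map { Float($0) * 0.5 }

    // c[i] = a[i] * b[i], dispatched to the SIMD hardware by vDSP;
    // no hand-counted add and multiply timings required.
    let c = vDSP.multiply(a, b)

    // A vectorized reduction across the result.
    let total = vDSP.sum(c)
    print(total)

The per-instruction tuning moves into the framework, and rides along
largely for free when the hardware changes underneath.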
>> And they call it engineering while we just called it programming.
Economics, mostly. Hardware has gotten cheaper faster than programmer
investments have gotten cheaper.
Folks in the 1980s grumbled about changes in computing economics and
tooling, too. Tooling that many didn't understand and didn't want to
learn about. There were massive squabbles back then about 2GL and 3GL;
about assembler and the shift to compiled languages. Compiled code
added bloat, as was widely claimed at the time.
I had more than a few discussions decades ago with a very experienced
and skilled developer who was then having to mentally shift away from
punched-card app designs and limits, and who was aghast at an app
having a five-thousand-longword array. The run for that add-heavy app
went from overnight to roughly ten minutes, given new hardware and new
software. I didn't bother to profile the add and multiply times for
that app.
> I would say that there is a lot of focus on efficiency today.
>
> But there are two types of such focus.
>
> There is the hacker/nerd crowd that focus on micro-benchmarks of all
> sorts of things. ADD A TO B GIVING C will fit fine in that.
That instruction-level focus was wicked popular back in the VAX era,
too. Various instruction-timing tables were published. Folks optimized
individual instructions. There were folks who implemented their own
instructions (e.g. XFC) to get microcode speeds, too. Optimization work
which mostly vaporized when DEC released a new VAX CPU design with
different timings. And with no XFC.
If an app's core is billions or trillions of adds, folks are interested
in the performance of adds and multiplies. Otherwise, not so much.
Even back in the VAX era, folks were throwing hardware at this problem;
back then, with the FP780 floating-point accelerator. The FP780 sped up
both floating point and integer multiply. Or corrupted floating point
and integer multiply, if the FP780 was recalcitrant.
For those interested in these sorts of benchmarking and performance
problems and the closely-related "instruction bumming" challenges, have
a look around for "code golf", among other discussions.
There's another performance variation lurking here too: algorithms now
ill-performing at the scale of the current data. I've seen these cases
arise in more than a few existing OpenVMS apps.
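A contrived Swift sketch of that failure mode; the names and the sizes
here are invented, but the shape will look familiar:

    // Invented data, scaled the way production data tends to scale.
    let existing = (0..<1_000_000).map { "ID\($0)" }
    let incoming = (500_000..<1_500_000).map { "ID\($0)" }

    // The old approach: each contains() is a linear scan of the
    // array, so the filter is O(n*m). Tolerable at 1980s record
    // counts, glacial at 2021 record counts.
    // let dupes = incoming.filter { existing.contains($0) }

    // Build a hash set once, then do constant-time lookups: O(n+m).
    let known = Set(existing)
    let dupes = incoming.filter { known.contains($0) }
    print(dupes.count)

Same adds and multiplies either way; the algorithm choice dwarfs them.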
> And then there is the engineering/professional crowd that focus on
> actual solution/system performance. But which measurements in this area
> prove to be relevant has changed over the last 30 years. Unless one is
> in a specialized area like HPC, ADD A TO B GIVING C is not relevant
> for solution/system performance today. They are looking for round trips
> between tiers, interpreted vs compiled, data models, etc.
Absent a program executing billions or trillions of adds or multiplies,
~nobody cares about the speed of an individual add or multiply across
differing programming languages. Other cost factors are involved.
And for apps that are executing billions or trillions of adds or
multiplies, we're now looking at migrating out of iterative code and
into SIMD / AVX / Accelerate where feasible, or migrating the math
entirely off the (traditional) CPU.
For many years now, other app activities have utterly overwhelmed the
speed of integer math for most apps. DEC learned about that when the
VAX market drifted out of scientific and HPC apps, and drifted into
commercial apps. For hardware performance, an Apple M1 Max will
reportedly run ~ten teraflops, give or take. And last-millennium HDD
storage I/O is glacial, as compared with NVMe and newer I/O. And in
practical 2021 terms, a VAX had ~no storage and ~no memory, even the
few VAX models with extended physical addressing.
And economically, ill-chosen algorithms, API compatibility,
under-maintained and un-refactored existing app code, abstraction
layers, and the rest of "bloat" will continue, as will the hardware
upgrades. Because the stuff involved still has to sell at a profit, or
management has to cover the costs of the app work from profits and
salaries. Trade-offs can and do and will shift, too. As will
gatekeeping.
--
Pure Personal Opinion | HoffmanLabs LLC