[Info-vax] Current VMS engineering quality, was: Re: What's VMS up to these days?
glen herrmannsfeldt
gah at ugcs.caltech.edu
Tue Mar 6 15:23:37 EST 2012
David Froble <davef at tsoft-inc.com> wrote:
(snip)
> Back in the 1970s, 1980s, and a bit into the 1990s computer
> hardware was not cheap. Did the actual hardware really cost
> so much?
That is about when the transition occurred. Consider that core memory
required three wires to be threaded through a small ferrite ring for
each bit of main memory, often by hand. In the 1950's, hardware
really was expensive, and the (much simpler than today) software
was thrown in for free as an incentive to buy.
In the 1960's, hardware (per unit computation) was getting
cheaper fairly fast (though still expensive), while software
was getting more complicated and more expensive.
There is a story about an IBM system, I believe the System/3,
that came with either 32K or 64K of memory, with the lease
price depending on the memory size. It was cheaper to build
only one model, with a switch. When you ordered 64K, they sent
someone out to turn the switch to 64K. (All machines were
leased and under maintenance contract.)
> My take on this is that a lot of that money went toward
> significant software and support expenditures.
The 1970's to 1990's were about the time of transition.
It may have taken the companies some time to adjust to it.
Until people started building clone processors, there was no
worry about people stealing software.
> With significant funding, it's possible to have software
> R&D, and a good support organization. That's what DEC had, and
> probably IBM also. Both companies developed multiple operating
> systems, compilers, and other software.
OS/360 is well known (read "The Mythical Man-Month") for being
part of that transition. It took longer to write and debug than
expected, as the previous rules didn't apply anymore.
Among other reasons, the delay in OS/360 allowed DOS/360 to
become more popular than IBM had expected. (They had planned
for everyone to use OS/360, at least eventually.)
> Why did Unix become so popular?
> Because the "me too" computer manufacturers didn't have
> to develop it. If they had to develop their own software
> portfolio, they'd never have been able to afford it.
Well, first, hardware had to be priced separately from software.
As far as I remember, for some time the VAX was a popular host
for Unix. If the VAX price had included VMS, that would have
discouraged its use for Unix.
> Why did Microsoft take over the software world? Because all the
> PCs and such needed software. The companies building the
> computers could never have afforded to develop their own software.
Well, the microcomputers were about the beginning of building
processors without also supplying software for them. The DEC
processors of the time at least had DEC software as an option.
Intel supplied some software for the 8080, but not much.
> DEC software development listened to their customers, and
> delivered what was asked for, but standards were maintained
> at a high level. Similar to their hardware. That's why I'm
> running VMS V7.2 on a VAXstation 4000 model 90A. Anybody
> still running DOS on early 1990s hardware?
As hardware prices came down, there was more possibility for
competition. Also, the transition from writing system software
in assembler to HLLs allowed for cheaper and faster development
by competitors.
> The big mistake was DEC thinking that they could carve out a
> large customer base, keep it, and milk it for large sums of money.
Yes. But as long as big companies, government agencies, and
universities (funded by government grants) would pay it, they
could keep doing it.
> Remember the BI bus? What DEC should have done was license their
> software to anyone who wanted to use it. If they had done
> that, it would be in the best interest of DEC's competitors to
> ensure DEC remained viable. But I digress ....
> I just don't think that any computer company will ever again be
> able to afford the sizable investment for software and support
> that DEC (and probably IBM) spent in the past. And even if they
> could, they won't in this day of "only the next quarter matters"
> management.
Well, S/360 (the hardware) was pretty much the beginning of the
idea of an "architecture," allowing one to upgrade the hardware
while keeping the software pretty much constant. Prior to that,
each level of hardware used an incompatible instruction set, and
so needed different software. Even so, I don't believe that IBM
expected that 50 years later they would still be selling processors
that would run that software. (z/Architecture machines will still
run the OS/360 compilers and their compiled code.) Processors have
gotten faster and memories bigger, but the S/360 instructions are
still there.
> Which is why I wrote that it's not a fair comparison.
> There most likely will never again in private enterprise be
> the funding to hire, train, and keep such a large and excellent
> group of software and support people.
Well, the 1950's were the beginning of high-level languages,
allowing software to be machine independent. In the 1960's,
system software, including much of OS/360, was still written
in assembler. The transition to writing system software in HLLs
allowed it to be ported, relatively easily, between processors.
It also allows programmers to be much more productive, and it
allows such development teams to be independent of hardware
producers. There is much incentive to use existing system
software and not try to reinvent the wheel. Note, for example,
the Linux/390 port that IBM is investing in.
-- glen