[Info-vax] System Programming versus Application Programming (was: Re: i4 Possible?)
Stephen Hoffman
seaohveh at hoffmanlabs.invalid
Sat Feb 8 10:56:05 EST 2014
On 2014-02-08 04:33:23 +0000, David Froble said:
> You'd want to assume (damn, I did it again) that the new CPU would not
> have omitted anything in the old CPU.
You're thinking more about the application-level environment presented
by the operating system, and not at the level of the operating system
and those gnarly details.
The operating system has to deal with different device hardware and
bridges and different details in the processor, as well as
comparatively "silly" stuff such as reading and processing the ACPI
configuration data.
These are the sorts of details that the applications and even a fair
amount of user-written inner-mode code just don't have to deal with.
The vast majority of user-written code upgraded from the swizzle-space
Alpha systems to the flat-address-space Alpha systems without noticing
that anything had happened, though certainly some of the user and
system device drivers did need to be reworked. That's a more visible
case where low-level I/O details changed across the various Alpha
families.
<http://labs.hoffmanlabs.com/node/543>
VAX system hardware interfaces could be substantially and surprisingly
different across families, as well. Itanium systems also have
differences across platforms, even with the same processors.
"Simple" stuff, like booting OpenVMS from USB optical that was the
default boot path versus booting from IDE/ATAPI optical commonly found
on older Itanium and most Alpha, for instance, involved shuffling some
code. Or unexpected changes to the particular ACPI configuration data
that you're getting back, for that matter.
Certain I/O hardware configurations necessitated changes to the Itanium
bootstrap to allow for memory-based system booting, too. (That's
basically transparent to all of the higher-level application code and
even to most of the operating system, but that operating system
bootstrap code still had to be written and tested and supported.)
> This is rather reasonable, otherwise many things could be broken.
Most applications are coded for the OS-provided APIs
<http://en.wikipedia.org/wiki/Application_programming_interface> and
ABIs <http://en.wikipedia.org/wiki/Application_binary_interface>. The
operating system is inherently coded to the hardware ABIs, and usually
with some effort to render many of the lower-level differences among
the supported systems and servers comparatively inconsequential or even
irrelevant to the applications. In some cases that effort exists to
work around hardware bugs; those who have been around VMS long enough
may remember the VAX-11/750 firmware update that loaded into the
control store at boot, for instance. Or the VAX VVIEF or the Alpha
instruction subset emulation support. Or the so-called subset VAX
systems; the MicroVAX and later systems. Systems and families can
and do differ, and the operating system is doing a pretty good job if
(when?) the applications don't notice that.
> If that is indeed the case, then Poulson would be a superset of the
> prior CPUs. One could then figure that VMS would continue to use what
> it expects, and ignore the new features. Unless Intel "broke"
> something.
As was discussed upthread, the processor register file sizes differ,
and support for that change was apparently added to OpenVMS with
UPDATE V6.0. Without that change, VMS reportedly won't boot (again,
based on the comments upthread).
> Now, why HP didn't run their test suites, and if successful declare
> Poulson "supported" but without using new features, I don't have a
> clue. You'd think that it would allow them to sell more hardware
> (assuming they want to do so) and would continue to generate more
> support revenues (assuming they wanted to continue collecting support
> revenues).
I'd have to assume that the costs of doing that work and that
qualification were perceived as outweighing what they thought they'd
make in return. In short, revenues and economics: the distinction
between what's technically possible and what's commercially viable.
Given that adding full support likely runs into a VMS SMP limit (AFAIK
currently at 4s / 32c / 64t with Poulson), the effort involved in
extending support to the bigger Tukwila and Poulson boxes is much
higher than extending support to the four-socket and smaller boxes.
Tukwila hits that same limit with eight-socket (8s / 32c / 64t)
configurations; that is, in slightly larger boxes. The question then
becomes whether to extend full support to all boxes (probably a fair
chunk of work in the kernel, as well as "chasing" all of the key
applications over to the "newer" non-quadword-bitmask APIs for SMP
details) or just to the four-socket and smaller boxes, and then how
many of each you think you're going to sell, how much it costs to
build those boxes, and the rest of the usual overhead.
> Makes you think that they have very little to no confidence in the
> current support people, or maybe the testing stuff was lost, or maybe
> the current people cannot figure out how to run the testing stuff, or
> ....
This is engineering and testing and — most importantly — revenues.
Having been through more than a few of these, there are all sorts of
gnarly little details that can and variously do change — some with a
new processor, and some with the new platforms that are often involved.
Or, as is increasingly the case given the SMP limits inherent in the
current VMS data structures, with the sheer numbers of cores.
> Can anyone still paying HP for support tell me why they are doing so ???
The usual reason: to have somebody to call and own the problem when
something goes wrong with your current box.
++ somewhat more general ++
For a high-level overview comparing Tukwila to Poulson — this at the
processor level, rather than at the board or system level — see
<http://www.realworldtech.com/poulson/>
That entirely hypothetical x86-64 port discussion has some parallels
here, too. If somebody is actually looking at that port for more than
just giggles and grins, they're only going to be able to manage
testing and supporting a small subset of the x86-64 systems available
(probably not the x86-64 boxes that most hobbyists have), they're
going to be adding and trading off support for x86-64 instruction set
extensions with the various compilers and compiler language standards,
and they're going to be staring at the aforementioned SMP limit with
OpenVMS, given that the future availability of 16c processors
(2s / 32c / 64t) certainly seems a reasonable expectation.
Now a 2s / 32c / 64t box is a pretty big box in terms of classic
OpenVMS environments and applications (that scale would have required
a big AlphaServer GS1280 box not that long ago), and that scale then
makes me wonder whether there'll be other synchronization issues
encountered.
Extending the current 32c / 64t SMP limit would likely uncover some
other performance-limiting bottleneck somewhere, whether in the
applications or in OpenVMS itself — or more likely in both.
Various senior HP folks have been marketing mission-critical x86-64
servers, but AFAIK there aren't many of those boxes available (yet?)
from HP beyond the ProLiant DL980, and those boxes are reportedly
targeting Microsoft Windows and Linux environments based on the
marketing. My transcription from one of those videos
<https://groups.google.com/d/msg/comp.os.vms/hRJCGeSLwac/s3mWEC1CpAEJ>.
Some related Project Odyssey / DragonHawk / HydraLynx marketing:
<http://www8.hp.com/us/en/hp-news/press-release.html?id=1147777>
(Haven't seen very much HP PR on Project Odyssey lately, though — not
sure what that means.)
Related threads here (to save folks the effort of re-entering all the
kabuki):
<https://groups.google.com/d/msg/comp.os.vms/G7VbPX4XJzM/HCgSTyCZ6XkJ>
--
Pure Personal Opinion | HoffmanLabs LLC