[Info-vax] 1 year.
terry+googleblog at tmk.com
Thu Aug 6 19:37:32 EDT 2015
On Wednesday, July 29, 2015 at 8:13:16 PM UTC-4, IanD wrote:
> The business has no plans of moving to Itanium. I doubt they can wait for x86 either going by the recent chatter unless there was a financial incentive to do so
>
> Once this VMS platform closes, VMS will have exited this organisation and I seriously doubt it will get a look in again
I looked at migrating to Itanium (as a business, not a hobbyist) some years ago, and brought in some RX2620s to test with. Now, that may not have been the "best" 2RU Itanium to use for an evaluation, but they were (relatively) inexpensive.
I found them to be boat anchors, space heaters, and generally unpleasant to deal with at the hardware / EFI console level.
A single RX2620 consumed 1/3 of the power used by a cabinet full of Cisco routers / switches, several x86 servers of various vintages, and a small amount of other stuff. It was also impossible to rack / unrack normally, because it was longer than the cabinet-to-cabinet aisle spacing in the facility (mostly due to the Itanium needing a deeper cabinet than anything else in that cabinet). By comparison, a Dell PowerEdge R710 (of similar vintage) with 12x the memory, 6x the disk space (at 15K RPM, no less) with controller-based RAID / battery backup / cache, and 8 more physical cores consumes about 220W during operation, with peaks of 240W.
We had repeated problems with the iLO (iLO2?) becoming unreachable and then coming back, and the cards were swapped out several times until we found out "they just do that". A firmware fix for that may be available, but only if the card is installed in an x86 box to do the update. There is (was?) no fix available on Itanium for the big SSL vulnerability of some time ago (as opposed to the more recent ones), though (again) if I moved the card to an x86 system, a fix was downloadable. Even with a support contract from HP, all we ever got (after multiple tries) was "Huh?".
The EFI console, and the combination of different command structures for different pieces of the system (iLO vs. MP vs. ...), make it seem like someone went out of their way to do things in the most obscure way possible. And I'm used to UEFI on x86.
This may all have been fixed in newer Itanium systems (somehow, knowing HP, I doubt it), but why would someone purchase that hardware, pay to migrate licenses, etc., for an architecture that is essentially dead, as a stopgap on the way to x86? Or, for that matter, why move to one dead-end architecture (Itanium) from another (Alpha) if the goal is to move to "something else" once support costs get out of hand?
And we've heard that most of "the heavy lifting" to make porting user-side code (applications, compilers / tools, etc.) easier was done going from VAX to Alpha, and that Alpha and Itanium share the same source code pool, while VAX was left doing its own thing. If moving from Alpha -> x86 is the same amount of work as Alpha -> Itanium -> x86, why briefly pass through Itanium along the way? A customer would probably be better served by obtaining the latest compilers / tools, making sure they have the source to all of their applications, verifying that everything compiles with those latest compilers / tools, and confirming that the resulting executables work properly.

For extra credit, they can check that the needed compilers / tools exist on Itanium (even though they won't migrate to it): it is probably a reasonable generalization that something which didn't make it to Itanium won't be available on x86 either. I'm not sure what percentage of the things that exist for Itanium will make it to x86, but I'd expect it to be pretty large. The point being: if the application depends on Product X and X isn't available on Itanium, it might make more sense either to re-engineer the application to no longer depend on X, or to migrate the application to some other platform. The former can be done on Alpha as part of the "make sure source is available and can be built" exercise - no need to go to Itanium and no need to wait for VMS on x86 to ship.
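The "make sure source is available" part of that audit can be sketched mechanically. The following is a minimal, illustrative sketch only - it is not VMS-specific, and the function name, the one-source-file-per-executable layout, and the extension list are all my assumptions, not anything from VMS or HP tooling - but it shows the idea of cross-checking a list of deployed images against the sources you can actually find:

```python
import os

# Assumed set of source-file extensions; adjust for the languages in use.
SOURCE_EXTS = {".c", ".cob", ".for", ".bas", ".pas"}

def audit_sources(exe_names, src_dir):
    """Return the executables for which no source file was found.

    exe_names: iterable of executable base names (e.g. "billing")
    src_dir:   directory to scan for source files

    Assumes (hypothetically) one source file per deliverable, named
    after the executable.  Comparison is case-insensitive.
    """
    have_source = set()
    for fname in os.listdir(src_dir):
        base, ext = os.path.splitext(fname)
        if ext.lower() in SOURCE_EXTS:
            have_source.add(base.lower())
    return sorted(e for e in exe_names if e.lower() not in have_source)
```

Anything this reports as missing needs its source recovered (or the application re-engineered or replaced) before any port - and that work can be done on Alpha today, regardless of whether the eventual target is Itanium or x86.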
As far as "no new releases for legacy architectures", as long as there is cluster interoperability at the continuing (as opposed to migration) level, that seems to be fine.
However, it is certainly possible that some issues may come up during development of the x86 port that would be easier to fix on "the other end". That could be either a remedial kit (which it seems would need to be provided by HP, no matter who develops it) or a new software release.
And what about some of the hypothetical whiz-bang new features that might appear on VMS x86? For example, a new filesystem that takes advantage of larger-capacity disks, SSDs, etc. VAX got at least partial ODS-5 support. With a "no new versions" policy, it seems unlikely that Alpha would have any knowledge of the hypothetical new filesystem. So customers would need to use lowest-common-denominator settings in order to have interoperability. How many customers would then convert to the new filesystem once the old Alpha / VAX systems have left the cluster, with the necessary re-qualification of applications, etc.? What about the case of a lone Alpha remaining in the cluster "forever" in order to handle some low-usage legacy application? Will that mean that some new features can't be used on the x86 nodes?
More information about the Info-vax mailing list