[Info-vax] Integrity firmware version vs VMS version docs
Stephen Hoffman
seaohveh at hoffmanlabs.invalid
Thu Jul 11 18:38:25 EDT 2019
On 2019-07-11 13:17:24 +0000, Robert A. Brooks said:
> On 7/11/2019 8:17 AM, Simon Clubley wrote:
>> On 2019-07-10, Robert A. Brooks <FIRST.LAST at vmssoftware.com> wrote:
>>>
>>> *that guess is likely on the low side, as I used to develop firmware at
>>> HP, after working in VMS Engineering, and did a lot of firmware
>>> upgrading. That number includes upgrading to firmware that was quite
>>> buggy, but didn't brick, and allowed a subsequent upgrade.
>>>
>>
>> Serious question: What happens on Itanium if, on a rare occasion, a
>> firmware update, or a power failure at exactly the wrong point, _does_
>> well and truly brick the machine ?
>
> I don't know, because I never saw a bricked system, nor do I remember
> any of my firmware-writing colleagues experiencing that problem.
Without the benefit of a failsafe loader, the design was a trade-off
against depot repair. I've seen one or maybe two Integrity servers
bricked by failed firmware or possibly by a contemporaneous hardware
failure, but it's very rare. That's out of a whole lot of firmware
upgrades, too. Per then-HP, "It should be noted that problems due to
firmware are relatively rare, and you should look for other problem
causes first." The bricked server I was dealing with was replaced, as
that was more expeditious than dealing with depot repair. Swap the
boards and the storage, recreate the boot aliases (sketched below),
and off we went. I don't know whether depot repair is still typical
here, but I would not be surprised if it is. The firmware for the
Integrity rx3600 series has been around for quite a while and has been
entirely unchanged, too.
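For anyone hitting that recreate-the-aliases step after a board or
disk swap, here's roughly what it looks like on an Integrity box
running OpenVMS I64. Treat it as a sketch: the fs0: device, the disk
name, and the description string are placeholders, and the
BOOT_OPTIONS.COM menu choices vary somewhat by OpenVMS version.

    $ @SYS$MANAGER:BOOT_OPTIONS.COM   ! menu-driven; add, validate, or delete EFI boot entries

Or, working from the EFI shell at the console, something along these
lines:

    Shell> fs0:
    fs0:\> \efi\vms\vms_bcfg boot add 1 fs0:\efi\vms\vms_loader.efi "OpenVMS on DKA0"

BOOT_OPTIONS.COM is usually the easier of the two, since it takes
OpenVMS device names and sorts out the EFI path details itself.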
>> Is there anything like a JTAG port which a HPE engineer can use to
>> install a working version of the firmware ?
>
> For certain platforms, yes.
AFAIK, all of the Integrity boxes had manufacturing access of some sort.
The bigger issue lurking here has little to do with firmware upgrades
and the (very low) risk of bricking, and rather more to do with a
critical dependency on a server design that was announced on September
7, 2006 (roughly thirteen years ago) and is well past its end-of-service
date of January 31, 2011 (roughly eight years ago).
Servers this old tend to fail for reasons other than failed firmware
upgrades.
I'd also be concerned that any OpenVMS server that's been up for
months or years might not reboot correctly, due to previously-untested
startup modifications or some other unrecognized issue.
Uptime is, after all, a measure of the interval since system and
security patches were last applied, and since the startups were last tested.
And a server that's running OpenVMS without recent patches is already
running with known vulnerabilities.
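On the uptime-versus-patches point, a few stock DCL commands give a
quick read on both; the output formats vary by version, and this is
just how I'd eyeball it:

    $ SHOW SYSTEM /NOPROCESS    ! the header line includes the current uptime
    $ PRODUCT SHOW HISTORY      ! PCSI operation log; when kits and patches last went on
    $ PRODUCT SHOW PRODUCT      ! what's installed now, patch kits included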
A redundant server or a cluster is the typical approach to keeping the
apps (the apps, and not necessarily a specific server) available, too.
And HPE OpenVMS I64 V8.4 was released over nine years ago, and it
falls off all new-patch support in less than eighteen months.
Firmware-assisted server bricking... is probably not the biggest
consideration lurking here. What's the path to VSI OpenVMS and new
patches, and to an Integrity i4- or i6-class server (or a pair of i4
or i6 servers and maybe an external storage shelf), or are there plans
to lay in stocks of spare servers and parts while awaiting the
completion of the VSI OpenVMS x86-64 port and the associated local
application port?
--
Pure Personal Opinion | HoffmanLabs LLC