[Info-vax] DECnet Phase IV broken after VSI update
John Wallace
johnwallace4 at yahoo.co.uk
Sun Nov 7 04:59:36 EST 2021
On 06/11/2021 19:32, Dave Froble wrote:
> On 11/6/2021 12:28 PM, Stephen Hoffman wrote:
>> On 2021-11-06 13:52:03 +0000, Robert A. Brooks said:
>>
>>> On 11/5/2021 7:52 PM, Rich Jordan wrote:
>>>
>>>> VSI found it, though there's still a mystery of sorts attached.
>>>>
>>>> The original system also has EIA-0 (and 1) NICs. The EIA-0 is set
>>>> to 1Gbs full duplex, auto negotiation disabled, and the
>>>> corresponding Cisco port is set the same.
>>>
>>>> This is because they've had the system long enough that they
>>>> experienced the bad behavior between VMS and switches doing auto
>>>> negotiations in the dim past.
>>>
>>> Unless they are referring back to the mid-90's when the early PCI
>>> Ethernet adapters on Alphas were not-so-great, that info is a bit stale.
>>>
>>> VMS Engineering (specifically, the guy who's been writing our
>>> Ethernet drivers for over 30 years) has stated that auto-negotiate
>>> should always be used.
>>>
>>> If it doesn't work, he'll fix it, or determine that the switch is
>>> non-conforming to the standard.
>>
>>
>> That's been the emerging policy since ~Y2K or so—though with some
>> wrinkles around Alpha and Itanium NICs. GbE controllers and late-era
>> Fast Ethernet that are detected with auto-negotiate disabled should
>> generate an informational message at OpenVMS boot, in the logs, when
>> viewed within LANCP, and within the documentation. For important
>> network switch settings preferences, I'd be inclined to post driver
>> status information to end-users via SHOW DEVICE /FULL, and AMDS/AM, too.
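For anyone following along, checking what the driver actually negotiated is a LANCP exercise. A sketch, assuming an EIA0 device as in Rich's case (qualifier spellings vary a bit by OpenVMS version, so check HELP SET DEVICE inside LANCP first):

```dcl
$ MCR LANCP
LANCP> SHOW DEVICE EIA0/CHARACTERISTICS   ! reports current speed, duplex, auto-negotiation state
LANCP> SET DEVICE EIA0/AUTONEGOTIATE      ! volatile database: re-enable auto-negotiation now
LANCP> DEFINE DEVICE EIA0/AUTONEGOTIATE   ! permanent database: make it survive the next boot
LANCP> EXIT
```

SET changes the running (volatile) settings; DEFINE records them permanently, which is where a forced-speed setting from years ago tends to hide.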
>>
>> The distribution of this information—and of other analogous
>> recommendations for many other API choices available—has been
>> inconsistent, at best. An API with choices needs to have published
>> opinions, and ideally has diagnostics for when the existing settings
>> drift out of current preferences. If y'all want us pesky customers to
>> move in certain shorter-term or longer-term directions, y'all need to
>> tell us that. WTFM, minimally. Displaying diagnostics is preferred.
>>
>> If y'all as developers don't have an opinion for an API or settings
>> choice, there shouldn't be an API or settings choice. And preferences
>> can shift over time, which means shifting our usages.
>>
>> Unfortunately for this and similar cases where the end-user really
>> intends to have a bogus setting—this because there's a busted switch
>> port or busted switch firmware or whatever—OpenVMS also lacks a means
>> to provide overt alert messaging and to then suppress the overt
>> displays over time, moving the displays to status-related cases. Such
>> as into LANCP, here. That'll probably require some updates to the
>> existing 1970s- and 1980s-era diagnostics and status-reporting
>> infrastructure.
>>
>>
>
> I've got to second this concept. An example:
>
> With one exception, every VMS system I set up had one ethernet port.
> The exception is my AlphaServer 800, which had a 4-port ethernet card
> when I got it. After having problems, I pulled out the 4-port card and
> installed a DE500-BA single-port card. Things worked, and I didn't
> look further.
>
> One of your fine support people mentioned to me:
>
> By default, DECnet Phase IV installation and configuration will enable
> DECnet protocol on all available interfaces on the system. Once
> configured, the system administrator would want to go into NCP and purge
> all lines and circuits that are not needed from the database.
>
> I never knew that.
>
> When setting up DECnet, perhaps in NETCONFIG, or elsewhere, something
> could be mentioned about that issue.
>
> Just one example of how to make VMS more user friendly.
>
> And yes, I'm aware, the list of such "hints" could be quite extensive.
>
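The cleanup your support person describes is an NCP exercise. A sketch, assuming a surplus circuit named EWA-1 on one of those extra ports (substitute whatever names SHOW reports on your system):

```dcl
$ MCR NCP
NCP> SHOW KNOWN CIRCUITS          ! list the circuits NETCONFIG enabled
NCP> SHOW KNOWN LINES             ! and the corresponding lines
NCP> PURGE CIRCUIT EWA-1 ALL      ! remove the unwanted circuit from the permanent database
NCP> PURGE LINE EWA-1 ALL         ! and its line
NCP> EXIT
```

Note that PURGE touches only the permanent database; CLEAR does the equivalent for the running (volatile) one, if you want the change to take effect without restarting DECnet.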
For some relatively brief period during the life of multi-port adapters
on PCI, and not just multiport network adapters, some PCI cards used PCI
to PCI bridges to provide multiple adapters on one card.
That introduced a whole load of fun for the affected adapters, as the
rules for configuring stuff behind a PCI bridge weren't particularly
clear at the time.
Pulling out your four port adapter and replacing it with a single port
adapter, in the AlphaServer 800 era, *might* have unknowingly fixed that
problem too.
Also, some systems used a PCI-PCI bridge on the *motherboard* to provide
an increased number of PCI slots. This is back in the days of e.g. Miata
and MiataGL and similar.
Back then, some considerable time ago, PCI-PCI bridges in general were a
bit of a challenge.
Nowadays the hypervisor presumably solves all this device support
weirdness, leaving just the DECnet bits to be sorted in your picture.