[Info-vax] Kernel Transplantation (was: Re: New CEO of VMS Software)

Stephen Hoffman seaohveh at hoffmanlabs.invalid
Tue Jan 9 14:32:48 EST 2024


On 2024-01-06 20:08:02 +0000, Lawrence D'Oliveiro said:

> On Sat, 6 Jan 2024 13:36:59 -0500, Stephen Hoffman wrote:
> 
>> On 2024-01-06 02:48:42 +0000, Lawrence D'Oliveiro said:
>> 
>>> That can be blamed on the limitations of Mach. People still seem to 
>>> think microkernels are somehow a good idea, but they really don’t help 
>>> much, do they?
>> 
>> With current hardware including cores and performance and with newer 
>> message-passing designs such as OKL4 and ilk, some things are looking 
>> rather better.
> 
> Hope springs eternal in the microkernel aficionado’s breast. ;)
> 
>>>> As another example, it was not possible to emulate VMS’ strong 
>>>> isolation of kernel resource usage by different users.
>>> 
>>> Would the Linux cgroups functionality (as commonly used in the various 
>>> container schemes) help with this?
>> 
>> No.
>> 
>> Designers of VAX/VMS chose a memory management model closer to that of 
>> Multics, where much of the rest of hardware and software in the 
>> industry diverged from that lotsa-rings memory management design.
> 
> Seems you are confusing two different things here. I am aware of the 
> user/supervisor/exec/kernel privilege-level business, but you did say 
> “resource usage by different *users*”. cgroups are indeed designed to 
> manage that.

That's not the implementation detail I'm referring to.
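
For contrast, cgroups do roughly this sort of thing. A minimal cgroup v2 
sketch, with made-up group names, PIDs, and limits, and assuming nothing 
else (systemd included) is already managing the hierarchy:

    # enable the cpu and memory controllers for child groups
    echo "+cpu +memory" > /sys/fs/cgroup/cgroup.subtree_control

    # create a group for one user and cap its aggregate memory and CPU
    mkdir /sys/fs/cgroup/user-1001
    echo 2G > /sys/fs/cgroup/user-1001/memory.max
    echo "50000 100000" > /sys/fs/cgroup/user-1001/cpu.max   # 50% of one CPU

    # move that user's session leader into the group; future children inherit it
    echo $SESSION_PID > /sys/fs/cgroup/user-1001/cgroup.procs

Aggregate consumption caps on a group of processes, in other words. 
Useful, but a different mechanism from the one I'm describing.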

> Remember that my proposal for adopting the Linux kernel would get rid 
> of every part of VMS that currently runs at higher than user mode. It’s 
> only their own user-mode code that customers would care about.

That's a massive effort toward ring compression, if not a wholesale 
rewrite of userland, for negligible savings in staff, no savings in the 
initial schedule, and maybe a faster next port of OpenVMS to AArch64 or 
RISC-V or whatever in a decade or two.

And given that LLVM and compiler support have been a gating factor this 
time 'round, that hypothetical future port is already in much better 
shape for any future platform LLVM supports.

> 
>> Containers are arguably fundamentally about product-licensing arbitrage, too.
> 
> I don’t use them that way. I use them as a cheap way to run up multiple 
> test installations of things I am working on, instead of resorting to 
> full VMs. Typically it only takes a few gigabytes to create a new 
> userland for a container. E.g. on this machine I am using now:
> 
>     root@theon:~ # du -ks /var/lib/lxc/*/rootfs/
>     1700060 /var/lib/lxc/debian10/rootfs/
>     7654028 /var/lib/lxc/debian11/rootfs/
>     876568  /var/lib/lxc/debian12/rootfs/

Good on you.

For the folks with massive bills for third-party dependencies, 
containers are interesting for an entirely different reason.

>> Microkernels are in use all over the place nowadays, seL4-, L4-, and 
>> OKL4-derived.
> 
> Really?? Can you name some deployments? How would performance compare 
> with Linux? Because, let’s face it, Linux is the standard for 
> high-performance computing.

Offhand, only a billion or so devices in widespread use running L4 at 
one vendor alone. Probably more, given usage at other vendors.

HPC is a different market. Not where OpenVMS was or is.

VAX/VMS focused on commercial computing from the VAX era onward, well 
before Y2K. UNIX and then Linux took most of technical computing from 
VAX/VMS as VAX and VMS prices increased and VAX performance fell behind.

Yes, VAX/VMS was sorta kinda in high-performance computing decades ago 
with VAX, but—outside of existing installs—not in decades.

As for OpenVMS in high-performance computing, OpenVMS hasn't been 
particularly supported on the vendor's own top-end hardware platforms: 
not on the AlphaServer SC series, nor on the upper-end Superdome and 
Superdome 2 models. Superdome support was as a guest.

>> For a small development team—and VSI is tiny—kernel transplantation 
>> doesn't gain much from a technical basis, once the platform port is 
>> completed. It might help with future ports, sure.
> 
> Which was my point all along: if they’d done this for the AMD64 port 
> from the beginning, they would have shaved *years* off the development 
> time. And likely ended up with a somewhat larger (remaining) customer 
> base than they have now.

Not unless you mean way back at the juncture that mis-branched to 
Itanium. Sledgehammer (and Yamhill) hadn't been officially announced 
then.

A kernel transplant is larger than a port the first time around. 
Much larger. After that, the effort is smaller when the host kernel 
already supports the target platform. And adding platforms is a form of 
dilution for an operating system vendor, and for third-party 
vendors. More work. More testing.

And again, what you're interested in here has been available for many 
years via Sector 7. Sector 7 provides an incremental off-ramp from 
OpenVMS for those who want or need one. But that's probably not a good 
long-term model for any OS vendor.


-- 
Pure Personal Opinion | HoffmanLabs LLC 



