[Info-vax] CRTL and RMS vs SSIO

Stephen Hoffman seaohveh at hoffmanlabs.invalid
Tue Oct 12 16:54:12 EDT 2021


On 2021-10-12 00:14:15 +0000, Lawrence D’Oliveiro said:

> On Monday, October 11, 2021 at 1:30:36 AM UTC+13, chris wrote:
>> 
>> On 10/10/21 01:12, Lawrence D’Oliveiro wrote:
>>> The key point is that VMS shied away from a radical idea that Unix 
>>> embraced: that the filesystem itself should abstract away the need for 
>>> blocking and deblocking, and offer up a file as just a stream of n 
>>> bytes, with no requirement on n being a multiple of any integer greater 
>>> than 1.
>>> 
>> Yes, and at the lowest level, it's completely transparent to data and 
>> its format. Filesystems, structure and format should be layered on top 
>> of that. Obvious really...
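
A concrete (if trivial) illustration of that byte-stream model, purely 
as a sketch: a POSIX reader can pull any number of bytes per call, with 
no record or block boundaries imposed by the file system. The file name 
and the odd transfer size below are arbitrary.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Under the Unix model a file is just n bytes; the caller picks
           whatever transfer size it likes, record- and block-free. */
        int fd = open("example.dat", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        char buf[37];               /* deliberately not a power of two */
        ssize_t got;
        while ((got = read(fd, buf, sizeof buf)) > 0)
            printf("read %zd bytes\n", got);

        close(fd);
        return 0;
    }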

Swapping in a kernel file system API and a FUSE layer is a smaller and 
more focused effort than would be re-hosting OpenVMS onto a different 
kernel.
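
For those unfamiliar with what a FUSE layer looks like, here is a 
minimal, hypothetical libfuse3-style skeleton exposing a single 
read-only file; the names and the one-file layout are invented for 
illustration, and any real on-disk-structure handler would be far more 
involved.

    /* Assumed build: gcc -Wall toyfs.c `pkg-config fuse3 --cflags --libs` */
    #define FUSE_USE_VERSION 31
    #include <fuse.h>
    #include <errno.h>
    #include <string.h>
    #include <sys/stat.h>

    static const char *toy_str  = "Hello from a toy FUSE volume\n";
    static const char *toy_path = "/hello.txt";

    /* Report a root directory containing one read-only file. */
    static int toy_getattr(const char *path, struct stat *st,
                           struct fuse_file_info *fi)
    {
        (void) fi;
        memset(st, 0, sizeof *st);
        if (strcmp(path, "/") == 0) {
            st->st_mode = S_IFDIR | 0755;
            st->st_nlink = 2;
            return 0;
        }
        if (strcmp(path, toy_path) == 0) {
            st->st_mode = S_IFREG | 0444;
            st->st_nlink = 1;
            st->st_size = strlen(toy_str);
            return 0;
        }
        return -ENOENT;
    }

    /* Hand back an arbitrary byte range: no records, no block math. */
    static int toy_read(const char *path, char *buf, size_t size,
                        off_t off, struct fuse_file_info *fi)
    {
        (void) fi;
        if (strcmp(path, toy_path) != 0)
            return -ENOENT;
        size_t len = strlen(toy_str);
        if ((size_t) off >= len)
            return 0;
        if (off + size > len)
            size = len - off;
        memcpy(buf, toy_str + off, size);
        return (int) size;
    }

    /* readdir, open, etc. are omitted; this is a sketch, not a product. */
    static const struct fuse_operations toy_ops = {
        .getattr = toy_getattr,
        .read    = toy_read,
    };

    int main(int argc, char *argv[])
    {
        return fuse_main(argc, argv, &toy_ops, NULL);
    }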

> Which brings us to a point I’ve made before: the Linux kernel already 
> runs on every architecture that VMS users and developers might care 
> about, now and into the future. It already has a range of drivers for 
> common (and not-so-common) hardware, including that in enterprise use. 
> It includes a robust, high-performance TCP/IP stack.

And Linux has longstanding features you are decidedly unfamiliar with, too.

> Wouldn’t it be easier to just keep the parts of VMS that users and 
> developers need, and implement them as a compatibility layer on top of 
> a Linux kernel? And just scrap the rest. Wouldn’t that save a lot of 
> effort?

In aggregate, likely no.

Your suggestion is akin to what Sector7 provides for their customers: 
an ever-growing but still partial implementation of the OpenVMS APIs.

A compatibility layer will involve substantial programming effort 
elsewhere within the platform, outside the now-swapped kernel.
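
To give a feel for why: even a trivial slice of such a layer (say, 
translating a VMS file specification into a POSIX path before calling 
open()) already drags in decisions about devices, directory syntax, 
version numbers, and case folding, before logical names, RMS record 
attributes, and ACLs even enter the picture. A deliberately toy sketch, 
with every name here invented for illustration:

    #include <ctype.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>

    /* Map e.g. "DKA0:[SMITH.SRC]HELLO.C;3" to
       "/vmsroot/dka0/smith/src/hello.c".  The version number is simply
       dropped here; a real layer could not get away with that. */
    static int vms_to_posix_path(const char *spec, char *out, size_t outlen)
    {
        char device[64], dir[256], name[256];
        const char *colon = strchr(spec, ':');
        const char *open_b = strchr(spec, '[');
        const char *close_b = strchr(spec, ']');

        if (!colon || !open_b || !close_b || close_b < open_b)
            return -1;                  /* not a spec this toy handles */

        snprintf(device, sizeof device, "%.*s", (int)(colon - spec), spec);
        snprintf(dir, sizeof dir, "%.*s", (int)(close_b - open_b - 1),
                 open_b + 1);
        snprintf(name, sizeof name, "%s", close_b + 1);

        char *semi = strchr(name, ';');
        if (semi)
            *semi = '\0';               /* drop ";version" */

        for (char *c = dir; *c; ++c)    /* "[A.B.C]" becomes "A/B/C" */
            if (*c == '.') *c = '/';

        /* Fold to lowercase for the POSIX side. */
        for (char *c = device; *c; ++c) *c = (char) tolower((unsigned char) *c);
        for (char *c = dir; *c; ++c)    *c = (char) tolower((unsigned char) *c);
        for (char *c = name; *c; ++c)   *c = (char) tolower((unsigned char) *c);

        int n = snprintf(out, outlen, "/vmsroot/%s/%s/%s", device, dir, name);
        return (n < 0 || (size_t) n >= outlen) ? -1 : 0;
    }

    /* Hypothetical wrapper standing in for one RMS-flavored entry point. */
    int vmscompat_open(const char *vms_spec, int flags)
    {
        char path[1024];
        if (vms_to_posix_path(vms_spec, path, sizeof path) != 0)
            return -1;
        return open(path, flags);
    }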

This approach has been discussed before and has been prototyped before, too.

The DEC OpenVMS advanced development group did prototype OpenVMS on 
Mach roughly a quarter-century ago.

https://www.semanticscholar.org/paper/A-Model-and-Prototype-of-VMS-Using-the-Mach-3.0-Wiecek/01810cffbddc949ff73a66b38de63f963d659db3?p2df 


More recently, a similar starting point might be seL4 or DragonflyBSD.

If you're going to invest the effort and swap the kernel, might as well 
swap the existing kernel for a newer design.

The downside of any kernel swappage is that you pretty quickly own the 
kernel you're working with, just as soon as you have to start modifying 
it to better fit OpenVMS and OpenVMS app expectations.

Which means you end up forking and then maintaining the kernel, maybe 
submitting pull requests upstream, and fetching and merging updates and 
fixes from upstream. Whether all of that is then a net benefit or a net 
loss is an open question.

Possible areas where kernel modifications might be necessary? Linux memory 
management is thoroughly two-ring, and OpenVMS expectations are 
four-ring. Do you drop those areas from OpenVMS, and force app source 
code changes?

Other considerations awaiting VSI developers: any hypothetical chunks 
of OpenVMS linked against Linux, seL4, or some of the other kernels 
necessarily involve working within GPL2, which means VSI must write all 
of that source code themselves, and must then release it.

The DragonflyBSD kernel started out with a focus on clustering, and 
also has a license more compatible with commercial closed-source use.  
Various other BSD kernels are similarly licensed.

VSI might (and then likely only very briefly) consider a kernel swap 
during the (hypothetical) OpenVMS port to Arm and (hypothetical) ARMv10 
architecture servers later this decade or next (if enough of us are 
hypothetically still around), but kernel swappage is still doubtful.




-- 
Pure Personal Opinion | HoffmanLabs LLC 



