[Info-vax] OpenVMS on x86 and Virtual Machines -- An Observation

Stephen Hoffman seaohveh at hoffmanlabs.invalid
Wed Jan 30 12:13:54 EST 2019


On 2019-01-30 15:09:23 +0000, gezelter at rlgsc.com said:

> While attending the Oracle-hosted OpenVMS Update in New York City this 
> past Monday, I realized that there was a potential for misconception 
> and misunderstanding.
> 
> Traditionally, OpenVMS has been run on dedicated hardware. In the past 
> two decades, an initially small but increasing number of systems have 
> been, and are, running on one or another emulator (e.g., simh, Charon, 
> AVT, etc.). With the advent of OpenVMS on x86, there is an increasing 
> discussion of running OpenVMS x86 on various virtual machine 
> hypervisors (e.g., xen, VirtualBox, Hyper-V).
> 
> Questions ensue along the lines of "What if my (fill in your supported 
> VM) infrastructure is using an enterprise-class storage facility that 
> is not supported by OpenVMS?"
> What matters in a hypervisor-based environment is not the underlying 
> storage or network device used by the hypervisor. What does matter is 
> the simulated device presented to the client virtual machine.
> ...........


As with the rest of this business, "it depends".

At one end of the spectrum, there's full-on virtualization.  This is 
where the guest operating system is oblivious to the presence of 
virtualization and accesses fully virtualized devices.  This adds 
overhead, as the hypervisor has to present a device interface and host 
environment that the guest supports, and that mapping costs time within 
the hypervisor.  Few or no guest modifications are required, depending 
on how closely the virtualization mimics real hardware.  And in this 
design, as Bob indicates, the guest is oblivious to what sort of 
storage is actually located behind the hypervisor.
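
Full-on virtualization aims to be transparent, though a guest that 
wants to know can usually still tell: most x86 hypervisors set the 
CPUID "hypervisor present" bit even while emulating everything else.  
A minimal sketch, assuming a GCC or Clang toolchain on x86 and nothing 
OpenVMS-specific:

/* Sketch: check the CPUID "hypervisor present" bit (CPUID.01H:ECX[31]).
 * Most hypervisors set this bit even under full virtualization, so a
 * guest that cares can tell it is not on bare metal.
 */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        printf("CPUID leaf 1 not available\n");
        return 1;
    }

    if (ecx & (1u << 31))
        printf("running under a hypervisor\n");
    else
        printf("no hypervisor reported (bare metal, or one hiding)\n");

    return 0;
}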

At the other end of the spectrum, there's hardware pass-through, akin 
to what OpenVMS Galaxy provided on Alpha.  Here the hypervisor 
coordinates which guests can access which hardware devices in the 
underlying configuration, and each guest accesses its allocated subset 
of hardware devices directly.  This avoids the overhead of the 
intermediate layer involved in full-on virtualization within the 
hypervisor.  With the subset ACPI hardware configuration data presented 
by the hypervisor, the guest can connect to and use its existing, 
device-specific drivers directly.  Few or no guest modifications are 
required.
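
As an illustration of what accessing the hardware "directly" means 
here: the guest's own PCI enumeration sees the real vendor and device 
IDs of whatever was passed through, rather than an emulated stand-in.  
A rough sketch of walking PCI bus 0 via the legacy configuration 
mechanism follows; it assumes x86 Linux, glibc's <sys/io.h> and root 
privileges, and is purely illustrative rather than anything resembling 
an OpenVMS driver:

/* Sketch: enumerate PCI bus 0 via configuration mechanism #1 (ports
 * 0xCF8/0xCFC).  With hardware pass-through, the IDs read back here
 * belong to the real, assigned devices rather than to emulated ones.
 * Assumes x86 Linux, glibc's <sys/io.h>, and root privileges.
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/io.h>

static unsigned int pci_cfg_read32(unsigned bus, unsigned dev,
                                   unsigned fn, unsigned off)
{
    unsigned int addr = 0x80000000u | (bus << 16) | (dev << 11)
                        | (fn << 8) | (off & 0xFCu);
    outl(addr, 0xCF8);          /* select bus/device/function/register */
    return inl(0xCFC);          /* read the selected config register */
}

int main(void)
{
    if (ioperm(0xCF8, 8, 1) != 0) {     /* need I/O port access (root) */
        perror("ioperm");
        return EXIT_FAILURE;
    }

    for (unsigned dev = 0; dev < 32; dev++) {
        unsigned int id = pci_cfg_read32(0, dev, 0, 0);
        if (id == 0xFFFFFFFFu)          /* empty slot */
            continue;
        printf("bus 0 dev %2u: vendor %04x device %04x\n",
               dev, id & 0xFFFFu, id >> 16);
    }
    return 0;
}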

In the middle, there's what's called paravirtualization, where the 
guest is modified to provide drivers that talk to the emulation.  This 
is akin to the UQSSP interface from aeons past, and to the SCSI command 
sets used on more recent SCSI, SAS, SATA and related devices: the guest 
operating system communicates via an API presented by the virtual 
machine.  This requires guest modifications, but it can provide much 
better performance than emulating hardware devices in software within 
the virtual machine.  This is the preferred path, though it does mean 
the guest must detect and support the hypervisor and its associated 
API.
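
On x86, the usual detection mechanism is CPUID leaf 0x40000000, which 
hands back a twelve-byte hypervisor vendor signature in EBX, ECX and 
EDX ("KVMKVMKVM", "VMwareVMware", "Microsoft Hv", "XenVMMXenVMM" and so 
on); the guest selects its paravirtual API based on that.  Another 
minimal sketch, again assuming GCC or Clang on x86, and only meaningful 
once the hypervisor-present bit from the earlier sketch is set:

/* Sketch: read the hypervisor vendor signature from CPUID leaf
 * 0x40000000.  EBX, ECX and EDX hold a 12-byte string identifying the
 * hypervisor; a paravirtualized guest uses it to pick the matching
 * hypervisor API.
 */
#include <stdio.h>
#include <string.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    char sig[13];

    __cpuid(0x40000000, eax, ebx, ecx, edx);

    memcpy(sig + 0, &ebx, 4);
    memcpy(sig + 4, &ecx, 4);
    memcpy(sig + 8, &edx, 4);
    sig[12] = '\0';

    printf("hypervisor signature: \"%s\" (max leaf 0x%08x)\n", sig, eax);
    return 0;
}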

The paravirtualization support can vary widely, too.  At its simplest, 
it'll be akin to a VAX emulator that has chosen to implement the 
architected VAX idle instruction (02FD WAIT Wait for Interrupt) or 
similar as a way to allow a slightly-modified guest to signal its idle 
state to the emulator (or to the hypervisor), without requiring the 
emulator (or the hypervisor) to detect the guest idle loop.  Or the 
communications and coordination between the guest and the hypervisor 
can be far more involved.
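
The x86 analogue of that VAX WAIT approach is an idle loop that 
executes HLT with interrupts enabled rather than spinning, which traps 
out and lets the hypervisor deschedule the virtual CPU until the next 
interrupt.  A ring-0 sketch only; HLT faults in user mode, and 
work_is_pending() here is a hypothetical stand-in for whatever the 
guest scheduler actually checks:

/* Sketch of a guest kernel idle loop: halt (with interrupts enabled)
 * instead of spinning, so the hypervisor can deschedule this virtual
 * CPU until an interrupt arrives.  Ring-0 only; work_is_pending() is
 * a hypothetical placeholder for the guest scheduler's own check.
 */
extern int work_is_pending(void);   /* hypothetical scheduler hook */

static void guest_idle(void)
{
    while (!work_is_pending()) {
        /* enable interrupts, then halt until the next one arrives */
        __asm__ volatile ("sti; hlt" ::: "memory");
    }
}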

Beyond the presentation of the I/O interfaces, a particular hypervisor 
can virtualize processors, memory and other system resources.

And emulation has opened up with some very different approaches, with 
apps and with whole operating systems compiled into and targeting Wasm 
and its ilk, with intermediate approaches akin to Bitcode, and with 
projects such as Klee.

The other wrinkle in this discussion is the supported hypervisor, as 
various folks have specific requirements for hypervisors.  VSI has a 
working list of hypervisors they're targeting, though that list may 
well evolve as the port proceeds.  One detail that has already been 
discussed, though: the vendor of one of the more common hypervisors, 
VMware, will not be supporting OpenVMS as a guest.  The lack of VMware 
support has been discussed before, and it'll be discussed again.  Now, 
as to whether OpenVMS boots and runs under VMware, unsupported?  That's 
a different discussion.


Here's an intro to virtualization, and there are many others available...
http://dsc.soic.indiana.edu/publications/virtualization.pdf
https://binarydebt.wordpress.com/2018/10/14/intel-virtualisation-how-vt-x-kvm-and-qemu-work-together/ 


For some discussions around virtualization and security...
https://www.qubes-os.org/intro/
https://developer.amd.com/sev/
https://blog.cloudflare.com/cloud-computing-without-containers/ 
(Wasm-based; also see 
https://groups.google.com/d/msg/comp.os.vms/6nQ1oSo9zNc/RS-6IPcMBQAJ )
https://googleprojectzero.blogspot.com/2017/04/pandavirtualization-exploiting-xen.html 

https://df-stream.com/2017/08/memory-acquisition-and-virtual-secure/

Bitcode:
https://lowlevelbits.org/bitcode-demystified/

And just for grins, since LLVM is soon (finally) in play on OpenVMS...
https://klee.github.io



OpenVMS has been in a backwater for quite a while, around the 
associated hardware and software environments and tools.



-- 
Pure Personal Opinion | HoffmanLabs LLC 



