[Info-vax] OpenVMS on x86 and Virtual Machines -- An Observation
Stephen Hoffman
seaohveh at hoffmanlabs.invalid
Wed Jan 30 14:50:53 EST 2019
On 2019-01-30 18:57:52 +0000, Phillip Helbig (undress to reply) said:
> Since VMS will soon run natively on x86, what is the motivation to run
> it on some sort of emulator?
Emulation involves differing instruction sets, differing
architectures, and run-time instruction translation. The underlying
hardware is typically of a different architecture, with a different
instruction set, than the code being emulated.
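As an illustrative sketch and nothing more (this is a toy, not any particular emulator), the inner loop of a software emulator fetches and decodes each foreign instruction at run time and re-executes it in host operations; that per-instruction translation is where the overhead comes from. The two-opcode machine below is entirely hypothetical:

```python
# Toy fetch-decode-execute loop for a hypothetical two-instruction machine.
# Every "foreign" instruction is decoded and re-executed in software; that
# per-instruction cost is what emulation pays and virtualization avoids.

def emulate(program, registers):
    """Interpret a list of (opcode, operand, ...) tuples."""
    pc = 0
    while pc < len(program):
        op = program[pc]
        if op[0] == "LOAD":      # LOAD reg, immediate
            registers[op[1]] = op[2]
        elif op[0] == "ADD":     # ADD dest, src
            registers[op[1]] += registers[op[2]]
        else:
            raise ValueError(f"unknown opcode {op[0]!r}")
        pc += 1
    return registers

regs = emulate([("LOAD", "r0", 40), ("LOAD", "r1", 2), ("ADD", "r0", "r1")],
               {"r0": 0, "r1": 0})
print(regs["r0"])  # prints 42
```

Real emulators are far more sophisticated (dynamic binary translation, caching of translated blocks), but the decode-and-dispatch overhead per guest instruction is the same basic story.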
Apps and operating systems running under virtualization use the native
instruction set and architecture of the hardware with no translations,
and with a few specific operations either invoking the hypervisor or
reserved to the hypervisor. Outside of those operations, instructions
and apps run at full speed, untranslated, directly on the underlying
hardware.
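That split can be pictured with a toy trap-and-emulate model (the operation names here are hypothetical, not any real hypervisor's interface): ordinary guest work runs directly, and only the few privileged operations trap to a hypervisor handler.

```python
# Toy model of trap-and-emulate: ordinary guest operations run "natively"
# (plain Python calls here), while the few privileged operations raise a
# trap that a hypervisor handler services on the guest's behalf.

class Trap(Exception):
    """Raised when the guest attempts a privileged operation."""
    def __init__(self, op):
        self.op = op

PRIVILEGED = {"HALT", "SET_CLOCK", "MAP_PAGE"}   # hypothetical privileged set

def guest_op(op):
    if op in PRIVILEGED:
        raise Trap(op)
    return f"ran {op} at native speed"

def hypervisor_run(ops):
    log = []
    for op in ops:
        try:
            log.append(guest_op(op))          # the common, full-speed path
        except Trap as t:
            log.append(f"hypervisor handled {t.op}")   # the rare, trapped path
    return log

log = hypervisor_run(["ADD", "MUL", "SET_CLOCK", "ADD"])
```

The point of the sketch: the exceptional path is rare, so nearly all guest work proceeds without the hypervisor's involvement.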
For you? Maybe you'll be running parts of your environment under
virtualization eventually, as it'll let you build and test different
environments—multiple instances of OpenVMS, and mixes of OpenVMS and
other operating systems—on the same hardware. That might well involve
an OpenVMS cluster operating within a single box, for instance, while
you're migrating your plethora of old Alpha hardware to fewer or
potentially to one x86-64 box. You may well find that a single small
x86-64 box runs your entire existing Alpha load, after all. Or you
might eventually be testing a newer product release or a newer
installation of OpenVMS, without disrupting the main installation.
Then there's getting your entire environment upgraded to the VSI
releases for Alpha, clustering with the x86-64 boxes, and getting your
apps ported.
For other folks, virtualization means that their OpenVMS apps can be
hosted on a shared server, and that folks can boot up multiple OpenVMS
instances for unexpected loads. This is consolidating hardware onto
fewer boxes, and it can also involve temporarily renting hardware and
software rather than bearing the expense of purchasing what may well be
excessive hardware and software capacity. If you're running a back-end
for a gaming environment, you can either purchase enough hardware for
your maximum load, hoping that some extremely-popular game doesn't
exceed that capacity and that the aggregate load can support the costs
of what can often be excess capacity, or you can rent capacity to match
the load. It also means that folks
don't necessarily have to staff as many data centers, and can boot up
hosts that are geographically local to the clients. Or geographically
appropriately-distant, in the case of disaster preparedness. Etc.
Rolling in a system image—a fully-configured environment—and booting it
as needed is pretty handy for deployment, testing, and for recovery,
too. With virtualization, it's possible to pause the whole running
environment and write it out to disk, transfer it to another host,
reload the guest under another hypervisor on another box, and resume
the paused processing.
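One way to picture that pause-and-move flow (a deliberately tiny model; real hypervisors use their own binary snapshot formats, not JSON): capture the guest's complete visible state, serialize it to a file, then reconstruct it elsewhere and continue from exactly that point.

```python
# Toy model of pausing a guest to disk and resuming it elsewhere: the guest's
# entire visible state is captured and serialized, then restored, after which
# execution continues exactly where it left off.
import json
import os
import tempfile

def snapshot(state, path):
    """'Pause' the guest out to disk."""
    with open(path, "w") as f:
        json.dump(state, f)

def restore(path):
    """Reload the guest, possibly under another hypervisor on another box."""
    with open(path) as f:
        return json.load(f)

# Hypothetical guest state: program counter, registers, pending I/O.
guest = {"pc": 1024, "registers": {"r0": 7}, "pending_io": []}

path = os.path.join(tempfile.mkdtemp(), "guest.snap")
snapshot(guest, path)          # on the source host
resumed = restore(path)        # on the destination host
assert resumed == guest        # processing picks up from the identical state
```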
Some organizations outsource the hosting for their servers, and other
folks prefer to have their own shared data centers and shared hosting.
Can you do this with hardware? Sure. But you're going to be
purchasing a whole lot of capacity you won't be using, if you don't
want to saturate. And moving copies of guests is easier than moving
backups around.
I routinely have guests running on the local desktop box, as that
allows me to use apps and tools that require specific Linux or BSD
distributions. I don't need to reboot, switch to other hosts, or even
acquire additional hardware to do this.
For some of the parallels here with virtualized hosts, ponder what
virtualized memory and virtualized storage and virtualized networking
have each provided.
--
Pure Personal Opinion | HoffmanLabs LLC