[Info-vax] BL870c shared memory?

Stephen Hoffman seaohveh at hoffmanlabs.invalid
Mon Oct 17 21:08:13 EDT 2016


On 2016-10-17 15:22:17 +0000, Kerry Main said:

>> 
>> -----Original Message-----
>> From: Info-vax [mailto:info-vax-bounces at rbnsn.com] On Behalf
>> Of Stephen Hoffman via Info-vax
>> Sent: 17-Oct-16 9:24 AM
>> To: info-vax at rbnsn.com
>> Cc: Stephen Hoffman <seaohveh at hoffmanlabs.invalid>
>> Subject: Re: [Info-vax] BL870c shared memory?
>> 
>> On 2016-10-16 13:58:57 +0000, Snowshoe said:
>> 
>>> Is there a way for a BL870c or a BL890c, divided into two (or more) 
>>> systems, to have special memory set aside that can be shared between 
>>> them, while most of the memory is per-system specific as usual?  If so, 
>>> how do you configure them to do so? Kind of like an Alpha Galaxy feature.
>>> 
>>> Does VMS support this, if it is possible?
>> 
>> Nope.
>> 
>> Galaxy (what HP/HPE later called vPar) is a feature dependent on the 
>> console firmware and is not available on Itanium, nor would I expect it 
>> on x86-64.   EFI and UEFI do not provide the necessary mechanisms, and 
>> would require custom work to support the differential configuration 
>> presentations that Galaxy depends on.
>> http://labs.hoffmanlabs.com/node/813
>> http://h41379.www4.hpe.com/availability/galaxy.html
> 
> While the dependency on console FW is certainly true for previous 
> flavors of Galaxy, we should also mention that HW virtualization was 
> not part of the previous Alpha/IA64 architectures.

The design of Galaxy fundamentally involves the console.   That's where 
the processors, memory and I/O are partitioned and configured, where 
the operating system reads the configuration data from, and where (and 
how) the various configured instances are located and booted.

As for virtualization?    Hardware virtualization support was part of 
both the VAX and Alpha architectures, and the HP-IVM package was 
available as a product on Itanium.   It's listed in DEC Standard 32 
for VAX (there are copies of that posted), and the Alpha Architecture 
Reference discusses support for the virtual machine monitor as well.   
For details, search for VAX VVAX and for the Alpha VMM support.   
Neither VAX hardware supporting VVAX nor Alpha PALcode supporting the 
VMM was ever shipped, though; there weren't products.

http://bitsavers.informatik.uni-stuttgart.de/pdf/dec/vax/archSpec/EL-00032-00-decStd32_Jan90.pdf 

http://download.majix.org/dec/alpha_arch_ref.pdf
http://www.scs.stanford.edu/nyu/02sp/sched/vmm.pdf

But that's looking backwards.   x86-64 doesn't have the console support 
for Galaxy, so somebody'd need to add that.   And, for the purposes of 
OpenVMS, I'd think containers would be a better use of the effort, as 
Galaxy is little different from the virtualization sprawl you're fond 
of referencing: the console substitutes for the virtual machine 
monitor, but otherwise you're still looking at full copies of operating 
systems, with all the baggage that entails.

> HW virtualization is part of the X86-64 architecture standard.
> http://www.cs.columbia.edu/~cdall/candidacy/pdf/Uhlig2005.pdf

I suspect that such virtualization support is not a surprise to most 
folks reading here.

> Will this mean a future version of Galaxy (post V9?) is in the works via VSI?

AFAIK, Galaxy didn't use the hardware virtualization assists.   It 
didn't need to.

> I have no idea, but sure hope so.
> 
> To be clear, let's remember what Galaxy provides with specific flavors 
> of Alpha HW:
> - dynamic cpu sharing between different OS instances (VMware does not 
> do this) based on workloads via customizable business rules or 
> drag-n-drop. Huge capability for those who want to maximize server 
> utilization  based on varying workloads;
> - shared memory access for very high speed, low latency TCPIP, Cluster 
> DLM and inter-process communications between different OS instances;
> - RAM disk in shared memory;
> - NUMA (RAD) aware (remember, all X86-64 architectures are now NUMA based);
> - GUI configurator;
> - improved DR capabilities;
> - custom APIs to write local Galaxy resource management services;

Can't say that Galaxy solves any problems I'm particularly dealing 
with that can't also be addressed with (in decreasing order of 
overhead) hardware virtualization, paravirtualization or, particularly, 
OS-level software virtualization.   Galaxy would be nice, but it needs 
support in the EFI or UEFI firmware, which means the hardware vendors 
will need to be involved, as well as VSI for the OpenVMS bits.   That 
is, unless VSI gets into the custom UEFI firmware business; the folks 
at VSI didn't think that was likely, and I'd certainly concur with 
that.   If they did get into the custom firmware business, I'd be more 
interested to see UEFI support for VAFS access first, too.
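For anybody curious what the shared-memory piece looked like from a 
program, the Alpha Galaxy documentation exposed it through the 64-bit 
global section services, with SEC$M_SHMGS marking a section as 
Galaxy-wide shared memory.   A minimal sketch follows; it assumes an 
Alpha Galaxy configuration with shared memory assigned to the instance, 
the section name is made up, and the argument details should be checked 
against the OpenVMS Alpha Galaxy Guide and the system services 
reference before trusting any of it:

/* Minimal sketch: create and map a Galaxy-wide (shared-memory) global
   section on OpenVMS Alpha.  The section name is made up, error handling
   is abbreviated, and the particulars should be verified against the
   system services reference. */
#include <descrip.h>
#include <gen64def.h>
#include <psldef.h>
#include <secdef.h>
#include <starlet.h>
#include <stdio.h>
#include <string.h>
#include <vadef.h>

int main(void)
{
    $DESCRIPTOR(gs_name, "DEMO_SHMGS");   /* hypothetical section name */
    GENERIC_64 ident, region;
    void *va = 0;
    unsigned __int64 va_len = 0;
    int status;

    memset(&ident, 0, sizeof ident);      /* match any section version */
    memset(&region, 0, sizeof region);
    region.gen64$q_quadword = VA$C_P0;    /* map into the default P0 region */

    /* SEC$M_SHMGS asks for a shared-memory (Galaxy-wide) global section;
       SEC$M_EXPREG lets the service pick the virtual address. */
    status = sys$crmpsc_gdzro_64(&gs_name, &ident, 0 /* prot: full access */,
                                 1024 * 1024 /* one megabyte */, &region,
                                 0 /* section offset */, PSL$C_USER,
                                 SEC$M_SHMGS | SEC$M_EXPREG,
                                 &va, &va_len);
    if (!(status & 1))
        printf("section create/map failed, status = %d\n", status);
    else
        printf("mapped %lu bytes of Galaxy shared memory at %p\n",
               (unsigned long) va_len, va);
    return status;
}

Mapping the same section name from another instance in the same Galaxy 
gets you the same physical pages, which is, as I recall, what the 
shared-memory cluster interconnect and the Galaxy RAM disk were built on.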

>> As for shared or remote memory access into other servers...
>> 
>> Memory Channel was an Alpha feature providing reflective memory, and 
>> the hardware involved didn't sell in large volumes.  AFAIK, there is no 
>> Itanium support for Memory Channel.
> 
> Technology standards are changing rapidly.

Ayup.    Which is part of why a small organization like VSI is at a 
competitive disadvantage.

>> VSI hasn't gotten around to adding support for RDMA adapters or analogous.
> 
> To clarify - the difference between RDMA (original spec) and RDMA V2 
> (RoCEv2 is current spec) is huge. The new spec allows a much simpler 
> migration from current drivers.
> 
> The current technology that has already been adopted by Windows/Linux, 
> and is available via very high speed, very low latency cards from 
> companies like Mellanox, is based on RoCEv2.

VSI and OpenVMS don't have any RDMA support, last I checked.
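
For context on what an RDMA stack eventually has to hand to 
applications, here's a rough sketch of the memory registration step 
using the Linux verbs API (libibverbs).   This is a Linux illustration 
of the general idea, not anything OpenVMS provides today:

/* Rough illustration (Linux, libibverbs): the core of RDMA from the
   application's side is registering a buffer so the adapter can read
   and write it directly.  Queue pairs, connection setup and the actual
   RDMA reads/writes are omitted. */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs || num_devices == 0) {
        fprintf(stderr, "no RDMA-capable devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ctx ? ibv_alloc_pd(ctx) : NULL;
    if (!pd) {
        fprintf(stderr, "could not open device or allocate protection domain\n");
        return 1;
    }

    size_t len = 1 << 20;
    void *buf = malloc(len);

    /* Register the buffer; the returned lkey/rkey are what local and
       remote peers use to address this memory in RDMA operations. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) {
        fprintf(stderr, "memory registration failed\n");
        return 1;
    }
    printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n",
           len, mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    free(buf);
    return 0;
}

The adapter then moves data directly between registered buffers on the 
two hosts, without the remote CPU in the data path, which is where the 
latency numbers come from.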

>> Most folks didn't head toward Galaxy-style processor and memory 
>> sharing.    Console-level virtualization didn't (hasn't?) catch on.
> 
> By those who are still promoting distributed computing.

Can't say I've encountered console-level virtualization with 
cooperating instances.

> The "enterprise" world is rapidly changing to fewer, much larger 
> systems with TB scale memories, very high speed, low latency 
> interconnects.

Consolidation is certainly underway, and more than a few folks are 
hosting their applications.    If I were at VSI, and as central as 
keeping the installed base happy is, I'd be very interested in what the 
smaller folks are thinking about and looking to do.   This gets back to 
the projects and the folks running CentOS rather than RHEL, and who are 
looking to do prototypes and smaller deployments.   Enterprises are 
often inherently slow to change and to migrate tools and platforms, 
after all.    Various enterprises do seem to like those HANA boxes, 
though; not that OpenVMS has any place in those plays.

> These types of systems require much better levels of workload management 
> and server utilization techniques than old technologies like VMware, 
> which has minimal workload mgmt.

You were pointed in the other direction yourself, just a few months 
ago.   This is where app stacking goes.

>> For most folks, virtualization can happen at the system hardware level 
>> — this is what most virtual machines present, pretending to be a 
>> descendent of the 1981-vintage box that came from Boca Raton, or a para 
>> virtualization of that box — or happens at the software and 
>> particularly at the operating system level — and this is what 
>> containers provide, particularly when sandboxing and related are 
>> available to keep the apps from accidentally or intentionally stomping 
>> on other apps.
> 
> I am not convinced containers are the answer for most enterprise 
> deployments.

Again, you were pointed in that direction yourself, just a few months ago.

> In addition, my understanding, though I am willing to be corrected, is 
> that sandboxes are basically processes with TMPMBX-only privs and 
> perhaps some additional monitoring/auditing, like what one can already 
> do with products like System Detective from PointSecure.

I've previously pointed to documentation giving details of the level of 
control involved, and it's far past something as limited and inflexible 
as privileges — a design that even OpenVMS has been slowly moving away 
from — in terms of what can be managed and controlled.    I'm already 
dealing with and using these tools elsewhere.   They're features of 
available computing platforms, and only likely to become more common.

https://developer.apple.com/app-sandboxing/

>> VSI has stated that they will be providing OpenVMS support for both 
>> native x86-64 boot and for x86-64 virtualization in specific virtual 
>> machines, and that they're pondering adding support for host-level 
>> virtualization — containers — as part of some more distant future work.
> 
> Again, I really do not see any value with industry containers in the 
> OpenVMS environment.
> What is the issue containers are trying to address?

I've already pointed to the reply to that.   Okay.   Repeating the 
response.   Virtualizing at the OS level, which is what a container is, 
is more efficient than cooperative processing akin to Galaxy, and far 
more efficient than virtualizing at the hardware level.   Containers 
also allow easier distribution of applications, and reduce the 
likelihood that applications will stomp on each other, intentionally or 
otherwise.   Privileges are a much blunter instrument, as anybody 
that's dealt with those, and with INSTALL and subsystem identifiers and 
concealed rooted logical names (the forerunners of container support on 
OpenVMS), will have encountered.
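
To make the OS-level point concrete, a container on Linux is mostly 
kernel namespaces plus resource controls.   Here's a minimal, 
Linux-specific sketch (it needs the usual capabilities) of dropping a 
child process into its own UTS and mount namespaces; that's a rather 
finer-grained knob than an all-or-nothing privilege mask:

/* Minimal sketch (Linux): OS-level isolation via namespaces.  Needs
   CAP_SYS_ADMIN, i.e. typically root.  The child gets its own hostname
   and its own copy of the mount table; the parent and the rest of the
   system are unaffected. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        /* Detach the child from the parent's UTS and mount namespaces. */
        if (unshare(CLONE_NEWUTS | CLONE_NEWNS) != 0) {
            perror("unshare");
            exit(1);
        }
        /* Changing the hostname here is invisible outside the child. */
        sethostname("container-demo", strlen("container-demo"));
        execlp("hostname", "hostname", (char *) NULL);
        perror("execlp");
        exit(1);
    }
    waitpid(pid, NULL, 0);
    return 0;
}

Add PID, network and mount isolation plus cgroups and an image format 
and you have roughly what Docker and the rest package up.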

> Containers on Linux/Windows are making headway because these platforms 
> recognize VM sprawl is a huge issue, and they do not have good app 
> stacking capabilities like what OpenVMS customers have been doing for 
> decades. Running 15+ business apps from different groups or BUs on one 
> system or cluster is considered normal in most OpenVMS environments.
> 
> See links above for security and network issues with containers.

Apropos of discussions of security and URL shorteners: 
https://en.wikipedia.org/wiki/URL_shortening#Privacy_and_security

As for issues, I'm aware of various security issues that have arisen 
over the years with those, with Qubes and other approaches seeking to 
address that, and the various issues with hardware-level 
virtualization.   Never did look around at the security involved with 
Galaxy, so I don't know off-hand if there were issues there.    Welcome 
to the complexity of virtualizing interfaces, and why more than a few 
places that were interested in security went to "system high" 
configurations rather than pay for and deal with the complexity of 
multi-level security.   I'm also aware of holes in OpenVMS and various 
associated packages, and of glaring weaknesses such as the patch 
process.   Welcome to software and security.

https://www.qubes-os.org

> Imho, enhancing native things like LD technologies (boot support?), 
> additional security features, class scheduler, batch / workload 
> management, etc. would be far more beneficial than trying to resolve 
> issues that do not apply to OpenVMS.

All of those are already or increasingly part of competitive systems, 
and various of those other systems have better and more flexible 
application isolation than is offered on OpenVMS.   The absence of 
those is among the many limits of present-day OpenVMS, as well.   I've 
certainly requested most of those in previous threads (OpenVMS can 
already network boot from LD via InfoServer).   Booting from local disk 
images could probably be managed via the graphical console that VSI 
discussed, and the (lack of) partitioning-related support needs work, 
too.

>> Maybe some hardware vendor that's pondering qualifying OpenVMS x86-64 
>> support on their iron might decide to create a customized UEFI with 
>> customizable ACPI reports?   But then that's ~2020-vintage discussion 
>> and roll-out, at the earliest.  There's also that UEFI and ACPI are not 
>> exactly the most user-friendly bits in an Integrity or x86-64 box, and 
>> adding Galaxy support atop that could well cause UI nausea.
>> 
> 
> Imho, VSI will need to address the "Why OpenVMS" as feature 
> differentiators on X86-64 over Windows/Linux.

VSI is an installed-base play, so the marketing is similar to the 
approach that Oracle uses.  In simplest terms — and assuming the 
prerequisite products become available — it's more expensive to port 
the applications off of OpenVMS than it is to port the applications to 
OpenVMS x86.

Outside of the installed base, OpenVMS is not competitive with other 
platforms.   Which means I don't expect to see particularly strong 
marketing efforts nor large numbers of sales outside the installed base 
in the foreseeable future.   VSI has a whole lot of work ahead, before 
they can start to expand the installed base to include wholly new 
applications and wholly new ISVs.   All that's been covered before.

> Besides just being a more stable platform, Galaxy-like virtualization 
> shared resource features, clustering (improved scalability of shared 
> disk/everything), improved server utilization and workload management 
> are going to be critical components of "Why OpenVMS" in the V9/V10+ era 
> on X86-64 and other future HW platforms.

Most of those areas have been weak for decades, and some — such as 
scaling up shared-everything clusters — are going to run into the 
fundamental limits of the design.

> (yes, the OpenVMS license model and marketing needs to improve with V9 
> + as well)

They needed to improve that some two years ago.




-- 
Pure Personal Opinion | HoffmanLabs LLC 



