[Info-vax] BL870c shared memory?
johnwallace4 at yahoo.co.uk
Mon Oct 17 13:19:40 EDT 2016
On Monday, 17 October 2016 16:25:04 UTC+1, Kerry Main wrote:
> > -----Original Message-----
> > From: Info-vax [mailto:info-vax-bounces at rbnsn.com] On Behalf
> > Of Stephen Hoffman via Info-vax
> > Sent: 17-Oct-16 9:24 AM
> > To: info-vax at rbnsn.com
> > Cc: Stephen Hoffman <seaohveh at hoffmanlabs.invalid>
> > Subject: Re: [Info-vax] BL870c shared memory?
> >
> > On 2016-10-16 13:58:57 +0000, Snowshoe said:
> >
> > > Is there a way for a BL870c or a BL890c, divided into two (or more)
> > > systems, to have special memory set aside that can be shared between
> > > them, while most of the memory is per-system specific as usual? If
> > > so, how to configure them to do so? Kind of like an Alpha Galaxy
> > > feature.
> > >
> > > Does VMS support this, if it is possible?
> >
> > Nope.
> >
> > Galaxy (what HP/HPE later called vPar) is a feature dependent on
> > the console firmware and is not available on Itanium, nor would I
> > expect it on x86-64. EFI and UEFI do not provide the necessary
> > mechanisms, and would require custom work to support Galaxy-style
> > differential configuration presentations.
> > http://labs.hoffmanlabs.com/node/813
> > http://h41379.www4.hpe.com/availability/galaxy.html
> >
>
> While the dependency on console FW is certainly true for previous flavors of Galaxy, we should also mention that HW virtualization was not part of the previous Alpha/IA64 architectures.
>
> HW virtualization is part of the X86-64 architecture standard.
> http://www.cs.columbia.edu/~cdall/candidacy/pdf/Uhlig2005.pdf
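>
> (As a quick aside: the VT-x and AMD-V capability bits are advertised
> through CPUID. Below is a minimal, illustrative check using GCC/Clang's
> <cpuid.h>; it only reports whether the CPU advertises the extensions,
> not whether the firmware has left them enabled.)
>
>     #include <stdio.h>
>     #include <cpuid.h>   /* GCC/Clang helper for the CPUID instruction */
>
>     int main(void)
>     {
>         unsigned int eax, ebx, ecx, edx;
>
>         /* Leaf 1, ECX bit 5: Intel VT-x (VMX) */
>         if (__get_cpuid(1, &eax, &ebx, &ecx, &edx))
>             printf("Intel VMX : %s\n", (ecx & (1u << 5)) ? "yes" : "no");
>
>         /* Extended leaf 0x80000001, ECX bit 2: AMD-V (SVM) */
>         if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx))
>             printf("AMD SVM   : %s\n", (ecx & (1u << 2)) ? "yes" : "no");
>
>         return 0;
>     }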
>
> Will this mean a future version of Galaxy (post V9?) is in the works via VSI?
>
> I have no idea, but sure hope so.
>
> To be clear, let's remember what Galaxy provided on specific flavors of Alpha HW:
> - dynamic CPU sharing between different OS instances (something VMware does not do), driven by customizable business rules or drag-and-drop. A huge capability for those who want to maximize server utilization across varying workloads;
> - shared memory access for very high speed, low latency TCP/IP, cluster DLM and inter-process communications between different OS instances (see the sketch after this list);
> - RAM disk in shared memory;
> - NUMA (RAD) aware (remember, all x86-64 architectures are now NUMA based);
> - GUI configurator;
> - improved DR capabilities;
> - custom APIs to write local Galaxy resource management services.
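>
> As a rough illustration of the shared-memory programming model, here is
> the ordinary single-OS analogue using POSIX shared memory. This is
> deliberately not the Galaxy API (Galaxy used Galaxywide global sections
> that spanned OS instances); the region name below is made up, and the
> sketch only shows the shape of "map a named region and exchange data
> through it":
>
>     /* Hypothetical single-OS analogue of a shared memory region.
>        On Galaxy the equivalent spanned OS instances; here it only
>        spans processes on one system. */
>     #include <fcntl.h>
>     #include <stdio.h>
>     #include <string.h>
>     #include <sys/mman.h>
>     #include <unistd.h>
>
>     int main(void)
>     {
>         const char *name = "/demo_shared_region";   /* made-up name */
>         int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
>         if (fd < 0) { perror("shm_open"); return 1; }
>         if (ftruncate(fd, 4096) < 0) { perror("ftruncate"); return 1; }
>
>         char *region = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
>                             MAP_SHARED, fd, 0);
>         if (region == MAP_FAILED) { perror("mmap"); return 1; }
>
>         /* Any cooperating process mapping the same name sees this. */
>         strcpy(region, "hello from instance A");
>         printf("wrote: %s\n", region);
>
>         munmap(region, 4096);
>         close(fd);
>         shm_unlink(name);   /* clean up the demo region */
>         return 0;
>     }
>
> (On Linux, older glibc needs -lrt at link time.)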
>
> > As for shared or remote memory access into other servers...
> >
> > Memory Channel was an Alpha feature providing reflective
> > memory, and the hardware involved didn't sell in large volumes.
> > AFAIK, there is no Itanium support for Memory Channel.
> > https://people.eecs.berkeley.edu/~culler/cs258-s99/project/memchannel/memchannel.pdf
> >
>
> Technology standards are changing rapidly.
>
> Check this link out: (posted Oct 17, 2016)
> http://bit.ly/2enpgNF
> "Opening Up The Server Bus For Coherent Acceleration"
>
> >
> > VSI hasn't gotten around to adding support for RDMA adapters or
> > analogous.
> >
>
> To clarify - the difference between the original RDMA spec and the current RoCEv2 spec is huge. The new spec allows a much simpler migration from existing drivers.
>
> The current technology, already adopted by Windows/Linux and available via very high speed, very low latency cards from companies like Mellanox, is based on RoCEv2.
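>
> For what it's worth, this is roughly what application-side discovery of
> those cards looks like on Linux. A minimal sketch using the standard
> libibverbs API from rdma-core; the file name in the build comment is
> made up, and OpenVMS would of course need its own driver and API story:
>
>     /* Minimal sketch: enumerate RDMA-capable adapters (e.g. RoCEv2 NICs)
>        through the standard libibverbs API.
>        Build on Linux: cc rdma_list.c -libverbs */
>     #include <stdio.h>
>     #include <infiniband/verbs.h>
>
>     int main(void)
>     {
>         int num = 0;
>         struct ibv_device **devs = ibv_get_device_list(&num);
>         if (!devs) { perror("ibv_get_device_list"); return 1; }
>
>         printf("%d RDMA device(s) found\n", num);
>         for (int i = 0; i < num; i++)
>             printf("  %s\n", ibv_get_device_name(devs[i]));
>
>         ibv_free_device_list(devs);
>         return 0;
>     }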
>
> > Most folks didn't head toward Galaxy-style processor and memory
> > sharing. Console-level virtualization didn't catch on (and still
> > hasn't?).
>
> By those who are still promoting distributed computing.
>
> The "enterprise" world is rapidly changing to fewer, much larger systems with TB-scale memories and very high speed, low latency interconnects.
>
> These types of systems require much better workload management and server utilization techniques than old technologies like VMware, which has minimal workload management.
>
> > For most folks, virtualization can happen at the system hardware
> > level — this is what most virtual machines present, pretending to
> > be a descendant of the 1981-vintage box that came from Boca Raton,
> > or a para-virtualization of that box — or happens at the software
> > and particularly at the operating system level — and this is what
> > containers provide, particularly when sandboxing and related
> > mechanisms are available to keep the apps from accidentally or
> > intentionally stomping on other apps.
>
> I am not convinced containers are not the answer for most enterprise deployments.
> http://bit.ly/2dZ9c6C
> http://bit.ly/2ebdjwS
>
> In addition, my understanding, though I am willing to be corrected, is that sandboxes are basically processes with TMPMBX-only privs and perhaps some additional monitoring/auditing, like what one can already do with products like System Detective from PointSecure.
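>
> To make the "TMPMBX-only process" idea concrete, here is a rough
> OpenVMS C sketch that reads the current privilege mask via $GETJPIW
> and tests a couple of privilege bits. Treat it as illustrative only;
> the headers and symbol names follow the usual OpenVMS C conventions,
> so check them against your system before relying on it:
>
>     /* Rough sketch: show which privileges the current process holds
>        by reading its current privilege mask. Illustrative only. */
>     #include <stdio.h>
>     #include <string.h>
>     #include <ssdef.h>      /* SS$_NORMAL                 */
>     #include <starlet.h>    /* sys$getjpiw                */
>     #include <jpidef.h>     /* JPI$_CURPRIV               */
>     #include <prvdef.h>     /* PRV$M_TMPMBX, PRV$M_SYSPRV */
>     #include <iledef.h>     /* ILE3 item-list entries     */
>
>     int main(void)
>     {
>         unsigned __int64 curpriv = 0;    /* quadword privilege mask */
>         unsigned short retlen = 0;
>         ILE3 items[2];
>         unsigned int status;
>
>         memset(items, 0, sizeof(items)); /* items[1] = terminator */
>         items[0].ile3$w_length       = sizeof(curpriv);
>         items[0].ile3$w_code         = JPI$_CURPRIV;
>         items[0].ile3$ps_bufaddr     = &curpriv;
>         items[0].ile3$ps_retlen_addr = &retlen;
>
>         /* Null PID and process name means the current process. */
>         status = sys$getjpiw(0, 0, 0, items, 0, 0, 0);
>         if (!(status & 1)) return status;
>
>         printf("TMPMBX: %s\n", (curpriv & PRV$M_TMPMBX) ? "yes" : "no");
>         printf("SYSPRV: %s\n", (curpriv & PRV$M_SYSPRV) ? "yes" : "no");
>         return SS$_NORMAL;
>     }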
>
> > VSI has stated that they will be providing OpenVMS support for
> > both native x86-64 boot and for x86-64 virtualization in specific
> > virtual machines, and that they're pondering adding support for
> > host-level virtualization — containers — as part of some more
> > distant future work.
> >
>
> Again, I really do not see any value in industry-standard containers in the OpenVMS environment.
>
> What is the issue containers are trying to address?
>
> Containers on Linux/Windows are making headway because these platforms recognize that VM sprawl is a huge issue and they do not have the kind of app stacking capabilities that OpenVMS customers have been using for decades. Running 15+ business apps from different groups or BUs on one system or cluster is considered normal in most OpenVMS environments.
>
> See links above for security and network issues with containers.
>
> Imho, enhancing native things like LD technologies (boot support?), additional security features, the class scheduler, batch/workload management, etc. would be far more beneficial than trying to resolve issues that do not apply to OpenVMS.
>
> > Maybe some hardware vendor that's pondering qualifying OpenVMS
> > x86-64 support on their iron might decide to create a customized
> > UEFI with customizable ACPI reports? But then that's ~2020-vintage
> > discussion and roll-out, at the earliest. There's also that UEFI
> > and ACPI are not exactly the most user-friendly bits in an
> > Integrity or x86-64 box, and adding Galaxy support atop that could
> > well cause UI nausea.
> >
>
> Imho, VSI will need to address the "Why OpenVMS" question with feature differentiators on X86-64 over Windows/Linux.
>
> Besides just being a more stable platform, Galaxy-like virtualization and shared-resource features, clustering (improved scalability of shared-disk/shared-everything), improved server utilization and workload management are going to be critical components of "Why OpenVMS" in the V9/V10+ era on X86-64 and other future HW platforms.
>
> (yes, the OpenVMS license model and marketing need to improve with V9+ as well)
>
>
> Regards,
>
> Kerry Main
> Kerry dot main at starkgaming dot com
"I am not convinced containers are not the answer for most enterprise deployments."
Too many negatives?
Missing punctuation? [convinced. Containers are]
Ambiguity between not/now?
Or meant as typed?