[Info-vax] Distributed Applications, Hashgraph, Automation

Kerry Main kemain.nospam at gmail.com
Thu Feb 15 22:15:48 EST 2018


> -----Original Message-----
> From: Info-vax [mailto:info-vax-bounces at rbnsn.com] On Behalf Of
> Richard Maher via Info-vax
> Sent: February 15, 2018 9:20 PM
> To: info-vax at rbnsn.com
> Cc: Richard Maher <maher_rjSPAMLESS at hotmail.com>
> Subject: Re: [Info-vax] Distributed Applications, Hashgraph, Automation
> 
> On 15-Feb-18 8:17 PM, Kerry Main wrote:
> >
> > Just to clarify -
> >
> > While the OpenVMS community refers to its clustering architecture as
> > shared everything, the industry term for the same thing is shared disk.
> >
> > In both cases, one could refer to these as differing strategies to
> > share data between multiple systems. There are pros and cons.
> >
> 
> I disagree and think you'll find that the third option "shared
> everything" includes shared memory. I can't believe I've forgotten what
> VMS' offering for a low-latency interconnect was... Memory Channel?
> 
> Oracle Cache Fusion and Redis Cache are wide area examples.

Mmmm... it's a bit different, but the basics are really about how data is
shared between servers.

Regardless of whether the sharing happens at the disk or memory level,
with shared disk (OpenVMS's shared everything), there is still a
distributed lock manager (DLM) doing the inter-server update coordination.
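
To make the DLM's role concrete, here is a rough conceptual sketch in C
(my illustration only - not the OpenVMS $ENQ interface) of the six lock
modes and the compatibility matrix a lock manager consults when deciding
whether one node's request can be granted while another node already
holds the resource:

/* Conceptual sketch only - not the OpenVMS API. Models the six DLM
 * lock modes and the standard compatibility matrix the lock manager
 * consults before granting a request across cluster nodes. */
#include <stdbool.h>
#include <stdio.h>

typedef enum { NL, CR, CW, PR, PW, EX } lock_mode;
/* NL=null, CR=concurrent read, CW=concurrent write,
 * PR=protected read, PW=protected write, EX=exclusive */

/* compat[held][requested]: can the requested mode be granted while
 * another node holds the resource in 'held' mode? */
static const bool compat[6][6] = {
    /*           NL     CR     CW     PR     PW     EX   */
    /* NL */ { true,  true,  true,  true,  true,  true  },
    /* CR */ { true,  true,  true,  true,  true,  false },
    /* CW */ { true,  true,  true,  false, false, false },
    /* PR */ { true,  true,  false, true,  false, false },
    /* PW */ { true,  true,  false, false, false, false },
    /* EX */ { true,  false, false, false, false, false },
};

int main(void) {
    /* Node A holds PW on a resource; node B's CR request can be
     * granted, but node C's EX request must wait (or fire a blocking
     * AST so node A can convert its lock down). */
    printf("PW held, CR requested -> %s\n", compat[PW][CR] ? "grant" : "wait");
    printf("PW held, EX requested -> %s\n", compat[PW][EX] ? "grant" : "wait");
    return 0;
}

Every inter-node grant, convert, or release behind that matrix is a round
trip on the cluster interconnect, which is exactly why interconnect
latency dominates shared-disk scalability.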

I fully agree OpenVMS has significant advantages over other shared-disk
offerings - a mission-critical, proven DLM, cluster logicals, cluster
batch, and a common file system (and a new one with significant new
features is cooking as well). However, the industry really only looks at
shared disk or shared nothing.

Btw, the modern-day equivalent of Memory Channel for ultra-low-latency
data sharing is either InfiniBand or RoCEv2 (RDMA over Converged
Ethernet).

Not sure where it stands right now, but RoCEv2 is on the research slide
of the OpenVMS roadmap.

Imho, this type of cluster communications capability is critical to the
next generation of shared-disk cluster scalability. It is how VSI can
address the biggest counter-argument to shared-disk clusters: "shared-disk
clusters have scalability issues due to the requirement of a distributed
lock manager."

Note - RoCEv2 is supported in Linux and Microsoft environments, and that
is VSI's competition in the new world.

Reference:
<http://www.mellanox.com/related-docs/whitepapers/roce_in_the_data_center.pdf>
" OS bypass gives an application direct access to the network card,
allowing the CPU to communicate directly with the I/O adapter, bypassing
the need for the operating system to transition from the user space to
the kernel. With RDMA, there is no need for involvement from the OS or
driver, creating a huge savings in efficiency of the interconnect
transaction.

RDMA also allows communication without the need to copy data to the
memory buffer.  This zero copy transfer enables the receive node to read
data directly from the send node's memory, thereby reducing the overhead
created from CPU involvement.

Furthermore, unlike in legacy interconnects, RDMA provides for the
transport protocol stack to be handled by the hardware. By offloading
the stack from software, there is less CPU involvement, and the
transport is more reliable.

The overall effect of the significant reduction of CPU overhead that
RDMA provides by way of OS bypass, zero copy, and CPU offloading is to
maximize efficiency in order to provide lightning fast interconnect."
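
To put the excerpt in concrete terms, here is a rough libibverbs sketch
in C (illustrative only; queue-pair creation and the out-of-band exchange
of the peer's address/rkey are elided, and the placeholder values are
mine). It shows the two properties described above: registering a user
buffer so the NIC can DMA it directly (zero copy, OS bypass after setup),
and building an RDMA READ work request that the remote NIC services
without involving the remote CPU or OS:

/* Minimal libibverbs sketch: register (pin) a user buffer, then
 * construct an RDMA READ work request. QP setup and the
 * remote_addr/rkey exchange with the peer are elided. */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (devs == NULL || num == 0) { fprintf(stderr, "no RDMA device\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Register an ordinary malloc'd buffer: after this one-time kernel
     * call, the HCA reads/writes it directly - no per-transfer syscall
     * and no copy into kernel buffers. */
    size_t len = 4096;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ);

    /* An RDMA READ work request pulls remote memory straight into our
     * buffer; the remote OS and CPU are never involved. remote_addr
     * and rkey are placeholders a real app learns during setup. */
    struct ibv_sge sge;
    sge.addr   = (uint64_t)(uintptr_t)buf;
    sge.length = (uint32_t)len;
    sge.lkey   = mr->lkey;

    struct ibv_send_wr wr = { 0 };
    wr.opcode              = IBV_WR_RDMA_READ;
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED;
    wr.wr.rdma.remote_addr = 0;   /* placeholder: peer buffer address */
    wr.wr.rdma.rkey        = 0;   /* placeholder: peer memory key */
    /* With a connected queue pair 'qp', one would post it via:
     *   ibv_post_send(qp, &wr, &bad_wr);                          */

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}

The same verbs code runs over InfiniBand or RoCEv2; the Ethernet vs. IB
choice sits below the verbs layer.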

Regards,

Kerry Main
Kerry dot main at starkgaming dot com
