[Info-vax] Gen-Z, a new memory interconnect on the horizon.

Bill Gunshannon bill.gunshannon at gmail.com
Mon Oct 31 14:46:51 EDT 2016


On 10/31/16 10:41 AM, Kerry Main wrote:
>> -----Original Message-----
>> From: Info-vax [mailto:info-vax-bounces at rbnsn.com] On Behalf
>> Of Baldrick via Info-vax
>> Sent: 31-Oct-16 8:55 AM
>> To: info-vax at rbnsn.com
>> Cc: Baldrick <trickynic at gmail.com>
>> Subject: Re: [Info-vax] Gen-Z, a new memory interconnect on
>> the horizon.
>>
>> ...
>>
>> One of my clients uses reflective memory. They obtained the
>> drivers from the manufacturer VMIC (HP did facilitate this) and
>> ported from Alpha to Integrity. I'm not completely conversant
>> with its operation, but it allows a number of platforms to share
>> memory in application space. So the VMS system sees memory from
>> UNIX and M$ Windows, but there's no reason why VMS cannot see
>> VMS, of course. I'm not sure of the topology, but it's
>> exceptionally high speed, as it's doing real-time processing on a
>> number of systems (including VMS) to deliver the application.
>>
>> https://en.wikipedia.org/wiki/Reflective_memory
>>
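For anyone who hasn't touched it, the application-side view of
reflective memory is usually just a mapped window of address space:
you store into it locally and the hardware replicates the write to
every other node on the ring. A rough sketch of what that tends to
look like on the UNIX side - the device path, window size and offset
here are invented for illustration, the real names come from the
VMIC driver kit:

    /* Minimal sketch of application-space access to a reflective
     * memory window.  /dev/rfm0 and the 64 MB size are hypothetical. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define RFM_WINDOW_SIZE (64 * 1024 * 1024)   /* hypothetical window */

    int main(void)
    {
        int fd = open("/dev/rfm0", O_RDWR);      /* hypothetical device */
        if (fd < 0) { perror("open"); return 1; }

        /* Map the shared window into this process's address space. */
        volatile uint32_t *win = mmap(NULL, RFM_WINDOW_SIZE,
                                      PROT_READ | PROT_WRITE,
                                      MAP_SHARED, fd, 0);
        if (win == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

        /* A store here is replicated by the hardware to every node on
         * the reflective memory ring; no send/receive calls involved. */
        win[0] = 0xDEADBEEF;

        /* Other nodes simply poll (or take an interrupt) and read it. */
        printf("wrote %08x to offset 0\n", win[0]);

        munmap((void *)win, RFM_WINDOW_SIZE);
        close(fd);
        return 0;
    }

No protocol stack in the data path, which is where the speed comes
from.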
>
> OpenVMS has a solid history of dealing with shared memory
> technologies - even back to the VAX days. The 11/782 and 11/784
> were 2 or 4 VAX-11/780s, respectively, interconnected with MA780
> shared memory. I fondly remember installing a couple of these back
> in the day. They were a cabling nightmare, but worked very well.
> The OpenVMS Galaxy (Alpha only) technology was (is) way ahead of
> the competition. Galaxy provided the means to "dynamically" share
> CPUs via drag-n-drop and/or business rules between different OS
> instances and, at the same time, provided very high-speed
> communications for TCP/IP and cluster traffic via shared memory.
>
> Even though it's a bit older technology, reflective memory is
> just one example of how current compute models will be changing
> significantly in the future.
>
> The models today, and really since the late 80's, are basically
> many smaller systems distributed over Ethernet-based networks.
> This is often referred to as the "wild west" of the 80's and 90's,
> and in many cases it persists even today. There were good reasons
> for doing this, i.e. limited local memory, compute and interconnect
> technologies, and Ethernet was a cheap but effective way to address
> overall solution latency and data compute requirements.
>
> So what has changed?
>
> 1. VM sprawl has now become a much bigger issue than X86 sprawl.
> The biggest costs (by far) are not associated with physical HW,
> but rather licensing, managing and securing the OS instance
> itself. It's why so many are looking at container technologies
> like Docker.
>
> 2. While a distributed world is much better at local
> communications with end users and the Customer, IT management
> costs and security concerns are much higher in a highly
> distributed world than a centralized one.
>
> 3. The biggie technical change - how to move massive amounts of
> data closer to the CPU where it can be crunched, and so reduce the
> overall "solution latency". This is sometimes called "function
> shipping", i.e. the separation of logical and physical tiers, with
> each tier having its own dedicated server(s) communicating over
> LAN networks as a means to better address compute requirements.
> However, while CPU, memory and storage technologies have improved
> exponentially, LAN latency has not kept up. The TCP/IP stack,
> buffers, NICs, firewalls, router hops, load balancers etc. in a
> multi-tier compute model all contribute to LAN network latency
> which, in turn, significantly impacts overall solution latency.
>
> In relative terms, while today's CPU-to-cache, CPU-to-memory and
> CPU-to-local-flash latencies can be measured in the equivalent of
> seconds, minutes and days, local LAN network updates (writes) can
> be measured in months. Now think about those hundreds or thousands
> of small systems all doing updates (including HA replication) over
> a highly distributed local network. Think about the impact this
> has on limiting the speed of moving data closer to the CPUs where
> it needs to be.
>
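To put some figures on that analogy - ballpark, order-of-magnitude
numbers of mine, not Kerry's - here's a trivial program that rescales
typical latencies so a 1 ns L1 cache hit becomes one second:

    /* Rescale typical latencies (ballpark figures, not measurements
     * from the post) so a 1 ns L1 cache hit reads as 1 second. */
    #include <stdio.h>

    static void print_scaled(const char *what, double ns)
    {
        double s = ns;              /* 1 ns -> 1 "second" after scaling */
        if (s < 60.0)
            printf("%-28s -> %8.1f \"seconds\"\n", what, s);
        else if (s < 3600.0)
            printf("%-28s -> %8.1f \"minutes\"\n", what, s / 60.0);
        else if (s < 86400.0)
            printf("%-28s -> %8.1f \"hours\"\n", what, s / 3600.0);
        else
            printf("%-28s -> %8.1f \"days\"\n", what, s / 86400.0);
    }

    int main(void)
    {
        print_scaled("L1 cache hit (~1 ns)",            1.0);
        print_scaled("main memory ref (~100 ns)",     100.0);
        print_scaled("local flash read (~100 us)", 100000.0);
        print_scaled("LAN round trip (~0.5 ms)",   500000.0);
        return 0;
    }

On those figures the bare LAN round trip already lands in the "days"
range; add the firewall, router and load-balancer hops Kerry lists
and the "months" comparison above isn't far off.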
> Fast forward to today and the next 2-3+ years.
>
> GB-level drives are the new floppy drives. This is not hyperbole.
> A self-encrypting PC laptop/desktop 10TB 3.5" HDD with 256MB
> cache and a limited 5-year warranty is now available for USD $500
> (google "Seagate 10TB").
>
> Data that is required to be crunched to address competitive
> business requirements has grown exponentially. Much cheaper
> non-volatile memory, e.g. 3D XPoint (pronounced "crosspoint"), in
> the range of TBs, is expected to be available from Intel/Micron
> later in 2017. Other vendors are expected to have memory products
> available with similar features.
>
> Emerging high-speed local interconnects, much larger/faster
> storage and much larger non-volatile memory technologies have
> prompted a new way of strategic thinking about how to reduce
> overall solution latency and data compute requirements.
>
> As an example of a model that could be good for OpenVMS - with
> TBs of local non-volatile memory (VSI OpenVMS on HPE Blades
> supports up to 1.5TB today) and 32/64 CPUs in a small blade
> server, why not keep the App and DB layers logically separated,
> but physically deploy them on the same system/cluster? Every
> reference between the App and DB server instances would be a
> local cache/memory reference or, at worst, a local IO to flash
> storage. With common system disks (yes, this takes planning -
> remember, TBs of local non-volatile memory, so OpenVMS resides in
> memory), you would also make it easier to add systems to the
> cluster (select a new root, customize and boot the new blade).
>
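A toy illustration of that "logically separated, physically
co-located" idea - the names and structure are invented for the
example, nothing OpenVMS- or product-specific. The application tier
codes against one query interface; whether the DB tier answers via a
direct local call or a LAN round trip is purely a deployment choice:

    #include <stddef.h>
    #include <stdio.h>

    typedef int (*query_fn)(const char *sql, char *reply, size_t replylen);

    /* "DB tier" on the same system: a direct call, so the cost is a
     * cache/memory reference rather than a network round trip. */
    static int local_query(const char *sql, char *reply, size_t replylen)
    {
        snprintf(reply, replylen, "local result for: %s", sql);
        return 0;
    }

    /* Remote flavour: same signature, but this is where the connect,
     * send and wait-for-reply (and all the stack/firewall/router
     * latency discussed above) would live.  Stubbed out here. */
    static int remote_query(const char *sql, char *reply, size_t replylen)
    {
        snprintf(reply, replylen, "pretend network reply for: %s", sql);
        return 0;
    }

    /* The application tier only ever sees the interface. */
    static void app_tier(query_fn query)
    {
        char reply[128];
        if (query("SELECT 1", reply, sizeof reply) == 0)
            printf("%s\n", reply);
    }

    int main(void)
    {
        app_tier(local_query);    /* co-located deployment */
        app_tier(remote_query);   /* classic multi-tier deployment */
        return 0;
    }

The logical separation survives either way; only the latency of each
call changes.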
> Yes, you need to do your homework and more up-front security,
> capacity planning and testing, but these are proactive best
> practices that OpenVMS customers have been doing for decades.
> Heck, mainframes have been doing this forever, and they have a
> pretty good record for capacity and security planning.
>
> Background references:
> http://bit.ly/2ergVrq
>
> Original (likely will wrap):
https://www.nextplatform.com/2016/10/17/opening-server-bus-coherent-acceleration/
>
> http://bit.ly/2f5aRpF
>
> Original (likely will wrap):
https://www.nextplatform.com/2016/10/12/raising-standard-storage-memory-fabrics/
>
> Times are a changing ...

Wasn't shared memory how array processors communicated?  I remember
them working on VAX and Pr1me and I think the one I worked with at
Martin Marietta could even be used on the PDP-11.

bill




