[Info-vax] What would you miss if DECnet got the chop? Was: "bad select 38" (OpenSSL on VMS)

Kerry Main kemain.nospam at gmail.com
Fri Oct 7 11:31:31 EDT 2016


> -----Original Message-----
> From: Info-vax [mailto:info-vax-bounces at rbnsn.com] On Behalf
> Of Stephen Hoffman via Info-vax
> Sent: 07-Oct-16 10:40 AM
> To: info-vax at rbnsn.com
> Cc: Stephen Hoffman <seaohveh at hoffmanlabs.invalid>
> Subject: Re: [Info-vax] What would you miss if DECnet got the
> chop? Was: "bad select 38" (OpenSSL on VMS)
> 
> On 2016-10-07 06:11:14 +0000, Michael Moroney said:
> 

[snip]

> >
> >> Oddly, the rest of the universe gets by with ssh, netcat, file
> >> shares and related.
> >
> > Yes and the rest of the world gets by without shared-everything
> > clusters
> 
> Because for folks that need clusters, shared-everything clusters
> don't scale.
> 

That is 1990s wild-west distributed-systems thinking.

Of course, you could point to a Google or Amazon as an exception, but technology has improved significantly since the 1990s and even the early 2000s.

We now have 64-core (2.5 GHz) blade servers with terabytes of local physical memory. Next year, this memory will be non-volatile. We have extremely fast SAN-based flash SSDs.

So how many applications out there need more than 96-150 64-core servers in a multi-site cluster, each with, say, 1.5 TB of local physical memory?
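To put rough aggregate numbers on that configuration (using the figures above, which are this post's assumptions, not any vendor's specs), a quick back-of-envelope calculation:

```python
# Back-of-envelope aggregate capacity for the hypothetical cluster
# sizes discussed above: 96 to 150 nodes, 64 cores and 1.5 TB each.
def cluster_capacity(nodes, cores_per_node=64, tb_per_node=1.5):
    """Return (total cores, total memory in TB) for a cluster."""
    return nodes * cores_per_node, nodes * tb_per_node

for nodes in (96, 150):
    cores, mem_tb = cluster_capacity(nodes)
    print(f"{nodes} nodes -> {cores} cores, {mem_tb:.0f} TB memory")
```

That works out to roughly 6,000-10,000 cores and 144-225 TB of memory in one multi-site cluster, which is the point: very few applications need more than that.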

> > and M$ PCs filled with bloatware that have to be rebooted every
> > few days.
> 
> Management and marketing folks need to understand and look at the
> problems and benefits of competing platforms to their customers, and
> to be able to clearly state a case for purchasing your products.
> Not that I particularly think that Microsoft Windows is competing
> with OpenVMS, though Windows Server certainly is.
> 

OpenVMS is not competing in the desktop space, so Windows Server is the appropriate Microsoft product to compare against OpenVMS on x86-64.

The bigger competition for OpenVMS on x86-64 is going to be Linux.

IMHO, Microsoft is following the old DEC strategy of pricing Windows Server out of the picture.

> Development folks need to look at those other platforms for their
> technical negatives, as well as detailed examinations for ideas that
> are useful and clever and worth reuse and improvements.  This as
> part of creating future versions of the local organization's
> products, updating user interfaces and tools, and related.  There
> are a number of features of Microsoft Windows that are vastly
> superior to OpenVMS, and that your customers would greatly
> appreciate having access to.
> 
> Development folks also need to look for the negatives in their own
> products.  As an example of this, consider OpenVMS clustering.
> There's a wide-open SCS network transport.  There's no
> authentication.  There's no distributed scheduling, no job control,
> no distributed logging.  There's no mechanism for bridging clusters.
> The file-based management and set-up — what are we up to, ~20 files,
> manually configured, moved back to the system disks for patches and
> upgrades — is just hilariously bad.  There's no easy mechanism to
> boot into a cluster without manual intervention, and OpenVMS lacks
> any sort of profile mechanism.  Programming APIs aren't there,
> either — there's ICC and the DLM and add-on bits for LDAP and the
> various network stacks and VCI, but those all tend to be rather
> low-level interfaces, require a fair amount of knowledge and
> experience to work with, are fairly complex to properly set up for
> failover or DT configurations, and many of the APIs tend to be
> somewhat less than secure.  There are clustering-related scaling
> issues for folks that might want larger numbers of hosts, too —
> clusters larger than even the theoretical limit of OpenVMS clusters
> are not particularly rare in the industry, and eventually some
> OpenVMS folks — if the license prices, features and support work out
> for customers and for VSI — will be looking for larger
> configurations.  All of which gets into discussions of Apache Mesos,
> of work around AvailMan, OpenVMS Management Station and otherwise,
> of scaling the interconnects and the bandwidth, and a whole host of
> other cluster-related topics.  Doing patch management and cluster
> rolling upgrades gets interesting, too — that all needs to be much
> easier.
> 

I agree there is plenty of room for improvement.

A biggie I would like to see is Enterprise Directory / LDAP used as an SSO mechanism between multiple clusters and standalone systems, i.e. a "cluster of clusters" concept. Think about how AD provides not only SSO for distributed Windows environments, but also resource control (group policies, etc.).
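As a rough illustration of the idea (a hypothetical DIT layout for the sake of example, not an actual Enterprise Directory or AD schema): one directory entry authenticates a user for any cluster, while per-cluster authorization hangs off group entries.

```ldif
# Hypothetical LDIF sketch: a single user entry serves SSO for every
# cluster; cluster-specific rights are modeled as group memberships.
dn: uid=jsmith,ou=users,dc=example,dc=com
objectClass: inetOrgPerson
uid: jsmith
cn: J. Smith
userPassword: {SSHA}placeholder-hash

dn: cn=clusterA-operators,ou=groups,dc=example,dc=com
objectClass: groupOfNames
member: uid=jsmith,ou=users,dc=example,dc=com
```

Each cluster would bind against the same directory for authentication and read its own group subtree for authorization, which is the "cluster of clusters" behavior AD delivers for Windows shops.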

While I agree that increasing the number of hosts should certainly be a future goal (this will likely require RoCEv2 for higher DLM scaling), next-generation systems are not going to be like the 1990s and early 2000s, where every department had small 2-4 core servers with 8-16 GB of memory, all connected by legacy LAN technologies with very high latency.
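Since DLM scaling keeps coming up: part of why the DLM is considered a low-level API is that callers work directly with the six VMS lock modes (NL, CR, CW, PR, PW, EX) and their compatibility matrix. A conceptual sketch of those semantics in Python (an illustration only, not the actual SYS$ENQ/SYS$DEQ system-service interface):

```python
# VMS DLM lock modes, weakest to strongest: NL (null), CR (concurrent
# read), CW (concurrent write), PR (protected read), PW (protected
# write), EX (exclusive).  For each mode, the set of already-granted
# modes it is compatible with (the standard DLM compatibility matrix).
COMPAT = {
    "NL": {"NL", "CR", "CW", "PR", "PW", "EX"},
    "CR": {"NL", "CR", "CW", "PR", "PW"},
    "CW": {"NL", "CR", "CW"},
    "PR": {"NL", "CR", "PR"},
    "PW": {"NL", "CR"},
    "EX": {"NL"},
}

def can_grant(requested, granted_modes):
    """A new request is grantable only if it is compatible with every
    mode already granted on the resource."""
    return all(g in COMPAT[requested] for g in granted_modes)

print(can_grant("PR", {"CR", "PR"}))  # concurrent readers: True
print(can_grant("EX", {"CR"}))        # exclusive vs. a reader: False
```

Every DLM client has to reason at this level (plus lock conversions, blocking ASTs, and lock value blocks), which is exactly the knowledge-and-experience burden the quoted post describes.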

Next-generation server designs will be much fewer, much bigger servers with terabytes of local memory, much bigger local disks (Seagate now sells 10 TB disks for USD $600), and much faster local interconnects. I see this having a major impact on next-generation systems design, with network-tier consolidation happening between the application and database server tiers.

> OpenVMS clusters are great and work exceptionally well for tasks
> that a number of OpenVMS customers have — when the customers can
> afford clustering — and parts of clustering are exceedingly elegant
> and well-integrated into the OpenVMS environment.  Other parts of
> clustering and particularly configuration and management parts are a
> bit of a dog's breakfast.
> 

What is often overlooked is that one of the biggest benefits of a shared-disk cluster like OpenVMS is that node management, data consistency, HA, DR, and DT are all handled at the OS level, while in a shared-nothing architecture they all need to be handled at the application level. Multiple shared-nothing applications in the same company will likely do these things differently from one another.

Hence, in a shared-disk (shared-everything) strategy, application developers can focus on their code quality and optimizations and not have to embed all of these other considerations in their code. Yes, there are some cluster-aware details OpenVMS application folks need to know, but these are minor compared to the infrastructure knowledge that shared-nothing, distributed developers need to be aware of and code into their applications.
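To make the comparison concrete: in a shared-nothing design, even routing a read to the right node is application code. A minimal sketch of the kind of partitioning logic shared-nothing developers carry around (hypothetical; real systems layer rebalancing, replication, and failure handling on top of this):

```python
import hashlib

# In a shared-nothing cluster, each node owns a disjoint slice of the
# data, so the application must decide which node holds a given key.
NODES = ["node-a", "node-b", "node-c"]

def owner(key, nodes=NODES):
    """Hash-partition a key onto one node.  No rebalancing, no
    replica failover -- real applications must handle those too."""
    digest = hashlib.sha256(key.encode()).digest()
    return nodes[int.from_bytes(digest[:8], "big") % len(nodes)]

# In a shared-disk cluster the OS/DLM layer lets any node serve any
# key, so none of this routing code lives in the application.
print(owner("customer:1042"))
```

Note that changing the node count reshuffles key ownership, which is why shared-nothing applications also end up owning data-migration logic the shared-disk model never exposes to them.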

When comparing the performance of a shared-disk (OpenVMS, Linux/GFS, z/OS) cluster versus a shared-nothing (Windows, Linux, *nix, NonStop) cluster database architecture, see this third-party white paper:

http://bit.ly/2dScx9k
“Comparing shared nothing and shared disk in benchmarks is analogous to comparing a dragster and a Porsche. The dragster, like the hand tuned shared nothing database, will beat the Porsche in a straight quarter mile race. However, the Porsche, like a shared disk database, will easily beat the dragster on regular roads. If your selected benchmark is a quarter mile straightaway that tests all out speed, like Sysbench, a shared nothing database will win. However, shared disk will perform better in real world environments.”

[snip]


Regards,

Kerry Main
Kerry dot main at starkgaming dot com







