[Info-vax] Distributed Applications, Hashgraph, Automation

Stephen Hoffman seaohveh at hoffmanlabs.invalid
Wed Feb 21 17:23:34 EST 2018


On 2018-02-21 21:21:17 +0000, DaveFroble said:

> As many are aware, I don't get out much, so I have no idea what 
> percentage of users would fit the description of needing varying 
> resources.  My experience is more with situations where the 
> requirements are more fixed, basically the same every day.

A number of sites have seasonal activities and/or peak seasons, for 
any of various business-related reasons.  Ask'm what their upgrade 
window is, and when their systems are most heavily loaded.

Some other sites have incremental growth, with capacity plots out six 
months or longer; plots predicting when their requirements will 
outgrow their current hardware.

But varying loads can also include operational activities: running 
backups, work that makes heavy use of encryption or compression, 
running weekly or monthly reports, optimizing a database or local 
storage, or whatever else, too.

> Got any numbers showing the distribution of users based upon varying, 
> or non-varying requirements?

No, I don't.

What I do see are a lot of folks with lots of spare cycles on their 
OpenVMS systems; with larger server configurations than they need for 
their typical load.  Existing supported server hardware and software 
has forced many (most?) OpenVMS folks into over-building and 
over-provisioning their data centers.

We're all also used to the effort involved in spinning up a new OpenVMS 
system instance, which gets back to integrating the pieces and parts 
and core services into the base distro, integrating IP networking, 
provisioning, streamlining the patch process, sandboxing and app 
isolation, and other assorted details.

OpenVMS is headed into an era when that over-provisioning won't be as 
necessary, as support for x86-64 and for operating as a guest becomes 
available; an era where spinning up an instance can and should be a 
whole lot easier and faster; more competitive.  Spin up a cluster 
member for running backups or whatever.  Or for dealing with a 
surprise increase in load, whether due to a data center failure and 
fail-over elsewhere in your organization, or due to unexpectedly 
increased app loads secondary to any number of potential reasons.  
Right now, over-provisioning is often seen as easier than adapting to 
a changing load, and cheaper than (for instance) clustering.  But how 
long is that approach going to remain competitive?  For some folks 
with small seasonal variations, probably quite a while.  For other 
folks with wider variations in app activity, or with the expectation 
of app or server or site fail-overs, maybe they get interested?  It's 
really quite nice to be able to spin up an instance, or a dozen 
instances, for (for example) software testing, too.
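
To make that concrete, here's a rough sketch of the sort of 
scale-up/scale-down automation I mean.  It's Python against an 
imaginary provisioning CLI; the provision-guest and decommission-guest 
commands, the openvms-app template, and the APPCLU cluster name are 
all placeholders, not VSI or OpenVMS tooling.  The point is the shape 
of the workflow, not the specific commands.

    #!/usr/bin/env python3
    # Hypothetical sketch: grow or shrink an app tier based on observed
    # load.  The "provision-guest" and "decommission-guest" commands are
    # placeholders, not real VSI/OpenVMS tools; substitute whatever your
    # hypervisor or cloud layer actually provides.

    import subprocess

    HIGH_WATER = 0.80   # add an instance above 80% average utilization
    LOW_WATER = 0.30    # retire an instance below 30% average utilization

    def current_utilization() -> float:
        """Return average CPU utilization across the app tier (0.0-1.0).
        Placeholder: wire this to your own monitoring."""
        raise NotImplementedError

    def scale(instances: list[str]) -> None:
        load = current_utilization()
        if load > HIGH_WATER:
            # Placeholder: boot a new guest and have it join the cluster.
            subprocess.run(["provision-guest", "--template", "openvms-app",
                            "--join-cluster", "APPCLU"], check=True)
        elif load < LOW_WATER and len(instances) > 1:
            # Placeholder: drain and retire the most recent instance.
            subprocess.run(["decommission-guest", instances[-1]], check=True)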

Pricing aside — and OpenVMS Alpha diverges from past practices here, 
and diverges in the right direction — cluster rolling upgrades and 
clustering are still powerful constructs for end-users and for 
developers.  This gets back to making details such as the DLM and 
deployments easier to use, as well as other enhancements that've been 
mentioned in various threads.
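
As a purely illustrative sketch of what "easier to use" might mean for 
the DLM, here's a hypothetical high-level lock wrapper in Python.  The 
enqueue/dequeue calls are imaginary stand-ins, not the $ENQ/$DEQ system 
services, and this isn't anything VSI has proposed; it's just the 
flavor of interface being suggested.

    # Illustrative only: a hypothetical high-level wrapper around a
    # distributed lock manager.  The stubs below are imaginary stand-ins,
    # not the OpenVMS $ENQ/$DEQ system services.

    from contextlib import contextmanager

    def dlm_enqueue(resource: str, mode: str) -> int:
        """Stand-in for a real DLM binding: acquire 'resource' in 'mode'."""
        raise NotImplementedError("wire this to your lock manager")

    def dlm_dequeue(lock_id: int) -> None:
        """Stand-in for a real DLM binding: release the lock."""
        raise NotImplementedError

    @contextmanager
    def cluster_lock(resource: str, mode: str = "EX"):
        """Hold a cluster-wide lock on 'resource' for the enclosing block."""
        lock_id = dlm_enqueue(resource, mode)
        try:
            yield lock_id
        finally:
            dlm_dequeue(lock_id)

    # e.g. serialize a monthly report run across cluster members:
    #   with cluster_lock("MONTHLY_REPORT"):
    #       run_monthly_report()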

I'm ignoring HPE iCAP support here, as that capability hasn't seemed 
particularly popular among folks.
http://h41379.www4.hpe.com/openvms/journal/v13/troubleshooting_icap.html

Collecting telemetry — opt-in, etc — would help VSI figure some of this 
out, too.
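
Something along these lines, say; the opt-in flag file, the endpoint, 
and the collected fields here are all placeholders rather than anything 
VSI has specified, and nothing gets sent unless the site explicitly 
opts in.

    # Hypothetical sketch of opt-in telemetry: silent unless the site has
    # explicitly enabled it.  The flag file location, endpoint, and
    # collected fields are assumptions, not a VSI design.

    import json
    import os
    import platform
    import urllib.request

    OPT_IN_FLAG = "/etc/telemetry.opt-in"           # assumed location
    ENDPOINT = "https://telemetry.example.invalid"  # placeholder endpoint

    def report() -> None:
        if not os.path.exists(OPT_IN_FLAG):
            return  # stay silent unless the site opted in
        payload = {
            "os": platform.system(),
            "version": platform.release(),
            "cpu_count": os.cpu_count(),
        }
        req = urllib.request.Request(
            ENDPOINT,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req, timeout=10)

    if __name__ == "__main__":
        report()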


-- 
Pure Personal Opinion | HoffmanLabs LLC 



