[Info-vax] Moonshot

Stephen Hoffman seaohveh at hoffmanlabs.invalid
Tue Jan 6 11:52:30 EST 2015


On 2015-01-06 09:17:21 +0000, Matthew H McKenzie said:

The following reply is slanted toward OpenVMS, but then this is the 
comp.os.vms newsgroup...

> This would be the direction most of the big players are going in.

For the folks that are dedicating small hunks of their infrastructure 
to specific tasks — private clouds, certainly.  It's density.  For the 
folks that are running big apps, I'd expect to see them more interested 
in HP Apollo boxes than in HP Moonshot boxes.  The smaller folks I'm 
seeing consolidate onto the denser boxes where they can, and usually 
incrementally.  The larger sites are consolidating into rack- or 
DC-scale computing, and/or adding racks and pods.

> The AWS EC2 "C4" instances are running "bespoke" haswell CPU's.

Apple has used bespoke Intel chips, too.  When you're operating at the 
scale of Amazon — they were massively bigger than the next dozen or so 
cloud providers, combined — or at the scale of Apple, you can get and 
can create customized products.  
<http://readwrite.com/2013/08/21/gartner-aws-now-5-times-the-size-of-other-cloud-vendors-combined#!> 


For the larger players, this also means you can get boxes and designs 
far more customized than Moonshot.  Facebook is pushing this with the 
Open Compute project <http://www.opencompute.org>, and Google uses 
their own custom hardware.   Which means that Moonshot ends up aimed at 
somewhat "smaller" customers, looking to get density, and looking to 
avoid their own bespoke rollouts.  HP is also obviously developing the 
Apollo and Moonshot products in an effort to stay ahead of SuperMicro 
and other providers, for those customers that aren't doing bespoke 
hardware.

Then there's the software to run on this hardware, and OpenVMS just 
isn't there.  Not as a ginormous-core SMP box.  Maybe as a cluster, but 
you won't find a cluster with a thousand members, and a thousand hosts 
can be a fairly small cloud.

> i.e they are a custom order and sold in volume to a single customer.

Or until Intel decides that what a customer needs is more generally 
applicable, and starts selling it elsewhere.

The entire market for ARM is custom work, and the SBSA is (hopefully) 
going to be the foundation of wider server software availability; to 
allow a way for "arbitrary" ARM software to at least be able to boot 
and run and figure out what features and options are available on a 
particular ARM configuration.
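The discovery half of that can be sketched in miniature: on an ARM 
Linux system, the kernel advertises CPU feature flags on the "Features" 
line of /proc/cpuinfo.  A minimal sketch, assuming that Linux-style 
cpuinfo format (the sample text and the cpu_features helper are 
illustrations, not any standard API):

```python
# Sketch: discovering CPU features on an ARM Linux box by parsing
# /proc/cpuinfo.  The sample below is a typical ARMv8 entry; on a
# real system you'd read the file instead of using the sample.
SAMPLE_CPUINFO = """\
processor       : 0
model name      : ARMv8 Processor rev 3 (v8l)
Features        : fp asimd evtstrm aes pmull sha1 sha2 crc32
"""

def cpu_features(cpuinfo_text):
    """Return the set of feature flags advertised in a cpuinfo dump."""
    for line in cpuinfo_text.splitlines():
        key, _, value = line.partition(":")
        if key.strip() == "Features":
            return set(value.split())
    return set()

features = cpu_features(SAMPLE_CPUINFO)
print("aes" in features)   # hardware AES is advertised on this sample
```

Software that keys off flags like these can at least boot and then 
decide what to enable, which is roughly the portability story SBSA is 
meant to make routine.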

> - One bus replaces all PSU's,  these are not so much blades but board 
> level servers. You need the chassis to run them.
> - Ultimately they will be maintained by robots, by pull and swap.

I'd expect to see things head in the other direction, here.  Robotics 
involves a whole lot of hardware, and — recalling those old RWZ 
magneto-optical jukebox designs, and the various Ultrium libraries — 
robotic systems tend to be complex and expensive.  The last 
warehouse-scale deployment I visited involved over a million dollars 
of hardware for the robot and its rails and its control systems; it 
wasn't a very big warehouse, and that robot wasn't capable of doing 
drawer-level service work.  Put another way, I'd expect management to 
happen by adding or removing chassis or racks or pods, and 
particularly with redundancy.  Leaving the carcasses in the racks.  
The cost of an expensive robot can also buy a whole lot of redundant 
server and storage hardware, after all.

The cartridges and cabinets and drawers are all going to need 
substantial re-work, before the occasionally-suggested UAV-based repair 
scheme might, um, fly.  But at the incremental costs of the necessary 
spares versus the cost of the repairs, why bother?

> - External storage is drifting towards similar arrangements with large 
> raid arrays of SSD's hot swapped like reactor control rods.

Yes.  Well, other than that the ancient and baroque rotating-rust I/O 
bus interface that's still used for SSDs is (slowly) going away.  
Which means there'll just be differently-connected vendor-specific 
and custom sleds for whatever storage is involved, though.

> - Still attractive to the MaaS enterprise (nothing beats being in the 
> same room as your hardware).

There are way too many aaSs in this business.  For folks that want 
their own metal — their own hardware or want a hosting provider that 
presents hardware — yes, there are options.  There are also software 
options that'll run on that stuff, too, and some folks are interested 
in running "bare" <http://www.returninfinity.com/baremetal.html> 
<http://www.openmirage.org>.   This custom-kernel unikernel bare-metal 
approach looks a whole lot like VAX ELN, too.  But I digress.  Right 
now, VMS isn't one of the operating system options that'll work very 
well here.  This is where my comments on far better LDAP integration, 
and on supporting profiles and application deployments, all arise.  
These features are helpful for large-scale mass deployments and for 
individual and small deployments alike, and VMS is lacking here.

> The scale is there, and the economies are obvious for power consumption 
> and real estate.

For those that need that.  But there's a divergence happening here, 
where the hardware and the cores are outstripping what many folks 
need.  For those that like their own metal where they can touch it 
(TaaS™), the available servers can be massively more capable than 
necessary.  More than a few of the available server cores are idle, 
in most places.  Sure, there'll be any number of folks looking to 
consolidate onto denser boxes.  Over time.  Incrementally.

> How moonshot would work for parallel processing has yet to be seen, but 
> I expect they would scale like leds on a bar graph display.
> (IBM- aware readers might think of CICSPLEX regions rolling in on demand.)

OpenVMS folks would think of OpenVMS Clusters and OpenVMS Galaxy, here.

But there's also a pervasive idea that application software and 
operating systems can scale arbitrarily, and I'm just not seeing that.  
Sure, it's possible that some tasks can, but a whole lot of the 
application software and operating system code out there would require 
potentially substantial rework to migrate to and operate efficiently 
on boxes with large numbers of cores; with parallelism.  And this 
assumes folks are going to bother to migrate the code at all.
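That ceiling has a standard back-of-the-envelope form, Amdahl's law: 
if only a fraction of the code parallelizes, piling on cores stops 
paying off quickly.  A minimal sketch (the 95% figure and the 180-core 
count are illustrative, not measurements):

```python
# Amdahl's law: if a fraction p of a program parallelizes perfectly,
# the best possible speedup on n cores is 1 / ((1 - p) + p / n).
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the run time parallelized, 180 cores buy only
# about 18x, against a hard asymptote of 20x no matter the core count.
print(round(amdahl_speedup(0.95, 180), 1))   # → 18.1
```

Which is why "just add cores" tends to turn into "first rework the 
serial parts", and why much of that rework never gets funded.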

> ARM has smarts for specific functions too (mod, for instance ?)
> They must be as good as Prestonia era servers, without the weight, 
> heat, noise, and power draw..

Yes.  In mobile, for instance.  Or small servers, and cartridge 
servers.  ARM is also customizable, which is a strength for some apps 
and environments, and — in the absence of SBSA — is a problem for folks 
that want to use more generic operating systems.

Intel is aiming at this space with their Atom designs, if we're not 
discussing ancient NetBurst-class x86-64 chips like Prestonia.

> So really, web sites would not really have to be static, just not too 
> ambitious. Groupware for SME's.

Unless the groupware is excessively stupid, incremental scaling is 
possible via directory services and multiple boxes.  Sure, that might 
be a part-populated Moonshot box, but — unless the SME is tight on 
space or storage — I'd expect to see a number of SMEs install their 
servers incrementally, and then migrate to a Moonshot or equivalent 
when the load gets large enough.  Put another way, I'd expect to see 
more Moonshot boxes installed as upgrade-replacements than as new 
"greenfield" deployments, though there will be new installations.
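One common mechanism behind that sort of box-by-box scaling is 
consistent hashing: the directory maps each key to a server, and 
adding a server relocates only a minority of the keys.  A minimal 
sketch (the box names, key names, and Ring class are made up for 
illustration):

```python
# Minimal consistent-hash ring: capacity can grow one box at a time,
# and each added box steals only its share of the existing keys.
import hashlib
from bisect import bisect

def _h(s):
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes=(), vnodes=64):
        self.vnodes = vnodes
        self._points = []              # sorted (hash, node) pairs
        for n in nodes:
            self.add(n)

    def add(self, node):
        # Each box gets `vnodes` points on the ring to even out load.
        for i in range(self.vnodes):
            self._points.append((_h(f"{node}#{i}"), node))
        self._points.sort()

    def lookup(self, key):
        hashes = [p[0] for p in self._points]
        i = bisect(hashes, _h(key)) % len(self._points)
        return self._points[i][1]

ring = Ring(["box1", "box2", "box3"])
before = {k: ring.lookup(k) for k in (f"user{i}" for i in range(1000))}
ring.add("box4")                       # the incremental capacity add
moved = sum(1 for k, v in before.items() if ring.lookup(k) != v)
print(moved < 500)                     # only a minority of keys relocate
```

With a naive `hash(key) % box_count` scheme, nearly every key would 
relocate on each add; the ring keeps the disruption proportional to 
the new box's share.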

There is still the need for servers and services behind that web front 
end, dealing with the transactions and the databases, too.  Unless 
you're just serving up static data.  At scale, keeping that data 
consistent gets... entertaining.

> Any work that can be broken into work units, well the CPU's could be 
> changed to application specific dies for crypto currency say, or 
> folding/SETI.

Which, looking around, isn't a huge part of the work many OpenVMS folks 
use their computers for.  Parallelism is great both in theory and for 
specific applications.  Unfortunately, adding parallelism can involve 
re-working more than a little of the existing application code, and the 
return on that investment and that effort may or may not be present.  
VMS doesn't really have much support for parallelism beyond KP threads 
or pthreads, either; there's no grid management, no distributed 
scheduling, and no OpenCL or CUDA support for the folks using GPUs as 
compute engines.
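The work-unit decomposition being discussed can be sketched with 
Python's standard library pool (the sum-of-squares kernel is a 
stand-in for real work; a CPU-bound workload would want a process pool 
or native code rather than threads):

```python
# Sketch of breaking a job into work units and farming them out.
from concurrent.futures import ThreadPoolExecutor

def work_unit(chunk):
    """Stand-in compute kernel: sum of squares over one chunk."""
    return sum(x * x for x in chunk)

def run(data, workers=4, chunk_size=250):
    """Split `data` into fixed-size work units and combine the results."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(work_unit, chunks))

print(run(list(range(1000))) == sum(x * x for x in range(1000)))  # True
```

The easy part is the scatter/gather scaffolding above; the hard part, 
per the paragraph, is carving a real application into independent 
units in the first place.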


> 
> 
> "Stephen Hoffman" <seaohveh at hoffmanlabs.invalid> wrote in message 
> news:m0k8he$bcd$1 at dont-email.me...
>> On 2014-10-02 18:43:07 +0000, johnwallace4 at yahoo.co.uk said:
>> 
>>> Hoff wrote:
>>> "Moonshot is good for some tasks, such as serving static web content, 
>>> maybe even mucking about with video transcoding (though I'd be looking 
>>> at GPUs there, too), and such."
>>> 
>>> Patience, young man.
>> 
>> I'm aware of the (published) cartridge plans here, having sat in more 
>> than a few HP presentations on this and related topics.
>> 
>> 
>> --
>> Pure Personal Opinion | HoffmanLabs LLC


-- 
Pure Personal Opinion | HoffmanLabs LLC



