[Info-vax] VMS QuickSpecs

Carl Friedberg frida.fried at gmail.com
Mon Aug 17 11:29:56 EDT 2015


Rotating-rust  LOL

On Mon, Aug 17, 2015 at 10:12 AM, Stephen Hoffman via Info-vax
<info-vax at rbnsn.com> wrote:

> On 2015-08-17 12:15:17 +0000, johnson.eric at gmail.com said:
>
>> On Friday, August 14, 2015 at 8:18:43 AM UTC-4, Dirk Munk wrote:
>>
>>> The Ethernet protocol underlying FCoE is far less robust than FC, which
>>> is why FCoE didn't make it; iSCSI adds the overhead and latency of the
>>> IP stack, and in most situations we don't need it.
>>>
>>> FC is still technically superior to anything Ethernet can offer at the
>>> moment.
>>>
>>
>> Rather than argue in a traditional fashion, I'm curious to see which of
>> the following statements you'd agree with.
>>
>> a) There are some storage problems that only FC is equipped to deal with
>>
>> b) Ethernet-based solutions can be appropriate for smaller domains
>>
>> c) In general, the Ethernet solutions would be called "good enough"
>>
>> d) The Ethernet solutions will have a lower upfront cost than their FC
>> counterparts
>>
>> e) Providers of Ethernet-based solutions will grow at a faster rate than
>> FC providers
>>
>> f) In five years' time, FC will be even more of a niche product, and
>> Ethernet-based solutions will be the dominant commodity choice for
>> everyone.
>>
>> g) In five years' time, the set of problems for which (a) is true will
>> have shrunk
>>
>
>
> Here's some data on pricing...  An Intel dual-port 10GbE PCIe NIC is
> ~$220.  An Intel single-port 40GbE NIC is ~$480.  (Those are capable of
> various protocol offloads, as well.)  For comparison, an HP-branded
> dual-port 8Gb FC HBA is ~$456, and an ATTO 16Gb FC HBA is on sale for
> ~$1625.  Those are street prices, quantity one, right now, via Amazon.
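>
> Rough per-gigabit arithmetic from those numbers, assuming the ATTO part
> is single-port: the dual-port 10GbE NIC works out to ~$220 / 20 Gb, or
> ~$11 per Gb of port bandwidth; the 40GbE NIC to ~$480 / 40 Gb, or ~$12
> per Gb; the dual-port 8Gb FC HBA to ~$456 / 16 Gb, or ~$29 per Gb; and
> the 16Gb FC HBA to ~$1625 / 16 Gb, or ~$102 per Gb.  Call it a factor
> of roughly three to ten, in Ethernet's favor.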
>
> Next-iteration predictions...  The formal 32Gb FC prediction was ~2016,
> and the 400GbE standard (IEEE P802.3bs) is predicted for ~2017, per a
> working group timeline.
>
> Here's some opinion...  Outboard FC is increasingly squeezed between
> inboard non-volatile storage, outboard Ethernet where the low end gets up
> to 40GbE, and outboard InfiniBand.  That's before any discussions here
> around adding support to OpenVMS FC for what Ethernet can already do
> (IP), before the cost of getting FC HBAs and FC cabling replicated
> everywhere, and before the savings from having one set of wiring and
> devices rather than two parallel sets.
>
> <http://www.intel.com/content/dam/www/public/us/en/documents/product-briefs/ethernet-xl710-brief.pdf>
>
> <https://en.wikipedia.org/wiki/3D_XPoint>
>
> As for what outboard storage will have to contend with, block-addressable
> flash is available now, and Intel is claiming 2016 for volume 3D XPoint
> production.  For outboard storage, start stacking up the 16Gb FC HBAs or
> the 40GbE NICs, or 100GbE NICs when and where you can get those, to meet
> latency and bandwidth requirements, and where you have the switch ports
> for it.  All of the major switch vendors have had 100GbE ports available
> for a while, too.  This is to keep up with faster storage, and to remain
> semi-competitive with inboard non-volatile storage and with designs based
> on redundant arrays of servers: in OpenVMS terms, think host-based volume
> shadowing (HBVS, RAID-1, mirroring) over three-node clusters
> interconnected via 40GbE or 100GbE NICs and using 3D XPoint locally, with
> no outboard storage required.
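>
> For ballpark bandwidth, assuming full line rate: a 16Gb FC port delivers
> roughly 1.6 GB/s of payload, a 40GbE port roughly 5 GB/s, and a 100GbE
> port roughly 12.5 GB/s.  A single current PCIe flash device can already
> sustain a couple of GB/s of reads, so one fast device can saturate a
> 16Gb FC port all by itself.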
>
> Then briefly ponder what byte-addressable non-volatile storage (akin to
> 3D XPoint) means for the minimally-available 48-bit physical address
> space on most x86-64 boxes (2^48 bytes is 256 TiB), and for the baroque
> block-addressable outboard storage designs held over from the
> rotating-rust era.  Also ponder what byte-addressability means for the
> present HBVS "disk" abstraction.  Maybe a switch to RDMA via Ethernet or
> InfiniBand.  Then consider what's involved for more than a few
> applications, if they no longer have to use $qio or $io_perform to access
> non-volatile storage.  Or, for that matter, potentially addressing and
> executing code directly out of non-volatile memory, with RAM caching.
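>
> As a rough sketch of that last point, in C on OpenVMS (the device name,
> the block number, and map_nonvolatile_region() are hypothetical
> placeholders, not a real API):
>
>     #include <descrip.h>   /* $DESCRIPTOR */
>     #include <iodef.h>     /* IO$_READVBLK */
>     #include <starlet.h>   /* sys$assign(), sys$qiow() */
>
>     /* Hypothetical: however a byte-addressable non-volatile region
>        would get mapped into the process address space. */
>     extern char *map_nonvolatile_region(void);
>
>     int main(void)
>     {
>         $DESCRIPTOR(devnam, "DKA0:");   /* hypothetical device */
>         unsigned short chan;
>         unsigned short iosb[4];
>         static char blockbuf[512];
>
>         /* Today: fetching even one byte means a $qio down the full
>            I/O path, at block granularity. */
>         sys$assign(&devnam, &chan, 0, 0);
>         sys$qiow(0, chan, IO$_READVBLK, iosb, 0, 0,
>                  blockbuf, sizeof blockbuf,
>                  42,                    /* starting virtual block */
>                  0, 0, 0);
>         char today = blockbuf[100];     /* the one byte we wanted */
>
>         /* Tomorrow: with byte-addressable non-volatile memory mapped
>            in, that whole path collapses to an ordinary load. */
>         char *nvm = map_nonvolatile_region();
>         char tomorrow = nvm[(42 - 1) * 512 + 100];  /* same byte
>            (VBNs are 1-based), assuming a flat mapping; no $qio */
>
>         return today == tomorrow;
>     }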
>
> Can't really see the point of implementing IP over FC, given where things
> certainly seem headed.  If that's not already obvious.
>
>
>
> --
> Pure Personal Opinion | HoffmanLabs LLC
>


