[Info-vax] VMS QuickSpecs

Dirk Munk munk at home.nl
Mon Aug 17 15:50:04 EDT 2015


johnson.eric at gmail.com wrote:
> On Friday, August 14, 2015 at 8:18:43 AM UTC-4, Dirk Munk wrote:
>
>> The ethernet protocol for FCoE is far less robust than FC, FCoE didn't
>> make it, iSCSI adds the overhead and latency of the IP stack, in most
>> situations we don't need it.
>>
>> FC still is technically superior to anything ethernet can offer at the
>> moment.
>
> Rather than argue in a traditional fashion, I'm curious to see which of the
> following statements you'd agree with.
>
> a) There are some storage problems that only FC is equipped to deal with

very true

>
> b) Ethernet based solutions can be an appropriate solution for smaller domains

certainly, I have promoted iSCSI for certain systems/applications

>
> c) In general, the ethernet solutions would be called "good enough"

Explain "in general". It all depends on the situation. It is possible to 
set up one 19" rack with some servers and a storage array, and still use 
FC, against moderate costs. I have build such configurations, the extra 
costs for the FC infrastructure are very low compared to the total costs.

>
> d) The ethernet solutions will have a lower upfront cost than their FC counterparts

Maybe, but certainly not in all cases.

>
> e) Providers of ethernet based solutions will grow at a rate faster than FC

FC is considered high-end, ethernet is not, so that is a logical 
conclusion. Many professional storage arrays support both, and some 
support InfiniBand as well.

>
> f) In five years' time, FC will be even more of a niche product, and ethernet based solutions will be the dominant commodity of choice for everyone

No, I expect very different storage interconnects in five years time.

But first let's look at storage over ethernet. There are two very 
different principles for storage over ethernet.

The first and the oldest one is file services. A server does not address 
storage directly, but uses a storage service provided by another server. 
That second server is responsible for the actual data storage; the file 
system used for storing the data is invisible to the first server. 
Multipathing is also impossible, so there is no redundancy or load 
balancing. As a matter of *principle*, I don't consider file services 
'real' storage, because the server does not actually control the storage 
at block level. Of course it also adds a lot of latency, certainly with 
random IO.
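
A rough sketch of the difference, assuming a Linux host with both an NFS 
mount and a raw block device (the paths below are made up):

import os

# File service (e.g. an NFS mount): the client names a file and an
# offset; the serving host's own file system decides where the blocks
# really live.
def read_via_file_service(path, offset, length):
    with open(path, "rb") as f:          # e.g. "/mnt/nfs_share/db.dat"
        f.seek(offset)
        return f.read(length)

# Block storage (an FC or iSCSI LUN): the client addresses the device
# itself, so its own file system or database controls the layout.
def read_via_block_device(device, lba, count, block_size=512):
    fd = os.open(device, os.O_RDONLY)    # e.g. "/dev/sdb"
    try:
        os.lseek(fd, lba * block_size, os.SEEK_SET)
        return os.read(fd, count * block_size)
    finally:
        os.close(fd)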

Years back, Cisco was very sad. It looked at the Fibre Channel business 
and realised that it had no proper FC switch in its portfolio. It had a 
big FC switch, but that was rubbish and they knew it all too well. So 
they had a brilliant idea: let's forget about FC switching and tunnel FC 
over ethernet instead. There was one big problem however: FC has 
guaranteed packet delivery, ethernet does not. No problem, just change 
the ethernet protocol. And that's what happened, but even with that 
change in place it's still not a very robust solution. Of course it's 
not routable either. It seems FCoE is rather dead.

Another way to transport block IO over ethernet is iSCSI. As the name 
suggests, it transports SCSI commands over TCP/IP. TCP will make sure 
that the blocks arrive, and in the proper sequence, so that solves the 
guaranteed delivery problem. You can (and should) use multipathing with 
iSCSI; if you do it the proper way you have two completely independent 
IP networks. The problem with iSCSI is the latency the IP stack adds.
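
As a rough sketch of that "two independent networks" idea, assuming a 
Linux initiator with open-iscsi installed and a target that exposes a 
portal on each of two separate subnets (the IQN and addresses are made up):

import subprocess

TARGET_IQN = "iqn.2015-08.com.example:storage.lun1"   # hypothetical target
PORTALS = ["192.168.10.5", "192.168.20.5"]            # one per independent IP network

def login_all_paths():
    for portal in PORTALS:
        # Discover the targets reachable through this portal ...
        subprocess.check_call(["iscsiadm", "-m", "discovery",
                               "-t", "sendtargets", "-p", portal])
        # ... and open a session to the target over this path.
        subprocess.check_call(["iscsiadm", "-m", "node",
                               "-T", TARGET_IQN, "-p", portal, "--login"])

if __name__ == "__main__":
    login_all_paths()

With both sessions up, the host's multipath layer (dm-multipath on 
Linux) can provide the redundancy and load balancing mentioned above.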

Latency is the biggest problem with storage, not speed. FC is a low 
latency network; iSCSI introduces a lot more latency. A random read from 
a database is the most important type of IO, and the performance of many 
applications depends on this kind of IO. Every millisecond counts; a 
millisecond is an eternity compared with the speed of the CPU and 
memory. The IP stack can add milliseconds!
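
One way to see this is to time individual small random reads instead of 
measuring streaming throughput. A minimal sketch (the device path is a 
placeholder, and without O_DIRECT the page cache will hide part of the 
real device latency):

import os, random, time

def avg_random_read_ms(path, reads=1000, io_size=4096):
    # Average latency of small random reads, in milliseconds.
    fd = os.open(path, os.O_RDONLY)
    size = os.lseek(fd, 0, os.SEEK_END)
    start = time.time()
    for _ in range(reads):
        os.lseek(fd, random.randrange(0, max(size - io_size, 1)), os.SEEK_SET)
        os.read(fd, io_size)
    elapsed = time.time() - start
    os.close(fd)
    return elapsed / reads * 1000.0

# e.g. avg_random_read_ms("/dev/sdb")   # compare an FC LUN with an iSCSI LUN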

Using PCIe lanes to address flash storage and reduce latency is getting 
more common; whether this can be done with external storage, I don't 
know. There is equipment to connect the PCIe buses of different systems, 
so it seems feasible.

There is a very strong tendency to use memory as storage. It can be 
flash memory, or better still, DIMMs with a flash module that can save 
the contents of the DIMM if power is removed.

What this will do to external storage, I have no idea. I can envision 
external storage being used as a mirror of the very fast internal 
storage: all reads are done from the internal storage, the writes are 
done to both.
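
A minimal sketch of that read-local/write-both idea, with hypothetical 
device paths standing in for the fast internal storage and the external 
array:

import os

class WriteThroughMirror:
    # Reads come from the fast internal copy; every write goes to both.
    def __init__(self, internal_path, external_path):
        self.internal = os.open(internal_path, os.O_RDWR)  # e.g. local flash device
        self.external = os.open(external_path, os.O_RDWR)  # e.g. LUN on an external array

    def read(self, offset, length):
        os.lseek(self.internal, offset, os.SEEK_SET)
        return os.read(self.internal, length)

    def write(self, offset, data):
        for fd in (self.internal, self.external):
            os.lseek(fd, offset, os.SEEK_SET)
            os.write(fd, data)
            os.fsync(fd)   # only acknowledge once both copies are durable

    def close(self):
        os.close(self.internal)
        os.close(self.external)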

Anyway, look at the latency, not the speed of the connection. By the 
way, that is what makes InfiniBand so great: very low latency.
>
> g) In five years time, the number of problems that is true for (a) will have shrunk

No, not at all. Again, look at the latency, not the speed of the 
connection. And if you do look at the speed, there isn't really that 
much of a difference.

>
> EJ
>



