[Info-vax] Linux 40 GbE and 100 GbE NIC Performance
Stephen Hoffman
seaohveh at hoffmanlabs.invalid
Wed Jan 28 10:26:59 EST 2015
On 2015-01-28 03:11:00 +0000, terry+googleblog at tmk.com said:
> On Tuesday, January 27, 2015 at 10:27:39 AM UTC-5, Bill Gunshannon wrote:
>> The only thing that still gets me is I don't think any current machine has
>> an internal bus speed as fast as these network speeds, so how do they fill
>> the pipe?
>
> Even 10GbE is often used just as a top-of-rack to core link, not as a
> host-to-switch link. Though that is changing and you can now get 10GbE
> switches for under $100 per (copper) port.
Ayup; similar statements were made and similar configurations were
deployed when GbE and Fast Ethernet started to become available, and
I'd expect the same pattern here. Vendors got enough volume, and the
equipment prices cratered. It was once difficult to get access to GbE
switches and ports, and now they're ubiquitous.
> Getting back to your question, PCI Express 3.x provides more than
> 7.75 Gbit/sec per lane (the actual speed is 8 gigatransfers per second,
> but that has some overhead). Multiple lanes are used to connect the
> expansion card to the CPU.
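To put rough numbers on that, here's a short sketch (Python; nominal
figures only, ignoring PCIe protocol overhead such as TLP headers and
flow control, so treat the results as upper bounds) of PCIe 3.0
per-lane bandwidth versus Ethernet line rates:

    # Back-of-the-envelope: PCIe 3.0 lane bandwidth vs. Ethernet line rates.
    # Nominal figures only; real throughput is further reduced by protocol
    # overhead, so these are upper bounds.

    GT_PER_SEC = 8.0          # PCIe 3.0 raw signaling rate per lane
    ENCODING = 128.0 / 130.0  # 128b/130b line encoding

    def pcie3_gbps(lanes):
        """Usable Gbit/s, per direction, for a PCIe 3.0 link of `lanes` lanes."""
        return GT_PER_SEC * ENCODING * lanes

    widths = (1, 2, 4, 8, 16)
    for lanes in widths:
        print("PCIe 3.0 x%-2d: ~%6.2f Gbit/s per direction" % (lanes, pcie3_gbps(lanes)))

    for nic in (10, 40, 100):  # Ethernet line rates the NIC has to keep fed
        needed = min(l for l in widths if pcie3_gbps(l) >= nic)
        print("%d GbE needs at least a x%d PCIe 3.0 link (nominally)" % (nic, needed))

Which is roughly why 10 GbE NICs want at least a x2 (usually x4) slot,
and 40 GbE and 100 GbE NICs tend to want x8 and x16 slots, respectively.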
For 10 GbE... Thunderbolt v1 provides two external channels of 10 Gbps
each, while v2 offers 20 Gbps by aggregating those two channels. So not
only are there machines with the bus speeds, they're common: the Mac Pro
has a half-dozen Thunderbolt ports, and various two-plus-year-old Mac
laptops have internal SSDs and a pair of Thunderbolt v1 ports.
There are 10 GbE Thunderbolt adapters available now for both optical
and copper, and the switch prices are dropping.
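As a quick sanity check on those figures (a rough sketch using only the
nominal channel rates quoted in this thread; PCIe tunnelling and
controller overhead will eat into the real headroom):

    # Nominal Thunderbolt capacity vs. a 10 GbE NIC's line rate (rough sketch;
    # ignores PCIe encapsulation overhead, so actual headroom is smaller).
    capacities = [
        ("Thunderbolt v1, one channel", 10.0),
        ("Thunderbolt v2, two channels aggregated", 20.0),
        ("Thunderbolt v3, reported", 40.0),
    ]
    NIC_GBPS = 10.0  # 10 GbE line rate, one direction

    for name, gbps in capacities:
        print("%s: %.0f Gbit/s, %.1fx a 10 GbE link" % (name, gbps, gbps / NIC_GBPS))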
So yes, there are commonly available boxes that can drive 10 GbE NICs,
and those boxes have been around for a while now.
As for going yet faster, Thunderbolt v3 is built on PCIe 3.0, will
reportedly offer 40 Gbps, and is supposedly arriving with the Intel
Skylake chipsets in the latter part of this year, or early next. I'd
tend to expect those chips to include Intel's DDIO support, too: direct
I/O into or out of the shared processor cache via QPI. With DDIO and
PCIe flash storage, which is becoming increasingly common, system
performance would be pretty fast, as it removes the old rotating-rust
I/O connection paths that were used for early SSD storage.
As for who is filling 10 GbE networks: there are folks slinging around
multi-gigabyte files for various purposes, and they've been grumbling
about GbE speeds for a while now. Then among the more familiar crowd,
there's Phillip here, who could really use the network bandwidth for
that gonzo HBVS configuration of his, but unfortunately none of his
hardware is anywhere near new enough or fast enough.
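To put a number on the grumbling, a rough illustration (hypothetical
file sizes, wire-rate only; it ignores TCP/IP overhead, disk throughput,
and everything else in the path):

    # Rough wire-time for pushing large files over different Ethernet speeds.
    # Illustration only: assumes the link is the bottleneck.

    def transfer_seconds(size_gigabytes, link_gbps):
        """Seconds to move size_gigabytes (decimal GB) over a link_gbps link."""
        return (size_gigabytes * 8.0) / link_gbps

    for size_gb in (10, 100):        # hypothetical multi-gigabyte files
        for link in (1, 10, 40):     # GbE, 10 GbE, 40 GbE
            print("%3d GB over %2d GbE: ~%7.1f s" % (size_gb, link, transfer_seconds(size_gb, link)))

A 10 GB file drops from well over a minute of wire time on GbE to under
ten seconds on 10 GbE, which is the kind of difference those folks keep
pointing at.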
By the time VMS is ported to and available on x86-64 (and "if",
obviously), this Skylake-class gear will be pretty old and heading for
replacement, too.
Now, when might we see a Thunderbolt adapter that contains a Xilinx or
other FPGA with enough room for a PDP-11 emulation...?
--
Pure Personal Opinion | HoffmanLabs LLC