[Info-vax] Linux 40 GbE and 100 GbE NIC Performance

terry+googleblog at tmk.com
Tue Jan 27 22:11:00 EST 2015


On Tuesday, January 27, 2015 at 10:27:39 AM UTC-5, Bill Gunshannon wrote:
> The only thing that still gets me is I don't think any current machine has
> an internal bus speed as fast as these network speeds, so how do they fill
> the pipe?

Even 10GbE is often used only as a top-of-rack-to-core link, not as a host-to-switch link. That is changing, though: you can now get 10GbE switches for under $100 per (copper) port.

Inside smaller switches, the switching is frequently done on one chip. Larger switches link these chips via a number of different methods. So switches aren't internally constrained by standard bus speeds.

Getting back to your question, PCI Express 3.x provides more than 7.75 Gbit/sec of usable bandwidth per lane (the raw rate is 8 gigatransfers per second, but the 128b/130b line encoding and other protocol overhead eat into that). Multiple lanes are used to connect the expansion card to the CPU.
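
A rough back-of-the-envelope in Python, just to redo the arithmetic above (the figures come from the PCIe spec; the snippet itself is only illustrative):

    # PCIe 3.x: 8 GT/s per lane, 128b/130b line encoding
    gen3_per_lane = 8.0 * 128 / 130   # ~7.88 Gbit/s usable per lane
    print(round(gen3_per_lane, 2))    # 7.88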

Looking at the 10GbE card I use, the Intel X540-T1, it is PCI Express 2.1 compliant and has a useful data rate of 4 Gbit/sec per lane (5 gigatransfers per second, less the 8b/10b encoding overhead). It is an 8-lane card, so it can theoretically move 32 Gbit/sec. [The reason it is an 8-lane card is that the X540-T2 uses the same PCB but has 2 ports; otherwise a 4-lane card would do fine for the single port.]
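
The same kind of sketch for the X540, where the 4 Gbit/sec per-lane figure falls out of the 8b/10b encoding:

    # PCIe 2.x: 5 GT/s per lane, 8b/10b line encoding
    gen2_per_lane = 5.0 * 8 / 10        # 4 Gbit/s per lane
    x540_total = gen2_per_lane * 8      # 8-lane card: 32 Gbit/s
    print(x540_total)                   # plenty for two 10GbE ports on the X540-T2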

Extrapolating to a 16-lane PCI Express 3.x card, the bus offers roughly 126 Gbit/sec, so it should work for single-port 100GbE cards. By the time such cards are common, we should have PCI Express 4.0, which doubles the signaling rate to 16 gigatransfers per second.
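
And the extrapolation itself, assuming PCI Express 4.0 keeps the 128b/130b encoding at 16 GT/s:

    gen3_per_lane = 8.0 * 128 / 130
    x16_gen3 = 16 * gen3_per_lane            # ~126 Gbit/s - enough for one 100GbE port
    x16_gen4 = 16 * (16.0 * 128 / 130)       # ~252 Gbit/s with PCI Express 4.0
    print(round(x16_gen3), round(x16_gen4))  # 126 252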


