[Info-vax] Accuweather new contract

Stephen Hoffman seaohveh at hoffmanlabs.invalid
Sun Mar 29 09:38:39 EDT 2015


On 2015-03-29 13:11:09 +0000, johnson.eric at gmail.com said:

> In my experience, that code is always just slower and less scalable 
> than the same counterpart on Linux. I've participated in such an in 
> house project (home grown http server on VMS) and it would always have 
> higher latency than its counterpart on Linux.
> 
> After years of disbelief and frustration, I spent a few weeks really 
> digging into it, and from what I could observe, much of the blame 
> really sat at the hands of the TCP/IP stack. Interestingly enough, both 
> Multinet and TCP/IP services came up short.
> 
> I believe its possible to fix them, but it will take time.

Ayup.  The driver stack, too.   Linux has spent a whole lot of time 
optimizing how that works.   Fewer and preferably no buffer copies, 
lighter-weight I/O operations, etc.   Beyond any kernel changes and 
renovations within the TCP/IP Services stack (and I still think VSI 
might well replace that with a Process Software IP stack, but I 
digress),
improving this performance might be disruptive and/or require 
application changes, too.
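
To make the buffer-copy point a bit more concrete, here's a minimal C 
sketch (mine, not from the thread) contrasting a copying read/write 
loop with a zero-copy sendfile(2) transfer.  sendfile is a Linux 
interface, shown purely as an illustration of the sort of copy 
elimination Linux has pursued; it is not a VMS or TCP/IP Services 
API, and the socket and file descriptors are assumed to be set up 
elsewhere.

    /* Copying path: each chunk crosses the user/kernel boundary twice
     * (read into a user buffer, then write back out to the socket). */
    #include <sys/types.h>
    #include <sys/sendfile.h>
    #include <unistd.h>

    ssize_t send_copying(int sock, int fd, size_t len)
    {
        char buf[65536];
        size_t total = 0;
        while (total < len) {
            ssize_t n = read(fd, buf, sizeof buf);
            if (n <= 0)
                return n;
            if (write(sock, buf, (size_t)n) != n)
                return -1;
            total += (size_t)n;
        }
        return (ssize_t)total;
    }

    /* Zero-copy path: the kernel moves pages from the page cache
     * straight to the socket; no user-space buffer is filled or
     * drained. */
    ssize_t send_zero_copy(int sock, int fd, size_t len)
    {
        off_t off = 0;
        return sendfile(sock, fd, &off, len);
    }

The difference shows up as fewer copies and fewer system calls per 
byte moved, which is exactly the sort of per-operation overhead at 
issue here.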

Each generation of faster NIC means less time for the host software to 
"dawdle", too.  Less time for the host to even do other things.

Related: <https://lwn.net/Articles/629155/>
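
As a rough, back-of-the-envelope illustration of that shrinking 
per-packet budget (my arithmetic, not figures from the post, assuming 
minimum-size Ethernet frames plus the 20 bytes of preamble and 
inter-frame gap):

    #include <stdio.h>

    int main(void)
    {
        /* Minimum-size frame on the wire: 64 bytes of frame plus
         * 20 bytes of preamble and inter-frame gap. */
        const double wire_bytes = 64.0 + 20.0;
        const double rates_gbps[] = { 1.0, 10.0, 40.0, 100.0 };

        for (int i = 0; i < 4; i++) {
            double pps = rates_gbps[i] * 1e9 / 8.0 / wire_bytes;
            printf("%5.0f GbE: %6.2f Mpps, %6.1f ns per packet\n",
                   rates_gbps[i], pps / 1e6, 1e9 / pps);
        }
        return 0;
    }

At 10 GbE that works out to roughly 67 ns per minimum-size packet, 
and at 100 GbE under 7 ns, which doesn't leave much room for extra 
buffer copies or heavyweight per-I/O bookkeeping.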

Web servers can involve either some rather big hosts, or can involve 
pools of smaller hosts, or a combination.  Having lots of idle servers 
is a waste, which is one of the reasons why there are folks quite fond 
of using public or private clouds here (HP Helion / OpenStack, or 
otherwise), as these seek to avoid over-provisioning your servers for 
expected peak loads.  Think of how "Galaxy" could migrate cores among 
instances, but implemented with whole fleets of servers, and not 
limited to the cores within a single Alpha box.

Some other issues that arise involve managing software deployments and 
large-scale configuration management; even in a cluster, VMS is 
somewhat clunky here, and it just isn't very good at these sorts of 
tasks beyond the scope of a cluster.  Then there's scaling: somebody 
will have to figure out how to deal with large clusters of lots of 
boxes and potentially lots of cores.  Then there are discussions of 
pricing, as clustering is comparatively expensive.  And then there's 
the distributed database discussion, because some folks will want to 
be able to support multiple clusters and/or multiple data centers and 
some sort of failover.


-- 
Pure Personal Opinion | HoffmanLabs LLC



