[Info-vax] New OpenSSL update from HP
Stephen Hoffman
seaohveh at hoffmanlabs.invalid
Mon Jun 15 09:42:54 EDT 2015
On 2015-06-15 03:25:41 +0000, David Froble said:
> Stephen Hoffman wrote:
>> On 2015-06-14 22:43:29 +0000, Jan-Erik Soderholm said:
>>
>>> One of the reasons CSWS performs less well on VMS is because of the
>>> heavy use of forked subprocesses. *That* is inefficient on VMS.
>
> Well, the way I understand FORK,
Few do understand fork, David. Few do. Even many Unix users don't
realize what it provides. AFAIK, Apache does not use the full
capabilities of fork. There'd be no reason to, unless you were sharing
address spaces and I/O channels.
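For the folks that haven't looked at it: here is roughly what fork gives
you, as a minimal POSIX C sketch (nothing VMS- or Apache-specific, and
the pipe is only for illustration). The child gets a copy-on-write copy
of the address space and inherits the parent's open descriptors, which
is the sharing mentioned above:

    /* minimal sketch: the forked child inherits the parent's open
       descriptors, so both processes can use the same pipe */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fds[2];
        if (pipe(fds) != 0)
            return 1;

        pid_t pid = fork();
        if (pid == 0) {                 /* child: reads from the inherited pipe */
            close(fds[1]);
            char buf[64];
            ssize_t n = read(fds[0], buf, sizeof buf - 1);
            if (n > 0) {
                buf[n] = '\0';
                printf("child read: %s\n", buf);
            }
            _exit(0);
        }

        close(fds[0]);                  /* parent: writes to the same pipe */
        const char *msg = "hello from the parent";
        write(fds[1], msg, strlen(msg));
        close(fds[1]);
        waitpid(pid, NULL, 0);
        return 0;
    }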
> ...anything on VMS outside of threads is going to perhaps have more overhead.
Threads do have the downside of needing to deal with multi-level
security: either everybody in the process has the same access, or you
need to prevent access and data from the more-entitled threads from
leaking over into the less-entitled threads and activities. This is the
same morass that arises with traditional OpenVMS privileged server
processes that must provide and coordinate access controls for
less-privileged users. It's certainly possible, but it's more complex
than the one-user, one-process model.
> If the worker process has to duplicate anything the web server already
> has, that's a bit more work.
>
> However, if you pre-allocate say 25 or 50 processes, (which I would not
> do), then all you have is a queue of available processes to maintain,
> and process to process communication. If pressed, I could get damn
> fast with the communications. Though just mailboxes would be adequate
> in my opinion.
Mailboxes are slow and host-limited, but yes. The IPC services would be
a platform-specific alternative to mailboxes, with somewhat better
flexibility and speed, and with cluster awareness.
A pool of those worker processes is how Apache works on most platforms,
including OpenVMS. It's also something that folks are used to tuning
with the Apache configuration files, if that's necessary, and if there's
not a GUI front-end.
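For reference, the general shape of that prefork-style pool, as a
minimal POSIX C sketch (an illustration of the pattern, not Apache's
actual code; the port and pool size are made up): the parent opens the
listening socket, forks a fixed number of workers, and each worker loops
on accept() against the socket it inherited.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define NWORKERS 8                  /* pool size; normally set via config */

    static void worker(int listen_fd)
    {
        static const char resp[] =
            "HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nok";
        for (;;) {
            int conn = accept(listen_fd, NULL, NULL);
            if (conn < 0)
                continue;
            write(conn, resp, sizeof resp - 1);
            close(conn);
        }
    }

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sa;
        memset(&sa, 0, sizeof sa);
        sa.sin_family = AF_INET;
        sa.sin_addr.s_addr = htonl(INADDR_ANY);
        sa.sin_port = htons(8080);      /* arbitrary example port */
        bind(fd, (struct sockaddr *)&sa, sizeof sa);
        listen(fd, 128);

        for (int i = 0; i < NWORKERS; i++)
            if (fork() == 0) {          /* each child inherits the socket */
                worker(fd);
                _exit(0);
            }

        for (;;)                        /* parent just reaps exited workers */
            wait(NULL);
    }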
Looking wider, I'd tend to stay with what Apache and other common Unix
tools use: sockets, whether Unix-domain sockets or network sockets.
Those are very commonly used in many applications, have advantages over
mailboxes, and definitely need to be fast. There was some work in this
area that HP started some years ago (SSIO), and hopefully VSI either
picks that up or provides some alternative and/or C extensions or C
jackets. The C library has issues and limitations, and parts of that
might well be why Apache is relatively slow. (Not that I've
particularly noticed it being slow, though.) So going wider, getting C
and sockets working more quickly might have a more general pay-off,
allowing existing applications to keep using sockets and avoiding cases
where porting each tool, Apache included, means switching over to the
IPC services or to mailboxes and $creprc.
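As an illustration of the socket route for the process-to-process
communication David mentions, here's a minimal POSIX C sketch using a
Unix-domain socketpair between a dispatcher and a worker (an assumed
example of the general pattern, not code from Apache, WASD, or anything
else):

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int sv[2];
        if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0)
            return 1;

        if (fork() == 0) {              /* worker: serves requests on sv[1] */
            close(sv[0]);
            char req[128];
            ssize_t n = read(sv[1], req, sizeof req - 1);
            if (n > 0) {
                req[n] = '\0';
                char reply[160];
                snprintf(reply, sizeof reply, "done: %s", req);
                write(sv[1], reply, strlen(reply));
            }
            _exit(0);
        }

        close(sv[1]);                   /* dispatcher: hands work to the worker */
        const char *work = "GET /index.html";
        write(sv[0], work, strlen(work));
        char reply[160];
        ssize_t n = read(sv[0], reply, sizeof reply - 1);
        if (n > 0) {
            reply[n] = '\0';
            printf("%s\n", reply);
        }
        wait(NULL);
        return 0;
    }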
> But think about it. You claim WASD is faster. It has the same issues.
> So, it isn't the common issues. There has got to be other stuff in
> Apache that makes it a dog. Woof!
Can't say I've noticed Apache being slow on OpenVMS, but then I'm
usually running it on Unix platforms and those are so much faster than
most of the OpenVMS boxes...
But if Apache is slow, it's time to collect some data.
>
>> There is a pool of worker processes yes, but that'll exist in any web
>> server configuration short of running it all in one process with
>> threading.
>
> Or, creating the sub-process or detached process as needed ....
It's more typical to create the pool on startup. But pulling back and
looking wider, a pool of worker processes is not at all unusual, and
OpenVMS has no standard mechanism for that. The auxiliary server
(inetd, etc.) doesn't have this capability, and there's no built-in
mechanism for scheduling, monitoring, and restarting processes and
services.
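To make the point concrete, the kind of supervision that's missing looks
roughly like this in POSIX C terms (purely illustrative; there's no
equivalent built-in facility on OpenVMS, which is the point): fork a
fixed pool, then re-fork any worker that exits.

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define NWORKERS 4

    static void worker(void)
    {
        sleep(5);                       /* stand-in for real service work */
        _exit(0);
    }

    static pid_t spawn(void)
    {
        pid_t pid = fork();
        if (pid == 0)
            worker();
        return pid;
    }

    int main(void)
    {
        pid_t pool[NWORKERS];
        for (int i = 0; i < NWORKERS; i++)
            pool[i] = spawn();

        for (;;) {                      /* supervise: restart whatever exits */
            pid_t dead = wait(NULL);
            for (int i = 0; i < NWORKERS; i++)
                if (pool[i] == dead) {
                    pool[i] = spawn();
                    printf("restarted worker slot %d\n", i);
                }
        }
    }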
> Ok, I've never written a web server, yet. If I was to do so, I'd
> seriously look at detached processes rather than sub-processes. Why?
> Because once you assign the worker process a task, perhaps you'd want
> it to complete, regardless of whatever the web server does.
You're right (and again going wider...): the OpenVMS process management
and control model is extremely limited. Beyond the lack of worker
process pool management, there's no good way to transition a process
from exiting along with its parent (the subprocess model) to running
through to termination (the detached model). Nor is the reverse
transition feasible, either. The job tree is an elegant representation
of the exit path mechanism, but it's also fixed and somewhat limiting.
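For comparison, the detached case David describes maps to the usual Unix
idiom of detaching a worker so it runs to completion regardless of what
the server does; a minimal sketch, with the double fork and setsid()
standing in for what a detached process gives you on OpenVMS (an
analogy, not a VMS mechanism):

    #include <sys/wait.h>
    #include <unistd.h>

    static void long_running_task(void)
    {
        sleep(60);                      /* stand-in for the assigned work */
    }

    int main(void)
    {
        pid_t pid = fork();
        if (pid == 0) {                 /* first child */
            setsid();                   /* new session, no controlling terminal */
            if (fork() == 0) {          /* grandchild: the detached worker */
                long_running_task();
                _exit(0);
            }
            _exit(0);                   /* first child exits right away */
        }
        waitpid(pid, NULL, 0);          /* server reaps only the first child */
        /* the server can exit now; the worker runs to termination on its own */
        return 0;
    }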
>> While CPU is still a factor for some cases, there can be other issues
>> beyond the process creation. There's that OpenVMS network I/O tends to
>> be slower than Unix in various tests, that the file I/O also tends to
>> be slower in various tests, that the interprocess communications have
>> been slow, etc. There are many potential contributing factors to
>> slowness in any complex design.
>
> But, he's claiming that WASD is faster. On the same HW. So no, it's
> not what you list.
Sure it is. Apache I/O is likely different than WASD I/O.
>> It may well be the process creation for the worker processes is the
>> limiting factor, but I'd want to see some data before drilling in on
>> that...
>
> I truly doubt it.
>
> My opinion, it's the bloated protocols in use. SOAP. XML. Stuff like that.
Which means folks would want WASD to provide those, too. Apache does
have massive flexibility, which means it's going to have issues. If
you just want faster web services, it's usually nginx.
> I've got to wonder. Is there any real use for so many different
> ciphers and such? I'll admit that I know nothing of SSL. Or is it all
> the backward compatibility?
It's cross-version compatibility, both with older versions and newer
versions, and also for sites that require specific combinations whether
for technical, administrative or regulatory reasons. Some folks need
to use FIPS-certified tools, for instance. Some folks don't have and
can't get access to certain encryption algorithms, due to
implementation limitations or local laws.
The ciphers and the hashes also fall as the available computing power
increases. Cracking DES is feasible for "ordinary" folks and not just
the supercomputer crowd, the rest of the now-ancient "export grade"
ciphers are vulnerable, and a moderate-grade x86-64 box can now generate
MD5 checksum (hash) collisions.
--
Pure Personal Opinion | HoffmanLabs LLC