[Info-vax] Listeners in VMS Basic, was: Re: Integrity iLO Configuration?

Stephen Hoffman seaohveh at hoffmanlabs.invalid
Fri Jun 25 16:06:11 EDT 2021


On 2021-06-25 14:30:08 +0000, Arne Vajhøj said:

> On 6/25/2021 9:12 AM, Dave Froble wrote:
>> On 6/25/2021 4:48 AM, Jan-Erik Söderholm wrote:
>>> Den 2021-06-25 kl. 00:38, skrev Dave Froble:
>>>> I need to develop a better method of handling lots of socket connect 
>>>> requests.  I also need to see if my ideas will work Ok.
>>> 
>>> How many are "lots of"?
>> 
>> That is sort of undefined at this time.  Historically, the usage has 
>> continued to rise.  Not sure what it could rise to.  So far we've seen 
>> maybe 20K per hour at times.
> 
> 20K per hour is 333 per minute or 5.5 per second.
> 
> That should be manageable.
> 
> 100 ms sessions => 1 worker
> 1 s sessions => 10 workers
> 10 s sessions => 100 workers

5.5 requests per second might be sustainable on a Raspberry Pi 
(discussions of storage aside), and would be trivial for an Apple Mac 
mini. An HPE Integrity server with a decently-modern SSD I/O subsystem 
would be snoozing.
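
The worker counts Arne quotes above are just Little's law: average 
concurrent sessions = arrival rate × session duration, rounded up 
generously for headroom. A quick sketch of the arithmetic, in Python 
for illustration (the helper name is mine, the 20K/hour figure is 
from the thread):

```python
# Little's law: concurrent workers needed = arrival rate * session time.
# The 20K connections/hour peak is the figure cited in the thread.

def workers_needed(conns_per_hour: float, session_seconds: float) -> float:
    """Average number of concurrent sessions at the given load."""
    return (conns_per_hour / 3600.0) * session_seconds

peak = 20_000                                 # connections/hour
print(round(peak / 3600.0, 1))                # arrival rate, ~5.6/s
print(round(workers_needed(peak, 0.1), 2))    # 100 ms sessions: 0.56, so 1 worker
print(round(workers_needed(peak, 1.0), 1))    # 1 s sessions: 5.6, ~10 with headroom
print(round(workers_needed(peak, 10.0), 1))   # 10 s sessions: 55.6, ~100 with headroom
```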

A Mac mini with a chunk of a petabyte of fast RAID storage is feasible 
to configure, too.

But as for the connection rate, that's not how many of these situations 
work out, of course. App loads tend to have spikes, and that's why we 
usually end up over-provisioned.

>>>> Current thoughts are a single listener that validates requests, accepts 
>>>> the connection, and passes it off to a worker process, then drops its 
>>>> own connection to the socket.  Involves some inter-process 
>>>> communications.  Listener might get rather busy, but will spend little 
>>>> time on each request.

On OpenVMS, I'd tend to use the auxiliary server (OpenVMS inetd 
internet daemon), and let the network spin up workers.
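
With the auxiliary-server model the worker never touches listen or 
accept at all: the connection arrives with the socket already wired to 
the process's input and output. A minimal sketch of a worker written 
to that shape, in Python for illustration; a real OpenVMS worker would 
be a BASIC image named in the service definition, and the "REQ" 
protocol here is invented:

```python
# Worker in the inetd/auxiliary-server style: the network stack accepts
# the connection and starts this process with the socket on stdin/stdout,
# so the worker only reads requests and writes replies.
import sys

def handle(request: str) -> str:
    # Validate as early as possible, per the requirement in the thread;
    # an invalid request gets an error and the session ends.
    if not request.startswith("REQ "):
        return "ERR invalid request\n"
    return "OK " + request[4:].strip() + "\n"

def serve() -> None:
    # Body of the worker: loop until the peer closes the connection.
    for line in sys.stdin:
        sys.stdout.write(handle(line))
        sys.stdout.flush()
```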

Otherwise, you're re-implementing connection hand-off and that's ugly 
on OpenVMS with TCP, and uglier with SSL, or you're starting up a 
second connection for not a whole lot of gain.

This is where message queuing can be handy, but that tends to shift the 
underlying app design around somewhat.

>>> How are the clients designed? Calls from Javascript running in browsers? 
>>> Are the clients your own applications too?
>> 
>> Clients use a protocol we've designed and implemented.  The source of 
>> connection requests doesn't matter.  Some are from VMS, others from 
>> various types of clients.

Pretty typical of established OpenVMS apps, more than a few of which 
had app predecessors using DECnet connections.

>>> For the usual HTTP (and everything related to that) based communication 
>>> WASD will provide most of what you need out of the box. Have you yet 
>>> looked at WASD? What you describe as your "current thoughts" above is 
>>> just a description of how WASD works.
>> 
>> No, WASD will not satisfy our requirements.  It's not just accepting 
>> connections.  It is very specific apps that handle the requests.  One 
>> of the requirements is identifying the incoming request as valid as 
>> early as possible.  Failure at this point will immediately terminate 
>> the connection.

One variation involves having the client direct the connection to the 
appropriate server, rather than the server trying to do load balancing 
or dispatching directly.

This client-assisted dispatching might be configured or compiled in, it 
might be something akin to Portmapper, or it might be a map downloaded 
at initial connection and then refreshed periodically or as needed.
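
The downloaded-map variant can be sketched as below, in Python for 
illustration; the service names, hosts, ports, and the fetch function 
are all invented stand-ins:

```python
# Client-assisted dispatch from a downloaded map: the client fetches a
# service-to-endpoint map at first connect, caches it, and refreshes it
# when a lookup misses, so the server never has to hand connections off.

def fetch_map() -> dict[str, tuple[str, int]]:
    # Stand-in for the real download performed at initial connection.
    return {
        "orders":  ("app1.example.com", 5010),
        "reports": ("app2.example.com", 5020),
    }

class Dispatcher:
    def __init__(self) -> None:
        self._map = fetch_map()

    def endpoint_for(self, service: str) -> tuple[str, int]:
        if service not in self._map:
            self._map = fetch_map()    # refresh on miss
        return self._map[service]      # KeyError if still unknown
```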

Message queuing would be a different and somewhat newer approach.  VSI 
has ported message queuing frameworks, though there'd likely need to be 
jackets added to allow easier access from BASIC.

>> 
>> WASD is a "jack of all trades" type of app, and does that reasonably 
>> well.  What we require is a specialist, for various reasons.
> 
>> One size does not fit all.
> 
> 95% is HTTP today, but that leaves 5% for everything else.

HTTPS is very common, yes. Downside on OpenVMS is that REST support 
stinks in most of the established OpenVMS languages and frameworks, and 
support is ~nonexistent within OpenVMS itself.

I've ported libwww a few times, and there are undoubtedly other ports 
and other frameworks around.

> Non-HTTP approaches are still seen.

Particularly for established app activities, and for cases where REST 
doesn't fit well.

> If you have a requirement to be easily accessible from multiple 
> languages then you could look at Thrift.

Which would entail porting Apache Thrift to OpenVMS, of course.

> I still believe that a multi-threaded listener doing IPC with workers 
> is the right design.

OpenVMS doesn't do threads all that well.

BASIC does just fine with ASTs (two threads, one core), but I'd want a 
careful look at the load trends and the peaks before committing to 
using ASTs. And it wouldn't be my preference.

OpenVMS does offer KP Threads, but I've not met production apps mixing 
KP threads and BASIC.  It's likely possible, just not something I've 
seen or used from BASIC.

> The workers can be VMS Basic.
> 
> But you will need something else for the listener.

BASIC tends to use the $qio or $io_perform interface, not the socket API.  
Or there's the auxiliary server (inetd), which then deals with starting 
up processes on request.

>>   For our customers, it seems that internet communications is replacing 
>> most of the previous activity.

Servers disappearing further into the background, with fewer or no 
direct logins, and with the front-end UI handled either in a web 
browser or in a dedicated client app, is increasingly common, yes.





On 2021-06-25 17:40:52 +0000, Simon Clubley said:

> On 2021-06-25, Arne Vajhøj <arne at vajhoej.dk> wrote:
>> 
>> I still believe that a multi-threaded listener doing IPC with workers 
>> is the right design.
>> 
>> The workers can be VMS Basic.
>> 
>> But you will need something else for the listener.
> 
> We know VMS Basic is no good for normal multi-threaded coding.

OpenVMS itself isn't all that great at multi-threaded code; I've 
written a whole lot of it over the years.

OpenVMS threading support here is pre-millennial in terms of frameworks 
and language support, aside from some existing underpinnings: KP 
Threads, ASTs (two threads, one core), or pthreads for C and C++ and 
(presumably) Rust, if/when that arrives.

Which means more than a few cases of writing interlocked queue 
message-passing, or other similar home-grown mechanisms.
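
That interlocked-queue pattern maps onto any thread-safe queue. A 
sketch in Python for illustration, with queue.Queue standing in for 
the OpenVMS interlocked-queue routines (LIB$INSQTI/LIB$REMQHI and 
friends); the listener pushes work, a small pool of workers drains it:

```python
# Listener/worker message-passing over an interlocked (thread-safe)
# queue: the listener enqueues accepted work items, worker threads
# dequeue and process them. A None sentinel shuts each worker down.
import queue
import threading

def worker(work: "queue.Queue[object]", results: "queue.Queue[object]") -> None:
    while True:
        item = work.get()
        if item is None:               # sentinel: shut down
            return
        results.put(("done", item))    # stand-in for real processing

def run_pool(items, nworkers: int = 4):
    work: "queue.Queue[object]" = queue.Queue()
    results: "queue.Queue[object]" = queue.Queue()
    threads = [threading.Thread(target=worker, args=(work, results))
               for _ in range(nworkers)]
    for t in threads:
        t.start()
    for it in items:                   # the "listener" enqueues work
        work.put(it)
    for _ in threads:                  # one sentinel per worker
        work.put(None)
    for t in threads:
        t.join()
    return [results.get() for _ in range(results.qsize())]
```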

libdispatch/GCD or similar multi-threading support never made it over, 
short of porting or re-creating it yourself.

> Is it any good at AST coding ? If so, instead of a multi-threaded 
> listener, perhaps an AST based listener could be used to handle the 
> workers instead.

BASIC does ASTs just fine. I have piles of BASIC code around using 
that, going back to network servers running DECnet, and with "more 
recent" network servers running IP and SSL.
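
The AST shape (a single thread of control, with completion-driven 
callbacks) can be sketched with Python's selectors module for 
illustration; a BASIC server would instead pass an AST routine address 
to $QIO, but the control flow is the same:

```python
# Single-threaded, event-driven echo server in the AST style: one main
# loop waits for completions and dispatches small callbacks, with no
# second thread anywhere.
import selectors
import socket

def on_readable(sock: socket.socket, sel: selectors.BaseSelector) -> None:
    # The "AST routine": runs when a read completes, does a small
    # amount of work, and returns control to the main loop.
    data = sock.recv(1024)
    if data:
        sock.sendall(data.upper())     # trivial stand-in for real work
    else:
        sel.unregister(sock)
        sock.close()

def pump(sel: selectors.BaseSelector, rounds: int) -> None:
    # The main loop: wait for readiness, dispatch the stored callback.
    for _ in range(rounds):
        for key, _events in sel.select(timeout=1.0):
            key.data(key.fileobj, sel)
```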

TL;DR: I'd run a prototype to see how well a pool of server processes 
worked as a solution, with auxiliary server (inetd) as the initial 
design, assuming message queuing isn't feasible.


-- 
Pure Personal Opinion | HoffmanLabs LLC 



