[Info-vax] Using terminal lines - a long shot question
Michael Moroney
moroney at world.std.spaamtrap.com
Sun Mar 8 17:52:50 EDT 2020
Lee Gleason <lee.gleason at comcast.net> writes:
> I have occasion to write some programs that control some foreign gear
>via terminal lines (I know - already, I am in a state of sin...).
> Many years back, Jamie Hanrahan (RIP) posted an entry to comp.os.vms
>that described in great detail the best way to do this sort of thing -
>it described when to use what sort of read qio's, from single character
>reads with timeouts to bulk reads by length, and several types in
>between those, and when to switch to which variety. It went on at length
>how to do this sort of thing efficiently and reliably. It was a great
>generalized description of how to deal with gear that needs to send and
>receive small commands as well as large data packets - a sort of
>universal treatment of how best to implement an asynchronous protocol
> I've misplaced my copy of that post - wondering if anyone else saw it
>and saved a copy....
I never saw the post you mentioned; did you try to find it with Google
Groups?
Long ago, I did a project where I was trying to read data from a 9600 bps
terminal line (set PASTHRU) with a MicroVAX II, to feed data from a test
system into an artificial intelligence system. Since the data was
semi-real-time with no guaranteed terminators, my first attempt was to use
1-character QIOs. It didn't work: the MicroVAX could not process ~960 QIOs
per second (at 9600 bps with one start and one stop bit per 8-bit
character, that's about 960 characters per second, so one QIO per
character). I struggled with how to do this for quite some time, and
finally did something like the following:
0. Set TYPAHDSIZ to be very large (or use a large ALTYPAHD, I forget)
1. Issue a 1 character QIO read, no timeout.
2. When this completes, process the character and check the number of
characters in the typeahead buffer.
3. If this is 0, go to step 1.
4. Read and process that many characters, with 0 timeout on the QIO read.
5. Go to step 2.
"Process" for the reader was to stick them into a common buffer for another
process to deal with.
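The loop in steps 1-5 can be sketched in portable form. The code below is a
hypothetical simulation, not the original VMS program: FakeTerminal stands in
for the terminal driver (on VMS the typeahead count would come from a
sensemode QIO and the reads would be $QIOW calls with and without IO$M_TIMED);
its method names and the reader() function are invented for illustration.

```python
from collections import deque

class FakeTerminal:
    """Hypothetical stand-in for the VMS terminal driver's typeahead buffer."""

    def __init__(self, data: bytes):
        self._buf = deque(data)

    def read_blocking(self) -> bytes:
        # Step 1: a 1-character read with no timeout (blocks until data).
        if not self._buf:
            raise EOFError  # simulation only: signal end of input
        return bytes([self._buf.popleft()])

    def typeahead_count(self) -> int:
        # Step 2: how many characters are already waiting in the typeahead
        # buffer (on VMS, a sensemode QIO would report this).
        return len(self._buf)

    def read_nowait(self, n: int) -> bytes:
        # Step 4: read n characters with a zero timeout; returns whatever
        # is buffered, up to n bytes.
        return bytes(self._buf.popleft() for _ in range(min(n, len(self._buf))))

def reader(term: FakeTerminal) -> list[bytes]:
    """Collect input using one blocking 1-char read plus bulk drain reads."""
    reads = []
    try:
        while True:
            reads.append(term.read_blocking())        # step 1
            while (n := term.typeahead_count()) > 0:  # steps 2-3
                reads.append(term.read_nowait(n))     # steps 4-5: bulk read
    except EOFError:
        pass
    return reads
```

The point of the structure is visible even in simulation: only the first
character of a burst pays the cost of a separate read, and everything queued
behind it is drained in one bulk call.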
When the sender was transmitting at the full line rate, the step 4 reads
grew quite large, which showed just how much per-QIO overhead there was.
Now that I think of it, I may have had to double-buffer the QIOs
(guaranteeing there was always one in progress), but I am not sure.