[Info-vax] QIO Writes to a mailbox
Johnny Billquist
bqt at softjar.se
Sun Nov 6 18:15:21 EST 2011
On 2011-11-06 23.42, Johnny Billquist wrote:
> On 2011-11-06 23.07, Johnny Billquist wrote:
>>> In other words, because a process can die gracefully and any unread
>>> mailbox messages it wrote before dying are preserved and still
>>> delivered, the odds point to the driver making an immediate copy of
>>> the contents to the mailbox buffer (aka: queues the I/O).
>>
>> Remove the word "immediately", and I agree with all you write.
See the VMS Programming Concepts Manual, section 3.2.1.4 - Asynchronous
I/O. That pretty much lays it out clearly: the I/O operation is not
conceptually completed as part of the QIO, and the data is copied
asynchronously with respect to your program.
>>> Note that my experience is with QIOW with IO$M_NOW for writes to
>>> mailbox, and I assume that the DCL WRITE statement does the equivalent
>>> when writing to a mailbox device because they behave the same. And note
>>> that a DCL WRITE to a full MAILBOX does cause the process to hang in
>>> RWMBX.
>>
>> If you do a QIOW with IO$M_NOW, then you should never get to RWMBX
>> state. Nor will you ever have a situation where your program can
>> potentially have a buffer that is not yet copied into the mailbox, but
>> soon might be. IO$M_NOW ensures that if the mailbox is full, the QIOW
>> will complete the I/O with an error saying that the mailbox was full,
>> and the I/O could not be completed. The QIOW itself blocks your program
>> from ever continuing until the I/O has completed.
>>
>> There is a big difference between QIO and QIOW!
>>
>> A DCL write will most likely always be a QIOW.
>
> Also, I might have been slightly confused by the IO$M_NOW. Unless my
> memory fails me now, that modifier will guarantee that your write does
> not complete until the message has been read by someone.
Aw! Crap! Wrong again. It was the other way around. Without the
IO$M_NOW, it would wait. With it, it continues and completes the I/O
regardless of whether a reader has read the message or not. Ok, I hope
this is my last correction on this section. :-)
So yes, the IO$M_NOW means you can get to the RWMBX state, even when
using QIOW. I suspect it is equally possible to get to RWMBX without the
IO$M_NOW, if you use QIO instead of QIOW.
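To make that distinction concrete, here is a toy model of the write
semantics being described (a sketch only - the class, method names, and
status strings are all made up for illustration, not the real MBDRIVER
code or the real SS$_ status codes, and resource wait is assumed to be
disabled so a full mailbox yields an error rather than RWMBX):

```python
# Toy model of mailbox write completion, with and without an IO$M_NOW-style
# flag. Hypothetical names throughout; not the actual VMS mailbox driver.
from collections import deque

PENDING, SUCCESS, MBFULL = "pending", "success", "mailbox full"

class Mailbox:
    def __init__(self, capacity):
        self.capacity = capacity
        self.messages = deque()   # (data, writer-iosb-or-None) pairs

    def write(self, data, now=False):
        """Queue a write; the returned 'IOSB' shows when it completed."""
        iosb = {"status": PENDING}
        if len(self.messages) >= self.capacity:
            iosb["status"] = MBFULL          # full mailbox: error completion
            return iosb
        # The driver copies the caller's buffer into the mailbox here,
        # in both cases; only the completion point differs.
        self.messages.append((bytes(data), None if now else iosb))
        if now:
            iosb["status"] = SUCCESS         # "NOW": complete once copied in
        return iosb

    def read(self):
        data, writer_iosb = self.messages.popleft()
        if writer_iosb is not None:
            writer_iosb["status"] = SUCCESS  # no-"NOW" write completes here
        return data
```

So in this model a write issued without the "now" flag stays pending
until a reader consumes that specific message, which is the waiting
behavior described above.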
>>> Since the $QIO system service doesn't know what "IO$M_NOW" means, the
>>> different behaviour would be the result of the mailbox driver taking
>>> actions during the initial QIO. And since the portion of the driver
>>> involved in responding to SYS$QIO has the ability to copy the user
>>> buffer into the mailbox buffer when IO$M_NOW is specified, it is a good
>>> bet that it copies it for all write operations, and that IO$M_NOW simply
>>> means that the IO is considered complete when it is queued to the
>>> mailbox with no notification pending for when a reader has read that
>>> specific message later on.
To make another comment here - your comment doesn't make sense to me. Of
course the different behavior originates from inside the device driver.
However, your conclusion that it therefore happens during the queuing of
the I/O is totally unfounded. It does not happen during the initial QIO.
The IO$M_NOW modifier affects what happens when the I/O is completed,
not what happens when it is initiated. All that happens during the QIO
phase is that parameters are checked for basic sanity, basic setup is
done, and then the I/O is queued to the device driver. At this
point, the QIO returns execution to the process, with a result in R0 for
the system call, while the IOSB is zero (operation is pending). If you
do a QIOW, execution does not return to your process until the I/O is
complete, at which time you will have both R0 containing the result of
the system call, and the IOSB containing the result of the I/O.
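That control flow can be sketched roughly as follows (a simplified model
of the R0-versus-IOSB distinction only - the function names, the Device
class, and the status value are invented for illustration, and the real
$QIOW waits on an event flag rather than spinning):

```python
# Toy model of $QIO vs $QIOW: R0 reports whether the request was queued;
# the IOSB stays zero until the driver actually completes the I/O.
SS_NORMAL = 1          # stand-in for a success status; value is illustrative

class Device:
    def __init__(self):
        self.pending = []

    def process_one(self):
        """Driver-side work: complete the oldest queued I/O request."""
        iosb = self.pending.pop(0)
        iosb["status"] = SS_NORMAL       # the I/O is complete only now

def qio(device, request_ok=True):
    """Sanity-check, queue to the driver, and return to the caller at once."""
    iosb = {"status": 0}                 # zero = operation still pending
    if not request_ok:
        return (0, iosb)                 # bad parameters: error in "R0"
    device.pending.append(iosb)
    return (SS_NORMAL, iosb)             # "R0" says queued; IOSB still zero

def qiow(device, request_ok=True):
    """QIOW = QIO plus waiting until the IOSB has been filled in."""
    r0, iosb = qio(device, request_ok)
    while r0 == SS_NORMAL and iosb["status"] == 0:
        device.process_one()             # stand-in for the driver running
    return (r0, iosb)
```

Note how, after a plain qio(), the caller can observe a success in "R0"
while the "IOSB" is still zero; qiow() never returns in that state.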
I agree that it is a good bet that the mailbox driver copies the data
for all operations; in fact, reading the manuals more or less confirms
that it does. But they don't really say *when* it happens.
IO$M_NOW only affects when the I/O is signaled as complete: either when
the data has been copied into the mailbox, or when it has then also been
copied out into the buffer of a read I/O request. All of that happens in
the device driver, as part of the I/O processing, and generally does not
happen in the context of the requesting program, but in a kernel thread.
(Now, in reality, for mailboxes, I would guess that there is no need for
a kernel thread, since all that is involved is data copying; it really
becomes rather synchronous, since it all depends on the CPU anyway. And
since the normal next step after queuing an I/O request to a device
driver is to wake up the driver to pick up any pending work, I wouldn't
be surprised if the end result is that QIO and QIOW become identical in
the case where you add the IO$M_NOW modifier, since the I/O will in fact
be completed immediately, if space exists. But that is a very peculiar
side effect of a specific driver in a specific mode; it does not fairly
reflect how QIO is actually processed by VMS, and it might make people
draw the wrong conclusions about what the QIO system call actually
does.)
Johnny