[Info-vax] QIO Writes to a mailbox

Johnny Billquist bqt at softjar.se
Sun Nov 6 17:42:52 EST 2011


On 2011-11-06 23.07, Johnny Billquist wrote:
> On 2011-11-06 21.23, JF Mezei wrote:
>> Johnny Billquist wrote:
>>>
>>> Do a QIO to a serial port, then change the buffer, and then sit and
>>> watch what you actually get as output.
>>
>> I am talking specifically about the mailbox driver, which is an in-memory
>> construct. The serial driver cannot instantly deliver the data since it
>> may be forced to send bits out the serial port at 100 baud, and even an
>> almighty MicroVAX II was able to go faster than that.
>
> The QIO to a serial device could still copy the data into an internal
> buffer for later output, so that the memory where the user buffer
> lives is allowed to be paged out. Otherwise, the memory must be
> locked in place, since the write to a serial port can in fact be a DMA
> operation.

I was a little careful when writing this. After checking more 
documentation, it turns out that I remembered right. Terminal drivers do 
indeed buffer write requests internally, so that the user's buffer does 
not have to be locked in memory. They are, in fact, buffered I/O.
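
For reference, a rough, untested sketch of the "write, then scribble on 
the buffer" experiment mentioned at the top of this message. It assumes 
DEC C on VMS and the standard starlet headers; the device name, text and 
event flag number are arbitrary, and the IOSB layout is declared locally.

/* Queue an asynchronous $QIO write, then overwrite the buffer before
   the I/O has had a chance to complete, and see what comes out. */
#include <stdio.h>
#include <string.h>
#include <starlet.h>
#include <ssdef.h>
#include <iodef.h>
#include <descrip.h>

int main(void)
{
    $DESCRIPTOR(term, "SYS$OUTPUT");
    unsigned short chan;
    struct { unsigned short status, count; unsigned int dev; } iosb;
    char buf[40];
    int st;

    st = sys$assign(&term, &chan, 0, 0);
    if (!(st & 1)) return st;

    strcpy(buf, "original buffer contents\r\n");

    /* $QIO, not $QIOW: the service only queues the request and returns. */
    st = sys$qio(1, chan, IO$_WRITEVBLK, &iosb, 0, 0,
                 buf, strlen(buf), 0, 0, 0, 0);
    if (!(st & 1)) return st;

    /* Overwrite the buffer right away (same length as before).  Which
       text appears depends on whether the driver copied the data before
       this process got to run again. */
    strcpy(buf, "MODIFIED buffer contents\r\n");

    sys$waitfr(1);              /* wait for completion (event flag 1) */
    return iosb.status;
}

If the terminal driver buffers the data at request time, the original 
text should come out even though the buffer was overwritten immediately 
after the $QIO returned.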

>> However, in the case of the mailbox driver, there is an in-memory buffer
>> that belongs to the mailbox. Messages are queued there to be delivered
>> to whoever reads the mailbox.
>
> Right.
>
>> It is entirely possible that during the QIO call, the driver queues the
>> request by moving the data to the mailbox buffer.
>
> Not necessarily directly in the QIO call, but at some later point in
> time, before the I/O request completes.
>
>> You should note that if a process writes to a mailbox and then crashes,
>> the reading process still gets the message despite the writing process
>> having died before the message was read. This would indicate to me that
>> messages are written to the mailbox buffer quickly and stay there.
>
> That is not the same thing as saying that the data is copied before the
> QIO returns. This is not Unix. The QIO is a system call, which requests
> that I/O should be initiated. It does not do the I/O in the context of
> the QIO system call.
> The I/O is an asynchronous operation.
> It completes in parallel with your process running.
>
>> If the driver merely copies from the writer's user buffer to the
>> reader's user buffer, then a mailbox would not need to have its own
>> buffer size specified, and would not be able to contain multiple
>> messages in that buffer.
>
> Right.
>
>> More importantly, since a process can die gracefully between the time it
>> has done the write QIO and the time some other process has read the
>> mailbox message, it would indicate that the driver does not "lock" the
>> writer's user buffer by forcing a crashing process into RWAST to ensure
>> that, when the reading process finally reads the message, the driver
>> would still have access to the writer's user buffer.
>
> But all that says is that the data is copied at some point from the
> program buffer to the mailbox buffer. It does not ensure that this
> happens before the QIO is completed.
>
>> In other words, because a process can die gracefully and any unread
>> mailbox messages it wrote before dying are preserved and still delivered,
>> the odds point to the driver making an immediate copy of the contents to
>> the mailbox buffer (i.e. it queues the I/O).
>
> Remove the word "immediate", and I agree with everything you write.
>
>> And this is why processes go into RWMBX when the mailbox buffer is full
>> since the driver is unable to write to the mailbox device until a
>> process reads some data from it to free up space in the mailbox's buffer.
>
> The RWMBX state might prove your point for mailboxes, though. I should
> test this (or someone else could).
>
> Does the whole process block if you try to write to a full mailbox? Even
> if you just do a QIO, and not a QIOW?
>
> If so, then the mailbox driver is indeed moving the data in as part of
> the QIO processing. But if so, then I also wonder what the VMS engineers
> were thinking. There isn't really any reason for blocking the whole
> process under such circumstances.

I'm still trying to get clarification of the RWMBX state. It is a 
resource wait state entered when there is no room in the mailbox. But 
exactly when does it happen? I haven't found an answer to that, just as 
I have not found an answer to when the mailbox driver copies the data. 
The manuals are not clear on that point.
It might actually be that if you issue several QIOs and then go on 
processing something else, your program will, at some random later 
point, go into RWMBX because the driver has by then chewed through the 
I/O requests to the point where the mailbox is full.
However, since the data copying into the mailbox is done by the CPU 
itself, the operation is not truly asynchronous unless we have SMP. And 
even in an SMP situation, the QIO that writes to a mailbox can still be 
both queued and dequeued before the requesting process gets a chance to 
execute another instruction, making QIO and QIOW equivalent in practice, 
even though they are conceptually very different.
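
A rough, untested sketch of such a test (DEC C, standard starlet 
headers; the mailbox sizes, message text and loop count are arbitrary, 
and process quotas may get in the way before the mailbox does): create 
a deliberately tiny temporary mailbox with no reader and fire plain 
$QIOs at it until the buffer quota is exhausted.

#include <stdio.h>
#include <starlet.h>
#include <ssdef.h>
#include <iodef.h>
#include <descrip.h>

int main(void)
{
    unsigned short chan;
    /* one IOSB per outstanding request */
    struct { unsigned short status, count; unsigned int dev; } iosb[64];
    char msg[] = "does this QIO ever come back?";
    int i, st;

    /* Temporary mailbox, deliberately small: max message 64 bytes,
       buffer quota 256 bytes, no logical name. */
    st = sys$crembx(0, &chan, 64, 256, 0, 0, 0);
    if (!(st & 1)) return st;

    for (i = 0; i < 64; i++) {
        /* Plain $QIO: should only queue the request and return.  If the
           process instead hangs inside this call once the mailbox
           buffer is full, the copy is done during QIO processing. */
        st = sys$qio(0, chan, IO$_WRITEVBLK, &iosb[i], 0, 0,
                     msg, sizeof(msg) - 1, 0, 0, 0, 0);
        printf("QIO %d returned %08x\n", i, st);
    }

    printf("all QIOs issued, process is still running\n");
    sys$hiber();    /* park here so the process state can be inspected */
    return SS$_NORMAL;
}

If the loop stops printing partway through and SHOW SYSTEM (from another 
terminal) shows the process in RWMBX, the copy, and the wait, happen 
inside the $QIO itself; if all 64 lines come out, they do not.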

>> Note that my experience is with QIOW with IO$M_NOW for writes to a
>> mailbox, and I assume that the DCL WRITE statement does the equivalent
>> when writing to a mailbox device, because they behave the same. And note
>> that a DCL WRITE to a full mailbox does cause the process to hang in
>> RWMBX.
>
> If you do a QIOW with IO$M_NOW, then you should never get to RWMBX
> state. Nor will you ever have a situation where your program has a
> buffer that has not yet been copied into the mailbox but soon might be.
> IO$M_NOW ensures that if the mailbox is full, the QIOW will complete
> the I/O with an error saying that the mailbox was full and the write
> could not be done. The QIOW itself blocks your program from continuing
> until the I/O has completed.
>
> There is a big difference between QIO and QIOW!
>
> A DCL write will most likely always be a QIOW.

Also, I might have been slightly confused by IO$M_NOW. Unless my 
memory fails me now, that modifier guarantees that your write does not 
complete until the message has been read by someone.
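
That should also be easy to check with a variation of the same kind of 
sketch (again untested; DEC C, standard starlet headers, arbitrary 
mailbox sizes): write to a mailbox that nobody is reading, once with 
IO$M_NOW and once without, and see which of the two $QIOW calls returns 
before a reader shows up.

#include <stdio.h>
#include <starlet.h>
#include <ssdef.h>
#include <iodef.h>
#include <descrip.h>

int main(void)
{
    unsigned short chan;
    struct { unsigned short status, count; unsigned int dev; } iosb;
    char msg[] = "one message, no reader";
    int st;

    /* Temporary mailbox with no reader attached. */
    st = sys$crembx(0, &chan, 256, 1024, 0, 0, 0);
    if (!(st & 1)) return st;

    printf("writing with IO$M_NOW...\n");
    st = sys$qiow(0, chan, IO$_WRITEVBLK | IO$M_NOW, &iosb, 0, 0,
                  msg, sizeof(msg) - 1, 0, 0, 0, 0);
    printf("returned %08x, iosb status %04x\n", st, iosb.status);

    printf("writing without IO$M_NOW...\n");
    st = sys$qiow(0, chan, IO$_WRITEVBLK, &iosb, 0, 0,
                  msg, sizeof(msg) - 1, 0, 0, 0, 0);
    printf("returned %08x, iosb status %04x\n", st, iosb.status);

    return SS$_NORMAL;
}

If IO$M_NOW means "complete as soon as the message is in the mailbox 
buffer", the first call returns at once and the second hangs until 
something reads the mailbox; if it means what I remembered above, it 
will be the other way around.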

>> Since the $QIO system service doesn't know what "IO$M_NOW" means, the
>> different behaviour would be the result of the mailbox driver taking
>> actions during the initial QIO. And since the portion of the driver
>> involved in responding to SYS$QIO has the ability to copy the user
>> buffer into the mailbox buffer when IO$M_NOW is specified, it is a good
>> bet that it copies it for all write operations, and that IO$M_NOW simply
>> means that the I/O is considered complete as soon as it is queued to the
>> mailbox, with no notification pending for when a reader later reads that
>> specific message.
>
> Did you mistype something above? You talk about QIO here, but above you
> said that you only had experience using QIOW. I hope you understand the
> important difference between a QIO and a QIOW.
>
> Johnny

	Johnny


