[Info-vax] LLVM, volatile and async VMS I/O and system calls
Simon Clubley
clubley at remove_me.eisner.decus.org-Earth.UFP
Wed Sep 22 16:16:19 EDT 2021
On 2021-09-22, chris <chris-nospam at tridac.net> wrote:
>
> That sounds like bad code design to me and more an issue of critical
> sections. For example, it's quite common to have an upper and lower io
> half, with queues betwixt the two. Upper half being mainline code that
> has access to and can update pointers, while low half at interrupt
> level also has access to the queue and it's pointers. At trivial level,
> interrupts are disabled during mainline access and if the interrupt
> handler always runs to completion, that provides the critical section
> locks.
>
It's nothing like that, Chris.
At the level of talking to the kernel, all I/O on VMS is asynchronous
and it is actually a nice design. There is no such thing as synchronous
I/O at system call level on VMS.
When you queue an I/O in VMS, you can pass either an event flag number or
an AST completion routine to the sys$qio() call, which queues the I/O
for processing and immediately returns to the application.
To put that another way, the sys$qio() I/O call is purely asynchronous.
Any decision to wait for the I/O to complete is made in the application
(for example via the sys$qiow() call) and not in the kernel.
You can stall by making a second system call to wait until the event
flag is set, or you can use sys$qiow(), which is a helper routine that
does that for you, but you are not forced to, and that is the critical
point.
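In C, that split looks roughly like the sketch below. It is only a
sketch: it assumes a channel already assigned with sys$assign() to some
device, uses event flag 1 for brevity where real code would allocate one
with lib$get_ef(), and omits most error handling. The function and
buffer names are just illustrative.

  #include <starlet.h>    /* sys$qio(), sys$qiow(), sys$waitfr() */
  #include <iodef.h>      /* IO$_READVBLK */
  #include <iosbdef.h>    /* IOSB: the quadword I/O status block */

  static char buffer[512];
  static IOSB iosb;

  int read_block(unsigned short chan)
  {
      /* Queue the read: sys$qio() returns as soon as the request is
         accepted by the kernel, not when the data arrives. */
      int status = sys$qio(1, chan, IO$_READVBLK, &iosb, 0, 0,
                           buffer, sizeof buffer, 0, 0, 0, 0);
      if (!(status & 1)) return status;   /* the queuing itself failed */

      /* ... the application is free to do unrelated work here ... */

      /* Only when *we* decide to, stall until event flag 1 is set.
         sys$qiow() with the same arguments would fold the queue and
         the wait into a single call. */
      status = sys$waitfr(1);
      if (!(status & 1)) return status;

      return iosb.iosb$w_status;          /* final completion status */
  }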
You can queue the I/O and then just carry on doing something else
in your application while the I/O completes, and then be notified
in one of several ways.
That means the kernel can write _directly_ into your process space,
setting status variables and filling your queued buffer, while the
application is busy doing something else completely different.
You do not have to stall in a system call to actually receive the
buffer from the kernel - VMS writes it directly into your address space.
It is _exactly_ the same as embedded bare-metal programming where the
hardware can write directly into memory-mapped registers and buffers
in your program while you are busy doing something else.
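The AST route makes that point even more clearly. Something along these
lines (again only a sketch, assuming an already-assigned channel; the
exact cast needed for the astadr argument depends on the starlet.h
prototype shipped with a given compiler):

  #include <starlet.h>    /* sys$qio() */
  #include <iodef.h>      /* IO$_READVBLK */
  #include <iosbdef.h>    /* IOSB quadword layout */
  #include <efndef.h>     /* EFN$C_ENF: do not use an event flag */

  static char buffer[512];
  static IOSB iosb;
  static volatile int read_finished = 0;  /* written from AST context */

  /* VMS delivers this AST in process context once the transfer is
     done.  By the time it runs, VMS has already written the data into
     'buffer' and the completion status into 'iosb' - the mainline
     never made a call to "receive" either of them. */
  static void read_done_ast(void *astprm)
  {
      (void)astprm;
      read_finished = 1;
  }

  int start_read(unsigned short chan)
  {
      /* Queue the read and return at once; the AST fires later. */
      return sys$qio(EFN$C_ENF, chan, IO$_READVBLK, &iosb,
                     (void (*)())read_done_ast, 0,
                     buffer, sizeof buffer, 0, 0, 0, 0);
  }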
> What you seem to be suggesting is a race condition, where the state of
> one section of code is unknown to the other, a sequence of parallel
> states that somehow get out of sync, due to poor code design, sequence
> points, whatever.
>
It is actually a very clean mechanism, and there are no race
conditions when it is used properly.
>
> I'm sure the designers of vms would be well aware of such issues,
> steeped in computer science as they were, and an area which is
> fundamental to most system design...
>
They are, which is why the DEC-controlled compilers emitted code
that worked just fine with VMS without the application having to
use volatile.
However, LLVM is now the compiler toolkit in use, and it could quite
validly make different assumptions about whether it needs to re-read
a variable that it does not know has changed.
After all, if the application takes full advantage of this
asynchronous I/O model, there is no explicit call in the code that
receives the buffer or the I/O completion status variables; VMS
simply updates them in place when the I/O has completed.
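A deliberately simple example of where that can bite (a sketch only:
no event flag, no error handling, and the cast is there purely to
match the service prototype): a completion-polling loop is only
correct if the compiler is told the status word can change behind
its back, which is exactly what the volatile qualifier declares.

  #include <starlet.h>    /* sys$qio() */
  #include <iodef.h>      /* IO$_READVBLK */
  #include <iosbdef.h>    /* IOSB */
  #include <efndef.h>     /* EFN$C_ENF */

  static char buffer[512];

  /* Without 'volatile', LLVM may legitimately read iosb$w_status
     once, keep it in a register and turn the loop below into an
     infinite loop, because nothing in this translation unit ever
     writes to it - VMS writes it from outside the compiler's view
     when the I/O completes. */
  static volatile IOSB iosb;

  int read_and_poll(unsigned short chan)
  {
      /* The cast only matches the prototype; the object itself
         remains volatile for our own reads. */
      int status = sys$qio(EFN$C_ENF, chan, IO$_READVBLK,
                           (IOSB *)&iosb, 0, 0,
                           buffer, sizeof buffer, 0, 0, 0, 0);
      if (!(status & 1)) return status;

      while (iosb.iosb$w_status == 0) {
          /* do other work; just never assume the value is stable */
      }
      return iosb.iosb$w_status;
  }

Queuing the request clears the IOSB, so a non-zero status word is the
usual sign that the I/O has completed.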
I am hoping, however, that there are enough sequence points in real
code, even under the VMS asynchronous I/O model, for this not to be
a problem in practice, although it remains a potential one.
Now do you see the potential problem?
BTW, this also applies to some other system calls as well, since a
number of them are asynchronous too - it is not just the I/O in VMS
which is asynchronous.
Simon.
--
Simon Clubley, clubley at remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.