[Info-vax] OpenVMS async I/O, fast vs. slow

Johnny Billquist bqt at softjar.se
Mon Nov 6 06:31:15 EST 2023


On 2023-11-04 22:44, Arne Vajhøj wrote:
> On 11/4/2023 7:11 AM, Johnny Billquist wrote:
>> On 2023-11-03 15:08, Arne Vajhøj wrote:
>>> On 11/2/2023 9:02 PM, Jake Hamby (Solid State Jake) wrote:
>>>> I've become a little obsessed with the question of how well OpenVMS
>>>> performs relative to Linux inside a VM, under different conditions.
>>>> My current obsession is the libuv library which provides a popular
>>>> async I/O abstraction layer implemented for all the different flavors
>>>> of UNIX that have async I/O, as well as for Windows. What might a VMS
>>>> version look like? How many cores could it scale up to without too
>>>> much synchronization overhead?
>>>>
>>>> Alternatively, for existing VMS apps, how might they be sped up on
>>>> non-VAX hardware? Based on the mailbox copy driver loop in the VMS
>>>> port of Perl that I spent some time piecing together, I've noticed a
>>>> few patterns that can't possibly perform well on any hardware newer
>>>> than Alpha, and maybe not on Alpha either.
>>>
>>> The normal assumption regarding speed of disk IO would be that:
>>>
>>> RMS record IO ($GET and $PUT) < RMS block IO ($READ and $WRITE) < 
>>> $QIO(W) < $IO_PERFORM(W) < memory mapped file
>>>
>>> (note that assumption and fact are spelled differently)
>>
>> I'm not sure I have ever understood why people think memory mapped 
>> files would be faster than a QIO under VMS.
> 
> Very few layers.

Well. The point is that QIO (and even more so IO_PERFORM) actually does 
not involve many layers. It's very different from I/O operations on a 
Unix system.
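
To make that concrete, here is a minimal sketch (untested; the file 
name is hypothetical) of the thin path I mean: an RMS "user file open" 
(UFO) just opens the file and hands back a bare channel in FAB$L_STV, 
and a $QIOW on that channel reads virtual blocks with no RMS record 
layer involved at all:

    #include <rms.h>
    #include <starlet.h>
    #include <iodef.h>
    #include <efndef.h>
    #include <string.h>
    #include <stdio.h>

    /* I/O status block layout for $QIOW. */
    struct iosb { unsigned short status, count; unsigned int dev; };

    int main(void)
    {
        struct FAB fab = cc$rms_fab;
        struct iosb iosb;
        char buf[512];
        int status;

        fab.fab$l_fna = "WORK:[DATA]TEST.DAT";    /* hypothetical */
        fab.fab$b_fns = strlen(fab.fab$l_fna);
        fab.fab$l_fop = FAB$M_UFO;  /* user file open: RMS opens the
                                       file, then gets out of the way */

        status = sys$open(&fab);
        if (!(status & 1)) return status;

        /* UFO leaves an assigned channel in FAB$L_STV. */
        status = sys$qiow(EFN$C_ENF, (unsigned short)fab.fab$l_stv,
                          IO$_READVBLK, &iosb, 0, 0,
                          buf, sizeof buf,      /* P1 buffer, P2 size */
                          1, 0, 0, 0);          /* P3 starting VBN    */
        if (status & 1) status = iosb.status;

        printf("status %d, %u bytes\n", status, iosb.count);
        sys$dassgn((unsigned short)fab.fab$l_stv);
        return status;
    }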

> Large degree of freedom to the OS about how to read.

There aren't really that many ways to read from a disk. Basically, 
there is just one. Beyond that, it's about layers inside the OS. But we 
already established here that we don't want those layers...

>> So how would memory mapped I/O be any faster? You basically cannot be 
>> any faster than one DMA transfer. In fact, with memory mapped I/O, 
>> you might also be hitting the page fault handling and reading in a 
>> full page, which might be more than you needed, causing some overhead 
>> as well.
> 
> Fewer layers to go through. More freedom to read ahead.

Read ahead is something that the system can easily do both for normal 
I/O and memory mapped I/O. It's just a question of speculative reads 
based on an assumed access pattern, most commonly that the program is 
reading the file sequentially from start to finish.
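
RMS even exposes that directly for the record path. A sketch (untested) 
of asking for multi-buffered read-ahead on a sequential file:

    #include <rms.h>
    #include <starlet.h>
    #include <string.h>

    /* With RAB$M_RAH and more than one buffer, RMS starts reading
       the next blocks while the program is still consuming the
       current ones.                                               */
    int open_with_readahead(struct FAB *fab, struct RAB *rab,
                            char *name)
    {
        int status;

        *fab = cc$rms_fab;
        fab->fab$l_fna = name;
        fab->fab$b_fns = strlen(name);

        status = sys$open(fab);
        if (!(status & 1)) return status;

        *rab = cc$rms_rab;
        rab->rab$l_fab = fab;
        rab->rab$b_mbf = 4;           /* four buffers to overlap I/O */
        rab->rab$l_rop = RAB$M_RAH;   /* read-ahead                  */
        return sys$connect(rab);
    }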

>> Also, what does $IO_PERFORM do, that could possibly make it faster 
>> than QIO?
> 
> $QIO(W) is original. $IO_PERFORM(W) was added much later.
> 
> $IO_PERFORM(W) is called Fast I/O. The name and the fact
> that it was added later hint at it being faster.
> 
> That name has always given me associations to a strategy of
> doing lots of checks upfront and then skipping layers
> and checks when doing the actual reads/writes. But I
> have no idea if that is actually what it does.

I think we covered this elsewhere. IO_PERFORM really does seem to be 
the one path in VMS without any layers. QIO looks like it might 
actually have one or two layers in there.
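
For reference, the Fast I/O call sequence, as I read the documentation, 
looks roughly like this (a sketch from memory, not compiled; the exact 
types and argument lists should be checked against starlet.h):

    #include <starlet.h>
    #include <iodef.h>

    static char buf[8192];      /* data buffer                       */
    static char iosa[32];       /* I/O status area (IOSA)            */

    int fast_read(unsigned short chan, unsigned int vbn)
    {
        void *va;
        unsigned long long len, fandle, buf_h, iosa_h;
        int status;

        /* One-time cost: lock the buffer and the IOSA down into
           buffer objects, so the hot path never has to probe or
           lock pages.                                               */
        status = sys$create_bufobj_64(buf, sizeof buf, 0, 0,
                                      &va, &len, &buf_h);
        if (!(status & 1)) return status;
        status = sys$create_bufobj_64(iosa, sizeof iosa, 0, 0,
                                      &va, &len, &iosa_h);
        if (!(status & 1)) return status;

        /* One-time cost: validate the function code once and get a
           "fandle" back for reuse.                                  */
        status = sys$io_setup(IO$_READVBLK, &buf_h, &iosa_h,
                              0, 0, &fandle);
        if (!(status & 1)) return status;

        /* The per-I/O hot path: no probing, locking or validation.  */
        return sys$io_performw(fandle, chan, iosa, buf, sizeof buf,
                               vbn);
    }

So the upfront-checks-then-skip-layers guess above does seem to be the 
design: the validation is hoisted into $IO_SETUP and the buffer 
objects, and $IO_PERFORM(W) itself has very little left to do.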

But I can't see how IO_PERFORM would be any slower than memory mapped 
I/O. And it might in fact actually be faster in some situations.
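
For comparison, the memory mapped route here would be $CRMPSC against 
the same kind of UFO channel; a rough, untested sketch:

    #include <starlet.h>
    #include <secdef.h>

    /* Map a whole file as a private section into P0 space. Reads
       then happen in the page fault handler, a page cluster at a
       time, whether you needed that much or not.                  */
    int map_file(unsigned short chan, char **base, char **end)
    {
        /* With SEC$M_EXPREG, VMS picks the address range itself
           and returns it in retadr; inadr only selects the region. */
        struct { unsigned int lo, hi; } inadr = { 0x200, 0x200 },
                                        retadr;
        int status;

        status = sys$crmpsc((void *)&inadr, (void *)&retadr,
                            0,                 /* acmode            */
                            SEC$M_EXPREG,
                            0, 0, 0,           /* private section   */
                            chan,
                            0,                 /* pagcnt 0 = whole  */
                            0, 0, 0);          /* vbn, prot, pfc    */
        if (status & 1) {
            *base = (char *)(unsigned long) retadr.lo;
            *end  = (char *)(unsigned long) retadr.hi;
        }
        return status;
    }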

   Johnny



