[Info-vax] OpenVMS async I/O, fast vs. slow
Arne Vajhøj
arne at vajhoej.dk
Sat Nov 4 17:44:24 EDT 2023
On 11/4/2023 7:11 AM, Johnny Billquist wrote:
> On 2023-11-03 15:08, Arne Vajhøj wrote:
>> On 11/2/2023 9:02 PM, Jake Hamby (Solid State Jake) wrote:
>>> I've become a little obsessed with the question of how well OpenVMS
>>> performs relative to Linux inside a VM, under different conditions.
>>> My current obsession is the libuv library which provides a popular
>>> async I/O abstraction layer implemented for all the different flavors
>>> of UNIX that have async I/O, as well as for Windows. What might a VMS
>>> version look like? How many cores could it scale up to without too
>>> much synchronization overhead?
>>>
>>> Alternatively, for existing VMS apps, how might they be sped up on
>>> non-VAX hardware? Based on the mailbox copy driver loop in the VMS
>>> port of Perl that I spent some time piecing together, I've noticed a
>>> few patterns that can't possibly perform well on any hardware newer
>>> than Alpha, and maybe not on Alpha either.
>>
>> The normal assumption regarding speed of disk IO would be that:
>>
>> RMS record IO ($GET and $PUT) < RMS block IO ($READ and $WRITE) <
>> $QIO(W) < $IO_PERFORM(W) < memory mapped file
>>
>> (note that assumption and fact are spelled differently)
>
> I'm not sure I have ever understood why people think memory mapped files
> would be faster than a QIO under VMS.
Very few layers.
A large degree of freedom for the OS in how to read.
> With memory mapped I/O, what you essentially get is that I/O transfers
> go directly from/to disk to user memory with a single operation. There
> are no intermediate buffers, no additional copying. Which is what you
> pretty much always have otherwise on Unix systems.
>
> However, a QIO under VMS is already a direct communication between the
> physical memory and the device with no intermediate buffers, additional
> copying or whatever, unless I'm confused (and VMS changed compared to
> RSX here...).
XFC? (The Extended File Cache can sit between the user buffer
and the device.)
> So how would memory mapped I/O be any faster? You basically cannot be
> any faster than one DMA transfer. In fact, with memory mapped I/O, you
> might be also hitting the page fault handling, and a reading in of a
> full page, which might be more than you needed, causing some overhead as
> well.
Fewer layers to go through. More freedom to read ahead.
> Also, what do $IO_PERFORM do, that could possibly make it faster than QIO?
$QIO(W) is original. $IO_PERFORM(W) was added much later.
$IO_PERFORM(W) is called Fast I/O. The name and the fact
that it was added later hint at it being faster.
That name has always given me associations to a strategy of
doing lots of checks up front and then skipping layers
and checks when doing the actual reads/writes. But I
have no idea if that is actually what it does.
Arne