[Info-vax] OS implementation languages

Johnny Billquist bqt at softjar.se
Thu Aug 31 13:11:23 EDT 2023


On 2023-08-31 04:16, Bob Gezelter wrote:
> On Wednesday, August 30, 2023 at 4:58:45 PM UTC-4, Arne Vajhøj wrote:
>> On 8/30/2023 4:53 AM, Johnny Billquist wrote:
>>> Not sure how easy it is to dodge RMS under VMS. In RSX, you can just do
>>> the QIOs to the ACP yourself and go around the whole thing, which makes
>>> I/O way faster. Of course, since files still have this structure thing,
>>> most of the time you are still going to have to pay for it somewhere.
>>> But if you are happy with just raw disk blocks, the basic I/O do not
>>> have near as much penalty. Admitted, the ODS-1 (as well as ODS-2)
>>> structure have some inherent limitations that carry some cost as well.
>>> So you could improve things some by doing some other implementation on
>>> the file system level.
>>> But mainly, no matter what the file system design is, you are still
>>> going to have the pain of RMS, which is the majority of the cost. And
>>> you'll never get away from this as long as you use VMS.
>> SYS$QIO(W) for files works fine on VMS too.
>>
>> But a bit of a hassle to use.
>>
>> There are two alternative ways to bypass RMS:
>> * SYS$IO_PERFORM(W) - the "fast I/O" thingy
>> * SYS$CRMPSC - mapping the file to memory
>>
>> Arne
> Arne,
> 
> One can bypass RMS, but it is not RMS that is the inherent problem. In my experience, it is not so much using RMS, but using RMS poorly that is the source of most problems.
> 
> As I noted in another post in this thread, increasing buffer factors and block sizes often virtually eliminates "RMS" performance problems. File extensions are costly; extending files by large increments also reduces overhead, increasing performance.

I would agree that you can certainly make RMS give better performance 
than it does by default: cache data, get it to do fewer copy 
operations where possible, extend files by larger increments. It 
definitely helps.
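For instance, the process-wide RMS defaults can be raised from DCL 
with SET RMS_DEFAULT (the qualifiers are real; the particular values 
here are just illustrative, and the right numbers depend on the 
workload):

```
$ SET RMS_DEFAULT /BUFFER_COUNT=8 /BLOCK_COUNT=32 /EXTEND_QUANTITY=2048
$ SHOW RMS_DEFAULT
```

The same knobs can of course also be set per file via FDL or per 
program via the FAB/RAB fields.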

But depending on how far you want to push I/O, at some point skipping 
RMS is always going to give you more I/O performance, at the cost of 
either having to replicate parts of what RMS does yourself, or really 
just dealing with raw disk blocks.

But it is also true that the ODS-2 (or -5, I guess) filesystem could 
be improved on. That's a bit more work, though, and it's not horribly 
bad most of the time.

The worst part is that for large files, you need to walk the retrieval 
pointers in order to find which logical block to fetch when you request 
a virtual block in a file. The retrieval pointer table does not scale 
well to large files: you cannot skip parts of it, but always have to 
scan it from the start up to the virtual block you want, which might 
require reading additional disk blocks to get from the file header to 
its extension headers.
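The cost being described is the linear scan. A minimal sketch, using a 
made-up simplification of an ODS-2 style extent list (each retrieval 
pointer maps a run of consecutive virtual blocks to a starting logical 
block), shows why the lookup is O(n) in the number of extents:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical simplification of a retrieval-pointer (extent) list:
   each entry maps 'count' consecutive virtual blocks to logical
   blocks starting at 'lbn'. */
struct retptr {
    unsigned count;  /* number of blocks in this extent */
    unsigned lbn;    /* logical block number of the extent's first block */
};

/* Map a virtual block number to a logical block number by scanning
   the extent list from the start -- there is no way to jump ahead,
   so the cost grows with the number of extents before the target.
   VBNs are 1-based, as in ODS-2.  Returns 0 on success, -1 if the
   VBN is past the end of the mapped blocks. */
int vbn_to_lbn(const struct retptr *map, size_t next, unsigned vbn,
               unsigned *lbn)
{
    unsigned base = 1;
    for (size_t i = 0; i < next; i++) {
        if (vbn < base + map[i].count) {
            *lbn = map[i].lbn + (vbn - base);
            return 0;
        }
        base += map[i].count;
    }
    return -1;
}
```

A badly fragmented large file has many extents, and on top of the scan 
itself, reaching the later entries can mean reading extension headers 
from disk first.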

Compared to, for example, ffs in Unix, this is not fast. In ffs, you 
can directly compute where to find the mapping from a virtual block to 
its logical block without traversing a list of unknown size, even for 
very large files. It might require reading up to 3 additional disk 
blocks in order to find the mapping, when we talk about really, really 
big files, but that is still probably cheap compared to ODS-2 for 
equally large files. Not to mention that caching makes it much cheaper 
on the ffs side than on the ODS-2 side. Other Unix file systems have 
improved on ffs as well, so they do even better. ODS-2 is old, and it 
has its weaknesses.
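The ffs arithmetic can be sketched as follows, assuming the classic 
layout of 12 direct pointers in the inode followed by single, double 
and triple indirect blocks (the pointers-per-block figure here assumes 
a 4K block with 4-byte pointers, which varies by filesystem). The 
point is that the number of extra metadata reads for any file offset 
is computed, never searched for:

```c
#include <assert.h>

#define NDADDR 12     /* direct block pointers held in the inode itself */
#define NINDIR 1024   /* pointers per indirect block (4K block / 4-byte ptrs) */

/* Return how many indirect blocks (0..3) must be read to map logical
   file block 'bn', or -1 if 'bn' lies beyond triple indirection.
   Each test peels off one addressing tier, so the answer falls out
   of simple arithmetic on the block number. */
int indirect_levels(long long bn)
{
    if (bn < NDADDR)
        return 0;                               /* direct: inode only */
    bn -= NDADDR;
    if (bn < (long long)NINDIR)
        return 1;                               /* single indirect */
    bn -= NINDIR;
    if (bn < (long long)NINDIR * NINDIR)
        return 2;                               /* double indirect */
    bn -= (long long)NINDIR * NINDIR;
    if (bn < (long long)NINDIR * NINDIR * NINDIR)
        return 3;                               /* triple indirect */
    return -1;
}
```

So even a maximally large file costs at most three extra block reads 
per lookup, and those indirect blocks cache very well.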

Then you have all the complexity of the directory files, which makes 
it costly to add and remove entries when the directory is large.

So sure. A new file system could help improve disk I/O performance in 
VMS, but it's generally not at all as bad as some people try to make 
it out to be.

Another thing is memory-mapped files, which are a big deal in Unix. 
The reason is that "normal" I/O in Unix always goes through 
intermediary buffers, so there is significant overhead and additional 
copying going on all the time. Using memory-mapped I/O circumvents all 
that cruft in Unix. In VMS, if you use QIO and talk directly to the 
ACP (or XQP), you are already in the good place that Unix people reach 
with memory-mapped I/O: reads and writes go directly between the disk 
and the user process memory. You can't do any better than that. 
Memory-mapped I/O does not get you around the fact that the data still 
needs to be accessed on the disk and DMAed into memory somewhere; that 
is the absolute minimum that always has to happen. And with direct 
talking to the ACP, you are already there.
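On the Unix side, the mechanism in question is POSIX mmap(). A minimal 
sketch (error handling trimmed to the essentials; `map_file` is just 
an illustrative helper name, not a standard API):

```c
#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Map an entire file read-only into the process address space.
   After this, the file's contents are read with ordinary memory
   references; the kernel pages data in from disk on demand, with no
   read()-style copy through an intermediate user buffer. */
char *map_file(const char *path, size_t *len)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return NULL;

    struct stat st;
    if (fstat(fd, &st) < 0 || st.st_size == 0) {
        close(fd);
        return NULL;
    }
    *len = (size_t)st.st_size;

    char *p = mmap(NULL, *len, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);   /* the mapping remains valid after close */
    return p == MAP_FAILED ? NULL : p;
}
```

Unmap with munmap() when done. This is the copy-avoidance that, per 
the argument above, a direct QIO to the ACP/XQP already gives you on 
the VMS side.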

   Johnny



