[Info-vax] OS implementation languages

Ian Miller gxys at uk2.net
Wed Aug 30 05:00:44 EDT 2023


On Wednesday, August 30, 2023 at 9:53:11 AM UTC+1, Johnny Billquist wrote:
> On 2023-08-30 10:04, terry-... at glaver.org wrote: 
> > On Tuesday, August 29, 2023 at 8:29:30 PM UTC-4, Johnny Billquist wrote: 
> >> On 2023-08-29 22:27, bill wrote: 
> >>> Not really. VMS has always been notoriously slow with I/O and I assume 
> >>> that's what Simon was hinting at. 
> >> So? It just means that other systems might achieve a higher rate of I/O 
> >> throughput than VMS on a specific piece of hardware. Nothing prevents me 
> >> from throwing faster hardware at the problem until I saturate the 
> >> network, no matter which OS I'm using. 
> > 
> > I discovered massive speed differences way back when on a VAX- 
> > 11/780 with a TU78 tape drive - $ BACKUP/IMAGE made the tape 
> > drive go "bloop... bloop... bloop" while a $ BACKUP/PHYSICAL made 
> > the tape drive go "neeeeeeeeeeeeeee" with the same block size. 
> > Same disk, same tape; the difference was filesystem overhead. 
> > 
> > Since then, both speeds have gotten faster, but VMS file-structured 
> > I/O is still WAY slower than what the physical hardware can deliver. 
> > I have an x86-64 system running here with a load of enterprise SSDs 
> > that give me a sustained write performance of 1.8 GByte/sec under 
> > FreeBSD 13. 
> > I'm running an emulated Alpha (AlphaVM) on it, as I haven't heard 
> > anything from VSI since I (re)registered for their hobbyist program 
> > quite a few months ago. But from what I've seen, emulated Tru64 is 
> > a lot faster than VMS under the same AlphaVM release on the same 
> > host OS / hardware. 
> > 
> > Yes, that's disk I/O. But I would assume that network paths also have 
> > high overhead (not that it really matters, as real-world high-bandwidth/ 
> > high-volume transfers likely involve filesystem data).
> Which still means, in the end, that VMS does not have a limitation on 
> the speed of I/O as such, and the question of "how fast can VMS push 
> bits" is really just a question of how fast your hardware is. 
> 
> But your observation also raises another good point. I/O is sort of 
> slow in VMS, but it's not the actual I/O that is the main problem; it's 
> RMS. The overhead there is pretty massive compared to something stupid 
> like Unix. 
> 
> Not sure how easy it is to dodge RMS under VMS. In RSX, you can just do 
> the QIOs to the ACP yourself and go around the whole thing, which makes 
> I/O way faster. Of course, since files still have this structure thing, 
> most of the time you are still going to have to pay for it somewhere. 
> But if you are happy with just raw disk blocks, the basic I/O does not 
> carry nearly as much penalty. Admittedly, the ODS-1 (as well as ODS-2) 
> structure has some inherent limitations that carry some cost, so you 
> could improve things somewhat with a different implementation at the 
> file system level. 
> But mainly, no matter what the file system design is, you are still 
> going to have the pain of RMS, which is the majority of the cost. And 
> you'll never get away from that as long as you use VMS. 
> 
> It's as if you always accessed every file in Unix via BDB (Berkeley 
> DB). If people did that, the numbers on Unix systems would also look 
> very different. 
> 
> Johnny

QIO to files is a documented option; see Chapter 1 of the VSI OpenVMS I/O User's Reference Manual, and Chapter 10 covers Fast I/O. A quick sketch of the technique is below.
https://docs.vmssoftware.com/vsi-openvms-io-user-s-reference-manual/
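
For anyone who wants to experiment, here's a minimal, untested sketch of the
usual way to sidestep RMS record processing from C: open the file with the
RMS "user file open" (UFO) option so $OPEN only assigns a bare channel, then
read virtual blocks directly with $QIOW. The file name is hypothetical and
error handling is kept to a minimum.

/* Sketch: read a file by virtual block, bypassing RMS record processing. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <rms.h>        /* struct FAB, cc$rms_fab, FAB$M_UFO */
#include <iodef.h>      /* IO$_READVBLK */
#include <ssdef.h>      /* SS$_ENDOFFILE */
#include <starlet.h>    /* sys$open, sys$qiow, sys$dassgn */

#define BLKSIZE 512
#define NBLKS   127                       /* ~64 KB per QIO */

int main(void)
{
    struct FAB fab = cc$rms_fab;
    static char buf[BLKSIZE * NBLKS];
    unsigned short iosb[4];               /* status word, byte count, dev info */
    unsigned int sts, vbn = 1;            /* virtual block numbers start at 1 */
    unsigned short chan;

    fab.fab$l_fna = "WORK:[DATA]BIG.DAT"; /* hypothetical file name */
    fab.fab$b_fns = (unsigned char) strlen(fab.fab$l_fna);
    fab.fab$l_fop = FAB$M_UFO;            /* assign a channel, no RMS access */

    sts = sys$open(&fab);
    if (!(sts & 1)) exit(sts);

    chan = (unsigned short) fab.fab$l_stv;  /* UFO returns the channel here */

    for (;;) {
        sts = sys$qiow(0, chan, IO$_READVBLK, iosb, 0, 0,
                       buf, sizeof buf, vbn, 0, 0, 0);
        if (sts & 1) sts = iosb[0];       /* final status is in the IOSB */
        if (sts == SS$_ENDOFFILE) break;  /* ran past end of file */
        if (!(sts & 1)) exit(sts);
        vbn += NBLKS;
    }

    sys$dassgn(chan);    /* a UFO channel is deassigned, not $CLOSEd */
    return 0;
}

For really high-volume work, the Fast I/O services from Chapter 10
(SYS$IO_SETUP/SYS$IO_PERFORM) pre-lock the buffers and shave the
per-request overhead down further.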


