[Info-vax] OS implementation languages
Ian Miller
gxys at uk2.net
Wed Aug 30 07:24:54 EDT 2023
On Wednesday, August 30, 2023 at 10:41:28 AM UTC+1, Bob Gezelter wrote:
> On Tuesday, August 29, 2023 at 6:49:51 PM UTC-4, Craig A. Berry wrote:
> > On 8/29/23 3:27 PM, bill wrote:
> > > On 8/29/2023 3:18 PM, Johnny Billquist wrote:
> > >> On 2023-08-29 19:25, Simon Clubley wrote:
> >
> > >>> On a more serious note, I wonder what the maximum rate VMS is capable
> > >>> of emitting data at if it were using the fastest network hardware
> > >>> available.
> > >>
> > >> What a weird question. VMS in itself doesn't have any limits. It's
> > >> always just about the hardware.
> > >
> > > Not really. VMS has always been notoriously slow with I/O and I assume
> > > that's what Simon was hinting at.
> > Right, and differently so for different kinds of I/O. See posts from a
> > few years ago by (I think) Eric Johnson on performance testing of the
> > network stack. And I wish I could remember the name of the guy who
> > posted about slow disk I/O even longer ago (Dave something?) including
> > code to do the testing.
> >
> > VSI has canceled two different file system projects, one of which was
> > GFS and one of which was "not Spiralog" by Andy Goldstein (I don't know
> > if it ever had a name but Clair Grant posted here that it inherited some
> > concepts but was not the same thing as Spiralog). Something will have to
> > be done eventually for disk I/O, and while the file system isn't the
> > whole enchilada, it's certainly one big part of it.
> >
> > The network stack improvements described here:
> >
> > http://www.vmsconsultancy.com/download/NL-VMSUpdate-2015/Vienna%20LAN%20Performance%20Improvements.pdf
> >
> > will hopefully be revisited at some point. If they aren't, then VMS
> > will remain slower at network performance than other systems using the
> > same networking hardware. I totally get why the port had to take
> > precedence for a small company, but holding the line is not the same
> > thing as moving forward.
> Craig,
>
> Indeed. There have been a number of projects in the I/O area, few of which have emerged into released form.
>
> There are a number of issues. I dug into them rather deeply while writing my Ph.D. dissertation. OpenVMS has a good collection of them, as do essentially all of the other extant operating systems. IMHO, they can be remediated in many ways.
>
> With disk I/O (more properly referred to as "mass storage" I/O these days), there are unnecessary serializations forced by driver processing. IMHO, they are a remnant of the days when kernel storage was far more constrained, e.g., IBM System/360 under OS/360 or DEC PDP-11 under RSX. Now that kernel memory is far more plentiful, less serialized approaches become viable.
>
> Other mass storage issues can often be addressed, at least for sequential files, by adjusting buffering limits. The "as shipped" defaults were set back in the memory-constrained days of the VAX-11/780 and have been kept at those settings for backward compatibility. Changing them is straightforward, although one also has to change quotas in the UAF correspondingly. I have spoken on that particular performance issue for over thirty years, starting at NASA Marshall in the late 1980s. I have sample programs that have gone from less than 10% CPU utilization on a MicroVAX 3100 to 100%, merely by changing the RMS buffer factors.
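
To make the buffer-factor point concrete, below is a minimal sketch, in C against the RMS control blocks, of raising the multibuffer and multiblock counts for one sequential file. The file name and the 8-buffer x 64-block counts are purely illustrative, not anything from the original post; UAF quotas and the process/system RMS defaults still limit what RMS will actually allocate.

/* Minimal sketch (illustrative file name and counts): raise the RMS
 * multibuffer/multiblock factors for one sequential file.  The 8 x 64
 * setting is only an example; UAF quotas and process RMS defaults
 * still cap what RMS will actually allocate. */
#include <rms.h>
#include <starlet.h>
#include <string.h>

int main(void)
{
    struct FAB fab = cc$rms_fab;          /* file access block   */
    struct RAB rab = cc$rms_rab;          /* record access block */
    char rec[512];
    int status;

    fab.fab$l_fna = "WORK:[DATA]BIG.SEQ"; /* hypothetical file   */
    fab.fab$b_fns = strlen(fab.fab$l_fna);
    fab.fab$b_fac = FAB$M_GET;

    rab.rab$l_fab = &fab;
    rab.rab$b_mbf = 8;                    /* multibuffer count   */
    rab.rab$b_mbc = 64;                   /* blocks per transfer */
    rab.rab$l_rop = RAB$M_RAH;            /* ask for read-ahead  */
    rab.rab$l_ubf = rec;
    rab.rab$w_usz = sizeof(rec);

    status = sys$open(&fab);
    if (!(status & 1)) return status;
    status = sys$connect(&rab);
    if (!(status & 1)) return status;

    while ((status = sys$get(&rab)) & 1)
        ;                                 /* rab$w_rsz bytes read */

    sys$close(&fab);
    return 1;
}

The same change can be made per process at DCL level with SET RMS_DEFAULT /BUFFER_COUNT=8 /BLOCK_COUNT=64 (and inspected with SHOW RMS_DEFAULT), the values again being illustrative.
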
>
> The network stack has issues when it comes to high-performance transfers. As I recall, several years ago there was an early-adopter iSCSI kit, later withdrawn because of a number of issues. The accompanying documentation included negative comments about performance. Improving the efficiency of the IP stack would allow the use of increasingly popular iSCSI hardware.
>
> At the user API level, Fast I/O does reduce some overhead, but it requires application-level changes. Fast I/O makes sense for I/O-intensive libraries, e.g., Pathworks.
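
For context, the overhead Fast I/O removes sits in the conventional $QIO(W) path sketched below, where every request has its user buffer and IOSB validated and locked; $IO_SETUP/$IO_PERFORM instead register them once in buffer objects. This sketch shows only the conventional path, not Fast I/O itself, and the SYS$OUTPUT target and message text are illustrative.

/* Sketch of the conventional $QIOW path that Fast I/O streamlines.
 * Each call like this has its buffer and IOSB validated and locked
 * per request; $IO_SETUP/$IO_PERFORM/$IO_CLEANUP pre-register them
 * in buffer objects so that work is done once, not per I/O. */
#include <descrip.h>
#include <iodef.h>
#include <iosbdef.h>
#include <starlet.h>
#include <string.h>

int main(void)
{
    $DESCRIPTOR(devnam, "SYS$OUTPUT");  /* the process output device    */
    unsigned short chan;
    IOSB iosb;                          /* completion status lands here */
    char msg[] = "one kernel round trip per $QIOW\r\n";
    int status;

    status = sys$assign(&devnam, &chan, 0, 0);
    if (!(status & 1)) return status;

    /* efn 0, no AST; p1 = buffer address, p2 = byte count */
    status = sys$qiow(0, chan, IO$_WRITEVBLK, &iosb, 0, 0,
                      msg, strlen(msg), 0, 0, 0, 0);

    sys$dassgn(chan);
    return status;
}

Converting such a path means creating buffer objects, calling $IO_SETUP once, and then issuing $IO_PERFORM per transfer, which is exactly the application-level change referred to above.
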
>
> Some of these issues remain relevant when running as a virtualized guest. Some become significantly less relevant when the OS runs as a paravirtualized guest on a virtual machine. However, paravirtualization is not a panacea. The unnecessary serialization of mass storage requests happens at a level where it persists even when the OS is paravirtualized. IMHO, the more levels between the application and the actual hardware, the more the buffer factors become relevant. The overhead of the network stack lies in the way requests are processed, and it needs to be improved.
>
> IMHO, much of this is a remnant of the days when memory resources were far more limited. It is fixable.
>
> I am in the process of publishing a monograph with the analysis, but it has been delayed by time availability and the pandemic. I have published a paper at an IEEE conference with some of the material and relevant diagrams. The pre-print is at: http://rlgsc.com/r/20220506.html
>
> - Bob Gezelter, http://www.rlgsc.com
I'm having trouble with this link, http://rlgsc.com/r/20220506.html - it returns "page not found".