[Info-vax] naming convention in VMS MAIL

johnwallace4 at yahoo.co.uk johnwallace4 at yahoo.co.uk
Wed Dec 19 18:59:32 EST 2018


On Wednesday, 19 December 2018 23:03:12 UTC, terry-... at glaver.org  wrote:
> On Wednesday, December 19, 2018 at 5:02:19 PM UTC-5, Stephen Hoffman wrote:
> > And I doubt the answer has changed since some yutz wrote the following 
> > an aeon or two ago:
> > https://groups.google.com/d/msg/comp.os.vms/l7L2SbFdP2I/RsteYU4bjnYJ
> 
> Ages ago (VMS 5, long before Alpha), someone wrote a "turbomail" patch for VMS MAIL that a) split the mail directory into multiple subdirectories and b) used shorter filenames so more entries would fit in a given number of directory blocks. It worked quite well, as long as you could figure out the changes needed to the patch whenever a VMS update replaced any of the relevant images.
> 
> However, this was entirely due to design choices made by VMS (the 127-block directory limit, no index of directory blocks so operations like $ PURGE took approximately forever, etc.). I can say that old hardware was definitely not the cause: although disk transfer rates and the like were glacial (3 MB/sec was considered exceptionally fast), simply booting BSD Unix on the same VAX gave lightning-fast access to the same number of mail messages (in native Unix format), despite maintaining them in plain old text files.
> 
> Since then there have been some improvements in VMS, and a lot of the speed loss was papered over by vastly faster hardware.
> 
> This is one of the things that people looking at a finished VMS on x86 will likely be testing - how long (both CPU and elapsed time) does VMS/x86 take to perform a task compared to Windows Server/x86, Linux/x86, or whatever. Unfortunately for VMS, the things that will be tested will likely be things that do not take advantage of VMS's features - in the above example, mail. Likely also things like Perl scripts, MySQL or other database benchmarking, etc.
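The "turbomail" approach quoted above - spreading mail files across multiple subdirectories so no single directory grows large enough to hit VMS's linear-scan directory costs - can be sketched roughly as follows. This is a minimal illustration of the general technique, not the original patch's actual layout; the bucket count and naming scheme here are invented for the example.

```python
import hashlib
from pathlib import Path

# Number of subdirectories to spread mail files across.
# (Illustrative choice, not taken from the original patch.)
NUM_BUCKETS = 64

def bucket_path(root: str, filename: str) -> Path:
    """Map a mail filename to a hashed subdirectory under root.

    Hashing keeps each subdirectory's entry count roughly
    NUM_BUCKETS times smaller than a single flat directory,
    which matters when directory lookups scan entries linearly.
    """
    digest = hashlib.md5(filename.encode()).hexdigest()
    bucket = int(digest, 16) % NUM_BUCKETS
    return Path(root) / f"{bucket:02d}" / filename

# The same filename always maps to the same bucket, so lookups
# need only scan one small subdirectory.
p = bucket_path("MAIL", "MSG000123.TXT")
```

The same idea (hashed or split storage directories) later became standard practice in Unix mail stores and web caches for exactly the reason given above: flat directories with thousands of entries are slow on filesystems without indexed directory lookups.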

Benchmarking the performance of a particular application on 
a particular set of low-end hardware is often relatively easy.

Benchmarking productivity is not so easy at all; it involves
rather more than the speed of a particular set of hardware and
software, and takes rather more than a box and a few scripts.

NT started life as a relatively robust piece of software, at
least relative to DOS and legacy Windows - who else remembers
"what did you want to work today?".

By the time Gates wanted NT to *replace* rather than supplement
their DOS-based desktop products, as well as the MS Server family
(too insecure, too incapable, etc.), NT was ready. But when the
trade rags came to run their favourite benchmarks on WNT, some of
the popular benchmarks were slower under NT than they were on
legacy OSes on the same x86 hardware (this was reported in a few
places at the time).

Gates wasn't happy with that, and quite soon various features of
NT which made it more robust (and also slowed it down on simple
benchmarks) - e.g. subsystems running, quite understandably, in
separate address spaces in separate processes with distinct levels
of privilege - were reverted to the old way of doing things. Code
and data which really should have been kept separate were often
lumped together to make things run faster, and sod the
productivity implications.

Changes of that kind got rid of the context-switch overhead and
the like, but they also got rid of the added protection and
robustness afforded by using separate processes.

Oh well, never mind, what could possibly go wrong.

Well, actually, it seems Intel are starting to realise what can
go wrong when x86-64 implementations cut corners on important but
performance-inhibiting things: device/memory consistency models,
memory access controls, and keeping state invisible where it
shouldn't be generally visible (all of which cost time and chip
space to do properly).


