[Info-vax] naming convention in VMS MAIL
Dave Froble
davef at tsoft-inc.com
Wed Dec 19 20:29:42 EST 2018
On 12/19/2018 7:36 PM, Stephen Hoffman wrote:
> On 2018-12-19 23:59:32 +0000, johnwallace4 at yahoo.co.uk said:
>
>> On Wednesday, 19 December 2018 23:03:12 UTC, terry-... at glaver.org wrote:
>>> On Wednesday, December 19, 2018 at 5:02:19 PM UTC-5, Stephen Hoffman
>>> wrote:
>>>> And I doubt the answer has changed since some yutz wrote the
>>>> following an aeon or two ago:
>>>> https://groups.google.com/d/msg/comp.os.vms/l7L2SbFdP2I/RsteYU4bjnYJ
>>>
>>> Ages ago (VMS 5, long before Alpha), someone wrote a "turbomail" patch
>>> for VMS mail that a) split the mail directory into multiple
>>> subdirectories and b) used shorter filenames so more would fit in a
>>> given number of directory blocks. It worked quite well, as long as
>>> you could figure out the changes needed to the patch whenever a VMS
>>> update replaced any of the relevant images.
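(For anyone curious what that subdirectory-splitting trick looks like in
practice, here is a rough sketch in plain C. The bucket count, the hash,
and the [.MAIL.Bnn] naming are made up for illustration; this is not what
the actual turbomail patch did, just the general idea of keeping any one
directory file small.)

/* Sketch: hash a mail file name into one of N bucket subdirectories so
 * that no single directory file grows large.  Illustrative only. */
#include <stdio.h>

#define BUCKETS 16

static unsigned bucket_for(const char *name)
{
    unsigned h = 5381;                    /* djb2-style string hash */
    while (*name)
        h = h * 33u + (unsigned char)*name++;
    return h % BUCKETS;
}

int main(void)
{
    const char *msg = "MAIL$0004AB37.MAI";   /* made-up message file name */
    char path[256];

    /* e.g. [.MAIL.B07]MAIL$0004AB37.MAI instead of one flat [.MAIL] */
    snprintf(path, sizeof path, "[.MAIL.B%02u]%s", bucket_for(msg), msg);
    printf("%s\n", path);
    return 0;
}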
>
> All sorts of creative shenaniganing happened because VAX was
> comparatively slow. Particularly late-era VAX, as compared with various
> of the then-current Unix boxes. (Mid-1980s-era Apollo and its storage
> and networking completely blew away mid-1980s-era VAX-11 performance,
> for instance.) OpenVMS preserves more than a few design compromises
> that track directly back to VAX/VMS and occasionally as far as RSX-11M
> designs and limits; to computers and designs and hardware from forty or
> fifty years ago.
>
> Part of the I/O slowness specific to MAIL was around the older
> directory file sizes and caching limitations of ODS-2 and ODS-5 on
> older releases, as was mentioned, but, as I'd alluded to else-thread,
> the MAIL (opaque) file naming was not optimally selected for how the
> XQP works with its (opaque) directory file processing when mail files
> are added to and removed from the mail subdirectory. VAFS will
> hopefully help here.
>
> My comments from an aeon or two ago—around migrating to and using a more
> modern database—also remain. Various folks working with OpenVMS have
> been far too fond of rolling their own databases, using RMS. Largely
> because OpenVMS (still) doesn't ship with a relational database
> integrated, nor with a NoSQL database past what RMS indexed files can
> offer. Upgrading app record formats in indexed files in a rolling
> environment is Not Fun. That's rather easier with a relational
> database, or with an object store or related marshalling and
> unmarshalling support, depending on what you're up to. And in the
> specific case of SQLite, the same database file can be dropped onto any
> system with SQLite, and successfully accessed.
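(Side note on that SQLite point, since it is easy to show concretely: the
sketch below uses the public SQLite C API, and the file name and table are
made up for illustration. The resulting mail_index.db file can be copied
as-is to any other system with a SQLite build and opened there, which is
the portability being described.)

/* Minimal sketch: create a SQLite database file that any other system
 * with SQLite can open directly.  Error handling is abbreviated. */
#include <stdio.h>
#include <sqlite3.h>

int main(void)
{
    sqlite3 *db;
    char *err = NULL;

    if (sqlite3_open("mail_index.db", &db) != SQLITE_OK) {
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        return 1;
    }

    /* The on-disk format is the same on every platform SQLite runs on. */
    sqlite3_exec(db,
        "CREATE TABLE IF NOT EXISTS messages"
        "  (id INTEGER PRIMARY KEY, subject TEXT, folder TEXT);"
        "INSERT INTO messages (subject, folder) VALUES ('hello', 'NEWMAIL');",
        NULL, NULL, &err);
    if (err != NULL) {
        fprintf(stderr, "exec failed: %s\n", err);
        sqlite3_free(err);
    }

    sqlite3_close(db);
    return 0;
}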
>
>>> This is one of the things that people looking at a finished VMS on
>>> x86 will likely be testing - how long (both CPU and elapsed time)
>>> does VMS/x86 take to perform a task compared to Windows Server/x86,
>>> Linux/x86, or whatever. Unfortunately for VMS, the things that will
>>> be tested will likely be things that do not take advantage of VMS's
>>> features - in the above example, mail. Likely also things like Perl
>>> scripts, MySQL or other database benchmarking, etc.
>
> The core OpenVMS design features are inherently slow. SSDs have helped
> there as have caching controllers, but the default behavior is a write
> to persistent storage. There are more than a few cases where that's
> less than desirable, or where it would be preferable to cache a bunch of
> operations and flush that data less frequently. That's certainly
> possible on OpenVMS, but OpenVMS isn't good at presenting this trade-off
> to the developers. There've been cases where, for instance, temporary
> stub files that are opened, written, and closed frequently can utterly
> saturate a system. Bad design, but other systems can keep all that in
> memory until flushed, so the storage I/O paths don't get hammered. And
> clustering works because the data is on static storage and the OpenVMS
> I/O caches are usually write-through and not write-back. Trade-offs
> abound here, of course.
And some of us are rather happy with that. When I do a write, I really
want to believe that the data is truly written. One can mention
batteries, UPS, and such, but that doesn't give me the same warm fuzzy
feeling.
I seem to recall, sometime in the far past, a physics course and
learning about energy and work. No such thing as a free lunch,
perpetual motion, and such. If you wanted some amount of work done, you
had to provide the required energy. Maybe that applies here. If you
want to do the entire job, it takes required effort. If you're happy
with half the job, then perhaps it might seem to be faster. If you want
to do the entire job, it takes the required effort.
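To put the trade-off in concrete terms: the difference is roughly between
pushing every record out as it is written and letting records pile up in a
buffer until an explicit flush. A rough sketch in portable C stdio terms
follows; nothing here is OpenVMS-specific, and the file name and record
count are made up.

/* Rough sketch of write-through vs. buffered behavior.  Note that
 * fflush() only hands data to the operating system; a true write-through
 * to media would also need a sync call or the equivalent RMS options. */
#include <stdio.h>

#define RECORDS 10000

static void write_each_record(FILE *f)
{
    /* Flush after every record: many trips down the I/O path, but the
     * least data at risk if the program dies mid-run. */
    for (int i = 0; i < RECORDS; i++) {
        fprintf(f, "record %d\n", i);
        fflush(f);
    }
}

static void write_buffered(FILE *f)
{
    /* Let records accumulate and go out in large chunks: far fewer
     * I/Os, but everything since the last flush is lost on a crash. */
    for (int i = 0; i < RECORDS; i++)
        fprintf(f, "record %d\n", i);
    fflush(f);
}

int main(void)
{
    FILE *f = fopen("records.dat", "w");
    if (f == NULL)
        return 1;
    write_buffered(f);        /* or write_each_record(f) for the safer path */
    fclose(f);
    return 0;
}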
>> Benchmarking the performance of a particular application on a
>> particular set of low-end hardware is often relatively easy.
>>
>> Benchmarking productivity is not so easy at all; it involves rather
>> more than the speed of a particular set of hardware and software, and
>> takes rather more than a box and a few scripts.
>
> "Lies, damned lies, statistics and benchmarking". Or
> "Benchmarketeering", for those that prefer it.
>
>> Well, actually, it seems like Intel are starting to realise what can
>> go wrong, e.g. x86-64 implementations start ignoring important but
>> performance-inhibiting aspects of things like device/memory consistency
>> models, and memory access controls, and visibility of stuff that
>> shouldn't be generally visible (but costs time and chip space to do
>> properly).
>
> There's reportedly a digital logic signal analyzer embedded in recent
> Intel x86-64 processor chips.
> https://www.blackhat.com/asia-19/briefings/schedule/index.html#intel-visa-through-the-rabbit-hole-13513
That was interesting. The way I read it (remember, I don't get out
much, and could be mistaken) is that Intel builds a back door for the
hackers. Is that right?
Then I wonder if AMD is just as stupid?
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: davef at tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486