[Info-vax] Calling standards, was: Re: Byte range locking - was Re: Oracle on VMS
Kerry Main
kemain.nospam at gmail.com
Fri Nov 25 10:44:32 EST 2016
> -----Original Message-----
> From: Info-vax [mailto:info-vax-bounces at rbnsn.com] On Behalf Of Johnny Billquist via Info-vax
> Sent: 25-Nov-16 9:02 AM
> To: info-vax at rbnsn.com
> Cc: Johnny Billquist <bqt at softjar.se>
> Subject: Re: [Info-vax] Calling standards, was: Re: Byte range locking - was Re: Oracle on VMS
>
> On 2016-11-25 13:38, VAXman- at SendSpamHere.ORG wrote:
> > In article <o175ok$2f0$1 at Iltempo.Update.UU.SE>, Johnny Billquist <bqt at softjar.se> writes:
> >> On 2016-11-23 18:09, Bill Gunshannon wrote:
> >>> True. I just looked in my RSTS manual and the RSX executive doesn't
> >>> have .ANYTHING directives at all. Now I have to go look at some of
> >>> my other manuals and see just who else used null terminated strings.
> >>> UNIVAC-1100 did not. It, too, had descriptors. I have a number of
> >>> other assembler manuals be interesting to know just how many used
> >>> null termination as a common method. I know it was fairly common in
> >>> Z80 code i worked with even before a C compiler became common.
> >>
> >> In most processors, using a NUL to indicate the end of a string makes
> >> it efficient to write the code. So you'll probably see it on almost
> >> any architecture where people want to deal with dynamic length strings.
> >>
> >> The other alternative is to keep a count, but that uses more memory,
> >> and in some cases adds a bit of complexity, which people often try to
> >> avoid (programmers being lazy and all).
> >
> > Memory is cheap! Considering other coding practices today that bloat
> > code, a byte count of a string is pale by comparison.
>
> Today that is true for most cases. Historically, not so much...
>
> Johnny
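Side note, to make the trade-off in the quoted discussion concrete: below is a minimal C sketch contrasting a NUL-terminated string with a counted (descriptor-style) string. The struct layout and names are purely illustrative, not the actual VMS or UNIVAC descriptor formats.

#include <stdio.h>

/* NUL-terminated: the length is implicit; finding it means scanning
 * the bytes until the terminating 0 (which is what strlen() does). */
static size_t nts_length(const char *s)
{
    size_t n = 0;
    while (s[n] != '\0')
        n++;
    return n;
}

/* Counted (descriptor-style): the length is stored alongside the data,
 * so no scan is needed, at the cost of a few extra bytes and the
 * bookkeeping to keep the count correct.  Layout is illustrative only. */
struct counted_str {
    unsigned short len;   /* byte count kept next to the data pointer */
    const char    *ptr;   /* the characters themselves; no NUL required */
};

int main(void)
{
    const char *nts = "OpenVMS";               /* 7 chars + 1 NUL byte   */
    struct counted_str cs = { 7, "OpenVMS" };  /* 7 chars + stored count */

    printf("NUL-terminated length: %zu (found by scanning)\n", nts_length(nts));
    printf("Counted length       : %u  (read directly)\n", (unsigned)cs.len);
    return 0;
}

As Johnny notes above, the counted form spends extra memory on the length field (plus the code to keep it correct), while the NUL-terminated form pays at use time by scanning for the terminator.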
Agreed .. the rapidly increasing capacity of cheap non-volatile
memory and huge storage (google "10TB Seagate" single drive) is
likely to force not only programmers, but also SysAdmins, to
rethink their traditional ways of doing things.
If TBs of relatively cheap local non-volatile memory (3D XPoint
and others in 2017), large numbers of cores on single or tightly
coupled clustered systems (an OpenVMS BL890c blade today supports
1.5TB / 64 cores), and local interconnects capable of 100GB+ with
ultra-low latency are all available, what does that do to the
programming and SysAdmin practices normally associated with the
historical (legacy?) distributed model of N-tier computing?
Granted, some ISVs like Oracle and Microsoft will continue to
live like DEC did in the past (overconfidence and arrogance that
come with market share) and keep their hugely expensive, overly
complicated per-core licensing models, but that is a big opening
for VSI to adopt a more Linux-like financial model for
OpenVMS/x86-64.
Historically, one should remember that very few customers
migrated from Solaris, HP-UX, AIX, OpenVMS or NonStop to
Linux/Windows because Linux/Windows was technically "better". The
biggest driver, by far, was that the up-front costs were cheaper.
The drive to reduce IT costs is not easing up; if anything, it is
increasing even more rapidly than in the past.
The key to future success is to adopt and market creative,
innovative features while at the same time driving down costs.
Regards,
Kerry Main
Kerry dot main at starkgaming dot com