[Info-vax] Current VMS Usage Survey

johnwallace4 at yahoo.co.uk johnwallace4 at yahoo.co.uk
Thu Dec 5 14:23:10 EST 2013


On Thursday, 5 December 2013 16:37:53 UTC, John Reagan  wrote:
> On Thursday, December 5, 2013 11:21:00 AM UTC-5, Bill Gunshannon wrote:
> > In article <36bcad1f-a91d-48fa-8eaa-bcec922da3ae at googlegroups.com>,
> > 	John Reagan <xyzzy1959 at gmail.com> writes:
> > > The HP-UX Itanium compiler provides PBO (Profile Based Optimization).  The NSK Itanium compiler provides PGO (Profile Guided Optimization).  Same concept, different TLA.  The compiler instruments the code, you run it to count various things, a data file is written out, and you compile again using that data file.
> > >
> > > On Alpha, the Tru64 compilers had some PGO-like tools.  GEM has the support to use profiling data.  It was never enabled/ported to OpenVMS.
> > 
> > Why?  Because no one saw any real value in it in the real world?
> 
> The decision was that HPTC would be a Tru64 solution.  Enabling it for OpenVMS would have been trivial.  Plus, Alpha SPECmark numbers were never reported from OpenVMS, only from Tru64.  You wanted PGO there to help get the best numbers.
> 
> > > GCC provides PGO for x86 targets (-fprofile-generate and -fprofile-use) and so does Open64.  I've seen discussions to add PGO to LLVM in the future.
> > 
> > How many commercial application developers collect these profiles from their customers and provide customized binaries afterwards?
> 
> Probably not that approach, but many certainly run "test loads" against their code to help detect the various branch likely/unlikely paths.  Without any source syntax to indicate branch ratios, the compiler guesses.  With any "test load", you can easily detect which branches are rare (i.e. error paths and such) vs the common ones.  The NSK kernel itself is built with PGO profiling data based on a "test load".  Same with HP-UX if I remember correctly.

[apologies if this is a duplicate]

See also "Spike", profile directed optimisation for NT/Alpha, in this
Digital Technical Journal article:
http://www.hpl.hp.com/hpjournal/dtj/vol9num4/vol9num4art1.pdf 
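
For anyone who hasn't tried the GCC flavour John mentioned, the cycle is just
"instrument, run a test load, recompile". A minimal sketch (my own toy example,
nothing to do with the article above), with one rare error path the profile run
should mark as cold:

/* pgo_demo.c - one rare branch (the error path) and one common branch,
 * so the profile data has something useful to say.
 */
#include <stdio.h>

static long long process(int value)
{
    if (value < 0) {                      /* rare: taken once below */
        fprintf(stderr, "bad value %d\n", value);
        return 0;
    }
    return value * 2LL;                   /* common path */
}

int main(void)
{
    long long sum = 0;
    for (int i = -1; i < 1000000; i++)    /* one error, a million successes */
        sum += process(i);
    printf("sum = %lld\n", sum);
    return 0;
}

/* The three-step cycle with GCC:
 *   gcc -O2 -fprofile-generate pgo_demo.c -o pgo_demo
 *   ./pgo_demo                  # the "test load"; writes a .gcda count file
 *   gcc -O2 -fprofile-use pgo_demo.c -o pgo_demo
 * On the second compile GCC uses the recorded counts to choose block layout
 * and which branches to treat as unlikely, instead of guessing.
 */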

Of course, NT on anything other than x86 became academic once MS decided they 
weren't interested.

Prior to MS making that decision, Samsung were a licensed designer of Alpha 
chips and had designed some interesting chips and systems. I recently got rid 
of a 21164PC-based prototype. In hardware terms it wasn't very different from 
any other Pentium-class board of its time (ATX, PCI, DIMMs, etc).

These cheap+cheerful licensed Alpha designs were intentionally incapable of 
running VMS, but perfectly capable of running NT or Tru64 or Linux or BSD or...

Because of MS's (and app and device vendors') reluctance to support anything 
non-x86, only certain application sectors ever stood much chance on non-x86. 

One such sector for Alpha might have been CAD, where performance might have 
mattered. But CAD had too much diversity (too many apps, too many graphics 
cards, too many etc) for Alpha to quickly catch on even when it had a 
substantial performance lead.

One sector with less diversity was pre-press, where the aim of the game is to 
take a PostScript file from a publishing application on a desktop (frequently 
but not always a Mac) and quickly (1) check and preview it, and (2) convert it 
to a raster image for a high-quality "computer to plate" print subsystem. 
During the brief lifetime of NT/Alpha, Alpha went from nowhere to being a 
serious player in that performance-critical sector, where only a handful of 
players mattered and DEC could make sure they were all on board.

Then MS decided NT was going to be x86-only. Why would MS care whether NT sold
on Alpha or on x86? All they wanted was the NT sale.

For many years now, the trend in IT departments has been to accept that the 
fix for slow applications is to spend more money on faster hardware. 
Unfortunately it's now several years since x86 single-stream throughput got 
maxed out, and not everything gains from multicore.

Despite that, feedback-directed optimisation hasn't caught on. Advanced 
source-level optimisation of the kind formerly practiced by the Kuck and 
Associates Preprocessor (KAP), which DEC and others used to sell, has also 
faded away. Kuck and Associates were independent until Intel bought them; the 
KAP product is no longer available and doesn't seem to have been replaced.

NT didn't need Alpha. But once NT was x86-only again, Alpha outside DEC wasn't 
really going anywhere. Linux on Alpha existed, but that was long ago, and Linux 
was much less credible then than it is today.

VMS doesn't need Alpha. But some customers still think they need VMS, at least for a little while longer.


