[Info-vax] x86-64 VMS executable image sizes and memory requirements ?

Bob Gezelter gezelter at rlgsc.com
Sun Dec 22 12:12:35 EST 2019


On Wednesday, December 18, 2019 at 8:21:55 PM UTC-5, Simon Clubley wrote:
> Now that VSI are starting to get user level programs such as EDT
> running, I was wondering how the x86-64 VMS image sizes and memory
> usage compares to other VMS architectures.
> 
> Or is it still too early to get enough data to be meaningful ?
> 
> Thanks,
> 
> Simon.
> 
> -- 
> Simon Clubley, clubley at remove_me.eisner.decus.org-Earth.UFP
> Walking destinations on a map are further away than they appear.

Simon,

The whole question of code size on x64 is a bit of a "red herring".

In the days of the PDP-11 or early VAX processors, code size in both primary and secondary storage was a serious issue. Both types of storage were relatively expensive. To put it politely, size was King.

The environment today is dramatically different, both in cost and capacity. CPU storage has gone from hundreds of kilobytes to multiple terabytes. Mass storage has gone from hundreds of megabytes (e.g., RP04) to terabytes. In 1978, a 176-megabyte drive weighed hundreds of pounds and cost USD 35K (if memory serves, pardon the pun). Today, a multi-terabyte drive costs approximately USD 100 and easily fits in my hand.

It is also important to compare like with like. VAX images were routinely stripped of debugging information for a variety of reasons. On newer architectures, this may or may not be the case.

Optimization is another issue. Unrolling loops, inline code replication, and other optimizations can increase the size of the code while at the same time increasing speed of execution. Early PDP-11 CPUs did not have caches; later PDP-11 and VAX CPUs did have caches, but they were minuscule by present standards. Combining the effects of large, multi-level caches with code optimization creates complex tradeoffs (Hint: Temporal locality can be quite significant in achieving high performance, but may increase code size). Pipelining also favors code with no control transfers, which means replicated code can be advantageous. In the end, size is a poor metric, not particularly related to performance.

Remember that OpenVMS is a demand-paged system: image pages that are never actually referenced during execution will never be loaded.

Unless the x64 toolchain produces production code that is grossly larger (10x) than that of the other 64-bit architectures, I would not consider it a matter for concern.

- Bob Gezelter, http://www.rlgsc.com
