[Info-vax] VUPS.COM relevance for modern CPUs

Arne Vajhøj arne at vajhoej.dk
Mon Dec 19 19:19:36 EST 2022


On 12/19/2022 12:01 AM, Mark Daniel wrote:
> On 17/12/2022 10:13 am, Mark Daniel wrote:
>> Also his pointer to BogoMips.  Most interesting.  I read the FAQ and 
>> accessed the github code.  Quite straightforward.  Might be a good 
>> replacement as a general performance metric.
>>
>> https://github.com/vitalyvch/Bogo/blob/BogoMIPS_v1.3/bogomips.c
> 8< snip 8<
> 
> For general VMS comparative usage something more VMS-measuring is 
> needed.  I looked about the 'net and nothing sprang out.  I wonder what 
> VSI are using for metrics on X86 development?  Anything lurking in the 
> DECUS/Freeware repositories I missed?
> 
> Anyway, in the absence of anything else, I was thinking about what may 
> consume "non-productive" VMS cycles (i.e. non-USER mode crunching :-) 
> and all I could think of were the transitions between USER, EXEC and 
> KERNEL modes.  As required by RMS, $QIO, drivers, etc., etc.  No SUPER 
> modes measured here.
> 
> With this in mind I knocked-up a small program to repeatedly call a 
> function using $CMEXEC which calls a function using $CMKRNL and that is 
> that.  It measures how much effort is required compared to the simple 
> USER mode loop and reports it as b[ogo]VUPs.
> 
> https://wasd.vsm.com.au/wasd_tmp/bogovups.c
> 
> The real disappointment is my X86 VM.  The rest of the results seem in 
> line with expectations.
> 
> PS.  Looking for ideas, suggestions, criticism(s), etc. here...

First, I assume neither your code nor VMS
itself is optimized - I believe John Reagan said that
the cross compilers do not optimize much.

But besides that, I am not convinced that the time spent on
mode switches is a particularly relevant test. It should
never be a large enough part of total CPU usage to
matter much.

In general, the CPU bottlenecks should be in
user mode. So back to BogoMips or Dhrystone/Whetstone
or SPEC or whatever.

If something in VMS should be tested, then I think a more
relevant test would be to see what the scheduler can handle.
When does overhead start hurting throughput - 500 threads?
1000 threads? 2000 threads? 4000 threads? 8000 threads?

Arne
