[Info-vax] VUPS.COM relevance for modern CPUs

Arne Vajhøj arne at vajhoej.dk
Tue Dec 20 19:07:41 EST 2022


On 12/20/2022 11:43 AM, abrsvc wrote:
> On Tuesday, December 20, 2022 at 11:33:13 AM UTC-5, chris wrote:
>> On 12/20/22 15:50, abrsvc wrote:
>>>> None of this makes much sense. spec.org have been devising cpu tests
>>>> for decades and have specialist tests for different workloads. That
>>>> includes all the info on compilers and code used. Probably the most
>>>> accurate data around and is supported by system and cpu vendors as
>>>> well. Too many variables involved, so some sort of level playing
>>>> field approach is the only way to get accuracy.
>>>>
>>>> Can be fun devising simple tests, but would never use that as a
>>>> basis for purchasing decisions...
>>>
>>> The big problem with these standard benchmarks is that some
>>> compilers will look for these and insert some "special"
>>> optimizations specifically for those benchmarks. You are better
>>> served using a homegrown benchmark of some type that more closely
>>> reflects your application environment.
>> All the conditions are published, including compiler flags,
>> which compiler and more. Must be more accurate than a home
>> grown ad hoc test which ignores so many variables that could
>> influence the results.
>>
>> If you want to measure something, use the best and most
>> accurate tools available...
>>
> I will disagree.  How many standard benchmarks bear any relevance to
> an actual application?  I suppose you can use them for relative
> machine performance information, but without knowing how your own
> application performs relative to those, they are useless.  SPEC
> benchmarks mean little to I/O bound applications.  Great, my new
> machine can perform calculations 10 times as fast.  But...  the
> application is bound by disk performance limits, so I see little to
> nothing for the speed improvement.  just one extreme example.

Testing with the actual application instead of an
artificial benchmark is obviously better.

But given how much effort has gone into developing
the modern benchmarks, they should be better
than a simple homegrown benchmark unless one has a rather
unique context.

Obviously one needs to pick the right benchmark. Like:

CPU integer => SPEC CPU SPECint
CPU floating point => SPEC CPU SPECfp
CPU floating point linear algebra => LINPACK
Database OLTP => TPC-C
Database DWH => TPC-H
Java app servers => SPECjEnterprise

If we talk old 1980's benchmarks like Dhrystone/Whetstone, then
it is probably not too much work to come up with a homegrown
benchmark that is as good or better.

Arne

More information about the Info-vax mailing list