[Info-vax] VUPS.COM relevance for modern CPUs
abrsvc
dansabrservices at yahoo.com
Wed Dec 21 07:42:01 EST 2022
On Tuesday, December 20, 2022 at 7:07:50 PM UTC-5, Arne Vajhøj wrote:
> On 12/20/2022 11:43 AM, abrsvc wrote:
> > On Tuesday, December 20, 2022 at 11:33:13 AM UTC-5, chris wrote:
> >> On 12/20/22 15:50, abrsvc wrote:
> >>>> None of this makes much sense. spec.org have been devising CPU tests
> >>>> for decades and have specialist tests for different workloads. That
> >>>> includes all the info on compilers and code used. It is probably the
> >>>> most accurate data around, and it is supported by system and CPU
> >>>> vendors as well. Too many variables are involved, so some sort of
> >>>> level playing field approach is the only way to get accuracy.
> >>>>
> >>>> It can be fun devising simple tests, but I would never use that
> >>>> as a basis for purchasing decisions...
> >>>
> >>> The big problem with these standard benchmarks is that some
> >>> compilers will look for these and insert some "special"
> >>> optimizations specifically for those benchmarks. You are better
> >>> served using a homegrown benchmark of some type that more closely
> >>> reflects your application environment.
> >> All the conditions are published, including compiler flags,
> >> which compiler and more. Must be more accurate than a home
> >> grown ad hoc test which ignores so many variables that could
> >> influence the results.
> >>
> >> If you want to measure something, use the best and most
> >> accurate tools available...
> >>
> > I will disagree. How many standard benchmarks bear any relevance to
> > an actual application? I suppose you can use them for relative
> > machine performance information, but without knowing how your own
> > application performs relative to those, they are useless. SPEC
> > benchmarks mean little to I/O bound applications. Great, my new
> > machine can perform calculations 10 times as fast. But... the
> > application is bound by disk performance limits, so I see little to
> > nothing from the speed improvement. That is just one extreme example.
> Testing with the actual application instead of an
> artificial benchmark is obviously better.
>
> But given how much effort has gone into developing
> the modern benchmarks, they should be better
> than a simple homegrown benchmark unless one has a rather
> unique context.
>
> Obviously one needs to pick the right benchmark. Like:
>
> CPU integer => SPEC CPU SPECint
> CPU floating point => SPEC CPU SPECfp
> CPU floating point linear algebra => LINPACK
> Database OLTP => TPC-C
> Database DWH => TPC-H
> Java app servers => SPECjEnterprise
>
> If we talk old 1980's benchmarks like Dhrystone/Whetstone, then
> it is probably not too much work to come up with a homegrown
> benchmark as good or better.
>
> Arne
Perhaps the point I was trying to make has not been clear.
Standard benchmarks can provide raw throughput numbers for certain classes of functions (CPU, raw I/O, database operations, etc.).
But... knowing how those numbers relate to a real application environment is required in order to use them to predict the performance of a system. A homegrown benchmark is less a raw performance indicator and more an accurate predictor of how the specific application environment will behave on new hardware. If you know the relationship, then I would guess that industry standard benchmarks are useful. In many cases where I have been involved, no simple correlation could be made. Your mileage will vary...
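To make that concrete, here is a minimal sketch in portable C of the kind of homegrown test I mean. It times a compute phase and a sequential disk-write phase separately, so the mix can be tuned to resemble a particular application's profile instead of giving a single CPU figure. The loop counts, block size, and file name are made-up placeholders, not measured or recommended values.

/* Homegrown benchmark sketch: one compute phase, one sequential
   I/O phase, timed separately.  COMPUTE_ITERS, IO_BLOCKS and
   BLOCK_SIZE are placeholders; tune them until the CPU/IO ratio
   resembles the real application. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define COMPUTE_ITERS 50000000L   /* placeholder amount of CPU work */
#define IO_BLOCKS     4096L       /* placeholder number of writes   */
#define BLOCK_SIZE    8192        /* placeholder bytes per write    */

/* Wall-clock seconds via C11 timespec_get. */
static double now_sec(void)
{
    struct timespec ts;
    timespec_get(&ts, TIME_UTC);
    return (double)ts.tv_sec + (double)ts.tv_nsec / 1e9;
}

int main(void)
{
    /* Compute phase: simple integer work; volatile keeps the
       compiler from optimizing the loop away. */
    volatile long acc = 0;
    double t0 = now_sec();
    for (long i = 0; i < COMPUTE_ITERS; i++)
        acc += i ^ (i >> 3);
    double t1 = now_sec();

    /* I/O phase: sequential writes to a scratch file. */
    char buf[BLOCK_SIZE];
    memset(buf, 0xA5, sizeof buf);
    FILE *f = fopen("bench.tmp", "wb");
    if (f == NULL) { perror("fopen"); return EXIT_FAILURE; }
    double t2 = now_sec();
    for (long i = 0; i < IO_BLOCKS; i++)
        if (fwrite(buf, 1, sizeof buf, f) != sizeof buf) {
            perror("fwrite");
            fclose(f);
            return EXIT_FAILURE;
        }
    fflush(f);
    double t3 = now_sec();
    fclose(f);
    remove("bench.tmp");

    printf("compute: %.2f s   sequential write: %.2f s\n",
           t1 - t0, t3 - t2);
    return EXIT_SUCCESS;
}

Change the ratio of COMPUTE_ITERS to IO_BLOCKS and the relative numbers change completely, which is exactly why a single SPEC figure tells you little about an I/O bound workload.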