[Info-vax] VUPS.COM relevance for modern CPUs
chris
chris-nospam at tridac.net
Tue Dec 20 12:22:35 EST 2022
On 12/20/22 16:43, abrsvc wrote:
> On Tuesday, December 20, 2022 at 11:33:13 AM UTC-5, chris wrote:
>> On 12/20/22 15:50, abrsvc wrote:
>>>> None of this makes much sense. spec.org have been devising cpu tests
>>>> for decades and have specialist tests for different workloads. That
>>>> includes all the info on compilers and code used. Probably the most
>>>> accurate data around and is supported by system and cpu vendors as
>>>> well. Too many variables involved, so some sort of level playing
>>>> field approach is the only way to get accuracy.
>>>>
>>>> Can be fun devising simple tests, but would never use that as a
>>>> basis for purchasing decisions...
>>>>
>>>> Chris
>>>
>>> The big problem with these standard benchmarks is that some compilers will look for these and insert some "special" optimizations specifically for those benchmarks. You are better served using a homegrown benchmark of some type that more closely reflects your application environment.
>>>
>>> Dan
>> All the conditions are published, including compiler flags,
>> which compiler and more. Must be more accurate than a home
>> grown ad hoc test which ignores so many variables that could
>> influence the results.
>>
>> If you want to measure something, use the best and most
>> accurate tools available...
>>
>> Chris
> I will disagree. How many standard benchmarks bear any relevance to an actual application? I suppose you can use them for relative machine performance information, but without knowing how your own application performs relative to those, they are useless. SPEC benchmarks mean little to I/O bound applications. Great, my new machine can perform calculations 10 times as fast. But... the application is bound by disk performance limits, so I see little to nothing for the speed improvement. Just one extreme example.
The SPEC tests do target various workloads, database, web,
scientific and more, so why not use them? I'm sure there
must be other sites that do similar work, though I haven't looked
recently.
If you want to find out where the bottlenecks are, you need to
start with single-core throughput to establish a baseline. That
means without I/O, which would otherwise dominate most measurements
by orders of magnitude. How can you determine anything by measuring
at the VM level only, where you have no idea whether it's the CPU,
OS or VM layer having the most influence?
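To make that concrete, a single-core, no-I/O baseline can be as simple
as timing a pure compute loop. The C below is only a minimal sketch,
assuming a POSIX environment with clock_gettime; the loop body and
iteration count are placeholders, not VUPS.COM or any standard test.
Compile it with the same flags you would use for real work.

    /* Minimal single-core baseline sketch: no disk or network I/O,
       so only the CPU/compiler path is measured. */
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        const long iterations = 100000000L;  /* arbitrary figure */
        volatile double acc = 0.0;           /* volatile so the loop
                                                 isn't optimised away */
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 1; i <= iterations; i++)
            acc += (double)i / (double)(i + 1);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec)
                    + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%.3f s, %.1f M iterations/s (acc=%f)\n",
               secs, iterations / secs / 1e6, acc);
        return 0;
    }

Run the same binary on bare metal and inside the VM and the difference
between the two figures is at least attributable to the VM/OS layer
rather than to the CPU itself.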
I suspect there is little difference between most server vendors,
since they are all using the same CPU ranges and designs, and it will
be some variant of the CPU vendor's reference design anyway. Same
for disk and network I/O, as they are all using common controller chips
and vendors as well. Ever-increasing complexity and R&D cost means
that only those like IBM can afford to go their own way. Not worth
the investment otherwise.
What will probably make the most difference is the OS and the
intimate know-how that allows a vendor to make best use of the
processor, cache size and more. Same for application software
as well, some better than others. Too many variables, really,
to allow any meaningful results based on ad hoc tests.
So you dream up an ad hoc test and get results, but what does that
tell you in comparison to anything else?...
Chris