[Info-vax] The Future of Server Hardware?

Johnny Billquist bqt at softjar.se
Tue Oct 2 08:22:18 EDT 2012


On 2012-09-30 20:49, JF Mezei wrote:
> Neil Rieck wrote:
>> I attended an IEEE seminar a few years back where the speaker informed the audience that Google was the number four computer manufacturer in the USA by volume. The big surprise here was that Google only built these computers for their own infrastructure which now goes by the name "cloud".
>
> With more and more focus on the energy consumption of those data centres,
> I'd be interested to know whether the Google way (huge numbers of simple
> home-grown machines) is even in the same ballpark as having fewer but
> more complex "mainframe class" X86s.
>
> Does a blade with 16 computers in it end up consuming less electrical
> power than 16 separate 1U computers?
>
> When you scale to Google's size, shaving off a few percent of the
> electrical bill makes a big difference.
>
>
>
> Also consider that OS instances consume CPU power. They are overhead
> to manage applications, just as an accounting dept is overhead to
> manage a company.
>
>
> At the scale used by Google, how much power is wasted by having a
> gazillion separate instances of Linux, each consuming a small percentage
> of CPU (and thus electrical energy and heat production, requiring air
> conditioning)?
>
> If Google were to move to "mainframe" class machines with more
> applications per instance, and thus fewer Linux instances, would this
> end up saving electrical consumption?

You know, you keep saying "mainframe class x86". I'm not sure I know 
what that is. In reality, when we talk about the CPUs themselves, they 
are pretty much the same in all machines. There aren't any faster ones 
in "mainframe class" machines. In other words - you do not get more 
bang for the buck with "mainframe class" machines. They will not be any 
faster, nor will they consume less power. What it normally means is 
that you go with (supposedly) higher quality components, which 
(hopefully) don't break down as often.

Based on this - I'd say the number of machines would be the same 
whether you go for el-cheapo machines or really expensive "mainframe 
class" machines.

Once we establish that, the next question is power. Since we need the 
same number of machines, and "mainframe class" normally means more 
overengineered, I think it's a fair assumption that they will not draw 
less power. And if we talk about off-the-shelf machines instead of 
custom-built ones with just the bits and pieces you really need, I 
think it's fair to say that your "mainframe class" machines will draw 
more power for the same computing power, as there will be parts sitting 
in the machines that you don't even use.
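To put rough numbers on it (the fleet size and wattages below are 
made-up assumptions, just to show the shape of the argument, not 
measurements of any real hardware), a quick back-of-the-envelope 
sketch in Python:

  # Back-of-the-envelope sketch; the fleet size and wattages are
  # made-up assumptions, not measurements of any real hardware.
  machines = 10000                  # same machine count either way
  custom_watts = 200                # stripped-down, purpose-built box
  mainframe_class_watts = 300       # off-the-shelf box with unused parts

  custom_total_kw = machines * custom_watts / 1000.0
  mainframe_total_kw = machines * mainframe_class_watts / 1000.0
  extra_kw = mainframe_total_kw - custom_total_kw

  print("custom fleet:            %.0f kW" % custom_total_kw)
  print("'mainframe class' fleet: %.0f kW" % mainframe_total_kw)
  print("extra power (and heat):  %.0f kW" % extra_kw)

Every one of those extra kilowatts comes back out as heat that the 
cooling plant has to remove.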

And more power means more heat means you need more cooling.

So, with your "mainframe class" machines, you end up with the same 
number of machines, requiring more physical space (that overengineering 
again), more power, more cooling, not to mention that they are actually 
more expensive to buy.

Now, where do you see the win here? The fact that they are more 
reliable is the only thing speaking for them. But when you get into 
thousands of machines, even a failure rate of 0.1% means enough broken 
machines at any given time that you have to design your software to 
deal with hardware failure. And once you have designed your software to 
deal with hardware failure, why buy expensive hardware when cheap will 
do the work just as well? It's simple math actually...

Or did you think that Google builds machines with really slow CPUs?

	Johnny



