[Info-vax] Microsoft: Alpha architecture responsible for poor Windows file compression
johnwallace4 at yahoo.co.uk
Wed Nov 2 18:28:49 EDT 2016
On Wednesday, 2 November 2016 19:32:09 UTC, David Froble wrote:
> Simon Clubley wrote:
> > According to:
> >
> > http://www.theregister.co.uk/2016/11/02/ghost_of_dec_alpha_sees_microsoft_dumb_down_windows_file_compression/
> >
> > Microsoft are saying that limits in the Alpha architecture are responsible
> > for poor Windows file compression in today's world. Sample quote:
> >
> > |Chen says one of his "now-retired colleagues worked on real-time compression,
> > |and he told me that the Alpha AXP processor was very weak on bit-twiddling
> > |instructions. For the algorithm that was ultimately chosen, the smallest unit
> > |of encoding in the compressed stream was the nibble; anything smaller would
> > |slow things down by too much. This severely hampers your ability to get good
> > |compression ratios."
> >
> > Do any Alpha architecture experts here know if this is the full story ?
> >
> > Simon.
> >
>
> Alpha EV4 did not have the WORD and BYTE instructions, thus working on such
> data was not efficient. My understanding is that the EV56 variant of EV5 added
> the WORD and BYTE instructions, and did much better. Don't know what might
> have been added in EV6.
>
> As for "bit-twiddling", don't have a clue.
>
> Keep in mind, Alpha was designed to be very fast and simple. Some things aren't
> so simple.
The lack of byte/word operations made some sense in the early
Alpha vision, where Alpha was mostly for servers and
workstations.
Servers and workstations were expected to have lots of main
memory, and that memory was therefore expected to be ECC
memory. ECC memory can't handle updates smaller than an ECC
item, e.g. 64 bits of data plus 8 ECC bits, so some magic
(a read-modify-write cycle) has to happen in hardware if you
only want to update part of the 64 bits and leave the rest
unchanged. Or you do something in software such that the
only stores to memory are full-width items, never part-width
items.
So it was initially left to software to handle the
load/modify/store sequence for items smaller than an ECC
item. (I'm going back a long time here but that's the gist
of it).
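As a rough sketch of what that software sequence looks like,
here it is in C rather than Alpha assembly (compilers
actually emitted LDQ_U / INSBL / MSKBL / STQ_U sequences for
this; the function name below is just for illustration):

#include <stdint.h>

/* Illustrative only: roughly what a pre-BWX Alpha compiler had
   to synthesize for a single byte store, using only aligned
   64-bit (quadword) loads and stores.  This little-endian C
   version just shows the read-modify-write idea. */
static void store_byte_via_quadword(uint8_t *p, uint8_t value)
{
    /* Round the address down to the containing aligned quadword. */
    uint64_t *q = (uint64_t *)((uintptr_t)p & ~(uintptr_t)7);
    unsigned shift = ((uintptr_t)p & 7) * 8;  /* byte offset, in bits */

    uint64_t word = *q;                       /* load the full quadword  */
    word &= ~((uint64_t)0xff << shift);       /* clear the target byte   */
    word |= (uint64_t)value << shift;         /* insert the new byte     */
    *q = word;                                /* store the full quadword */
}

That's four-plus instructions in place of one store, and the
sequence isn't atomic: another CPU updating a neighbouring
byte in the same quadword could have its write silently lost
unless the whole thing is wrapped in LDQ_L/STQ_C. Part of
why BWX was welcome.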
When it became clear that Alpha was going to have to sell
in volume in systems without ECC memory, and it was going
to have to efficiently run code that had rather more
byte/word operations than DEC were used to, the BWX
(Byte/Word Extension) stuff was added.
BWX also rather conveniently solved some of the fun that
had been happening with accesses to I/O space, which also
needed to be narrower than a full-width item.
What's the difference between unprotected memory, parity
memory, and ECC? Here's one real example.
I worked with a utility supplier that typically had a few
dozen VMS workstations per control room, installations
which were critical to keeping the lights on across the
UK (and maybe elsewhere).
At one point they started buying VMS AlphaStation 400s
maxed out with parity memory (96MB?).
They ended up with one or two crashes a week per control
room due to memory parity errors. That was unacceptable
to this VMS customer, so the machines were replaced with
similarly configured AlphaServer 1000s, which had ECC
rather than parity. The single-bit memory errors which had
crashed a box with parity memory were still occurring, but
they were invisible to the software on a box with ECC
memory; the system just kept on running.
On the other hand, heavily loaded NT boxes were hardly
expected to stay up for a fortnight back then,
regardless of memory protection.
Anecdote is not evidence, YMMV, etc.
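For the curious, here's a toy illustration of the
detect-versus-correct distinction: a textbook Hamming(7,4)
code in C. Real memory controllers use wider SECDED codes
over 64-bit words, but the principle is the same. A parity
bit can only say "a bit flipped somewhere", which on those
boxes meant a machine check and a crash; an ECC syndrome
names the flipped bit, so the hardware can repair it and
keep running.

#include <stdio.h>

/* Toy Hamming(7,4) encoder: 4 data bits -> 7 coded bits, with
   parity bits at positions 1, 2 and 4 (positions are 1-based). */
static unsigned encode(unsigned d)
{
    unsigned d1 = (d >> 0) & 1, d2 = (d >> 1) & 1;
    unsigned d3 = (d >> 2) & 1, d4 = (d >> 3) & 1;
    unsigned p1 = d1 ^ d2 ^ d4;   /* covers positions 1,3,5,7 */
    unsigned p2 = d1 ^ d3 ^ d4;   /* covers positions 2,3,6,7 */
    unsigned p3 = d2 ^ d3 ^ d4;   /* covers positions 4,5,6,7 */
    return (p1 << 0) | (p2 << 1) | (d1 << 2) |
           (p3 << 3) | (d2 << 4) | (d3 << 5) | (d4 << 6);
}

/* Syndrome: 0 means "no error"; otherwise it is the 1-based
   position of the single flipped bit, so we can correct it. */
static unsigned syndrome(unsigned c)
{
    unsigned s = 0;
    for (unsigned pos = 1; pos <= 7; pos++)
        if ((c >> (pos - 1)) & 1)
            s ^= pos;
    return s;
}

int main(void)
{
    unsigned good = encode(0xB);         /* encode data bits 1011 */
    unsigned bad  = good ^ (1u << 4);    /* cosmic ray hits bit 5 */

    printf("clean word syndrome: %u\n", syndrome(good));  /* 0 */
    printf("bad word syndrome:   %u\n", syndrome(bad));   /* 5 */

    bad ^= 1u << (syndrome(bad) - 1);    /* flip it back */
    printf("corrected ok: %s\n", bad == good ? "yes" : "no");
    return 0;
}

Parity over the same bits would have told you only that the
word was bad, not which bit to flip back. Hence the crash.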