[Info-vax] Canceling SYSMAN shutdown possible?

IanD iloveopenvms at gmail.com
Sat May 2 00:00:06 EDT 2015


On Thursday, April 30, 2015 at 11:06:58 PM UTC+10, Stephen Hoffman wrote:

<snip>

> 
> The RA series is from the 1980s.  That's multiple generations of 
> storage gear ago, and from a vastly more expensive and less reliable 
> time.   In comparison with current systems, those old VAX boxes were 
> not very reliable even when new, either.  The early RA series disks 
> were around US$12K each, IIRC.   RAID was rare and flaky.  It was rare 
> to have any spare disks, much less having spare disks in each rack and 
> increasingly often in each storage shelf.   Modern RAID arrays are 
> substantially more capable and reliable than the RA disks and the 
> UDA/KDA/XDM controllers, and much less expensive.
> 
> Pure Personal Opinion | HoffmanLabs LLC

They used to cost around $16K in Australia, from memory. Ouch! That was around the late '80s, I think

The other thing about the RA81s is that the cables kept breaking on them

They had a cable running down the right-hand side that used to rub against the frame (I think it was the frame it rubbed against). Eventually it would wear through the plastic wrapper on the steel cable, then bite through the cable itself, and bang, the disk would fault

As an operator I had to oversee the engineers replacing these things on numerous occasions. The whole HDA had to come out just to replace the cable. At least the data was still intact, unless the cable break somehow caused a head crash

I remember horror stories about third-party stingray disks, but I don't know whether that was Digital using the IBM FUD principle to dissuade people from buying them

Modern disks are a godsend compared to the old rubbish; I certainly wouldn't want to go back to that era. Modern disks pack a ton more data, and with SSDs/flash, why would one ever want to go back to those old days?

Where I work, the large storage arrays have already moved to SSD/flash, with HDDs relegated to near-line / standby storage now

Disks are cheap. Even four years ago the business used to whimper about disk expansion and you had to justify every GB asked for; now a request for storage expansion is often satisfied with an over-allocation. Times have changed for the better

I have not tracked whether drives have become more reliable per MB, but with so many RAID levels available, standby storage on hand, and the move to SSD/flash, downtime has dropped to the point where hardware failures really don't cause the disruption they used to, at least not when it comes to disks failing. I'm not across everything that happens in the DCs, and I do see reports of disk failures popping up, but they are almost never associated with a system/application being marked unavailable. The place houses thousands of servers; disks are not a source of failure like they once were, at least from my perspective

OpenVMS would be better off spending its efforts on process/application resilience. Things like shadow processes that can be failed over to another node, or migrated to another node should a node fail, would make OpenVMS a more future-proof OS than attempting to harden disk infrastructure

What about RAID levels for OpenVMS processes? Or for OpenVMS clusters themselves? :-)  One can but dream, I guess...

Surely gone are the days when a system going down interrupts your processing?

Clusters need to grow up beyond being just a collection of nodes. That wonderful redundancy at the disk level needs to be extended to the process/application/OS level, so that there is no single point of failure in a cluster anymore, irrespective of what happens to the hardware underneath it
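
Just to illustrate how little the basic takeover mechanism asks for (the hard part is replicating process state, which is what a real shadow process would need), here's a toy sketch in Python rather than anything VMS-specific: a standby node watches UDP heartbeats from the primary and starts its own copy of the service when they stop. The port number, timeout, and service command are all invented for illustration.

    # Toy standby-node watcher: not VMS, no state replication,
    # no split-brain handling; just the bare failover idea.
    import socket
    import subprocess

    HEARTBEAT_PORT = 5005    # hypothetical port the primary beats on
    TIMEOUT_SECS = 10        # declare the primary dead after this long

    def start_service():
        # Placeholder: a real shadow process would resume here with
        # whatever state it had been replicating from the primary.
        subprocess.Popen(["/usr/local/bin/myservice", "--takeover"])

    def watch_primary():
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", HEARTBEAT_PORT))
        sock.settimeout(TIMEOUT_SECS)
        while True:
            try:
                sock.recv(64)      # any datagram counts as a heartbeat
            except socket.timeout:
                start_service()    # primary presumed dead: take over
                return

    if __name__ == "__main__":
        watch_primary()

The plumbing is trivial; what the OS could add is the state replication and the cluster-wide view that stops both nodes deciding they're primary at once.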


