[Info-vax] RAID 1 across cluster nodes?
Richard B
richard.brock57 at gmail.com
Mon Jan 28 12:10:58 EST 2019
On Monday, January 28, 2019 at 11:52:00 AM UTC-5, Stephen Hoffman wrote:
> On 2019-01-28 16:26:13 +0000, Richard B said:
>
> > I have a 2-node OpenVMS cluster (2x rx2800 i2) which is currently being
> > migrated away from SAN to internal storage. (Yeah, I know)
> >
> > My question is this. If I wanted to set each drive to RAID 1 could I
> > have one drive on one node and the other drive on the other node? Or
> > is RAID 1 in this case node specific?
>
> OpenVMS Host Based Volume Shadowing (HBVS) RAID 1 can operate across up
> to six hosts on current OpenVMS releases, and with each RAID-1 member
> volume located on any mix of simple storage controllers or RAID
> hardware controllers.
>
> In this case, you could place three volumes on each of the two hosts.
> Or five and one. Etc.
>
> Your write I/O will need to be completed across all spindles, so
> there's a downside to adding volumes into a RAID-1 configuration.
>
> If you don't have a shared storage bus available (parallel SCSI would
> be the low-end choice, with multi-host-supported parallel SCSI
> controllers and an MSA30-MI box et al), then this configuration is
> inherently a primary-secondary configuration and cannot transparently
> survive the loss of the primary. Manual intervention is always
> required when the primary fails.
>
> There are folks who use manual fail-over with hardware RAID and
> external storage and a cable swap in these configurations, as those
> configurations eliminate the exorbitantly-priced cluster license.
>
>
>
> --
> Pure Personal Opinion | HoffmanLabs LLC
Thanks for the info, Steve. Actually, my dilemma is this: each of the rx2800s has 8 drive bays, which I plan to populate with hard drives of various sizes, so for all intents and purposes I will have 16 drive bays across the cluster. Currently, in our SAN environment, the number of "drives" in use totals 23, so I have to do some trickery to get those 23 drives down to 16. No big deal.

Due to our operating environment, some applications run on one node only, others run on the other node only, and some run on both. Thus the dilemma over how to set up some sort of data redundancy. HBVS might be the way to go IF I can further consolidate the data onto, in effect, 8 drives (8 two-member shadow sets = 16 drives total, or all of the bays across the two rx2800 nodes).
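For what it's worth, a minimal sketch of one such two-member shadow set, with one member local to each node, might look like the following. This assumes HBVS is licensed and the SHADOWING system parameter is set; the device names ($1$DKA100:, $2$DKA100:) and volume label are hypothetical, and the remote member would be MSCP-served across the cluster interconnect:

```
$! Hypothetical devices: $1$DKA100: is local to node 1,
$! $2$DKA100: is local to node 2 (MSCP-served to node 1).
$! Mount virtual unit DSA1: clusterwide as a two-member shadow set.
$ MOUNT/SYSTEM DSA1: /SHADOW=($1$DKA100:,$2$DKA100:) DATA1
$! Inspect shadow-set membership and any copy/merge activity.
$ SHOW DEVICE DSA1
```

Repeating this per volume would give the 8 shadow sets (16 members) filling both nodes' bays, with each set surviving the loss of either single drive.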