[Info-vax] VSI OpenVMS V9.1 Field Test beginning.
kemain.nospam at gmail.com
kemain.nospam at gmail.com
Sat Jul 3 10:02:28 EDT 2021
>-----Original Message-----
>From: Info-vax <info-vax-bounces at rbnsn.com> On Behalf Of Phillip Helbig
>undress to reply via Info-vax
>Sent: July-03-21 4:07 AM
>To: info-vax at rbnsn.com
>Cc: Phillip Helbig undress to reply <helbig at asclothestro.multivax.de>
>Subject: Re: [Info-vax] VSI OpenVMS V9.1 Field Test beginning.
>
>In article <sboj0u$nfb$1 at dont-email.me>, "John H. Reinhardt"
><johnhreinhardt at thereinhardts.org> writes:
>
>> On 7/2/2021 12:37 PM, Phillip Helbig (undress to reply) wrote:
>> > In article <sbnids$4o7$1 at dont-email.me>, "John H. Reinhardt"
>> > <johnhreinhardt at thereinhardts.org> writes:
>> >
>> >> VSI OpenVMS x86-64 V9.1 only supports SATA disks. Support for other
>> >> disk types will be added in future releases of VSI OpenVMS x86-64.
>> >
>> > I had always assumed that I would have a mixed cluster and add some
>> >> x86 disks to shadow sets, then remove the SCSI members and the
>> > nodes hosting them one by one until everything is new. If x86 will
>> > support SCSI, could I plug my Top-Gun Blue 40 MB/s SCSI disks in the
>> > Top-Gun Blue BA356 boxes into x86?
>> >
>>
>> From the OpenVMS x86 Release notes:
>>
>> 2. Hardware Support
>> Direct support for x86-64 hardware systems (models to be specified) will
>> be added in later releases.
>>
>>
>> Not initially. The current release of FT9.1 only runs on virtual
>> hosts. While you could probably get a SCSI card to go into whatever
>> machine you use as a virtual host, you'd need some sort of pass thru
>> connection to get those SCSI disks to the OpenVMS
>
>I plan to wait for bare metal in any case.
>
>> The field test V9.1 does support MSCP-served disks, however, so *if*
>> you can cluster with an Alpha, then it could serve the disks such that
>> the x86 OpenVMS can access them.
>
>Presumably MSCP-served disks will always be supported. That's what I was
>thinking of originally: use MSCP to serve all disks to all nodes, then make
>shadow sets of SCSI members on Alpha and whatever is available on x86.
>
Interesting how history always repeats itself, especially in the IT world.
For those not familiar with it, HCI (Hyperconverged Infrastructure) is an
emerging, very hot software-defined VM hosting technology from companies like
Nutanix, HPE, Dell, etc. One of the key ways HCI drastically reduces overall
costs is by eliminating expensive and complex fibre-based SAN switches, SAN
controllers, etc. and instead using cheap local drives and "serving" this
local storage in a distributed manner to the other commodity x86-64 server
nodes in the HCI cluster. Integrated cluster management solutions are also a
key component of HCI.
While they support VMware as well, Nutanix's core product also provides its
own hypervisor to host VMs without the very high VMware licensing costs that
are now becoming a big concern for medium-to-large IT shops.
While HCI solutions also support SAN infrastructure, the biggest cost saving
usually touted is to use cheap local drives and then serve this local-drive
storage to the other server nodes in the cluster. HCI also uses host-based
RAID strategies (the replication factor determines the effective RAID level)
to mitigate local drive failures.
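(Purely illustrative arithmetic, not a vendor figure: with a replication
factor of 2, every block is kept on two different nodes, so ten nodes with
10 TB of raw local disk each give roughly 100 TB / 2 = 50 TB usable, in
exchange for surviving the loss of a drive or a node.)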
Does this not sound like MSCP and HBVS?
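For anyone who has not set this up in a while, the classic OpenVMS equivalent
looks roughly like the sketch below (device names, volume label and parameter
values are purely illustrative, not taken from any specific configuration):

$! On each node that should serve its local disks cluster-wide, enable the
$! MSCP disk server via SYSGEN / MODPARAMS.DAT, for example:
$!     MSCP_LOAD = 1        ! load the MSCP disk server at boot
$!     MSCP_SERVE_ALL = 1   ! control which disks are served (see the docs
$!                          ! for the bit values appropriate to your site)
$!
$! Then, on any cluster member, bind a served member and a local member into
$! a host-based shadow set - conceptually what HCI calls a replication factor
$! of 2:
$ MOUNT/SYSTEM DSA10: /SHADOW=($1$DKA100:,$2$DKB200:) DATA_VOL

Same basic idea: cheap local drives, served across the cluster, with
host-based mirroring to ride out a drive or node failure.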
References:
<https://www.nutanix.com/hpe>
<https://www.hpe.com/ca/en/integrated-systems/hyper-converged.html>
<https://www.itcentralstation.com/questions/what-is-the-biggest-difference-between-nutanix-and-vmware-vsan>
The more things change, the more they stay the same.
Regards,
Kerry Main
Kerry dot main at starkgaming dot com