[Info-vax] OpenVMS, and Memory and Storage Addressing (was: Re: VMS Software needs to port VAX DIBOL to OpenVMS X86 platform)
Jan-Erik Söderholm
jan-erik.soderholm at telia.com
Sat Dec 26 17:39:56 EST 2020
On 2020-12-26 at 22:21, Stephen Hoffman wrote:
> On 2020-12-26 18:00:03 +0000, <kemain.nospam at gmail.com> said:
>
>>> -----Original Message-----
>>> From: Info-vax <info-vax-bounces at rbnsn.com> On Behalf Of Arne Vajhøj via
>>> Info-vax
>>> Sent: December-24-20 2:38 PM
>>>
>>> So they [IBM POWER] can do 2 PB and x86-64 can only do 256 TB. ...
>>>
>>
>> The market is changing very rapidly.
>>
>> Seagate and Western Digital now offer 18TB disks. (google "18TB disk")
>>
>> Large enterprise-class VMware servers hosting large numbers of VMs are
>> configured today with 1-2 TB of memory per server.
>>
>> How many would have predicted these capabilities even 2 or 3 years ago?
>>
>> Now, fast forward 5 years - it's not hard to see what appear to be
>> far-away limits being pushed.
>
>
> Currently-available Intel x86-64 processors with 5-level paging support
> 57-bit linear addressing (a 128 PiB address space), with physical
> addresses of up to 52 bits.
>
> https://software.intel.com/sites/default/files/managed/2b/80/5-level_paging_white_paper.pdf
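As a back-of-envelope check (simple powers of two, not figures taken from
the white paper itself), the address-space sizes being discussed follow
directly from the bit widths:

    /* Illustrative only: address-space sizes implied by common x86-64
       address widths.  Plain powers of two, not vendor specifications. */
    #include <stdio.h>

    int main(void)
    {
        const int widths[] = { 48, 52, 57 };   /* bits of addressing */
        for (int i = 0; i < 3; i++) {
            unsigned long long bytes = 1ULL << widths[i];
            printf("%2d-bit addressing: %llu bytes\n", widths[i], bytes);
        }
        /* 2^48 = 256 TiB, 2^52 = 4 PiB, 2^57 = 128 PiB */
        return 0;
    }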
>
>
> Linux added support for that, and most other virtual machines likely
> either have or are adding 57-bit support. I expect that VSI is aware of
> Intel 57-bit addressing support as well.
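For what it's worth, here is a minimal sketch (my own, assuming GCC or
Clang on x86-64 and the <cpuid.h> helpers) of how system software can
discover these limits: CPUID leaf 7 reports the LA57 (5-level paging)
feature bit, and leaf 0x80000008 reports the physical and linear address
widths. A guest sees whatever widths the hypervisor chooses to present.

    /* Sketch: query CPUID for 5-level paging support and address widths.
       Assumes GCC/Clang on x86-64. */
    #include <stdio.h>
    #include <cpuid.h>

    int main(void)
    {
        unsigned eax, ebx, ecx, edx;

        /* CPUID leaf 7, subleaf 0: ECX bit 16 is LA57 (57-bit linear). */
        if (__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
            printf("LA57 (5-level paging): %s\n",
                   (ecx & (1u << 16)) ? "supported" : "not supported");

        /* Leaf 0x80000008: EAX[7:0] physical bits, EAX[15:8] linear bits. */
        if (__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx))
            printf("physical address bits: %u, linear address bits: %u\n",
                   eax & 0xff, (eax >> 8) & 0xff);

        return 0;
    }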
>
> Interestingly for the "just add more memory" approach, particularly as
> I/O speeds increase, the need for larger physical memory can be lessened:
> benchmarks from one recent commercially-available Arm-based system are
> showing that less RAM can work well when the memory and I/O paths are
> reworked. Big memory is a good cache when main storage access is
> HDD-speed glacial. When main storage runs at NVMe speeds, paging and
> swapping can (again) be a viable trade-off. And with byte-addressable
> non-volatile storage becoming available, these same server design
> trade-offs shift again.
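Rough order-of-magnitude latencies (generic figures of my own, not
benchmark results) show why that trade-off shifts with the storage
technology:

    /* Illustrative only: why paging to NVMe is a different proposition
       than paging to HDD.  Rough, generic latency figures, not data. */
    #include <stdio.h>

    int main(void)
    {
        const double dram_ns = 100.0;            /* ~100 ns DRAM access  */
        const double nvme_ns = 100.0 * 1e3;      /* ~100 us NVMe read    */
        const double hdd_ns  = 10.0  * 1e6;      /* ~10 ms HDD seek+read */

        printf("NVMe vs DRAM: ~%.0fx slower\n", nvme_ns / dram_ns);
        printf("HDD vs DRAM:  ~%.0fx slower\n", hdd_ns / dram_ns);
        printf("HDD vs NVMe:  ~%.0fx slower\n", hdd_ns / nvme_ns);
        return 0;
    }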
>
> It'll be a while before we're using 18 TB HDDs with OpenVMS, though.
Hm. If ever. My guess is that most OpenVMS systems using large amounts
of disk storage will be using a SAN of some sort, and then the actual
physical storage becomes less interesting. SANs also seem to prefer
larger numbers of smaller disks for performance reasons.
The last time our storage was moved, from an IBM DS8000 SAN to the
current IBM Storwize V7000, I was told that our disks were moved from
HDD to SSD at the same time. Not that it matters to us, of course.
Not a big difference using our current 2 Gb FC adapters... :-)
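A rough calculation (approximate figures, nothing measured) of why the
link hides the difference: a 2 Gb FC link tops out around 200 MB/s of
usable bandwidth, which is below what even a single SATA SSD can stream.

    /* Sketch: usable bandwidth of a 2 Gb Fibre Channel link versus
       typical device throughput.  All figures approximate. */
    #include <stdio.h>

    int main(void)
    {
        const double fc2g_mb_s = 200.0;  /* ~200 MB/s usable on 2 Gb FC      */
        const double hdd_mb_s  = 150.0;  /* ~150 MB/s sequential HDD          */
        const double ssd_mb_s  = 500.0;  /* ~500 MB/s SATA SSD; NVMe far more */

        printf("2 Gb FC link:   ~%.0f MB/s\n", fc2g_mb_s);
        printf("HDD sequential: ~%.0f MB/s\n", hdd_mb_s);
        printf("SATA SSD:       ~%.0f MB/s (link-limited over 2 Gb FC)\n",
               ssd_mb_s);
        return 0;
    }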
> That's
> all dependent on the 64-bit I/O system updates (queued for the unreleased
> V8.5, prolly arriving in production at V9.2), on a new file system, and/or
> on app updates.
>
> And as for big memory and big storage and OpenVMS as most will probably see
> it, the VM can present a four-level page table to the OpenVMS guest, as
> well as the 2 TiB and smaller disks expected by existing apps and the ODS-2
> and ODS-5 file systems. Those folks that need or want native boot will be
> picking their configurations to operate within whatever limits are then
> applicable to OpenVMS—or finding a person or a vendor to configure and test
> their servers for them.
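My reading (an assumption on my part, not something stated above) is that
the 2 TiB figure falls out of 32-bit logical block numbers and 512-byte
blocks:

    /* Back-of-envelope: 32-bit LBNs x 512-byte blocks = 2 TiB.
       Assumed reading of the ODS-2/ODS-5 volume limit, not a quote. */
    #include <stdio.h>

    int main(void)
    {
        const unsigned long long block_size = 512;          /* bytes      */
        const unsigned long long max_blocks = 1ULL << 32;   /* 32-bit LBN */

        printf("%llu blocks * %llu bytes = %llu bytes (2 TiB)\n",
               max_blocks, block_size, max_blocks * block_size);
        return 0;
    }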
>
> Most (all?) vendors with 64-bit processors implement a subset of the bits
> for physical addressing, with the unused or unimplemented address bits
> required to be all 0s or all 1s, which means the servers will be available
> as the demand warrants. And I can do pretty well with SSDs and less than
> 48 bits of memory for the apps that I'm dealing with and know about, and
> NVMe storage will speed that. Folks that are stuck at 48- or 50-bit
> addressing, or planning for OpenVMS supercomputer-scale configurations,
> should have a chat with VSI. And have a look at your I/O speeds and feeds
> and related app designs, too.
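That all-0s-or-all-1s rule is the canonical-address check; a small sketch
of my own, assuming 48 implemented virtual address bits (4-level paging):

    /* Sketch: canonical-address check for 48 implemented virtual address
       bits.  Bits 63:48 must be copies of bit 47. */
    #include <stdint.h>
    #include <stdio.h>

    static int is_canonical_48(uint64_t va)
    {
        /* Sign-extend from bit 47 and compare with the original value. */
        int64_t extended = ((int64_t)(va << 16)) >> 16;
        return (uint64_t)extended == va;
    }

    int main(void)
    {
        printf("%d\n", is_canonical_48(0x00007fffffffffffULL)); /* 1 */
        printf("%d\n", is_canonical_48(0xffff800000000000ULL)); /* 1 */
        printf("%d\n", is_canonical_48(0x0000800000000000ULL)); /* 0 */
        return 0;
    }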
>