[Info-vax] Current VMS engineering quality, was: Re: What's VMS up to these

Johnny Billquist bqt at softjar.se
Sat Mar 17 16:48:02 EDT 2012


On 2012-03-17 12.35, JF Mezei wrote:
> Johnny Billquist wrote:
>
>> I've never seen any such thing. How would you expect each Unix instance
>> to manage the disk structure with caching and writing the same blocks as
>> another instance?
>
> isn't that why Unix shops put everything into the hands of Oracle which
> does have a DLM and does let multiple instances of an OS access the same
> databases ?

Over the network, talking to servers which provide the data, yes. No direct 
access from computers to disks. Lock management for a database is not 
the same thing as lock management for a cluster. Databases have their 
own problems and semantics, which is why you implement lock management 
at the database level, totally independent of OS-level locking. Databases 
have this concept of transactions, which are atomic, but which can 
modify a lot of different data.
To improve database availability you have replication, in which 
transactions are committed to additional machines, so that any of them 
can then serve the data if you want to read. Writes, however, need to go 
to the master. If the master goes down, you can hold an election and 
promote one of the slaves to be the new master.
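To make the write-to-master / read-anywhere / election idea concrete, here is a minimal toy sketch (all class and method names are hypothetical illustration, not any real database's API; it assumes synchronous commit and a trivial "highest node id wins" election):

```python
class Node:
    """One machine holding a full copy of the data."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.data = {}
        self.alive = True

class ReplicaSet:
    def __init__(self, nodes):
        self.nodes = nodes
        self.master = nodes[0]

    def write(self, key, value):
        # Writes must go through the master; in this toy sketch the
        # transaction is committed on every live replica before we return.
        if not self.master.alive:
            self.elect()
        for node in self.nodes:
            if node.alive:
                node.data[key] = value

    def read(self, key):
        # Any live replica can serve reads.
        for node in self.nodes:
            if node.alive:
                return node.data.get(key)
        raise RuntimeError("no live replicas")

    def elect(self):
        # The dead master is detected and a surviving replica promoted;
        # here the election is simply "highest node id wins".
        candidates = [n for n in self.nodes if n.alive]
        if not candidates:
            raise RuntimeError("cluster is down")
        self.master = max(candidates, key=lambda n: n.node_id)
```

Note that a real system does the commit and the failure detection asynchronously over the network, which is exactly where the brief stoppage on failover comes from.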

In addition, for large databases, you shard them: each shard has 
the same schema, but each shard only handles part of the range of the 
data. That way you can distribute the data over several machines. But 
when one goes down, you lose access to part of your data.
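A toy sketch of the routing side of sharding (the hash-modulo routing function is one common choice; everything here is illustrative, not any real product's interface):

```python
class Shard:
    """One machine holding a slice of the key space, same schema as the rest."""
    def __init__(self):
        self.rows = {}
        self.alive = True

def route(key, shards):
    # Hash-modulo routing: each key maps to exactly one shard.
    return shards[hash(key) % len(shards)]

def put(key, value, shards):
    shard = route(key, shards)
    if not shard.alive:
        # Losing one shard loses access to that slice of the data only;
        # keys routed to the other shards are unaffected.
        raise RuntimeError("shard for this key is down")
    shard.rows[key] = value

def get(key, shards):
    shard = route(key, shards)
    if not shard.alive:
        raise RuntimeError("shard for this key is down")
    return shard.rows.get(key)
```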

And then you combine the two above in order to scale to large databases 
and get somewhat better availability. Of course, whenever a master goes 
down you still get a brief outage, since it takes some time to detect 
the failure and recover before a slave has been promoted to new master.
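The combination can be sketched as shards of replica sets: route the key to a shard, then apply the master/replica rules inside that shard. A self-contained toy (the layout and promotion rule are assumptions for illustration):

```python
SHARDS = 4
REPLICAS = 3

# Each shard is a list of replica nodes; index 0 acts as that shard's master.
cluster = [[{"alive": True, "data": {}} for _ in range(REPLICAS)]
           for _ in range(SHARDS)]

def shard_for(key):
    return cluster[hash(key) % SHARDS]

def write(key, value):
    shard = shard_for(key)
    if not shard[0]["alive"]:
        # Local promotion: the first surviving replica becomes master.
        # Only this shard pauses; the rest of the cluster is unaffected.
        survivors = [n for n in shard if n["alive"]]
        if not survivors:
            raise RuntimeError("whole shard down")
        shard[:] = survivors
    for node in shard:
        if node["alive"]:
            node["data"][key] = value

def read(key):
    for node in shard_for(key):
        if node["alive"]:
            return node["data"].get(key)
    raise RuntimeError("whole shard down")
```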

> Or do disk arrays really serve files via NFS  to multiple instances of
> Unix ?

Yes. :-)
When you talk about NAS, that's how it is done. I don't even know of any 
technology today that allows several computers direct access to the 
disks. SAS, SCSI, SATA and so on all connect a disk to a single 
controller sitting in a single computer.

SCSI at least allows you to have several computers on the same SCSI bus, 
but that is very unusual to see in real life, simply because OSes in 
general just can't deal with it (yes, VMS can, I know...).

> The thing is that the world continues to function and the sky has not
> fallen despite Unix lacking a DLM and more and more shops having
> multiple instances of Unix running at the same time.

The solution (as always) is just to throw more hardware at the problem. 
Good software engineering is not held in high esteem these days.

Larger disks, so that we can continue running everything local on one 
machine. Faster network, so that we can send more information over the 
net. Faster cpus so that we can compute even more locally on a single 
cpu. More memory so that we can deal with larger data problems in memory.

> How does Google handle having 50,000 Linux nodes (or whatever number
> they have) having shared access to databases ?

Same thing. Sharding, replicating, network traffic...

	Johnny


