[Info-vax] free shell accounts?

Matthew H McKenzie news.deleteme at swellhunter.org
Sun Jan 25 00:42:24 EST 2015


OK,
       in the case of Deathrow, a user's files were shared and visible on 
each node, and there were two remaining nodes. One could log into either 
machine, which provided some redundancy.

The architectures of the nodes differed, so while you could edit files and 
run DCL on either one, any user programs needed to be recompiled for each 
architecture (a sketch of the usual idiom follows below). Also, not all 
system software was replicated, and only GEIN ran the webserver and NNTP 
server. So there was no "failover" of the kind you might hope for when 
hearing the word "cluster". In reality, some storage resources were shared 
and messaging between the nodes was expedited, as a system manager might 
intend.
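
To give the flavour of the recompile point, this is roughly the idiom 
(a sketch only; the directory and program names are invented, not anything 
Deathrow actually used): ask DCL which architecture the current node is, 
and run the matching image.

$ arch = f$getsyi("ARCH_NAME")      ! returns e.g. "VAX", "Alpha" or "IA64"
$ if arch .eqs. "VAX"
$ then
$     run [.VAX_EXE]myprog          ! image built on the VAX member
$ else
$     run [.AXP_EXE]myprog          ! image built on the Alpha member
$ endif

The sources live once on the shared storage; only the compiled images are 
kept per architecture.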

There are also a very small number of extensions to system calls that "only 
apply to ......" and would not be portable.

Matt.

"Stan Radford" <sradford at noemail.net> wrote in message 
news:m9r1cq$p2q$1 at speranza.aioe.org...
> On 2015-01-22, Phillip Helbig (undress to reply) 
> <helbig at asclothestro.multivax.de> wrote:
>> In article <m9q6li$5er$1 at speranza.aioe.org>, Stan Radford
>><sradford at noemail.net> writes:
>>
>>> I don't understand what a cluster does. If they don't have shared disks
>>> somewhere wouldn't they have to have multiple copies of everything? How
>>> does a cluster still remain usable if you are editing a file and the
>>> machine the file lives on fails? I can see for serving applications a
>>> cluster would be great but I don't understand how it helps development
>>> users. And even that would seem like it would take a lot of planning and
>>> wouldn't just automatically "work" because of the need for shared storage
>>> somewhere.
>>
>> There are several possibilities, but probably most clusters have all
>> disks mounted on all nodes,
>
> What does that mean actually? Does that mean they share physical disks
> because they all have physical connections or something else? If it's
> multiple boxes connected to one or more disk arrays that make sense. If not
> I don't understand how it can work.
>
>> is working on.  One or more nodes can go down, for planned or unplanned
>> reasons, and the cluster continues to exist.
>
> As long as they're all physically attached to the same drives. But if
> they're attached to network drives and the owning box goes down then
> there's a problem.
>
>> Of course, if a node goes down, the processes running on it will.  When
>> editing a file, then obviously you can't just continue editing
>> elsewhere.  All you have to do, though, is log in again (things can be
>> set up so that there is a virtual address for the cluster) and type
>> EDIT/RECOVER to get back to where you were when the crash occurred.
>
> I don't understand how this can work until somebody explains the above
> issues!
>
> Stan
> 
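
Since Stan asked how the shared-disk and recovery parts can work at all, 
here is the gist in a few lines of DCL (device and file names below are 
invented, and the exact /RECOVER behaviour depends on which editor and 
journaling settings are in use):

$ ! a volume mounted /CLUSTER on one member becomes visible on every member
$ mount/cluster/system $1$DGA100: USERDISK USERDISK
$ show device/full $1$DGA100:        ! same answer from any node
$
$ ! the edit-and-crash scenario Phillip mentioned
$ edit login.com                     ! node goes down mid-session; the
$                                    ! editor's journal stays on the shared disk
$ ! ... log in again, via the cluster alias rather than a particular node ...
$ edit/recover login.com             ! replay the journal and carry on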




