[Info-vax] VMS and MFA?

geze...@rlgsc.com gezelter at rlgsc.com
Fri Aug 21 09:58:45 EDT 2020


On Thursday, August 20, 2020 at 9:51:54 PM UTC-4, Dave Froble wrote:
> On 8/20/2020 6:22 PM, geze... at rlgsc.com wrote: 
> > On Thursday, August 20, 2020 at 4:07:48 PM UTC-4, Dave Froble wrote: 
> >> On 8/20/2020 12:45 PM, geze... at rlgsc.com wrote: 
> >>> On Thursday, August 20, 2020 at 12:02:21 PM UTC-4, Stephen Hoffman wrote: 
> >>>> On 2020-08-20 07:12:01 +0000, Dave Froble said: 
> >>>> 
> >>>>> I'm aware there are multiple methods to achieve desired results. But 
> >>>>> I'm curious, why get into the complexity of rightslist entries? 
> >>>> That's using OpenVMS enforcement for access. 
> >>>>> A captive account, with a menu of possible apps to run, pretty much 
> >>>>> locks a user into just those apps. Of course a menu utility that 
> >>>>> allows for custom menus for each user makes this simple. If a user 
> >>>>> somehow gets out of the allowed apps, being captive, the process is 
> >>>>> killed. 
> >>>> The difference here is that OpenVMS enforces the access, in addition to 
> >>>> whatever enforcement logic is in the captive command procedure. 
> >>>> 
> >>>> This approach likely rests on the assumption that a captive command 
> >>>> procedure—any app, for that matter—might be vulnerable. 
> >>>> 
> >>>> And it means that the site folks don't have to mess with the DCL 
> >>>> procedure to change access, and don't need to implement their own 
> >>>> user-to-access mapping. 
> >>>> 
> >>>> Sandboxes use a similar approach, though those can permit or can block 
> >>>> APIs beyond what OpenVMS considers security-relevant objects. (As 
> >>>> differentiated from OOP.) 
> >>>> 
> >>>> Viewed in terms of isolation and permissions, sandboxes 
> >>>> are to identifiers as identifiers are to UIC-based protections. 
> >>>> -- 
> >>>> Pure Personal Opinion | HoffmanLabs LLC 
> >>> Dave, 
> >>> 
> >>> Hoff stated what I did not make explicit. 
> >>> 
> >>> The rightslist identifiers controlled the display of the menu items. However, that was not the end of the configuration. The specific executables in the menu items were similarly protected with only the identifier granting execute access. 
> >>> 
> >>> Of course, the creation, granting, and revocation of identifiers was, from the perspective of day-to-day use, completely encapsulated in a series of command procedures usable only by the designated supervisors. Besides the system manager, no individuals had access to DCL outside of a captive command procedure. 
> >>> 
> >>> - Bob Gezelter, http://www.rlgsc.com 
> >>> 
> >> That's how our customers are set up. Everybody, including the system 
> >> manager, is in captive processes. 
> >> 
> >> Perhaps "system manager" is a bit much, all the selected individual(s) 
> >> can do is add or delete user accounts. That's all tightly controlled. 
> >> 
> >> I'm guessing your approach is similar to Steve's desire for "everything" 
> >> to be OS provided. My approach was (many years ago) to design and 
> >> implement a menu utility that controls all access for the captive users. 
> >> I'd believe that it's much more inclusive and capable than any OS 
> >> capability. Lots of options, no changing the utility, everything driven 
> >> by file data. 
> >> 
> >> And the menu maintainer is also captive. 
> >> 
> >> :-) 
> >> 
> >> Examples: 
> >> 
> >> IDX LEN POS TYPE RMASK WMASK ----------NAME---------- 
> >> 1 4 0 L-string 0 0 MENU KEY 
> >> 2 1 4 Byte 0 0 MENU SEQUENCE 
> >> 3 2 5 Integer 0 0 STATUS FLAG 
> >> 4 10 7 L-string 0 0 Function mnemonic 
> >> 5 30 17 L-string 0 0 Function-Menu name 
> >> 6 16 47 L-string 0 0 SPECIFIC-MENU DATA AREA 
> >> 7 16 63 L-string 0 0 SPECIFIC-MENU PROG AREA 
> >> 8 10 79 L-string 0 0 PROGRAM NAME-MENU KEY 
> >> 9 2 89 Integer 0 0 LINE NO. - MENU STATUS 
> >> 10 2 91 Date 0 0 START DATE 
> >> 11 2 93 Date 0 0 END DATE 
> >> 12 2 95 Integer 0 0 START TIME 
> >> 13 2 97 Integer 0 0 END TIME 
> >> 14 6 99 L-string 0 0 ABSOLUTE PASSWORD 
> >> 15 1 105 Byte 0 0 PASSWORD LEVEL 
> >> 16 30 106 L-string 0 0 SPECIAL DATA 
> >> 17 1 136 Byte 0 0 LINES TO SKIP 
> >> 18 6 137 L-string 0 0 CODES FILENAME 
> >> 
> >> ! Bit settings for record status flag 
> >> ! 
> >> ! 1 - <CR> prompt upon re-entry 
> >> ! 2 - codes look-up for special data 
> >> ! 4 - password required 
> >> ! 8 - keyer i.d. required 
> >> ! 16 - KB check required 
> >> ! 32 - USERNAME check required 
> >> ! 64 - use sequence from file 
> >> ! 128 - element is a menu 
> >> ! 256 - disable access of element 
> >> ! 512 - append data area to special data 
> >> ! 1024 - kill work file, program will detach 
> >> ! 8192 - action is a specific DCL command 
> >> 
> >> And quite a bit more. 
> >> 
> >> I guess I just believe in application specific stuff rather than one 
> >> size fits all. 
> >> -- 
> >> David Froble Tel: 724-529-0450 
> >> Dave Froble Enterprises, Inc. E-Mail: da... at tsoft-inc.com 
> >> DFE Ultralights, Inc. 
> >> 170 Grimplin Road 
> >> Vanderbilt, PA 15486 
> > David, 
> > 
> > There is an important principle of security. Rules enforced from within are, in effect, a version of the honor system. 
> > 
> > Internal controls are enforced from within. An example is array bounds checking. Erroneous code that is well-behaved (admittedly, a bit of an oxymoron), gets stopped by a bounds check. However, the validity of the check depends upon how pervasive and how correct the bounds checks are. 
> > 
> > Memory protection is imposed from the outside. If you access memory that is not yours, the hardware will generate a fault. How your program handles the fault is up to you. In most cases, the fault is fatal and your program is terminated. 
> > 
> > Early multiprogramming systems (e.g., OS/360, pre-Windows95 Windows) tried various variants of this. It does not work. There are too many ways in which the rules can be broken. Systems with memory mapping and virtual memory make it impossible for one process to affect another process' memory, or for that matter, the state of the running system. 
> > 
> > The OS security mechanisms are outside of the application's control. If set properly, applications have no choice. A bug in a non-privileged, user-state application cannot cause a cascading security hazard. 
> > 
> > In effect, bounds checking and application-resident security is the equivalent of instructing your 3-year-old, "Do not touch the stove." OS security measures are putting a card-key lock on the kitchen door. Whether the toddler heeds the instruction or not, they are not getting into the kitchen without the card key. 
> > 
> > - Bob Gezelter, http://www.rlgsc.com 
> >
> It's all design and programming. Why give greater trust to something 
> included in an OS? That's a false trust. 
> 
> Software not part of an OS distribution can be every bit as secure, and 
> sometimes more so. It is quite often more useful. 
> 
> I sense bigotry. That's Ok, if one wishes to place their trust in that 
> manner. But I will suggest that it may be more work, and significantly 
> less useful than software designed and implemented for specific needs. 
> 
> I can state that in over 40 years of use, not once has my menu 
> software had a security violation. Not saying it cannot, just that it 
> has not. It also does a rather good job meeting the requirements.
> -- 
> David Froble Tel: 724-529-0450 
> Dave Froble Enterprises, Inc. E-Mail: da... at tsoft-inc.com 
> DFE Ultralights, Inc. 
> 170 Grimplin Road 
> Vanderbilt, PA 15486
Dave,

I hear where you are coming from. If one has complete individual control over one's codebase, it is a tempting thought: why go through all of the rigmarole of dealing with an OS facility when it is so simple to write your own? But the simplicity is often an illusion, a version of "Stone Soup", the classic children's story.

Throughout my career, I have often encountered such reasoning, and seen where it goes awry. One of my earliest clients was shut down when a programmer implemented what amounted to his own lock manager, only to get one of the cases wrong, causing a twenty-station deadlock that halted an entire facility. I deleted over TWO THOUSAND LINES of needless code, replacing it with a far simpler (and correct) use of a system facility.
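The same lesson can be sketched in miniature in Python (purely illustrative; the original incident involved a VMS lock manager, not Python). The point is that the system-supplied primitive, threading.Lock here, gets the acquire/release cases right by construction, where a hand-rolled busy-flag protocol routinely gets one case wrong:

```python
import threading

# Illustrative sketch: four threads updating a shared counter through the
# system-provided lock primitive instead of a roll-your-own flag protocol.

counter = 0
lock = threading.Lock()  # the "system facility": correct by construction

def deposit(times):
    global counter
    for _ in range(times):
        with lock:          # acquire and release handled for us, even on error
            counter += 1    # critical section

threads = [threading.Thread(target=deposit, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000 -- a hand-rolled flag check here can lose updates
```

A roll-your-own lock tends to fail exactly the way the client's did: one overlooked interleaving, and two parties each wait on the other forever.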

With security and authorization, it is often a similar story. COMP.RISKS and other security and privacy reports are littered with stories of organizations that implemented their own security and authentication, only to do it badly. For years it was, and regrettably still is, common to find web sites with databases of plaintext passwords. The site-wide compromise that results when such a database is accessed is embarrassing, costly, and potentially a legal liability. 
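For contrast, the long-established alternative to storing plaintext passwords is a salted, iterated hash. A minimal Python sketch using only the standard library (the function and table names are illustrative, not from any particular site):

```python
import hashlib
import hmac
import secrets

ITERATIONS = 200_000      # iteration count slows brute-force attempts
_users = {}               # username -> (salt, derived key); stands in for the database

def register(username, password):
    salt = secrets.token_bytes(16)  # unique per user, so equal passwords differ
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    _users[username] = (salt, key)  # the plaintext is never stored

def verify(username, password):
    salt, key = _users[username]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, key)  # constant-time comparison

register("alice", "correct horse")
print(verify("alice", "correct horse"))  # True
print(verify("alice", "wrong guess"))    # False
```

If such a table is stolen, the attacker holds salted hashes rather than every user's credential, which is precisely the difference between an incident and a site-wide compromise.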

Not to be outdone, organizations have discovered programmed backdoors in applications, as well as other weaknesses. Some are deliberate; others are inadvertent, such as unremoved debugging code. In all cases, the result is the same: system or application compromise. Lately, some of the most public cases have involved IoT devices.

In the PC applications space, I encountered this problem regularly. Many applications did "validation" against credentials stored in plaintext in a SQL database. 

If one's code is managing anything in the PII category, this is not defensible. 

Writing one's own security system separate from the underlying OS creates, at a minimum, the potential for a gap between the OS-level security and the local security code, and that gap or inconsistency leads to hazards. I have seen applications running with elevated privileges and roll-your-own security. That is a bad combination.

Separate accounts, no privileges, and shared files protected by rightslist identifiers are safe, efficient, and effective. When a user leaves or changes duties, a single set of changes has system-wide effect. 
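The "single set of changes" property can be sketched in a few lines of plain Python (illustrative only; on VMS this is the rightslist and ACLs, not application code). Every access decision consults one central rights table, so revoking one identifier takes effect everywhere at once:

```python
# Illustrative sketch: identifier -> holders, resource -> required identifier.
# All names are hypothetical.
rights = {"payroll_user": {"alice", "bob"}}
resources = {"PAYROLL.DAT": "payroll_user",
             "PAYRUN.EXE": "payroll_user"}

def can_access(user, resource):
    # One central table answers every access question.
    return user in rights[resources[resource]]

print(can_access("alice", "PAYRUN.EXE"))   # True

rights["payroll_user"].discard("alice")    # one change when Alice leaves...
print(can_access("alice", "PAYROLL.DAT"))  # False -- ...covers every resource
print(can_access("bob", "PAYROLL.DAT"))    # True  -- others are unaffected
```

Contrast this with per-application user lists, where offboarding means hunting down every copy of the access data.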

Elsewhere, I will go into the impact of roll-your-own security in terms of audit.

- Bob Gezelter, http://www.rlgsc.com
