[Info-vax] Computer Security idea (old, reworked some, useful today maybe)

Glenn Everhart sysxge at gmail.com
Sat Feb 6 15:35:01 EST 2016


Rate Limit Security
by Glenn C. Everhart
2/5/2016

This article expands on a suggestion I published sometime in the 1990s but never
implemented in my Safety program, due to lack of interest in VMS security add-ins
back then. At that time I suggested that access rate controls, that is, maximum
access rates set on files or datasets per accessing process (note that in VMS one
would usually limit individual processes rather than process trees, since
subprocess creation is expensive there), would be useful in detecting or
preventing fraudulent access.

If you examine how computers are used, it is pretty common that programs open
datasets at a fairly low rate; deletes and renames tend to be rarer still.
However, there are a number of situations where high access rates are signatures
of bad things happening:
* People accessing corporate records who are up to no good may access such
records far more than others in similar job functions. A real example was a
woman working as an IRS customer help contact who had the right to access
individuals' tax records. It turned out she was accessing many more people's
records than anyone else in her group, because she was copying down personal
information and passing it to a fraud ring. If you can monitor access rate, you
can tell this is going on as it happens; if you have to wait until the outside
fraudsters are detected and arrested, the rate information comes too late to
help.
* Malware that tries to steal information might open and read loads of files to
find what it wants. Done one or two files at a time, this will probably escape
notice unless something can detect the large amount of file access going on and
ask why a program that normally does not behave this way is doing so.
* Malware that tries to alter information (like the currently popular kind that
encrypts files of certain types and holds the user to ransom for the decryption)
tends to stomp on lots of files too. Malware that simply deletes files also
tends to issue many calls per second.

These are not exhaustive. Spying applications might well also trip an access
rate detector.

At the moment I know of no facilities which do this kind of checking. It would,
I hope, be useful to consider what might be done to detect wrongful uses with
this signature while not preventing normal and legitimate computer use. I will
start with file accesses; accesses to databases or to network connections are
generalizations of the same idea.

In cases where files are being damaged, it is of course useful to be able to
prevent ANY damage. Where every write to a disk makes a copy-on-reference and
writes to the copy, damage like a ransomware program encrypting a file can be
reversed, so long as there is space and the copy-on-reference material can be
kept around. That will not always be possible, but such protection is
worthwhile where it can be done.

A system like the one Safety implements (full source code for Safety, including
a few bits I never productized, is available at www.gce.com and has been on some
of the old VMS SIG tapes from DECUS) allows control over delete operations, so
that files are copied somewhere else when deleted, with space control measures
so that space can be reclaimed. Safety does not protect directly against rename
(that can be done by protecting directory files) and did not implement counts of
accesses per unit time, but it does implement a file access intercept in VMS
that could readily do such counts and controls.
  Where file deletion and rename calls (rm or mv on Linux) are monitored, and
where deletes are preceded by something that saves copies for a while, some
protection against deletes is available. This too can run out of room, so it is
not by itself a long term way to keep data safe from malware; it buys some time.
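
As an illustration only, here is a rough user-space sketch in Python of the
save-before-delete idea. The holding directory and function name are made up
for the example; a real implementation would sit in a filesystem intercept
rather than a wrapper function.

import os
import shutil
import time

HOLD_DIR = "/var/spool/delete-hold"   # hypothetical holding area for saved copies

def safe_delete(path):
    """Copy a file into the holding area before removing it, so an
    accidental or malicious delete can be reversed for a while."""
    os.makedirs(HOLD_DIR, exist_ok=True)
    # Prefix with a timestamp so the oldest saved copies can be purged first later.
    saved = os.path.join(HOLD_DIR,
                         "%d_%s" % (int(time.time()), os.path.basename(path)))
    shutil.copy2(path, saved)
    os.remove(path)
    return saved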

The main benefit of detecting something accessing a load of files it should
not, however, is that the bogus access might be blocked after a few percent of
your data has been touched rather than all of it. (The foregoing protections
would reduce the percentage exposed.)

Technically it is pretty straightforward to build an intercept that counts
accesses to files. The simplest approach might be to count opens for read, opens
for write, deletes, and renames, though on Unix/Linux it will be desirable to
keep these counts per process tree, since it is common for commands to be
handled by several processes chained together. Setting rate limits and blocking
accesses while they are too high is again not hard, nor are the kinds of
protections used in Safety that allow "exempt" programs to be defined. (See the
Safety documents for details of these if desired.)
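
To make the counting concrete, here is a minimal Python sketch of the
bookkeeping such an intercept would keep, using a sliding window per process
tree and operation type. The window length, limits, and names are invented for
the example, not taken from Safety.

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10                      # length of the sliding window
LIMITS = {"open_read": 200, "open_write": 50, "delete": 20, "rename": 20}
_events = defaultdict(deque)             # (tree_id, op) -> timestamps of recent calls

def record_access(tree_id, op):
    """Record one access for a process tree and report whether the
    configured limit for that operation has been exceeded."""
    now = time.time()
    q = _events[(tree_id, op)]
    q.append(now)
    # Drop events that have fallen out of the sliding window.
    while q and q[0] < now - WINDOW_SECONDS:
        q.popleft()
    return len(q) > LIMITS[op]           # True means "too fast, take action"

An intercept would call record_access on every open, delete, or rename it sees,
skip the call entirely for programs marked exempt, and escalate (alarm, slow
down, or block) whenever it returns True.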

What is harder is that a simple rate limit on file access will not, by itself,
take account of which files are being accessed or (as Safety does) of the code
being used to do the access.

Consider commands like

cd /foo/bar/mumble/mytemp
rm -r *

or 

find /somewhere/lord/knows -iname \*my-search-string\*

or again

find /somewhere/lord/knows -exec chmod 755 \{\} \;


These are normal and legitimate kinds of things a user might need to do, and
they can result in loads of file opens. Where there is evidence the user wanted
them, they should simply be permitted. Where the computer cannot be sure, there
needs to be a way to tell. One method that could be tried is to ask permission
and remember what was granted. For commands like the above examples, finding
them as user input from a terminal in the process tree doing the access would
tend to be good evidence the access was indeed authorized by the user. It needs
to be noted, though, that you want to be sure the programs allegedly doing the
access are expected to be doing it. It is altogether too easy for malware to
insert itself into a process tree which has permissions and abuse them,
functioning apparently as part of a program that is believed safe.

A usable system will probably have a "training" mode which notes which programs
do high rate I/O (of all four types mentioned) so that unusual behavior can be
flagged later. (Training is not foolproof. Routine events like end-of-week,
end-of-month, end-of-quarter, or end-of-year processing can give access
patterns much higher than at other times. Even those rates are still finite,
though, and can be compared, and asking the user when the rare case occurs can
fill the gap.) There will always be some cases where excess rates have to be
asked about. People will get used to this, so long as they are not swamped and
so long as someone is always able to answer alarms.
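
A training mode could be as simple as remembering the highest rate ever seen
for each program and operation, then flagging anything well above that later.
The following Python sketch assumes the rates are measured elsewhere and fed
in; the file name and slack factor are arbitrary choices for the example.

import json
import os

BASELINE_FILE = "baseline_rates.json"    # hypothetical store for learned rates

def load_baseline():
    if os.path.exists(BASELINE_FILE):
        with open(BASELINE_FILE) as f:
            return json.load(f)
    return {}

def update_baseline(baseline, program, op, observed_rate):
    """During training, remember the highest rate seen per program and operation."""
    key = "%s:%s" % (program, op)
    baseline[key] = max(baseline.get(key, 0.0), observed_rate)

def is_unusual(baseline, program, op, observed_rate, slack=2.0):
    """After training, flag rates well above anything seen before;
    programs never seen during training are always flagged for review."""
    key = "%s:%s" % (program, op)
    if key not in baseline:
        return True
    return observed_rate > slack * baseline[key]

def save_baseline(baseline):
    with open(BASELINE_FILE, "w") as f:
        json.dump(baseline, f)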

A system (as I suggested for Safety) where each file gets a limit, and where
the limit can be sensitive to the program(s) doing the access, can be a starting
point. On Linux and the like, limits should be per process tree, and a
reasonable action on hitting a limit is for further accesses to slow down (say,
doubling a delay each time) while alarms are sent to someone. That way a running
program won't be aborted, possibly doing damage, but will be slowed enough to
keep too much harm from being done until someone can decide whether it should
be allowed to go full speed or be killed. Access rate limits would probably be
set in bulk at first, or during a training period, but they would need to be
resettable as time went on.
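
As a sketch of the slow-down response (again only an illustration, not code
from Safety), the delay can start small and double on every further violation,
with an alarm logged each time, so the offending process tree crawls rather
than being killed outright.

import time
import syslog

_delays = {}    # tree_id -> current delay in seconds

def throttle(tree_id, initial_delay=0.05, max_delay=30.0):
    """Called when a process tree exceeds its limit: log an alarm, sleep,
    then double the delay so repeat offenders keep slowing down."""
    delay = _delays.get(tree_id, initial_delay)
    syslog.syslog(syslog.LOG_WARNING,
                  "file access rate limit hit by process tree %s; delaying %.2fs"
                  % (tree_id, delay))
    time.sleep(delay)
    _delays[tree_id] = min(delay * 2, max_delay)

def reset_throttle(tree_id):
    """Called once a human decides the activity is legitimate."""
    _delays.pop(tree_id, None)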

Finding evidence of human authorization of high access rates should use
whatever information the computer can get hold of. That includes commands found
in the process tree's command lines, the time and place of access, what code is
being used, and more. I had logic in Safety that treated a process found with
too many privileges as suspect, possibly denying it access to files. Any other
information about user behavior, especially very recent behavior, might be
incorporated. (Did the user access oddball network addresses? Did he move
anything large outside the internal network? That kind of thing.)
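
On Linux, one cheap piece of such evidence is whether an interactive shell sits
somewhere in the accessing process's ancestry. The sketch below walks /proc for
that; it is only a weak signal and only an illustration, since a real check
would also examine the terminal, the command line text, the privileges held,
and so on.

import os

INTERACTIVE_SHELLS = {"bash", "zsh", "sh", "ksh", "csh", "tcsh"}

def process_chain_names(pid):
    """Walk up via /proc, yielding the command names of the process and its ancestors."""
    while pid > 1:
        try:
            with open("/proc/%d/status" % pid) as f:
                fields = dict(line.split(":", 1) for line in f if ":" in line)
            yield fields["Name"].strip()
            pid = int(fields["PPid"].strip())
        except (OSError, KeyError, ValueError):
            break

def looks_user_driven(pid):
    """Weak evidence of human authorization: an interactive shell in the ancestry."""
    return any(name in INTERACTIVE_SHELLS for name in process_chain_names(pid))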

What is being suggested is that high access rates be treated as a separately
authorized feature of filesystems, with surrounding facilities to enable this
to be widely used.

When this is available, anything doing such access might be found out a bit
more easily, and perhaps damage prevented.

I will mention that some of the features in Safety could be worth having as
part of a full solution. The ability to save files being deleted, either by
renaming or copying them somewhere, even over a network, and to retrieve them
automatically if need be, is useful. The ability to detect storage filling up
and to clear out the longest-ago deleted files until enough space is obtained
for extending a file or creating a new one is also handy. The ability to slow
down access has been mentioned above, and I would think it worth implementing.
Safety could also lower the priority of a possibly misbehaving program if
desired, and could (somewhat interestingly) decide, when a file access was to
be denied, to open a different file instead, so the apparently misbehaving
program would see information prepared for a "honeypot" system.
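
The space-reclamation piece just mentioned is also easy to sketch: when free
space runs low, remove the longest-ago saved copies from the holding area (the
same hypothetical directory used in the earlier save-before-delete sketch)
until enough room is available.

import os
import shutil

HOLD_DIR = "/var/spool/delete-hold"     # same hypothetical holding area as above

def reclaim_space(bytes_needed):
    """Remove the longest-ago saved copies until at least bytes_needed is free."""
    saved = sorted((os.path.join(HOLD_DIR, name) for name in os.listdir(HOLD_DIR)),
                   key=os.path.getmtime)
    for path in saved:
        if shutil.disk_usage(HOLD_DIR).free >= bytes_needed:
            break                        # enough room already
        os.remove(path)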

The basic idea here was published during the 1990s, so it is by now over 20
years old. Nobody should be able to patent it, but I believe a good
implementation could be useful, given the growing amount of ransomware and
spying code coming from evildoers. While this sort of thing would normally be
implemented with at least a kernel intercept, and some folks would argue that
kernel malware could disable it, it would give yet another thing that even
kernel malware would have to figure out how to disable, and if implemented
cleverly (how about making the intercept code polymorphic?) it could be hard
for generic malware to get rid of. Where malware can encrypt anything found,
not only locally but on an entire network, being able to stop it before it
trashes very much can be an installation-saving event.

Glenn C. Everhart
gce at gce.com

2/2016


