[Info-vax] OpenVMS - DCL - Data entry filtering
David Froble
davef at tsoft-inc.com
Sat Mar 28 13:53:45 EDT 2015
Stephen Hoffman wrote:
> On 2015-03-28 04:04:43 +0000, David Froble said:
>
>> Bob Gezelter wrote:
>>> David,
>>>
>>> The point is that on modern systems, it is not impossible to get
>>> identical filenames with the output of F$CVTIME.
>>> At the very least, one needs to trap the possibility when creating a
>>> file, and take appropriate action (e.g., appending index numbers).
>>>
>>> - Bob Gezelter, http://www.rlgsc.com
>>
>> Yeah, I do that. But it's rather rare, and if there are additional
>> components to the filespec, it's even more rare.
>
> Ayup. If it works for your case, go for it. But I'm posting some
> caveats here because there are some less-experienced folks reading this
> thread, and because different applications have different profiles.
>
> I've seen folks bagged by using a common logging routine in a way that
> differed from what its original author had intended. Or had even
> conceived of, having looked at the code involved.
>
> Mistakes I have made, or mistakes I have encountered, in other words.
>
>> It also will depend upon the application. I don't do anything that
>> spins out many files, quickly. I'm not sure what type of application
>> would do that.
>
> A lot of them these days, actually. Those that are writing gazillions
> of turd files, that is.
>
>> Sometimes what's theoretically possible and what's practically
>> possible are far apart.
>
> Undoubtedly true in your environment. But in others? Maybe the primary
> app is rolling out a lot of log files due to some unexpected spike in
> load or some gradual increase in load over time, or maybe the app is
> operating in a cluster and multiple logs are being started around the
> same time. It happens. More than I'd like.
>
> The calculations involved in generating a GUID or maintaining a sequence
> number counter are inevitably masked by the glacial speed of the I/O, so being
> defensive here isn't usually a problem, either.
>
> This is also where some folks will start using a common journal file,
> rather than log files. This is also where some folks use journaling
> servers, because transmitting the journal data over a network to a
> separate journaling server is substantially faster than writing to a
> local disk. SSDs and inboard flash are tipping that performance
> calculation somewhat, but... hard... disks... are... staggeringly... slow.
>
>> I personally find the ascending sequence of time stamps very useful.
>
> Ayup, and it is a fine way to avoid the failures that the use of VMS
> versions can cause in an application, too. I use an ascending
> application-maintained counter where contention is likely. In some
> applications I've worked on, simply maintaining that unique index counter
> value can itself be a performance bottleneck, though I'm almost never
> writing out individual log files by then. At those sorts of counter
> allocation rates, that's where the GUIDs or counter allocation blocks or
> other approaches come into play.
>
>
As noted, what may work in one application is not suited for another.
In the applications I'm working with, we're not putting out a bunch of
log or other types of files. Temp files are guaranteed not to have the
same file name. About the worst we see is running a batch job multiple
times in a single day, and for that we check the file name to ensure
it's unique, using a suffix that can be incremented.
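For the curious, a rough DCL sketch of that approach might look like the
following. It's untested and off the top of my head, and the TEMP_DIR:
logical and the REPORT_ prefix are just placeholders:

$! Build a timestamp-based name and bump a numeric suffix until no such
$! file already exists.
$ stamp = F$CVTIME(,"COMPARISON")                 ! "yyyy-mm-dd hh:mm:ss.cc"
$ stamp = stamp - "-" - "-" - " " - ":" - ":" - "."   ! strip the punctuation
$ seq = 0
$ CHECK:
$ name = "TEMP_DIR:REPORT_" + stamp + "_" + F$STRING(seq) + ".LOG"
$ IF F$SEARCH(name) .EQS. "" THEN GOTO GOT_NAME
$ seq = seq + 1
$ GOTO CHECK
$ GOT_NAME:
$ OPEN/WRITE outfile 'name'
$ WRITE outfile "Created ''F$TIME()'"
$ CLOSE outfile

There's still a small window in a cluster where two processes can both
miss on the F$SEARCH and then both create the file, which is exactly
where the file versions mentioned below act as the backstop.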
VMS file versions are the final safety net. I don't rely on them
normally, but if a collision ever happens, we'd still have the log file,
which usually is never looked at and gets deleted after a few days.
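Cleaning up the old ones is just a one-liner in a daily batch job,
something like this (again, just an illustration; the directory and the
name pattern are placeholders):

$ DELETE/BEFORE="TODAY-4-00:00" TEMP_DIR:REPORT_*.LOG;*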
For things that might need to be around for a while, the common journal
file is a good idea, and I've used such where appropriate.
But, I confess to curiosity ....
What type of application would be creating 10,000 or more files each
day, and, after they're created, what is done with the files?
The only thing in our applications that comes close is the creation of a
temp file for each incoming sales order, and each is queued to a poster
that deletes the file when finished with it. It's nothing we ever see
unless there is some problem.
I just cannot imagine what a human would do with 10,000 new files each
day ....