[Info-vax] OpenVMS x64 Atom project
John Wallace
johnwallace4 at yahoo.co.uk
Sun Jun 6 05:56:52 EDT 2021
On 06/06/2021 00:55, Arne Vajhøj wrote:
> On 6/5/2021 7:28 AM, Phillip Helbig (undress to reply) wrote:
>> In article <mn.2aa97e56e8d0753c.104627 at invalid.skynet.be>, Marc Van Dyck
>> <marc.gr.vandyck at invalid.skynet.be> writes:
>>>> One of the ransom cases I've cleaned up after some years ago had the
>>>> perpetrator silently corrupt multiple backups over time, deeper than
>>>> the organization's backup rotation schedule. The perpetrator then
>>>> ransomed the only remaining good copy of the organization's databases.
>>>> In recent ransom attacks on other platforms, the attackers have been
>>>> active in the target organization's networks for weeks and months, too.
>>>>
>>> I suppose that people in this organization never tried restores? Doing
>>> regular restores to ensure the integrity of your backups is one of the
>>> major recommendations, isn't it?
>>
>> Yes, there is little point in doing a backup if you don't test the
>> restore. But imagine, say, a database of several hundred terabytes.
>> Even if you can restore it, you can't necessarily tell if the data are
>> somehow corrupt. Yes, checksums and so on will catch some things, but
>> not all.
>
> Traditional BACKUP only works well on a system with no activity.
> BACKUP/IGNORE=INTERLOCK does not solve the problem.
>
> To get a consistent backup of a large database without significant
> downtime, one needs a snapshot capability where updates after
> time T do not change what is being backed up.
>
> I believe modern storage systems can do that easily. Even though
> I do not know much about the details - the last time I was responsible
> for backups, DAT tapes were cool.
>
> Arne
>
>
You don't even need an upmarket storage system to take a snapshot,
depending on particular needs. If the right things are done to quiesce
the applications and their IO before the snapshot is taken, the snapshot
may even contain useful (self-consistent?) data.
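For concreteness, here is a minimal sketch of that quiesce-then-snapshot
sequence. It assumes a ZFS dataset (hypothetically named tank/appdata)
and a pair of hypothetical quiesce/resume hooks; any snapshot-capable
store and any real application quiesce mechanism could stand in:

    import subprocess
    from datetime import datetime, timezone

    DATASET = "tank/appdata"  # hypothetical ZFS dataset holding the app's data

    def quiesce_application():
        """Hypothetical hook: flush buffers and pause application writes.
        A real database would use its own 'begin backup' mechanism."""
        ...

    def resume_application():
        """Hypothetical hook: allow application writes again."""
        ...

    def take_consistent_snapshot():
        stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
        name = f"{DATASET}@backup-{stamp}"
        quiesce_application()
        try:
            # 'zfs snapshot' is near-instant, so the write pause stays brief.
            subprocess.run(["zfs", "snapshot", name], check=True)
        finally:
            resume_application()
        return name

The backup job then reads from the snapshot at its leisure while the
application carries on against the live dataset.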
One way of doing this is to have a filesystem (or filesystem add on)
which can snapshot the state of a filesystem and then use "copy on
write" technology to preserve the snapshot while allowing updates to
continue to the "original" filesystem. Or some variant on that theme.
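As a toy illustration of that copy-on-write idea (a sketch, not any
particular product's implementation), here is a block store where a
write made after a snapshot first preserves the old block contents, so
the snapshot's view never changes:

    class CowVolume:
        """Toy block store: writes after a snapshot never alter
        what that snapshot sees."""

        def __init__(self):
            self.blocks = {}     # live data: block number -> contents
            self.snapshots = []  # per-snapshot dicts of preserved old blocks

        def snapshot(self):
            self.snapshots.append({})
            return len(self.snapshots) - 1  # snapshot id

        def write(self, blkno, data):
            # Copy on write: save the pre-write contents for any
            # snapshot that has not yet preserved this block.
            for snap in self.snapshots:
                if blkno not in snap:
                    snap[blkno] = self.blocks.get(blkno)
            self.blocks[blkno] = data

        def read(self, blkno, snap_id=None):
            if snap_id is not None and blkno in self.snapshots[snap_id]:
                return self.snapshots[snap_id][blkno]
            return self.blocks.get(blkno)

    vol = CowVolume()
    vol.write(0, b"v1")
    sid = vol.snapshot()
    vol.write(0, b"v2")               # the "original" keeps updating
    assert vol.read(0) == b"v2"       # live view sees the new data
    assert vol.read(0, sid) == b"v1"  # snapshot still sees time-T data

A real filesystem does this at the block or extent level with reference
counting, but the invariant is the same: backups read the frozen
snapshot while production writes continue against the live copy.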
DEC/Compaq's StorageWorks Virtual Replicator for Windows NT (which was
pure software) did this in the late 20th century. ZFS or similar seems
to be a popular way of doing it in software in the 21st century.
Or you can do something equivalent in hardware storage controllers.
Or perhaps both, as the two approaches may have different features and
benefits.
Which of these approaches makes most sense in a relatively complex setup
(heavy-duty ERP, for example, or even a "simple" ticket reservation and
booking system, or other cases where database contents have to match
real-world values for inventory etc.) is a matter for the application
designers as much as it is for the storage and system admin folks.
Obviously not everything's quite that complicated. I'm sure a sufficient
application of virtualisation, DevOps, and HYPErconverged infrastructure
(maybe with a sprinkling of Industrie 4.0) will make it all work just
fine. Or maybe not, but the salesfolk and CONsultants and the gullible
PHBs will usually be long gone by the time the snags show up.