Re: ransomware - Mailing list pgsql-general

From Tim Cross
Subject Re: ransomware
Msg-id 87wnvq1cjf.fsf@gmail.com
In response to Re: ransomware  (Marc Millas <marc.millas@mokadb.com>)
List pgsql-general
Marc Millas <marc.millas@mokadb.com> writes:

> Hi,
>
> I know it's quite general. It is as I don't know what approaches may exist.
>
> The requirement is extremely simple: is there any way, from a running postgres
> standpoint, to be aware that ransomware is currently encrypting your data?
>
> The answer can be as simple as: when postgres crashes...
>
> Something else?
>
> Marc MILLAS
> Senior Architect
> +33607850334
> www.mokadb.com
>
>
>

Ransomware tends to work at the disk level rather than the application
level. Targeting ransomware at the application level would require too
much work/effort to be profitable, given the amount of variation across
applications and versions.

This means any form of detection you may try to implement really needs
to be at the disk level, not the application level. While it might be
possible to add some sort of monitoring for encryption/modification of
the underlying data files, by the time such monitoring fires it will
likely be too late (and unless your monitoring runs on a different
system, the binaries/scripts it relies on are likely also encrypted and
won't run either).
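
For illustration only, a minimal sketch of the kind of file-level
heuristic such monitoring might use, assuming a hypothetical PGDATA
path, sample size and alert threshold: it measures the byte entropy of
a few relation files, since freshly encrypted files tend to look like
uniform random data. TOAST or otherwise compressed data can legitimately
score high, and any such check would need to run from a separate,
uncompromised host to be of any use.

# Crude, hypothetical heuristic: sample relation files under PGDATA and
# flag any whose contents look like ciphertext (near 8 bits/byte entropy).
import math
import os
import random

PGDATA = "/var/lib/postgresql/data"   # assumed data directory
SAMPLE_FILES = 20                     # how many relation files to sample
ENTROPY_ALERT = 7.9                   # bits/byte; ciphertext sits close to 8.0

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte of a buffer."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

def sample_relation_files(pgdata: str, limit: int):
    """Pick a few files under base/ (ordinary relation storage)."""
    paths = []
    for root, _dirs, files in os.walk(os.path.join(pgdata, "base")):
        paths.extend(os.path.join(root, f) for f in files)
    random.shuffle(paths)
    return paths[:limit]

suspicious = []
for path in sample_relation_files(PGDATA, SAMPLE_FILES):
    try:
        with open(path, "rb") as fh:
            chunk = fh.read(64 * 1024)
    except OSError:
        continue
    if shannon_entropy(chunk) > ENTROPY_ALERT:
        suspicious.append(path)

if suspicious:
    print("WARNING: high-entropy (possibly encrypted) files found:")
    for p in suspicious:
        print("  ", p)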

The best protection from ransomware is a reliable, regular and TESTED
backup and restoration solution. It needs to run frequently enough that
any lost data is acceptable from a business continuity position, and it
needs to keep multiple backup versions in case the ransomware infection
occurs some time before it is actually triggered, i.e. in case your
most recent backups are already compromised. Backups should be stored in
multiple locations. For large data sets, this can often mean having the
ability to take fast filesystem snapshots, as more traditional 'copy'
approaches are often too slow to perform backups frequently enough to
meet business continuity requirements.
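
Purely as an illustration of the "frequent backups, multiple versions"
point, a minimal sketch driving pg_basebackup with a simple retention
policy. The paths, retention count and connection defaults are my own
assumptions, and a real setup would also copy each backup off-host,
somewhere the database server cannot overwrite it.

# Hypothetical example: take a base backup and keep the newest N copies.
import datetime
import shutil
import subprocess
from pathlib import Path

BACKUP_ROOT = Path("/backups/pg")   # assumed local backup area
KEEP = 7                            # number of backup versions to retain

def take_backup() -> Path:
    """Run pg_basebackup into a timestamped directory (tar + gzip, WAL streamed)."""
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    target = BACKUP_ROOT / stamp
    target.mkdir(parents=True)
    subprocess.run(
        ["pg_basebackup", "-D", str(target), "-Ft", "-z", "-X", "stream"],
        check=True,
    )
    return target

def prune_old_backups(keep: int) -> None:
    """Delete all but the newest `keep` backup directories."""
    backups = sorted(p for p in BACKUP_ROOT.iterdir() if p.is_dir())
    for old in backups[:-keep]:
        shutil.rmtree(old)

if __name__ == "__main__":
    take_backup()
    prune_old_backups(KEEP)

Something like cron would drive this; the important parts are keeping
multiple versions and pushing them somewhere a compromised host cannot
reach.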

By far the most common failure in backup solutions is failure to test
the restoration component. I've seen way too many places where they
thought they had adequate backups, only to find when they needed to
perform a restoration that key data was missing. This can greatly increase
the time it takes to perform a restoration and, in extreme cases, can mean
restoration is not possible. Regular testing of restoration processes is
critical to any reliable backup solution.
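
To make that concrete, a hedged sketch of an automated restore check,
continuing the hypothetical layout above: it unpacks the most recent
base backup into a scratch data directory, starts a throwaway instance
on a spare port and runs a trivial sanity query. Paths, port and query
are placeholders; a real check would verify application-specific data,
not just that the server starts.

# Hypothetical restore test: unpack the latest backup, start a temporary
# instance on a spare port, run a sanity query, then shut it down.
import subprocess
import tarfile
from pathlib import Path

BACKUP_ROOT = Path("/backups/pg")        # same assumed backup area as above
SCRATCH = Path("/tmp/pg-restore-test")   # throwaway data directory
PORT = "5499"                            # spare port for the test instance

latest = sorted(p for p in BACKUP_ROOT.iterdir() if p.is_dir())[-1]

SCRATCH.mkdir(parents=True, exist_ok=True)
with tarfile.open(latest / "base.tar.gz") as tar:
    tar.extractall(SCRATCH)
with tarfile.open(latest / "pg_wal.tar.gz") as tar:
    tar.extractall(SCRATCH / "pg_wal")
SCRATCH.chmod(0o700)

subprocess.run(["pg_ctl", "-D", str(SCRATCH), "-o", f"-p {PORT}", "-w", "start"],
               check=True)
try:
    subprocess.run(["psql", "-p", PORT, "-d", "postgres",
                    "-c", "SELECT count(*) FROM pg_class;"], check=True)
finally:
    subprocess.run(["pg_ctl", "-D", str(SCRATCH), "-m", "fast", "stop"],
                   check=True)

The same flow can double as the staging/testing environment "refresh"
described below, which effectively gives you the restore test for free.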

As it is also a good idea to have some sort of testing/staging
environment for testing code/configuration changes, new versions etc., it
can make sense to use your backups as part of your staging/testing
environment 'refresh' process. A regular refresh of your staging/testing
environment from backups then provides assurance that your backups are
working and that your testing is being performed on systems with data as
close as possible to your production systems.

Tim


