Discussion: ransomware
Hi,
I have been asked the following question:
is there any way, from within Postgres, to detect "abnormal" disk-writing activity?
The obvious goal would be to raise an alert if there is.
It's quite clear that the underlying OS is the place to do such checks, but still:
-- to my understanding, a simple script can poll various internal counters, but this only helps if the "undesired" software uses Postgres itself to do the encrypting (any experience with this???)
-- another approach would rely on the fact that, if anything changes a Postgres file (data, current WAL, ...), Postgres should somehow "hang"
There are various ways to do those checks, but I was wondering if any "standard" solution exists within the Postgres ecosystem, or whether someone has any feedback on the topic.
thanks for your help
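The counter-polling idea in the first approach could be sketched roughly as below. This is only an illustration, not a standard solution: it assumes an external script samples a cumulative write counter (for example buffers_backend from pg_stat_bgwriter) at a fixed interval, and the 10x-over-baseline threshold is an arbitrary assumption.

```python
# Hypothetical anomaly check on a cumulative write counter sampled at a
# fixed interval by an external monitoring script. The counter name and
# the threshold factor are illustrative assumptions only.

def write_rate(prev_count, curr_count, interval_s):
    """Blocks written per second between two samples of a cumulative counter."""
    return (curr_count - prev_count) / interval_s

def is_abnormal(rates, latest_rate, factor=10.0):
    """Flag the latest rate if it exceeds `factor` times the historical mean."""
    if not rates:
        return False
    baseline = sum(rates) / len(rates)
    return latest_rate > factor * max(baseline, 1.0)

# Example: steady baseline of ~100 blocks/s, then a sudden burst.
history = [write_rate(i * 1000, (i + 1) * 1000, 10) for i in range(6)]  # 100.0 each
burst = write_rate(6000, 56000, 10)  # 5000 blocks/s
print(is_abnormal(history, burst))
```

Note that, as the poster says, this only catches encryption performed *through* Postgres; an external process writing directly to the files would not move these counters.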
On Mon, Feb 01, 2021 at 03:38:35PM +0100, Marc Millas wrote:
> there are various ways to do those checks but I was wandering if any
> "standard" solution exist within postgres ecosystem, or someone do have
> any feedback on the topic.

It seems to me that you should first write down on a sheet of paper a
list of all the requirements you are trying to satisfy.  What you are
describing here is a rather general problem, so nobody can help
without knowing what you are trying to achieve, precisely.
--
Michael
Hi,
I know it's quite general. That's because I don't know what approaches may exist.
The requirement is extremely simple: is there any way, from a running Postgres standpoint, to become aware that ransomware is currently encrypting your data?
The answer can be as simple as: when Postgres crashes.....
Something else?
On Tue, Feb 2, 2021 at 2:37 AM Michael Paquier <michael@paquier.xyz> wrote:
On Mon, Feb 01, 2021 at 03:38:35PM +0100, Marc Millas wrote:
> there are various ways to do those checks but I was wandering if any
> ""standard''" solution exist within postgres ecosystem, or someone do have
> any feedback on the topic.
It seems to me that you should first write down on a sheet of paper a
list of all the requirements you are trying to satisfy. What you are
describing here is a rather general problem, so nobody can help
without knowing what you are trying to achieve, precisely.
--
Michael
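The "when Postgres crashes" case above is at least easy to detect from outside. pg_isready is the standard tool for this; purely as a dependency-free sketch, an external watchdog on a different machine could poll the server port (host and port below are assumptions):

```python
# Minimal external liveness probe. If ransomware has trashed the data
# directory, Postgres typically crashes or stops accepting connections,
# and this check starts failing. A real deployment would use pg_isready
# or a proper monitoring system; this is only a sketch.
import socket

def port_is_open(host, port, timeout=1.0):
    """True if a TCP connection to (host, port) succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(port_is_open("127.0.0.1", 5432))
```

Crucially, the watchdog must run on a separate, unaffected host, for the reason raised later in this thread: on the infected machine the monitoring scripts themselves may already be encrypted.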
On 2021-02-02 15:44:31 +0100, Marc Millas wrote:
> I know its quite general. It is as I dont know what approaches may exist.
>
> Requirement is extremely simple: Is there anyway, from a running postgres
> standpoint, to be aware that a ransomware is currently crypting your data ?

PostgreSQL can be set up to store a checksum with every page (I think
that's even the default in recent releases). If an external process
encrypts a data file used by PostgreSQL, it is unlikely to get the
checksums correct (unless it was written explicitly with PostgreSQL in
mind). So the next time PostgreSQL reads some data from that file it
will notice that the data is corrupted. Of course it would notice that
anyway, since all the other structures it expects aren't there either.

> answer can be as simple as: when postgres do crash.....

Yep. That's what I would expect to happen pretty quickly on a busy
database.

The question is: Does that help you? At that point the data is already
gone (at least partially), and you can only restore it from backup.

        hp

--
   _  | Peter J. Holzer    | Story must make more sense than reality.
|_|_) |                    |
| |   | hjp@hjp.at         | -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |    challenge!"
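The page-checksum mechanism described above can be illustrated with a toy model. To be clear, this uses CRC32 and is *not* PostgreSQL's actual page-checksum algorithm; the page size matches Postgres's default 8 kB pages, but the file layout is made up:

```python
# Toy illustration of per-page checksums catching external modification
# of a data file. CRC32 stands in for the real checksum algorithm.
import zlib

PAGE_SIZE = 8192  # PostgreSQL's default page size

def page_checksums(data):
    """One checksum per fixed-size page of the file contents."""
    pages = [data[i:i + PAGE_SIZE] for i in range(0, len(data), PAGE_SIZE)]
    return [zlib.crc32(p) for p in pages]

original = bytes(3 * PAGE_SIZE)      # three zeroed "pages"
stored = page_checksums(original)    # checksums as written by the database

# Simulate a ransomware process XOR-"encrypting" the middle page in place.
tampered = bytearray(original)
for i in range(PAGE_SIZE, 2 * PAGE_SIZE):
    tampered[i] ^= 0xA5

# On the next read, verification fails for exactly the tampered page.
corrupt_pages = [i for i, (a, b)
                 in enumerate(zip(stored, page_checksums(bytes(tampered))))
                 if a != b]
print(corrupt_pages)
```

As Peter notes, the checksum failure only surfaces when the page is next read, which is detection after the fact, not prevention.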
Marc Millas <marc.millas@mokadb.com> writes:
> I know its quite general. It is as I dont know what approaches may exist.
>
> Requirement is extremely simple: Is there anyway, from a running postgres
> standpoint, to be aware that a ransomware is currently crypting your data ?
>
> answer can be as simple as: when postgres do crash.....
>
> something else ?
>
> Marc MILLAS
> Senior Architect
> +33607850334
> www.mokadb.com

Ransomware tends to work at the disk level rather than the application level. Too much work/effort is required to target ransomware at a specific application, because of the amount of variation in applications and versions, for it to be profitable. This means any form of detection you may try to implement really needs to be at the disk level, not the application level.

While it could be possible to add some sort of monitoring for encryption/modification of the underlying data files, by the time this occurs it will likely be too late (and unless your monitoring is running on a different system, the binaries/scripts are likely also encrypted and won't run either).

The best protection from ransomware is a reliable, regular and TESTED backup and restoration solution which runs frequently enough that any lost data is acceptable from a business-continuity position, and which keeps multiple backup versions in case your ransomware infection occurs some time before it is actually triggered, i.e. in case your most recent backups are already infected. Backups should be stored in multiple locations. For large data sets, this can often mean having the ability to take fast filesystem snapshots, as more traditional 'copy' approaches are often too slow to perform backups frequently enough to meet business-continuity requirements.

By far the most common failure in backup solutions is failure to test the restoration component. I've seen way too many places where they thought they had adequate backups, only to find when they needed to perform a restoration that key data was missing. This can greatly increase the time it takes to perform a restoration and, in extreme cases, can mean restoration is not possible. Regular testing of restoration processes is critical to any reliable backup solution.

As it is also a good idea to have some sort of testing/staging environment for testing code/configuration changes, new versions etc., it can make sense to use your backups as part of your staging/testing environment 'refresh' process. A regular refresh of your staging/testing environment from backups then provides assurance that your backups are working, and that your testing is being performed on systems with data most similar to your production systems.

Tim
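The backup-then-verify-the-restore cycle described above can be sketched as follows. This is a stand-in, not a real Postgres procedure: actual database backups would use pg_basebackup or similar, and the directory copy here merely represents the backup and test-restore steps.

```python
# Sketch of the "TESTED backup" idea: take a backup, restore it to a
# scratch location, and verify the restored copy byte-for-byte. The
# copytree calls stand in for real backup/restore tooling.
import hashlib
import shutil
import tempfile
from pathlib import Path

def tree_digest(root: Path) -> str:
    """Order-stable hash of every file (relative path + contents) under root."""
    h = hashlib.sha256()
    for f in sorted(p for p in root.rglob("*") if p.is_file()):
        h.update(str(f.relative_to(root)).encode())
        h.update(f.read_bytes())
    return h.hexdigest()

with tempfile.TemporaryDirectory() as tmp:
    data = Path(tmp, "data")
    data.mkdir()
    (data / "table1").write_bytes(b"live data")

    backup = Path(tmp, "backup")
    shutil.copytree(data, backup)        # "take the backup"

    restore = Path(tmp, "restore")
    shutil.copytree(backup, restore)     # "test the restoration"

    ok = tree_digest(data) == tree_digest(restore)
    print("restore verified:", ok)
```

Running a check like this on a schedule, against a scratch or staging environment, is the kind of routine restore testing the post argues for.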