Discussion: PG-8.2 backup strategies

PG-8.2 backup strategies

From: Henrik
Date:
Hello list,

I know backup of pg is a well-discussed topic, with solutions ranging
from simple pg_dump to more advanced PITR or Slony.

Even though I've studied most of them, I can't really decide on the
best solution for a new situation and would be grateful for any input
on this.

The situation is as follows.

We want to do daily backups from many installations to a remote
location and also want easy restores when disaster strikes. Preferably,
the backup site would only need an FTP server to store the files on.

My optimal solution would be differential pg_dumps, but that is not
possible as far as I know. Doing pg_dumps every day is a little too
heavy even though the DBs are not huge. I like the fact that I have
one big SQL file, which is really simple to restore from.

The next best solution would probably be weekly pg_dumps with daily
WAL shipping. But how would this handle tables with columns that get
tiny updates several times per second? Would I end up with huge WALs?

I really would like to avoid a PG installation at the backup site, but
maybe that is the best solution? And then use Slony or similar, but
only do replication once a day? Then I can make a dump when I need to
restore and ship that SQL file to the restore location.

Maybe I have really weird ideas about this, but some pointers would be
nice.

Thanks!

//Henke



Re: PG-8.2 backup strategies

From: Magnus Hagander
Date:
On Mon, Jan 21, 2008 at 04:05:13PM +0100, Henrik wrote:
> Hello list,
>
> I know backup of pg is a well-discussed topic, with solutions ranging
> from simple pg_dump to more advanced PITR or Slony.
>
> Even though I've studied most of them, I can't really decide on the
> best solution for a new situation and would be grateful for any input
> on this.
>
> The situation is as follows.
>
> We want to do daily backups from many installations to a remote
> location and also want easy restores when disaster strikes. Preferably,
> the backup site would only need an FTP server to store the files on.
>
> My optimal solution would be differential pg_dumps, but that is not
> possible as far as I know. Doing pg_dumps every day is a little too
> heavy even though the DBs are not huge. I like the fact that I have
> one big SQL file, which is really simple to restore from.

Heavy where? If it's just heavy on the transfer, it might be possible to
do the dump locally and then rsync the file off to the server. (dumping
without compression is likely to make rsync a lot more efficient here, but
I don't have any numbers to back up that guess)
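
Something along these lines should work (untested sketch; the database
name, paths, and host below are just placeholders):

  # dump in plain, uncompressed format, always to the same file name
  pg_dump -Fp mydb > /var/backups/mydb.sql

  # -z compresses on the wire; keeping the on-disk file uncompressed and
  # the file name stable lets rsync's delta transfer reuse yesterday's
  # copy on the remote side
  rsync -az /var/backups/mydb.sql backupuser@backuphost:/backups/mydb.sql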


> The next best solution would probably be weekly pg_dumps with daily
> WAL shipping. But how would this handle tables with columns that get
> tiny updates several times per second? Would I end up with huge WALs?

First, you don't combine pg_dump with WAL shipping. If you do WAL shipping
with PITR, you take filesystem-level base backups and ship the WAL
segments, not pg_dump output.
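
Roughly, the 8.2 setup for that looks something like this (just a
sketch; the archive host and paths here are made up):

  # postgresql.conf: copy each completed WAL segment to the backup host
  archive_command = 'rsync %p backupuser@backuphost:/wal_archive/%f'

  # base backup, e.g. once a week:
  psql -c "SELECT pg_start_backup('weekly_base');"
  tar czf /var/backups/base.tar.gz /var/lib/pgsql/data
  psql -c "SELECT pg_stop_backup();"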

Second, yes, they might be huge, depending on your definition. If you have
updated a row a million times, there will be a million updates recorded in
the WAL, whereas a pg_dump will only show the latest version.

//Magnus

Re: PG-8.2 backup strategies

From: Henrik
Date:
On 21 Jan 2008, at 16:18, Magnus Hagander wrote:

> On Mon, Jan 21, 2008 at 04:05:13PM +0100, Henrik wrote:
>> Hello list,
>>
>> I know backup of pg is a well-discussed topic, with solutions ranging
>> from simple pg_dump to more advanced PITR or Slony.
>>
>> Even though I've studied most of them, I can't really decide on the
>> best solution for a new situation and would be grateful for any input
>> on this.
>>
>> The situation is as follows.
>>
>> We want to do daily backups from many installations to a remote
>> location and also want easy restores when disaster strikes.
>> Preferably, the backup site would only need an FTP server to store
>> the files on.
>>
>> My optimal solution would be differential pg_dumps, but that is not
>> possible as far as I know. Doing pg_dumps every day is a little too
>> heavy even though the DBs are not huge. I like the fact that I have
>> one big SQL file, which is really simple to restore from.
>
> Heavy where? If it's just heavy on the transfer, it might be possible
> to do the dump locally and then rsync the file off to the server.
> (dumping without compression is likely to make rsync a lot more
> efficient here, but I don't have any numbers to back up that guess)

True, I tried again on a test installation with a 26G database and
noticed that pg_dump took only a fraction of the time it took before
(only about 4 minutes). Maybe I had some weird parameters before. That
is why I didn't want to do pg_dump, but now this is a feasible
solution. Also, doing the rsync should make the network impact
minimal. Is there something I should think about when running pg_dump
to make it faster?

Thanks!