Re: PG_DUMP backup

From: Josh Kupershmidt
Subject: Re: PG_DUMP backup
Date:
Msg-id: 25CE55AD-6515-4C0B-A330-4BE6E7C324A4@gmail.com
In reply to: PG_DUMP backup  (Renato Oliveira <renato.oliveira@grant.co.uk>)
Responses: Re: PG_DUMP backup
           Re: PG_DUMP backup
List: pgsql-admin
On Feb 12, 2010, at 4:58 AM, Renato Oliveira wrote:

> Dear all,
>
> I have a server running 8.2.4 which has a database 170GB in size.
> Currently I am backing it up using pg_dump and it takes around 28 hours, sadly.

That's suspiciously slow for a pg_dump alone. I have a ~168 GB database which gets pg_dumped nightly, taking about 2.5 hours, all on 2+ year-old commodity hardware.
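FWIW, my nightly dump is just something along these lines (database name and path here are made up):

    # custom-format dump: compressed on the fly, and pg_restore can later
    # pull individual tables back out of it
    pg_dump -Fc -f /backup/mydb-$(date +%Y%m%d).dump mydb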

> I was asked to check and compare the newly created DUMP file to the live database and compare records.
>

If you really must run this comparison, maybe you can check out "pg_comparator" (I think you would restore first, then use pg_comparator to run the diffs). However, it sounds like your assignment really is more about making sure that your backup server is functional and ready to take over if the master dies. There are easier, and better, ways to establish this than doing a row-by-row comparison of your backup and live server.

> I personally cannot see an easy or quick way of doing this, and even the point in doing so.
> I am already restoring the full database to a separate server and no errors were reported.
>

There's probably a much easier way of ensuring the validity of your backup server without running this diff, but that'll of course depend on your environment and your boss' wishes.
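For instance, a cheap sanity check is to compare exact row counts for a handful of important tables on both servers after the restore finishes (hostnames and table names below are placeholders):

    # compare row counts between the live and restored servers
    for t in orders customers invoices; do
      live=$(psql -h live-host   -d mydb -tAc "SELECT count(*) FROM $t")
      bkup=$(psql -h backup-host -d mydb -tAc "SELECT count(*) FROM $t")
      echo "$t: live=$live backup=$bkup"
    done

That's nowhere near a row-by-row diff, but it will catch a botched or truncated restore quickly.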

> My question is:
> 1 - Is there a more efficient way of backing up such a large database, using pg_dump or any other tool?

The only ways other than PITR, which you've ruled out, are documented here, but I doubt you'll like them:
http://developer.postgresql.org/pgdocs/postgres/backup-file.html
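The gist of the file-level approach on that page is: stop the server, copy the data directory, start it again. Roughly (paths are only illustrative):

    # the server must be cleanly shut down, or the copy will be inconsistent
    pg_ctl -D /var/lib/pgsql/data stop
    tar czf /backup/pgdata-$(date +%Y%m%d).tar.gz /var/lib/pgsql/data
    pg_ctl -D /var/lib/pgsql/data start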

> 2 - Is there an easy way to compare the live database with the DUMP file just created?

Take another dump, and compare the two dumps? This borders on absurdity, of course.
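If someone insisted anyway, it would look roughly like this (filenames made up), and the checksums only have a chance of matching if nothing wrote to the database between the two dumps:

    pg_dump -Fp -f dump2.sql mydb
    md5sum dump1.sql dump2.sql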

> Idea:
> Pg_dump to split the file into smaller usable chunks, which could be restored one at a time, is that possible?

You can dump a table at a time, or a few at a time, using pg_dump --table=... I doubt this will speed the restore up, though. If you can upgrade to 8.4, or upgrade the backup server to 8.4, your pg_restore should be faster with parallel restores.
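Something like this (table, database, and file names are placeholders; --jobs needs 8.4's pg_restore and a custom-format archive):

    # dump one table at a time
    pg_dump -Fc --table=big_table -f big_table.dump mydb
    # on 8.4, restore a custom-format dump with several parallel jobs
    pg_restore --jobs=4 -d mydb whole_db.dump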

Also, I would look into tuning your backup server to make pg_restore as fast as possible. See e.g.
http://wiki.postgresql.org/wiki/Bulk_Loading_and_Restores
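The usual candidates from that page, set temporarily in postgresql.conf on the backup server while the restore runs (values are just starting points):

    fsync = off                   # risky in general; tolerable for a re-runnable restore
    maintenance_work_mem = 1GB    # faster index builds
    checkpoint_segments = 64      # fewer checkpoints during the bulk load
    autovacuum = off              # keep autovacuum out of the way until the restore is done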


Josh
