Re: PG_DUMP backup

From: Renato Oliveira
Subject: Re: PG_DUMP backup
Date:
Msg-id: 7965A9DCF12CC14984420BCC37B1608F25A9E2D721@Elzar.grant.co.uk
In reply to: Re: PG_DUMP backup (Josh Kupershmidt <schmiddy@gmail.com>)
List: pgsql-admin
Josh,

Thank you again for the links you sent to me, really appreciated.
I was actually thinking along those lines for backing up the server. Here is the idea; maybe one of you can tell me
whether it would work.

Backup idea, please pick holes in it:
Create a script which does, at the beginning:
        SELECT pg_start_backup('label');
                Then tar -cf backup.tar /usr/local/pgsql/data
                Restore the tar file to a secondary DB server
                rsync -av /usr/local/pgsql/data to the remote machine
        SELECT pg_stop_backup();

I think rsync will sync the files, and since it can copy deltas it should keep the two copies in sync. Will this work?
(I have sketched the script below.)
I must bring the backup window down to below 24 hours so I can back up daily.
The server hardware is old and we will not change it right now.

Do you guys think I have any hope of achieving this? ;-)
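
Roughly, the script I have in mind would look something like this (an untested sketch; the hostname, connection
options and paths are just placeholders, and I have left the initial tar/restore step out because the first rsync run
will copy everything anyway):

        #!/bin/sh
        # Sketch of the nightly base-backup script (untested).
        DATADIR=/usr/local/pgsql/data
        STANDBY=standby.example.com        # placeholder hostname

        # Tell PostgreSQL a file-level backup is starting
        psql -U postgres -c "SELECT pg_start_backup('nightly');"

        # Copy the data directory while the cluster keeps running;
        # rsync only transfers changed data, so later runs are much faster
        rsync -av --delete "$DATADIR/" "$STANDBY:/usr/local/pgsql/data/"

        # Tell PostgreSQL the backup is finished
        psql -U postgres -c "SELECT pg_stop_backup();"

        # NB: the WAL segments written between start and stop must also be
        # kept (archive_command) or the copy will not be restorable.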

Thank you very much

I would welcome ideas and any help.

Really appreciated.

Renato




Renato Oliveira
Systems Administrator
e-mail: renato.oliveira@grant.co.uk

Tel: +44 (0)1763 260811
Fax: +44 (0)1763 262410
http://www.grant.co.uk/

Grant Instruments (Cambridge) Ltd

Company registered in England, registration number 658133

Registered office address:
29 Station Road,
Shepreth,
CAMBS SG8 6GB
UK

-----Original Message-----


From: Josh Kupershmidt [mailto:schmiddy@gmail.com]
Sent: 12 February 2010 15:30
To: Renato Oliveira
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] PG_DUMP backup
Importance: High


On Feb 12, 2010, at 4:58 AM, Renato Oliveira wrote:

> Dear all,
>
> I have a server running 8.2.4 and has a database 170GB in size.
> Currently I am backing it up using pg_dump and it takes around 28 hours, sadly.

That's suspiciously slow for a pg_dump alone. I have a ~168 GB database which gets pg_dumped nightly, taking about 2.5
hours, all on 2+ year-old commodity hardware.

> I was asked to check and compare the newly created DUMP file to the live database and compare records.
>

If you really must run this comparison, maybe you can check out "pg_comparator" (I think you would restore first, then
use pg_comparator to run the diffs). However, it sounds like your assignment really is more about making sure that your
backup server is functional and ready to take over if the master dies. There are easier, and better, ways to establish
this than doing a row-by-row comparison of your backup and live server.

> I personally cannot see an easy or quick way of doing this, and even the point in doing so.
> I am already restoring the full database to a separate server and no errors were reported.
>

There's probably a much easier way of ensuring the validity of your backup server without running this diff, but
that'll of course depend on your environment and your boss' wishes.

> My question is:
> 1 - Is there a more efficient way of backing up such large database, using pg_dump or any other tool?

The only other ways, other than PITR (which you rule out), are documented here, but I doubt you'll like them:
http://developer.postgresql.org/pgdocs/postgres/backup-file.html

> 2 - Is there an easy way to compare the live database with the DUMP file just created?

Take another dump, and compare the two dumps? This borders on absurdity, of course.

> Idea:
> Pg_dump to split the file into smaller usable chunks, which could be restored one at a time, is that possible?

You can dump a table at a time, or a few at a time, using pg_dump --table=... I doubt this will speed the restore up,
though. If you can upgrade to 8.4, or upgrade the backup server to 8.4, your pg_restore should be faster with parallel
restores.
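
For example, something along these lines (table and database names are made up; -Fc writes a custom-format archive
that pg_restore can read, and --jobs is the 8.4 parallel-restore option):

        # Dump a single large table on its own
        pg_dump --table=big_table -Fc -f big_table.dump mydb

        # Or dump the whole database in custom format
        pg_dump -Fc -f full.dump mydb

        # On an 8.4 server, restore with several parallel jobs
        pg_restore --jobs=4 -d mydb full.dump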

Also, I would look into tuning your backup server to make pg_restore as fast as possible. See e.g.
http://wiki.postgresql.org/wiki/Bulk_Loading_and_Restores
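
The usual candidates there are settings like these, set only for the duration of the restore (values are illustrative;
fsync=off is only safe if you can redo the restore from scratch after a crash):

        # Temporary postgresql.conf settings for a large restore (illustrative values)
        maintenance_work_mem = 1GB     # faster index builds
        checkpoint_segments = 64       # fewer checkpoints during the load
        fsync = off                    # skip disk flushes; only while restoring
        autovacuum = off               # keep autovacuum out of the way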


Josh



