Discussion: Automating backup
I would like each database to encapsulate its own backup procedure. Each database contains a table (fsyslog) which holds a record of each backup taken:

    <rowid><operator><timestamp><destination path>

At the application level I presently:

a) check the time since the last backup
b) if older than <x> hrs:
   (i)   perform a vacuum analyze
   (ii)  force a call to dbbackup
   (iii) add a new record to fsyslog

I would like to add a procedure to the database that does the backup, and thus ensure that all applications use the same methodology. The procedure would take only the backup path as a parameter.

Presently I send the command:

    pch := pchar('pg_dump -C -h '+host+' -U '+usr+' -p '+pswd+
                 ' -f '+bckup_path+' '+dbase);

to postgres.

As the operator is obviously logged in, how do I
(1) trap their user id?
(2) send the call to pg_dump without knowing their password?

I expect this is a well-worn route and am hoping not to have to reinvent a wheel.

Thanks very much as always,

Richard
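On question (2), a minimal sketch using the client's standard ~/.pgpass mechanism (one line per connection, format host:port:database:user:password, file mode 0600). The host, database name, and password below are placeholders, and the sketch writes to /tmp/example_pgpass instead of the operator's real ~/.pgpass:

```shell
#!/bin/sh
# Keep the password off the command line: pg_dump (like all libpq
# clients) reads ~/.pgpass when no password is supplied.
HOST=localhost
DB=mydb
USR=$(whoami)            # (1): the operator's OS login name

# write an example .pgpass entry (real file would be ~/.pgpass)
printf '%s:5432:%s:%s:%s\n' "$HOST" "$DB" "$USR" "secret" > /tmp/example_pgpass
chmod 0600 /tmp/example_pgpass

# With a matching .pgpass entry, pg_dump needs no password at all.
# (Note: pg_dump's -p flag is the server *port*, not a password.)
echo pg_dump -C -h "$HOST" -U "$USR" -f /tmp/backup.sql "$DB"
```

The last line only echoes the command, to show that no password argument appears in it; a real run would drop the `echo`.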
Richard Sydney-Smith <richard@ibisau.com> writes:

> pch := pchar('pg_dump -C -h '+host+' -U '+usr+' -p '+pswd+
>              ' -f '+bckup_path+' '+dbase);
>
> to postgres.
>
> As the operator is obviously logged in, how do I
> (1) trap their user id?
> (2) send the call to pg_dump without knowing their password?
>
> I expect this is a well-worn route and am hoping not to have to
> reinvent a wheel.

I don't think it's "well-worn" at all--everyone I've ever heard of runs pg_dump from a cron script.

Why not have a shell script run by the operator that runs pg_dump and then calls psql to insert the log record (assuming the dump succeeds)? Putting the logic inside of the database doesn't seem to buy you anything AFAICS.

-Doug
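A sketch of the wrapper script Doug describes: dump first, and only on success insert the log row. The fsyslog column names (operator, stamp, destination) are assumptions based on Richard's description; `current_user` answers "trap their user id" on the database side:

```shell
#!/bin/sh
# Wrapper the operator runs by hand (or from cron): back up, then log.
DB=${1:-mydb}
BCKUP_PATH=${2:-/tmp/$DB.dump}

log_sql() {
    # SQL recording a successful backup to path $1
    printf "INSERT INTO fsyslog (operator, stamp, destination) VALUES (current_user, now(), '%s');\n" "$1"
}

# the command -v guard lets the sketch run on machines without the
# postgres client tools installed
if command -v pg_dump >/dev/null && pg_dump -C -f "$BCKUP_PATH" "$DB"; then
    log_sql "$BCKUP_PATH" | psql -d "$DB"
else
    echo "backup of $DB skipped or failed; fsyslog not updated" >&2
fi
```

Because the INSERT only runs when pg_dump exits zero, the fsyslog table never claims a backup that did not actually complete.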
Hi Doug,

Many users are haphazard in their approach until the machine fails, and then they expect to be pulled from the poo. Done it too many times. I now will get the application to enforce an additional integrity check: it must be backed up, or else! It seems futile to put all the effort into a database design that checks and ensures everything except that a backup copy exists!

Essentially I need access to a database record that gives the timestamp of the last backup. Of course pg_dump/vacuum could maintain such timestamps themselves. Is that possible? If not, then a proc in the database that calls pg_dump/vacuum and records the event is needed to give me peace of mind.

regards

Richard

Doug McNaught wrote:
> Why not have a shell script run by the operator that runs pg_dump and
> then calls psql to insert the log record (assuming the dump succeeds)?
> Putting the logic inside of the database doesn't seem to buy you
> anything AFAICS.
>
> -Doug

---------------------------(end of broadcast)---------------------------
TIP 4: Have you searched our list archives?

               http://archives.postgresql.org
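The "peace of mind" check itself can live in the application and use the fsyslog table Richard already has. A sketch, assuming a timestamp column named `stamp` and a 24-hour threshold (both assumptions, not from the original post):

```shell
#!/bin/sh
# Ask the database how old its newest backup record is.
DB=${DB:-mydb}
HOURS=${HOURS:-24}

check_sql() {
    # query is true when the newest fsyslog row is younger than $1 hours
    printf "SELECT max(stamp) > now() - interval '%s hours' FROM fsyslog;" "$1"
}

# guarded so the sketch runs even without a server present
if command -v psql >/dev/null; then
    recent=$(check_sql "$HOURS" | psql -d "$DB" -tA)
    [ "$recent" = "t" ] || echo "WARNING: no backup in the last $HOURS hours" >&2
fi
```

The application would run this at startup and refuse to proceed (the "or else!") when the warning fires.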
Thanks Doug. I think hacking the source may be the way to go. I will ask the Postgres bosses if the idea is acceptable. We are only going to store two data items somewhere: one key-timestamp for each of autovacuum and pg_dump.

Doug McNaught wrote:
> Richard Sydney-Smith <richard@ibisau.com> writes:
>
>> Hi Doug.
>> When the application runs I want it to KNOW that the user is regularly
>> backing up the data. Many users are haphazard in their approach until
>> the machine fails and then they expect to be pulled from the poo.
>> Done it too many times. I now will get the application to enforce an
>> additional integrity check. It must be backed up or else! Seems futile
>> to put all the effort into a database design that checks and ensures
>> everything except that a backup copy exists!
>
> Very good points.
>
>> Running in a cron job is great if the sysadmin is doing their job, but
>> how can I tell? I want access to a database record that gives me the
>> timestamp for the last backup.
>
> You could certainly include a standard script that performs your
> backup and then inserts into your log table, and have the application
> installer create a cron job that calls that script. The operator could
> also run it by hand if necessary.
>
>> Does postgres perhaps already have a timestamp for the last time
>> vacuum was run and the last time a backup was taken? Could
>> pg_dump/vacuum maintain such a record?
>
> Well, anything's possible if you're willing to hack the source code. :)
>
> If you're running autovacuum, you can tell it to log what it does to a
> separate logfile, so there'll be log entries when it vacuums tables.
> Autovacuum is probably the best way to go for applications like yours
> anyway (especially with 8.1, where it's built-in and started
> automatically).
>
> As for pg_dump, I'm not aware that it logs anything. If you turned on
> full query logging on the server, you'd see the queries that pg_dump
> executes, but that would give you pretty big logfiles...
>
> -Doug
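For reference, a postgresql.conf fragment matching what Doug describes. These parameter names are from the 8.1 era the thread discusses and should be checked against the running server version:

```
# built-in autovacuum (new in 8.1) and the stats it requires
stats_start_collector = on
stats_row_level = on
autovacuum = on

# "full query logging" -- shows the queries pg_dump runs,
# but produces very large logfiles
log_statement = 'all'
redirect_stderr = on          # send server log to its own file
log_directory = 'pg_log'
```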
Richard Sydney-Smith wrote:
> Essentially I need access to a database record that gives the timestamp
> for the last backup.
> Of course pgdump/vacuum could maintain such timestamps themselves. Is
> that possible? If not then a proc in the database that calls
> pgdump/vacuum and records the event is needed to give me peace of mind.

I have just implemented a backup scheme on my Linux box.

/etc/cron.daily/postgres:

    su postgres -c /var/lib/pgsql/maintain

/var/lib/pgsql/maintain:

    vacuumdb --all --full
    pg_dumpall --clean | gzip > /var/lib/pgsql/backups/pg_`date --iso`.gz
    # remove any backups older than 30 days (this is a little dangerous)
    tmpwatch 720 /var/lib/pgsql/backups

I created a directory:

    #> mkdir /var/lib/pgsql/backups

I suppose you could get the /var/lib/pgsql/maintain script to generate an SQL statement in a temporary file (e.g. /tmp/setbackupdate.sql) and then use psql to update a record in your database with a command like psql -f /tmp/setbackupdate.sql (psql's -f reads a file; -c expects a literal SQL string).

Regards
Neil.
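Neil's closing suggestion can be sketched as follows. The fsyslog column names and the database name are assumptions, and `date +%F` is used as a portable spelling of `date --iso`:

```shell
#!/bin/sh
# Write the "backup taken" statement to a temp file, then feed it to
# psql with -f (which reads a file, unlike -c which takes a SQL string).
SQLFILE=/tmp/setbackupdate.sql
DEST="/var/lib/pgsql/backups/pg_$(date +%F).gz"   # same name the maintain script produces

cat > "$SQLFILE" <<EOF
INSERT INTO fsyslog (operator, stamp, destination)
VALUES (current_user, now(), '$DEST');
EOF

# guarded so the sketch runs on machines without psql installed
{ command -v psql >/dev/null && psql -d mydb -f "$SQLFILE"; } || true
```

Appending these lines to the end of /var/lib/pgsql/maintain would make the cron job itself keep the fsyslog record current.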