Naomi Walker wrote:
>
> I'm not sure of the correct protocol for getting things on the "todo"
> list. Whom shall we beg?
>
Uh, you just ask and we discuss it on the list.
Are you using INSERTs from pg_dump? I assume so because COPY uses a
single transaction per command. Right now with pg_dump -d I see:
--
-- Data for Name: has_oids; Type: TABLE DATA; Schema: public; Owner: postgres
--
INSERT INTO has_oids VALUES (1);
INSERT INTO has_oids VALUES (1);
INSERT INTO has_oids VALUES (1);
INSERT INTO has_oids VALUES (1);
Seems that should be inside a BEGIN/COMMIT, both for performance reasons and
to get the same behavior as COPY (fail if any row fails). Comments?
As for skipping rows on error, I am unsure about that one; if we put the
INSERTs in a single transaction, we will have no way of rolling back only
the few inserts that fail.
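To illustrate, a wrapped dump might look like the sketch below (pg_dump does
not emit this itself; the SAVEPOINT name is mine). In PostgreSQL versions with
subtransactions, a SAVEPOINT per row would offer a middle ground, letting a
restore roll back only a failed row instead of the whole batch:

```sql
BEGIN;
INSERT INTO has_oids VALUES (1);
INSERT INTO has_oids VALUES (1);
-- With subtransactions, a row can be guarded individually:
SAVEPOINT row_guard;
INSERT INTO has_oids VALUES (1);
-- On error: ROLLBACK TO SAVEPOINT row_guard; then continue with the next row.
RELEASE SAVEPOINT row_guard;
COMMIT;
```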
---------------------------------------------------------------------------
> >
> >That brings up a good point. It would be extremely helpful to add two
> >parameters to pg_dump. One, to add how many rows to insert before a
> >commit, and two, to live through X number of errors before dying (and
> >putting the "bad" rows in a file).
> >
> >
> >At 10:15 AM 3/19/2004, Mark M. Huber wrote:
> > >What happened was, I guess, that pg_dump makes one large transaction, and our
> > >shell script wizard wrote a perl program to add a COMMIT/BEGIN pair
> > >every 500 rows, or whatever you set. Also, I should have said that we were
> > >doing the recovery with the INSERT statements created by pg_dump. So...
> > >my 500000-row table recovery took < 10 min.
> > >
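[Editorial note: the perl post-processor described above can be sketched in a
few lines. This is an illustrative rewrite, not the original script; the
function name and batch handling are assumptions.]

```python
def batch_inserts(lines, batch_size=500):
    """Wrap runs of INSERT statements from a pg_dump -d style dump in
    BEGIN/COMMIT pairs, committing after every `batch_size` rows."""
    out = []
    count = 0  # rows in the currently open transaction
    for line in lines:
        if line.strip().upper().startswith("INSERT"):
            if count == 0:
                out.append("BEGIN;")
            out.append(line)
            count += 1
            if count == batch_size:
                out.append("COMMIT;")
                count = 0
        else:
            # Non-INSERT line (comment, DDL): close any open transaction first.
            if count:
                out.append("COMMIT;")
                count = 0
            out.append(line)
    if count:  # close a trailing partial batch
        out.append("COMMIT;")
    return out
```

Run over a dump file line by line, this yields a restore script that commits
incrementally, so one bad row costs at most one batch rather than the whole load.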
--
Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073