Discussion: Problem with pg_dumpall


Problem with pg_dumpall

From: Ryan Bradetich
Date:
Hello all,

I am having a new problem with pg_dumpall that I have not seen before.
I've been browsing the documentation and could not find anything related
to this problem.  Any ideas or pointers would be greatly appreciated.

boi260 sanity $ /opt/pgsql/bin/pg_dumpall -v -o | /usr/contrib/bin/gzip
-c > /opt/pgsql/backup/db.pgdump.gz

[ Extra verbiage snipped ]

--  dumping out the contents of Table 'medusa'
FATAL 1:  Memory exhausted in AllocSetAlloc()
PQendcopy: resetting connection
SQL query to dump the contents of Table 'medusa' did not execute
correctly.  After we read all the table contents from the backend,
PQendcopy() failed.  Explanation from backend: 'FATAL 1:  Memory
exhausted in AllocSetAlloc()
'.
The query was: 'COPY "medusa" WITH OIDS TO stdout;
'.
pg_dump failed on procman, exiting






procman=# select version();
                              version
---------------------------------------------------------------------
 PostgreSQL 7.0.0 on hppa2.0w-hp-hpux11.00, compiled by gcc 2.95.2
(1 row)


Thanks,

- Ryan




Re: Problem with pg_dumpall

From: Tom Lane
Date:
Ryan Bradetich <ryan_bradetich@hp.com> writes:
> --  dumping out the contents of Table 'medusa'
> FATAL 1:  Memory exhausted in AllocSetAlloc()
> PQendcopy: resetting connection
> SQL query to dump the contents of Table 'medusa' did not execute
> correctly.  After we read all the table contents from the backend,
> PQendcopy() failed.  Explanation from backend: 'FATAL 1:  Memory
> exhausted in AllocSetAlloc()
> '.
> The query was: 'COPY "medusa" WITH OIDS TO stdout;

Hmm.  What is the full definition of that table?  (pg_dump -s -t medusa
would do.)
        regards, tom lane


Re: Problem with pg_dumpall

From: Ryan Bradetich
Date:
Tom Lane wrote:

> Ryan Bradetich <ryan_bradetich@hp.com> writes:
> > --  dumping out the contents of Table 'medusa'
> > FATAL 1:  Memory exhausted in AllocSetAlloc()
> > PQendcopy: resetting connection
> > SQL query to dump the contents of Table 'medusa' did not execute
> > correctly.  After we read all the table contents from the backend,
> > PQendcopy() failed.  Explanation from backend: 'FATAL 1:  Memory
> > exhausted in AllocSetAlloc()
> > '.
> > The query was: 'COPY "medusa" WITH OIDS TO stdout;
>
> Hmm.  What is the full definition of that table?  (pg_dump -s -t medusa
> would do.)
>
>                         regards, tom lane

Tom,


boi260 /data08 $ pg_dump -s -t medusa procman
\connect - postgres
CREATE TABLE "medusa" (       "host_id" int4,       "timestamp" timestamp,       "current" int4,       "catagory" text,
     "cat_desc" text,       "anomaly" text
 
);
CREATE  INDEX "medusa_catagory_key" on "medusa" using btree ( "catagory"
"text_ops" );
CREATE  INDEX "medusa_host_id_key" on "medusa" using btree ( "host_id"
"int4_ops" );
CREATE  INDEX "medusa_current_key" on "medusa" using btree ( "current"
"int4_ops" );

Ryan




Re: Problem with pg_dumpall

From: Tom Lane
Date:
Ryan Bradetich <ryan_bradetich@hp.com> writes:
> Tom Lane wrote:
>> Ryan Bradetich <ryan_bradetich@hp.com> writes:
>>>> --  dumping out the contents of Table 'medusa'
>>>> FATAL 1:  Memory exhausted in AllocSetAlloc()
>>>> PQendcopy: resetting connection
>>>> SQL query to dump the contents of Table 'medusa' did not execute
>>>> correctly.  After we read all the table contents from the backend,
>>>> PQendcopy() failed.  Explanation from backend: 'FATAL 1:  Memory
>>>> exhausted in AllocSetAlloc()
>>>> '.
>>>> The query was: 'COPY "medusa" WITH OIDS TO stdout;

Now that I look at it, it appears that COPY WITH OIDS leaks the memory
used for the string representation of the OIDs.  That'd probably cost
you 32 bytes or so of backend memory per row --- which you'd get back
at the end of the COPY, but small comfort if you ran out before that.

Is the table large enough to make that a plausible explanation?
        regards, tom lane
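
As a side note, here is a minimal, standalone C sketch (not PostgreSQL source; every
name in it is illustrative) of the pattern described above: each row's OID gets a
freshly allocated string, and unless something frees it per row, those allocations
accumulate until the statement ends.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    const unsigned long nrows = 7000000;    /* roughly the size of "medusa" */
    size_t total_bytes = 0;

    for (unsigned long oid = 1; oid <= nrows; oid++)
    {
        char buf[32];
        snprintf(buf, sizeof(buf), "%lu", oid);

        /* analogue of oidout(): a new heap string for every row */
        char *string = strdup(buf);

        /* ... here the backend would send the string to the client ... */
        total_bytes += strlen(string) + 1;

        /*
         * Freeing the string once per row (what the patch later in this
         * thread does with pfree) keeps memory use flat; omitting it lets
         * the allocations pile up for the whole COPY.
         */
        free(string);
    }

    printf("OID strings alone totalled about %zu bytes\n", total_bytes);
    return 0;
}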


Re: Problem with pg_dumpall

From: Ryan Bradetich
Date:
Tom Lane wrote:

> Ryan Bradetich <ryan_bradetich@hp.com> writes:
> > Tom Lane wrote:
> >> Ryan Bradetich <ryan_bradetich@hp.com> writes:
> >>>> --  dumping out the contents of Table 'medusa'
> >>>> FATAL 1:  Memory exhausted in AllocSetAlloc()
> >>>> PQendcopy: resetting connection
> >>>> SQL query to dump the contents of Table 'medusa' did not execute
> >>>> correctly.  After we read all the table contents from the backend,
> >>>> PQendcopy() failed.  Explanation from backend: 'FATAL 1:  Memory
> >>>> exhausted in AllocSetAlloc()
> >>>> '.
> >>>> The query was: 'COPY "medusa" WITH OIDS TO stdout;
>
> Now that I look at it, it appears that COPY WITH OIDS leaks the memory
> used for the string representation of the OIDs.  That'd probably cost
> you 32 bytes or so of backend memory per row --- which you'd get back
> at the end of the COPY, but small comfort if you ran out before that.
>
> Is the table large enough to make that a plausible explanation?
>
>                         regards, tom lane

Tom,

This table is very large so that could be the problem.

Here are the startup parameters I am using (in case it matters):

    -B 1024
    -S
    -o -F
    -o -o /home/postgres/nohup.out
    -i
    -p 5432
    -D/data08

nohup su - postgres -c "/opt/pgsql/bin/postmaster -B 1024 -S -o \"-F\" -o \"-o /home/postgres/nohup.out\" -i -p 5432 -D/data08"

procman=# select count(*) from medusa;
  count
---------
 6986499
(1 row)
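
A quick sanity check on the numbers (an aside, using Tom's rough estimate of
~32 bytes of backend memory per row, which is an approximation rather than a
measured figure):

    6,986,499 rows x ~32 bytes/row = roughly 224,000,000 bytes (a bit over 200 MB)

accumulating over the life of a single COPY, which makes running into
"Memory exhausted in AllocSetAlloc()" before the dump of this one table
completes entirely plausible.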



FYI:

That was the problem.  Good job spotting that, Tom.  I just successfully
completed a backup without using the -o option to pg_dumpall.

Thanks again for the help!

- Ryan


--
Ryan Bradetich
AIT Operations
Unix Platform Team





Re: Problem with pg_dumpall

From: Tom Lane
Date:
Ryan Bradetich <ryan_bradetich@hp.com> writes:
> That was the problem.  Good job spotting that, Tom.  I just successfully
> completed a backup without using the -o option to pg_dumpall.

OK, if you need it with -o try the following patch against 7.0.2.
        regards, tom lane


*** src/backend/commands/copy.c.orig    Wed Jun 28 02:07:58 2000
--- src/backend/commands/copy.c    Wed Jun 28 02:13:01 2000
***************
*** 484,491 ****
  
          if (oids && !binary)
          {
!             CopySendString(oidout(tuple->t_data->t_oid), fp);
              CopySendChar(delim[0], fp);
          }
  
          for (i = 0; i < attr_count; i++)
--- 484,493 ----
  
          if (oids && !binary)
          {
!             string = oidout(tuple->t_data->t_oid);
!             CopySendString(string, fp);
              CopySendChar(delim[0], fp);
+             pfree(string);
          }
  
          for (i = 0; i < attr_count; i++)
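
For what it's worth, the reason this small change is enough: oidout() hands back
a freshly palloc'd string for every row, and, as Tom notes above, that memory
would otherwise only be reclaimed when the COPY finishes.  pfree()ing the string
as soon as it has been sent keeps the backend's per-row footprint constant
regardless of how many rows the table holds.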