Re: pg_dump's over 2GB

From Ross J. Reedstrom
Subject Re: pg_dump's over 2GB
Date Fri, 29 Sep 2000
Msg-id 20000929115711.B5635@rice.edu
In reply to Re: pg_dump's over 2GB  (Jeff Hoffmann <jeff@propertykey.com>)
List pgsql-general
On Fri, Sep 29, 2000 at 11:41:51AM -0500, Jeff Hoffmann wrote:
> Bryan White wrote:
> >
> > I am thinking that
> > instead I will need to pipe pg_dump's output into gzip, thus avoiding the
> > creation of a file of that size.
>
> Sure, I do it all the time.  Unfortunately, I've had it happen a few
> times where even gzipping a database dump goes over 2GB, which is a real
> PITA since I have to dump some tables individually.  Generally, I do
> something like
>     pg_dump database | gzip > database.pgz

Hmm, how about:

pg_dump database | gzip | split -b 1024m - database_

That will give you 1GB files named database_aa, database_ab, etc.
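
If you want that scripted with a quick sanity check, here's a minimal
sketch (assumes GNU split and a Bourne-compatible shell; the script name
and the "dump looks OK" message are my own, just for illustration):

#!/bin/sh
# dump_split.sh - dump a database, compress it, and split the output
# into 1GB chunks so no single file hits the 2GB filesystem limit.
# Usage: dump_split.sh mydb
DB="$1"
pg_dump "$DB" | gzip | split -b 1024m - "${DB}_"
# gzip -t decompresses the reassembled stream and checks its
# integrity without writing anything to disk.
cat "${DB}_"* | gzip -t && echo "dump looks OK"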

> to dump the database and
>     gzip -dc database.pgz | psql database

cat database_* | gunzip | psql database
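
One thing to watch (my caveat, not from the thread): the shell expands
database_* in lexical order, which matches the order split wrote the
chunks (aa, ab, ac, ...), so the pieces reassemble correctly as long as
no stray files match the pattern. You can also test the archive before
touching the database:

cat database_* | gunzip -t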

Ross Reedstrom
--
Open source code is like a natural resource, it's the result of providing
food and sunshine to programmers, and then staying out of their way.
[...] [It] is not going away because it has utility for both the developers
and users independent of economic motivations.  Jim Flynn, Sunnyvale, Calif.
