Thread: pg_dump large-file support > 16GB


pg_dump large-file support > 16GB

From:
Rafael Martinez Guerrero
Date:
Hello

We are having problems with pg_dump.

We are trying to dump a 30GB+ database using pg_dump with the --file
option. At first everything works fine: pg_dump runs and we get a dump
file. But when this file reaches 16GB it disappears from the filesystem,
and pg_dump keeps working without reporting an error until it finishes
(even though the file no longer exists). The filesystem has free space.

I can generate files bigger than 16GB with other programs without
problems.

Some information:
---------------------------
OS: Red Hat Enterprise Linux WS release 3 (Taroon Update 4)
Kernel: 2.4.21-27.0.2.ELsmp #1 SMP i686
PG: 7.4.7

LVM version 1.0.8-2(26/05/2004)
EXT3 FS 2.4-0.9.19, 19 August 2002 on lvm(58,6), internal journal
EXT3-fs: mounted filesystem with ordered data mode.
----------------------------

Any ideas? It looks as if pg_dump has a 16GB limit. How can we solve
this?

--
Rafael Martinez, <r.m.guerrero@usit.uio.no>
Center for Information Technology Services
University of Oslo, Norway

PGP Public Key: http://folk.uio.no/rafael/


Re: pg_dump large-file support > 16GB

From:
Michael Kleiser
Date:
I found at http://www.madeasy.de/7/ext2.htm (in German) that ext2
cannot have files bigger than 16GB if the block size is 1k.
Ext3 is ext2 with journaling.
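
The block size can be checked with tune2fs, for example (the device
path below is only a placeholder for your LVM volume):

$ tune2fs -l /dev/VolGroup00/LogVol00 | grep "Block size"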

Rafael Martinez Guerrero wrote:

>Hello
>
>We are having problems with pg_dump.
>
>We are trying to dump a 30GB+ database using pg_dump with the --file
>option. At first everything works fine: pg_dump runs and we get a dump
>file. But when this file reaches 16GB it disappears from the filesystem,
>and pg_dump keeps working without reporting an error until it finishes
>(even though the file no longer exists). The filesystem has free space.
>
>I can generate files bigger than 16GB with other programs without
>problems.
>
>Some information:
>---------------------------
>OS: Red Hat Enterprise Linux WS release 3 (Taroon Update 4)
>Kernel: 2.4.21-27.0.2.ELsmp #1 SMP i686
>PG: 7.4.7
>
>LVM version 1.0.8-2(26/05/2004)
>EXT3 FS 2.4-0.9.19, 19 August 2002 on lvm(58,6), internal journal
>EXT3-fs: mounted filesystem with ordered data mode.
>----------------------------
>
>Any ideas? It looks as if pg_dump has a 16GB limit. How can we solve
>this?
>
>
>


Re: pg_dump large-file support > 16GB

From:
Lonni J Friedman
Date:
On Thu, 17 Mar 2005 14:05:35 +0100, Rafael Martinez Guerrero
<r.m.guerrero@usit.uio.no> wrote:
> Hello
>
> We are having problems with pg_dump.
>
> We are trying to dump a 30GB+ database using pg_dump with the --file
> option. At first everything works fine: pg_dump runs and we get a dump
> file. But when this file reaches 16GB it disappears from the filesystem,
> and pg_dump keeps working without reporting an error until it finishes
> (even though the file no longer exists). The filesystem has free space.
>
> I can generate files bigger than 16GB with other programs without
> problems.
>
> Some information:
> ---------------------------
> OS: Red Hat Enterprise Linux WS release 3 (Taroon Update 4)
> Kernel: 2.4.21-27.0.2.ELsmp #1 SMP i686
> PG: 7.4.7
>
> LVM version 1.0.8-2(26/05/2004)
> EXT3 FS 2.4-0.9.19, 19 August 2002 on lvm(58,6), internal journal
> EXT3-fs: mounted filesystem with ordered data mode.
> ----------------------------
>
> Any ideas? It looks as if pg_dump has a 16GB limit. How can we solve
> this?
>

Have you tried piping the dump through split so that each file's size
is limited?
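
Something along these lines might work (an untested sketch; "mydb", the
1GB chunk size, and the file name prefix are only placeholders):

$ pg_dump mydb | split -b 1024m - mydb.dump.

and later, to restore:

$ cat mydb.dump.* | psql mydb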


--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
L. Friedman                                    netllama@gmail.com
LlamaLand                       http://netllama.linux-sxs.org

Re: pg_dump large-file support > 16GB

From:
Rafael Martinez Guerrero
Date:
On Thu, 2005-03-17 at 15:05, Michael Kleiser wrote:
> I found at http://www.madeasy.de/7/ext2.htm (in German) that ext2
> cannot have files bigger than 16GB if the block size is 1k.
> Ext3 is ext2 with journaling.
>
[............]

We use a 4k block size. And as I said, we can generate files bigger
than 16GB with other programs on the same filesystem.

From tune2fs:
---------------------------------------------------------------
tune2fs 1.32 (09-Nov-2002)
Filesystem volume name:   <none>
Last mounted on:          <not available>
Filesystem UUID:          80ecbce4-dcef-4668-ae84-887de850ed57
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal filetype needs_recovery
sparse_super large_file
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              5128192
Block count:              10240000
Reserved block count:     102400
Free blocks:              10070728
Free inodes:              5128097
First block:              0
Block size:               4096
Fragment size:            4096
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         16384
Inode blocks per group:   512
---------------------------------------------------------------

Regards.
--
Rafael Martinez, <r.m.guerrero@usit.uio.no>
Center for Information Technology Services
University of Oslo, Norway

PGP Public Key: http://folk.uio.no/rafael/


Re: pg_dump large-file support > 16GB

From:
Rafael Martinez Guerrero
Date:
On Thu, 2005-03-17 at 15:09, Lonni J Friedman wrote:
> On Thu, 17 Mar 2005 14:05:35 +0100, Rafael Martinez Guerrero
> <r.m.guerrero@usit.uio.no> wrote:
> > Hello
> >
> > We are having problems with pg_dump.
> >
> > We are trying to dump a 30GB+ database using pg_dump with the --file
> > option. At first everything works fine: pg_dump runs and we get a dump
> > file. But when this file reaches 16GB it disappears from the filesystem,
> > and pg_dump keeps working without reporting an error until it finishes
> > (even though the file no longer exists). The filesystem has free space.
> >
> > I can generate files bigger than 16GB with other programs without
> > problems.
> >
> > Some information:
> > ---------------------------
> > OS: Red Hat Enterprise Linux WS release 3 (Taroon Update 4)
> > Kernel: 2.4.21-27.0.2.ELsmp #1 SMP i686
> > PG: 7.4.7
> >
> > LVM version 1.0.8-2(26/05/2004)
> > EXT3 FS 2.4-0.9.19, 19 August 2002 on lvm(58,6), internal journal
> > EXT3-fs: mounted filesystem with ordered data mode.
> > ----------------------------
> >
> > Any ideas? It looks as if pg_dump has a 16GB limit. How can we solve
> > this?
> >
>
> Have you tried piping the dump through split so that each file's size
> is limited?

It is a possibility. But when you have several hundred databases, the
number of backup files you have to take care of grows exponentially ;).

We already have several well-tested backup/maintenance/restore scripts
that we would like to continue using.

My question is: why is this 16GB limit there when my OS does not have
it? Is there an easy way to remove it? pg_dump does seem to be compiled
with large-file support, because it can work with files bigger than 4GB.
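
One rough check we could try (the binary path below is only an example)
is to look for 64-bit I/O symbols in pg_dump's dynamic symbol table; a
32-bit binary built with large-file support usually pulls in the *64
variants of the libc calls:

$ nm -D /usr/bin/pg_dump | grep -E 'open64|fseeko64'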

More ideas? :)

Regards.
--
Rafael Martinez, <r.m.guerrero@usit.uio.no>
Center for Information Technology Services
University of Oslo, Norway

PGP Public Key: http://folk.uio.no/rafael/


Re: pg_dump large-file support > 16GB

From:
Marco Colombo
Date:
On Thu, 17 Mar 2005, Rafael Martinez Guerrero wrote:

> My question is: why is this 16GB limit there when my OS does not have
> it? Is there an easy way to remove it? pg_dump does seem to be compiled
> with large-file support, because it can work with files bigger than 4GB.
>
> More ideas? :)

Things to try:

a) shell redirection:
$ pg_dump ... > outfile

b) some pipes:
$ pg_dump ... | cat > outfile
$ pg_dump ... | dd of=outfile

a) may fail if there's a problem with pg_dump and large files.
b) is different in that it's the right-hand side of the pipe that
writes to the filesystem.

.TM.
--
       ____/  ____/   /
      /      /       /            Marco Colombo
     ___/  ___  /   /              Technical Manager
    /          /   /             ESI s.r.l.
  _____/ _____/  _/               Colombo@ESI.it

Re: pg_dump large-file support > 16GB

From:
Tom Lane
Date:
Rafael Martinez Guerrero <r.m.guerrero@usit.uio.no> writes:
> We are trying to dump a 30GB+ database using pg_dump with the --file
> option. At first everything works fine: pg_dump runs and we get a dump
> file. But when this file reaches 16GB it disappears from the filesystem,
> and pg_dump keeps working without reporting an error until it finishes
> (even though the file no longer exists). The filesystem has free space.

Is that a plain text, tar, or custom dump (-Ft or -Fc)?  Is the behavior
different if you just write to stdout instead of using --file?

            regards, tom lane

Re: pg_dump large-file support > 16GB

From:
Aly Dharshi
Date:
Would it help to use a different filesystem like SGI's XFS? Would it
even be possible to implement that at your site at this stage?

Tom Lane wrote:
> Rafael Martinez Guerrero <r.m.guerrero@usit.uio.no> writes:
>
>>We are trying to dump a 30GB+ database using pg_dump with the --file
>>option. At first everything works fine: pg_dump runs and we get a dump
>>file. But when this file reaches 16GB it disappears from the filesystem,
>>and pg_dump keeps working without reporting an error until it finishes
>>(even though the file no longer exists). The filesystem has free space.
>
>
> Is that a plain text, tar, or custom dump (-Ft or -Fc)?  Is the behavior
> different if you just write to stdout instead of using --file?
>
>             regards, tom lane
>

--
Aly Dharshi
aly.dharshi@telus.net

     "A good speech is like a good dress
      that's short enough to be interesting
      and long enough to cover the subject"

Re: pg_dump large-file support > 16GB

From:
Rafael Martinez
Date:
On Thu, 2005-03-17 at 10:17 -0500, Tom Lane wrote:
> Rafael Martinez Guerrero <r.m.guerrero@usit.uio.no> writes:
> > We are trying to dump a 30GB+ database using pg_dump with the --file
> > option. At first everything works fine: pg_dump runs and we get a dump
> > file. But when this file reaches 16GB it disappears from the filesystem,
> > and pg_dump keeps working without reporting an error until it finishes
> > (even though the file no longer exists). The filesystem has free space.
>
> Is that a plain text, tar, or custom dump (-Ft or -Fc)?  Is the behavior
> different if you just write to stdout instead of using --file?
>
>             regards, tom lane

- In this example, it is plain text (--format=p).
- If I write to stdout and redirect to a file, the dump finishes without
problems and I get a text dump file over 16GB.


--
Rafael Martinez, <r.m.guerrero@usit.uio.no>
Center for Information Technology Services
University of Oslo, Norway

PGP Public Key: http://folk.uio.no/rafael/


Re: pg_dump large-file support > 16GB

From:
Rafael Martinez
Date:
On Thu, 2005-03-17 at 10:41 -0700, Aly Dharshi wrote:

Hello

> Would it help to use a different filesystem like SGI's XFS?

I do not see the connection between this problem and using another
filesystem. If the filesystem we are using had a problem, I think we
would see it with every program on the system.

> Would it even be possible to implement that at your site at this
> stage?
>

We cannot do this if we want support from our "operating system
department"; they do not support XFS at present.

--
Rafael Martinez, <r.m.guerrero@usit.uio.no>
Center for Information Technology Services
University of Oslo, Norway

PGP Public Key: http://folk.uio.no/rafael/


Re: pg_dump large-file support > 16GB

From:
Tom Lane
Date:
Rafael Martinez <r.m.guerrero@usit.uio.no> writes:
> On Thu, 2005-03-17 at 10:17 -0500, Tom Lane wrote:
>> Is that a plain text, tar, or custom dump (-Ft or -Fc)?  Is the behavior
>> different if you just write to stdout instead of using --file?

> - In this example, it is plain text (--format=p).
> - If I write to stdout and redirect to a file, the dump finishes without
> problems and I get a text dump file over 16GB.

In that case, you have a glibc or filesystem bug and you should be
reporting it to Red Hat.  The *only* difference between writing to
stdout and writing to a --file option is that in one case we use
the preopened "stdout" FILE* and in the other case we do
fopen(filename, "w").  Your report therefore is stating that there
is something broken about fopen'd files.

            regards, tom lane

Re: pg_dump large-file support > 16GB

From:
Rafael Martinez
Date:
On Fri, 2005-03-18 at 09:58 -0500, Tom Lane wrote:
> Rafael Martinez <r.m.guerrero@usit.uio.no> writes:
> > On Thu, 2005-03-17 at 10:17 -0500, Tom Lane wrote:
> >> Is that a plain text, tar, or custom dump (-Ft or -Fc)?  Is the behavior
> >> different if you just write to stdout instead of using --file?
>
> > - In this example, it is plain text (--format=p).
> > - If I write to stdout and redirect to a file, the dump finishes without
> > problems and I get a text dump file over 16GB.
>
> In that case, you have a glibc or filesystem bug and you should be
> reporting it to Red Hat.  The *only* difference between writing to
> stdout and writing to a --file option is that in one case we use
> the preopened "stdout" FILE* and in the other case we do
> fopen(filename, "w").  Your report therefore is stating that there
> is something broken about fopen'd files.
>

Thanks for the information. I will contact RH.

--
Rafael Martinez, <r.m.guerrero@usit.uio.no>
Center for Information Technology Services
University of Oslo, Norway

PGP Public Key: http://folk.uio.no/rafael/


Re: pg_dump large-file support > 16GB

From:
Tom Lane
Date:
Rafael Martinez Guerrero <r.m.guerrero@usit.uio.no> writes:
> We are trying to dump a 30GB+ database using pg_dump with the --file
> option. At first everything works fine: pg_dump runs and we get a dump
> file. But when this file reaches 16GB it disappears from the
> filesystem,

FWIW, I tried and failed to duplicate this problem on a Fedora Core 3
machine using an ext3 filesystem.  I set up a dummy database that would
produce an approximately 18GB text dump and did
    pg_dump big --file spare/big.dump
Seemed to work fine.

            regards, tom lane

Re: pg_dump large-file support > 16GB

From:
Rafael Martinez Guerrero
Date:
On Fri, 2005-03-18 at 15:58, Tom Lane wrote:
> Rafael Martinez <r.m.guerrero@usit.uio.no> writes:
> > On Thu, 2005-03-17 at 10:17 -0500, Tom Lane wrote:
> >> Is that a plain text, tar, or custom dump (-Ft or -Fc)?  Is the behavior
> >> different if you just write to stdout instead of using --file?
>
> > - In this example, it is plain text (--format=p).
> > - If I write to stdout and redirect to a file, the dump finishes without
> > problems and I get a text dump file over 16GB.
>
> In that case, you have a glibc or filesystem bug and you should be
> reporting it to Red Hat.  The *only* difference between writing to
> stdout and writing to a --file option is that in one case we use
> the preopened "stdout" FILE* and in the other case we do
> fopen(filename, "w").  Your report therefore is stating that there
> is something broken about fopen'd files.
>

Hello again

I have been testing a little more before I open a bug report with RH. I
have a simple test program that exercises 'fopen' on the same
filesystem where I am having problems. I cannot reproduce the problem,
and the files this program produces can grow bigger than 16GB without
trouble.

Do you use any special options when compiling pg_dump, or in the
program itself, that could influence how it behaves and help me
reproduce the problem?

PS: Be careful with this program; it won't stop on its own and will
consume all the free space in your filesystem ;)

----------------------------------------------------------------
-bash-2.05b$ cat test_fopen.c

#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv){

  FILE *fp;
  char *filename = argv[1];

  char output[1024];
  int counter = 0;

  /* Open the target file with fopen(), the same call pg_dump uses for --file */
  if ((fp = fopen(filename, "w")) == NULL){
    printf("fopen error\n");
    return 1;
  }

  /* Keep appending lines forever; the file grows until the filesystem is full */
  while (1){

    sprintf(output, "*** Testing the fopen function in a RHEL server - Counter: %d ***\n", counter);

    if (fputs(output, fp) == EOF){
      printf("fputs error\n");
    }

    counter++;
  }

  fclose(fp);
  return 0;
}

-bash-2.05b$ gcc -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 test_fopen.c -o test_fopen
------------------------------------------------------------------

Thanks :)
--
Rafael Martinez, <r.m.guerrero@usit.uio.no>
Center for Information Technology Services
University of Oslo, Norway

PGP Public Key: http://folk.uio.no/rafael/


Re: pg_dump large-file support > 16GB

From:
Tom Lane
Date:
Rafael Martinez Guerrero <r.m.guerrero@usit.uio.no> writes:
> Do you use any special options when compiling pg_dump, or in the
> program itself, that could influence how it behaves and help me
> reproduce the problem?

In a Linux system we'll add "-D_GNU_SOURCE" to the compile command line.
Also, pg_config.h sets some #define's that might affect things,
particularly "#define _FILE_OFFSET_BITS 64".  I see you did both of
those in your test, but you might want to review pg_config.h to see if
anything else looks promising.

Another line of thought is that there is something broken about the
particular build of Postgres that you are using (eg it was affected by a
compiler bug).  You might try building from source, or grabbing the src
RPM and rebuilding from that, and confirming the bug is still there ---
and if so, back off the CFLAGS to minimal optimization and see if it
changes.
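
For example, roughly (the paths and options below are only an example):

$ tar xzf postgresql-7.4.7.tar.gz
$ cd postgresql-7.4.7
$ CFLAGS="-O0 -g" ./configure --prefix=$HOME/pgtest
$ grep FILE_OFFSET_BITS src/include/pg_config.h    # should show 64
$ make && make install
$ $HOME/pgtest/bin/pg_dump big --file /some/ext3/path/big.dump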

            regards, tom lane