On Thu, Jan 10, 2002 at 01:10:35PM -0800, Jeff wrote:
> handle files larger than 2GB. I then dumped the database again and
> noticed the same situation. The dump files truncate at the 2GB limit.
We just had the same happen recently.
> I suppose I need to recompile Postgres on the system now that it
> accepts large files.
Yes.
> Is there any library that I need to point to manually or some
> option that I need to pass in the configuration? How do I ensure
> Postgres can handle large files (>2GB)
Yes. It turns out that gcc (and maybe other C compilers; I don't
know) doesn't enable 64-bit file offsets by default. You need to add
a CFLAGS setting; the necessary flags can be found with
CFLAGS="`getconf LFS_CFLAGS`"
(I stole that from the Python guys:
<http://www.python.org/doc/current/lib/posix-large-files.html>).
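To make that concrete, a minimal rebuild sketch (the configure options
and prefix are examples, not from the original post; adjust for your
source tree):

```shell
# Capture the platform's large-file compile flags. On Solaris this
# typically expands to something like:
#   -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
# (on 64-bit Linux it may be empty, since off_t is already 64 bits).
CFLAGS="`getconf LFS_CFLAGS`"
export CFLAGS

# Then rebuild PostgreSQL from its source tree, e.g.:
#   ./configure --prefix=/usr/local/pgsql
#   make && make install
echo "CFLAGS=$CFLAGS"
```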
Note that this will _not_ produce a 64-bit binary; the flags only
widen the file offsets, so checking the result with "file" will still
report a 32-bit binary.
Everything I've read about the subject suggests that gcc-compiled
64-bit binaries on Solaris are sort of flakey, so I've not tried it.
Hope this is helpful.
A
--
----
Andrew Sullivan 87 Mowat Avenue
Liberty RMS Toronto, Ontario Canada
<andrew@libertyrms.info> M6K 3E3
+1 416 646 3304 x110