Tom Lane wrote:
> Jan Wieck <janwieck@yahoo.com> writes:
> >>> I suppose I need to recompile Postgres on the system now that it
> >>> accepts large files.
> >>
> >> Yes.
>
> > No. PostgreSQL is totally fine with that limit; it will just
> > segment huge tables into separate files of 1GB max each.
>
> The backend is fine with it, but "pg_dump >outfile" will choke when
> it gets past 2GB of output (at least, that is true on Solaris).
>
> I imagine "pg_dump | split" would do as a workaround, but don't have
> a Solaris box handy to verify.
>
> I can envision a 32-bit-compatible stdio implementation that doesn't
> choke on large files unless you actually try to ftell or fseek beyond
> the 2GB boundary. Solaris's implementation, however, evidently fails
> hard at that boundary.
Meaning what? That even if he recompiled PostgreSQL with large file
support, "pg_dump >outfile" would still choke ... duh!
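
For the archives, here is a minimal sketch of the "pg_dump | split"
workaround Tom mentions. The database name, chunk size, and file name
prefix are just illustrative; pick any chunk size safely under the 2GB
stdio limit:

    # dump in 1GB pieces so no single output file crosses 2GB
    pg_dump mydb | split -b 1000m - mydb.dump.

    # restore by concatenating the pieces back into psql
    # (split's default aa, ab, ... suffixes sort correctly under the glob)
    cat mydb.dump.* | psql mydb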
Jan
--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#================================================== JanWieck@Yahoo.com #