Andrew Dunstan wrote:
> Bruce Momjian wrote:
>
> >
> >As was stated before, the use-case for this is by people we don't
> >normally have contact with.
>
> I do think we need a use case for what we do.
>
> The main use case seems to me to be where you are exporting a whole
> database or most of it with a very large number of tables, and it is
> convenient to have all the CSVs created for you rather than have to make
> them manually one at a time. You could get these out of, say, a tar
> format dump very easily.
>
> I said near the beginning of this that a pgfoundry project to create a
> tool for this might be an alternative way to go. If that's the consensus
> then OK. I just bristle a bit at the suggestion that we might not get
> back what we started with from a CSV dump, because we can, AFAIK.

For me, the use case would be: what format do I want a dump in if it is
for long-term storage? Do I want it in a PostgreSQL-native format, or
in a more universal format that can be loaded into PostgreSQL tomorrow,
and perhaps loaded into some other database, with modification, ten
years from now?

I just had that issue on my home system for file system backups, going
from cpio to ustar (POSIX.1-1988 / IEEE Std 1003.1-1988), but it seems
that POSIX.1-2001 would be best if my operating system supported it.
(Perhaps cpio was better?)

Anyway, I never thought there would be a large demand for COPY CSV, but
obviously there is, so I have concluded that other people's environments
and skills are different enough from my own that I am willing to accept
that there is a use case even when I don't understand it. I will let the
people who work in those environments make that decision.
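
For anyone following along, the round trip being discussed looks roughly
like this (the table and file names here are only illustrative):

```sql
-- Export a table to CSV with a header row (path and table are hypothetical)
COPY mytable TO '/tmp/mytable.csv' WITH CSV HEADER;

-- Later, reload it into PostgreSQL -- or, with modification,
-- into some other database that understands CSV
COPY mytable FROM '/tmp/mytable.csv' WITH CSV HEADER;
```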
--
  Bruce Momjian                         http://candle.pha.pa.us
  EnterpriseDB                          http://www.enterprisedb.com
+ If your life is a hard drive, Christ can be your backup. +