And how long is normal for VACUUM ANALYZE to take on about 350 MB of
data?
Jason
---
Jason Slagle
Network Administrator - Toledo Internet Access - Toledo Ohio
- raistlin@tacorp.net - jslagle@toledolink.com - WHOIS JS10172
On Wed, 10 Mar 1999, Tom Lane wrote:
> "Brett W. McCoy" <bmccoy@lan2wan.com> writes:
> >> I have 1,400,000 entries (200MB) I'm inserting into a database. Radius
> >> detail files as a matter of fact. Apart from COPY taking forever to load
> >> that (probably due to my several indexes), it seems the select is VERY
> >> slow. Any tips?
>
> > I found that if you create an index before doing a bulk COPY, yes, it does
> > take forever to load, and the select is slow. What I did was drop the
> > indices before the COPY and rebuild them afterward. That sped the selects
> > up significantly. So now I don't build any indices until after I load my
> > huge databases.
>
> Not building the indexes until you've done the bulk load is good advice;
> it does seem a lot faster to build an index on an already-loaded table
> than to construct it piecemeal during the COPY. However, either way
> should result in the same index, so I don't see why it'd affect the
> speed of a subsequent SELECT. Did you remember to do VACUUM ANALYZE
> both times? The system is likely to ignore the index until you have
> vacuumed the table.
>
> In short, best bulk load procedure is
>
> CREATE TABLE ...
> COPY ...
> CREATE INDEX(es) on table
> Repeat as needed for all tables being bulk-loaded
> VACUUM ANALYZE
>
> BTW, if you use pg_dump to dump and reload a big database, pg_dump
> knows about the create-indexes-last trick. But it doesn't do a VACUUM
> for you; you have to do that by hand after running the reload script,
> or your database will be slow.
>
> regards, tom lane
>
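For anyone following along, Tom's bulk-load procedure can be sketched in SQL roughly like this. The table and index names here are made up for illustration (a RADIUS detail table, since that's what started the thread); adapt them to your own schema:

```sql
-- 1. Create the table with no indexes yet.
CREATE TABLE radius_detail (
    username   text,
    stop_time  timestamp,
    octets_in  integer,
    octets_out integer
);

-- 2. Bulk-load the data; COPY is much faster without indexes to maintain.
COPY radius_detail FROM '/tmp/detail.dat';

-- 3. Build the indexes after the data is loaded.
CREATE INDEX radius_detail_user_idx ON radius_detail (username);
CREATE INDEX radius_detail_time_idx ON radius_detail (stop_time);

-- 4. Repeat steps 1-3 for any other bulk-loaded tables, then vacuum,
--    or the planner is likely to ignore the new indexes.
VACUUM ANALYZE;
```

The same VACUUM ANALYZE step applies after reloading a pg_dump script, since (as Tom notes) pg_dump creates indexes last for you but does not vacuum.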