Discussion: Multiple indexes, huge table

Multiple indexes, huge table

From
Aram Fingal
Date:
I have a table which currently has about 500 million rows.  For the most part, the situation is going to be that I will
import a few hundred million more rows from text files once every few months but otherwise there won't be any insert,
update or delete queries.  I have created five indexes, some of them multi-column, which make a tremendous difference in
performance for the statistical queries which I need to run frequently (seconds versus hours.)  When adding data to the
table, however, I have found that it is much faster to drop all the indexes, copy the data to the table and then create
the indexes again (hours versus days.)  So, my question is whether this is really the best way.  Should I write a script
which drops all the indexes, copies the data and then recreates the indexes or is there a better way to do this?
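
For concreteness, the kind of script I have in mind would be roughly the following (the table, index and file names
here are just placeholders for my real ones):

  -- drop every index on the big table before loading
  DROP INDEX IF EXISTS measurements_sample_gene_idx;
  DROP INDEX IF EXISTS measurements_gene_idx;
  -- ... and the other three ...

  -- bulk-load the new rows
  COPY measurements FROM '/data/new_batch.txt';

  -- rebuild the indexes afterwards
  CREATE INDEX measurements_sample_gene_idx ON measurements (sample_name, gene);
  CREATE INDEX measurements_gene_idx ON measurements (gene);
  -- ... and the other three ...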

There are also rare cases where I might want to make a correction.  For example, one of the columns is sample name
which is a foreign key to a samples table defined with " ON UPDATE CASCADE."  I decided to change a sample name in the
samples table which should affect about 20 million rows out of the previously mentioned 500 million.  That query has now
been running for five days and isn't finished yet.

-Aram

Re: Multiple indexes, huge table

From
Tom Lane
Date:
Aram Fingal <fingal@multifactorial.com> writes:
> I have a table which currently has about 500 million rows.  For the most part, the situation is going to be that I
> will import a few hundred million more rows from text files once every few months but otherwise there won't be any
> insert, update or delete queries.  I have created five indexes, some of them multi-column, which make a tremendous
> difference in performance for the statistical queries which I need to run frequently (seconds versus hours.)  When
> adding data to the table, however, I have found that it is much faster to drop all the indexes, copy the data to the
> table and then create the indexes again (hours versus days.)  So, my question is whether this is really the best way.
> Should I write a script which drops all the indexes, copies the data and then recreates the indexes or is there a
> better way to do this?

Yes, that's actually recommended practice for such cases.

> There are also rare cases where I might want to make a correction.  For example, one of the columns is sample name
> which is a foreign key to a samples table defined with " ON UPDATE CASCADE."  I decided to change a sample name in the
> samples table which should affect about 20 million rows out of the previously mentioned 500 million.  That query has
> now been running for five days and isn't finished yet.

That sounds like you lack an index on the referencing column of the
foreign key constraint.  Postgres doesn't require you to keep such
an index, but it's a really good idea if you ever update the referenced
column.
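
With made-up names, that's nothing more than something like:

  -- index on the referencing (foreign key) column of the big table
  CREATE INDEX measurements_sample_name_idx ON measurements (sample_name);

CREATE INDEX CONCURRENTLY will build it without locking out other sessions, at the cost of taking longer.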

            regards, tom lane


Re: Multiple indexes, huge table

From
Aram Fingal
Date:

On Sep 6, 2012, at 5:54 PM, Tom Lane wrote:

> That sounds like you lack an index on the referencing column of the
> foreign key constraint.  Postgres doesn't require you to keep such
> an index, but it's a really good idea if you ever update the referenced
> column.


Thanks.  You're right.  That column (which is a foreign key) is a component of a multi-column index but I don't have an index just for it.

-Aram

Re: Multiple indexes, huge table

From
Jeff Janes
Date:
>
>> There are also rare cases where I might want to make a correction.  For example, one of the columns is sample name
>> which is a foreign key to a samples table defined with " ON UPDATE CASCADE."  I decided to change a sample name in the
>> samples table which should affect about 20 million rows out of the previously mentioned 500 million.  That query has
>> now been running for five days and isn't finished yet.
>
> That sounds like you lack an index on the referencing column of the
> foreign key constraint.  Postgres doesn't require you to keep such
> an index, but it's a really good idea if you ever update the referenced
> column.

For updating 20 million out of 500 million rows, wouldn't a full table
scan generally be preferable to an index scan anyway?

But, if he doesn't drop those other indexes during this process, the
maintenance on them is going to kill his performance anyway, just like
it does for bulk loading.  If you figure 20,000,000 * (1 table + 5
index) / 15,000 rpm, it comes out to around 5 days.
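
Spelling that rough figure out (the assumption being one random page touch per disk revolution):

  -- 20,000,000 rows * (1 heap + 5 indexes) = 120,000,000 random page touches
  -- a 15,000 rpm drive manages roughly 250 random I/Os per second (one per revolution)
  SELECT 20000000 * 6 / 250.0 / 86400 AS days;   -- works out to about 5.6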

Cheers,

Jeff


Re: Multiple indexes, huge table

From
Alan Hodgson
Date:
On Thursday, September 06, 2012 05:06:27 PM Jeff Janes wrote:
> For updating 20 million out of 500 million rows, wouldn't a full table
> scan generally be preferable to an index scan anyway?
>

Not one table scan for each row updated ...



Re: Multiple indexes, huge table

From
Tom Lane
Date:
Jeff Janes <jeff.janes@gmail.com> writes:
>> That sounds like you lack an index on the referencing column of the
>> foreign key constraint.  Postgres doesn't require you to keep such
>> an index, but it's a really good idea if you ever update the referenced
>> column.

> For updating 20 million out of 500 million rows, wouldn't a full table
> scan generally be preferable to an index scan anyway?

Foreign key triggers do their checks retail, though, so you really want
the probe for any referencing rows for a particular row-being-updated
to be able to use an index.
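
Roughly speaking, for each samples row you change, the ON UPDATE CASCADE trigger runs the equivalent of the following
(names invented), and without an index on the referencing column each such probe has to scan the whole big table:

  UPDATE measurements SET sample_name = 'new name'
      WHERE sample_name = 'old name';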

(It would be nice if we could replace that with a mass revalidation
once it got to be a big fraction of the table, but we don't have a
mechanism for that.  Yet.)

            regards, tom lane


Re: Multiple indexes, huge table

From
Merlin Moncure
Date:
On Thu, Sep 6, 2012 at 4:22 PM, Aram Fingal <fingal@multifactorial.com> wrote:
> I have a table which currently has about 500 million rows.  For the most part, the situation is going to be that I
> will import a few hundred million more rows from text files once every few months but otherwise there won't be any
> insert, update or delete queries.  I have created five indexes, some of them multi-column, which make a tremendous
> difference in performance for the statistical queries which I need to run frequently (seconds versus hours.)  When
> adding data to the table, however, I have found that it is much faster to drop all the indexes, copy the data to the
> table and then create the indexes again (hours versus days.)  So, my question is whether this is really the best way.
> Should I write a script which drops all the indexes, copies the data and then recreates the indexes or is there a
> better way to do this?
>
> There are also rare cases where I might want to make a correction.  For example, one of the columns is sample name
> which is a foreign key to a samples table defined with " ON UPDATE CASCADE."  I decided to change a sample name in the
> samples table which should affect about 20 million rows out of the previously mentioned 500 million.  That query has
> now been running for five days and isn't finished yet.

Your case might do well with partitioning, particularly if you are
time bottlenecked during the import.  It will require some careful
thought before implementing, but the general scheme is to insert the
new data into a child table that gets its own index: this prevents you
from having to reindex the whole table.  Partitioning makes other
things more complicated though (like RI).
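
A rough sketch of the idea using inheritance-based partitioning (all names invented):

  -- the existing table acts as the parent; each import batch gets its own child
  CREATE TABLE measurements_2012q3 (
      CHECK (batch = '2012q3')
  ) INHERITS (measurements);

  COPY measurements_2012q3 FROM '/data/new_batch.txt';

  -- only the new child has to be indexed, not the whole 500M-row set
  CREATE INDEX measurements_2012q3_sample_gene_idx
      ON measurements_2012q3 (sample_name, gene);

With constraint_exclusion enabled, queries that filter on the batch column will skip the other children.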

merlin


Re: Multiple indexes, huge table

From
Marti Raudsepp
Date:
On Fri, Sep 7, 2012 at 12:22 AM, Aram Fingal <fingal@multifactorial.com> wrote:
> Should I write a script which drops all the indexes, copies the data and then recreates the indexes or is there a
> better way to do this?

There's a pg_bulkload extension which does much faster incremental
index updates for large bulk data imports, so you get the best of both
worlds: http://pgbulkload.projects.postgresql.org/

Beware, though, that this is an external add-on and is not as well
tested as core PostgreSQL. I have not tried it myself.

Regards,
Marti


Re: Multiple indexes, huge table

From
Jeff Janes
Date:
On Thu, Sep 6, 2012 at 5:12 PM, Alan Hodgson <ahodgson@simkin.ca> wrote:
> On Thursday, September 06, 2012 05:06:27 PM Jeff Janes wrote:
>> For updating 20 million out of 500 million rows, wouldn't a full table
>> scan generally be preferable to an index scan anyway?
>>
>
> Not one table scan for each row updated ...

My understanding is that he was only trying to update one row on the
parent table anyway, which then cascaded to 20,000,000 rows on the
child/fact table.

If you mean one table scan for each of the 20,000,000 rows *of the
child* being updated, that isn't what it does now, index or not.

Even if he were updating 10 rows of the parent table, I think it would
still be the case that if one sequential scan of the child/fact was
faster than one (large, low-cardinality, unclustered) index scan, then
10 sequential scans would be faster than 10 index scans.

Cheers,

Jeff


Re: Multiple indexes, huge table

From
Aram Fingal
Date:

On Sep 7, 2012, at 11:15 AM, Marti Raudsepp wrote:

> There's a pg_bulkload extension which does much faster incremental
> index updates for large bulk data imports, so you get the best of both
> worlds: http://pgbulkload.projects.postgresql.org/

Thanks, I'll have to check that out.  This is going to be more and more of an issue as I work with some specialized techniques related to DNA sequencing.  

-Aram