Multiple indexes, huge table

From: Aram Fingal
Subject: Multiple indexes, huge table
Date:
Msg-id: 5E885432-4ADD-451A-9682-2033DA27A4F8@multifactorial.com
Responses: Re: Multiple indexes, huge table  (Tom Lane <tgl@sss.pgh.pa.us>)
Re: Multiple indexes, huge table  (Merlin Moncure <mmoncure@gmail.com>)
Re: Multiple indexes, huge table  (Marti Raudsepp <marti@juffo.org>)
List: pgsql-general
I have a table which currently has about 500 million rows.  For the most part, the situation is going to be that I will import a few hundred million more rows from text files once every few months, but otherwise there won't be any insert, update or delete queries.  I have created five indexes, some of them multi-column, which make a tremendous difference in performance for the statistical queries which I need to run frequently (seconds versus hours).  When adding data to the table, however, I have found that it is much faster to drop all the indexes, copy the data to the table and then create the indexes again (hours versus days).  So, my question is whether this is really the best way.  Should I write a script which drops all the indexes, copies the data and then recreates the indexes, or is there a better way to do this?
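
A minimal sketch of such a script, assuming a hypothetical table named big_table with a single illustrative index (the actual table, column and index names are not given above), might look like:

    BEGIN;
    -- Drop each of the five indexes before the bulk load (names hypothetical).
    DROP INDEX IF EXISTS big_table_sample_idx;
    -- Bulk-load the new rows; COPY FROM a server-side file needs appropriate privileges.
    COPY big_table FROM '/path/to/import.txt';
    -- Recreate the indexes once the data is in place.
    CREATE INDEX big_table_sample_idx ON big_table (sample_name);
    COMMIT;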

There are also rare cases where I might want to make a correction.  For example, one of the columns is sample name, which is a foreign key to a samples table defined with "ON UPDATE CASCADE".  I decided to change a sample name in the samples table, which should affect about 20 million rows out of the previously mentioned 500 million.  That query has now been running for five days and isn't finished yet.
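
For reference, a hedged sketch of the arrangement described here (all table and column names are hypothetical, not taken from the actual schema):

    CREATE TABLE samples (
        sample_name text PRIMARY KEY
    );
    CREATE TABLE measurements (
        sample_name text REFERENCES samples (sample_name) ON UPDATE CASCADE
        -- ... other columns ...
    );
    -- Renaming a sample then rewrites every referencing row in measurements:
    UPDATE samples SET sample_name = 'new name' WHERE sample_name = 'old name';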

-Aram
