Re: 121+ million record table perf problems

From: Alan Hodgson
Subject: Re: 121+ million record table perf problems
Date:
Msg-id: 200705181208.41163@hal.medialogik.com
In reply to: Re: 121+ million record table perf problems  ("Joshua D. Drake" <jd@commandprompt.com>)
List: pgsql-performance
On Friday 18 May 2007 11:51, "Joshua D. Drake" <jd@commandprompt.com> wrote:
> > The update query that started this all I had to kill after 17 hours.  It
> > should have updated all 121+ million records.  That brought my select
> > count down to 19 minutes, but still a far cry from acceptable.

You're going to want to drop all your indexes before trying to update 121
million records.  Updates in PostgreSQL are really quite slow, mostly due
to all the index updates. Drop indexes, do the updates, create a primary
key, cluster the table on that key to free up the dead space, then recreate
the rest of the indexes. That's about as fast as you can get that process.
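
A rough sketch of that sequence, assuming a hypothetical table big_table
with an integer id primary key, one secondary index on col_a, and an update
that just sets a flag column (all names here are placeholders, and the
CLUSTER syntax shown is the index-first form current at the time):

  -- hypothetical table, column, and index names, for illustration only
  DROP INDEX idx_big_table_col_a;              -- drop secondary indexes first
  ALTER TABLE big_table DROP CONSTRAINT big_table_pkey;

  UPDATE big_table SET flag = true;            -- the big update, free of index maintenance

  ALTER TABLE big_table ADD PRIMARY KEY (id);  -- recreate the primary key
  CLUSTER big_table_pkey ON big_table;         -- rewrite the table on that key, reclaiming dead rows
  CREATE INDEX idx_big_table_col_a ON big_table (col_a);
  ANALYZE big_table;                           -- refresh planner statistics after the rewrite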

Of course, doing anything big on one disk is also going to be slow, no
matter what you do. I don't think a table scan should take 19 minutes,
though, not for 121 million records. You should be able to get at least
60-70MB/sec out of anything modern. I can only assume your disk is
thrashing doing something else at the same time as the select.
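
As a back-of-the-envelope check (assuming, say, 100 bytes per row, which is
only a guess): 121 million rows would be roughly 12 GB on disk, and a
sequential scan at 60-70MB/sec should get through that in around three
minutes, not 19.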

--
"We can no more blame our loss of freedom on Congressmen than we can
prostitution on pimps.  Both simply provide broker services for their
customers." -- Dr. Walter Williams

