Re: Massive table (500M rows) update nightmare

From: Scott Marlowe
Subject: Re: Massive table (500M rows) update nightmare
Date:
Msg-id: dcc563d11001062349y54bdbdcbm5d95fc5f08a332a8@mail.gmail.com
In response to: Massive table (500M rows) update nightmare  ("Carlo Stonebanks" <stonec.register@sympatico.ca>)
Responses: Re: Massive table (500M rows) update nightmare  ("Carlo Stonebanks" <stonec.register@sympatico.ca>)
List: pgsql-performance
On Thu, Jan 7, 2010 at 12:17 AM, Carlo Stonebanks
<stonec.register@sympatico.ca> wrote:
> Our DB has an audit table which is 500M rows and growing. (FYI the objects
> being audited are grouped semantically, not individual field values).
>
> Recently we wanted to add a new feature and we altered the table to add a
> new column. We are backfilling this varchar(255) column by writing a TCL
> script to page through the rows (where every update is an UPDATE ... WHERE
> id >= x AND id < x+10, and a commit is performed after every 1000 update
> statements, i.e. every 10000 rows.)
>
> We have 10 columns, six of which are indexed. Rough calculations suggest
> that this will take two to three weeks to complete on an 8-core CPU with
> more than enough memory.
>
> As a ballpark estimate - is this sort of performance for 500M updates
> what one would expect of PG given the table structure (detailed below) or
> should I dig deeper to look for performance issues?
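
A minimal SQL sketch of the batching pattern described above; the table
name "audit_log", the column "new_col", and the backfill value are
hypothetical stand-ins, and the actual script drives this loop from TCL:

    BEGIN;
    -- each transaction covers 1000 statements of this shape, i.e. 10000 rows;
    -- names and values here are placeholders
    UPDATE audit_log SET new_col = 'backfill' WHERE id >= 0  AND id < 10;
    UPDATE audit_log SET new_col = 'backfill' WHERE id >= 10 AND id < 20;
    -- ... and so on through the key range ...
    COMMIT;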

Got an explain analyze of the update query?
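
For anyone following along, one way to capture that for a single
representative batch (hypothetical names again) is to run it inside a
transaction and roll it back, since EXPLAIN ANALYZE actually executes
the update:

    BEGIN;
    EXPLAIN ANALYZE
    UPDATE audit_log SET new_col = 'backfill' WHERE id >= 100000 AND id < 100010;
    ROLLBACK;  -- EXPLAIN ANALYZE performs the update, so discard its effects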

In the pgsql-performance list, by date of posting:

Previous
From: Michael Ruf
Date:
Message: Re: Optimizer use of index slows down query by factor
Next
From: Oleg Bartunov
Date:
Message: Re: Digesting explain analyze