Re: Hash Aggregate plan picked for very large table == out of memory

From Gregory Stark
Subject Re: Hash Aggregate plan picked for very large table == out of memory
Date
Msg-id 873b0ut9j8.fsf@oxford.xeocode.com
In reply to Hash Aggregate plan picked for very large table == out of memory  ("Mason Hale" <masonhale@gmail.com>)
List pgsql-general
"Mason Hale" <masonhale@gmail.com> writes:

> The default_statistics_target was originally 200.
> I upped it to 1000 and still get the same results.

You did analyze the table after upping the target, right? Actually, I would
expect you would be better off not raising it so high globally, and instead
raising it only for the relevant column of this one table with

    ALTER TABLE table ALTER [ COLUMN ] column SET STATISTICS integer
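As a concrete sketch (the table and column names here are placeholders, since the original thread does not show the actual schema), raising the per-column target and re-analyzing might look like:

```sql
-- Raise the statistics target for just the column being grouped on.
-- "page_stats" and "page_id" are hypothetical names, not from the thread.
ALTER TABLE page_stats ALTER COLUMN page_id SET STATISTICS 1000;

-- Re-gather statistics so the planner actually sees the new target.
ANALYZE page_stats;
```

The per-column setting overrides default_statistics_target for that column only, so the rest of the database keeps its cheaper default.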

> I am working around this by setting enable_hashagg = off -- but it just
> seems like a case where the planner is not picking the best strategy?

Sadly, estimating the number of distinct values from a sample is a pretty
hard problem. How many distinct values do you actually get when you run with
enable_hashagg off?
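One way to compare the planner's distinct-value estimate against reality is to disable hash aggregation for the session and look at the aggregate node in EXPLAIN ANALYZE output. The query shape below is an assumption for illustration; the thread does not show the real query:

```sql
-- Disable hash aggregation for this session only (the workaround above);
-- the planner will fall back to a sort-based GroupAggregate.
SET enable_hashagg = off;

-- The estimated row count on the aggregate node is the planner's guess at
-- the number of groups; the actual row count is the true distinct count.
EXPLAIN ANALYZE
SELECT page_id, count(*)        -- placeholder table/column names
FROM page_stats
GROUP BY page_id;
```

A large gap between the estimated and actual group counts on that node would explain why the planner thought a hash table of groups would fit in work_mem.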

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com

