Re: Prevent out of memory errors by reducing work_mem?

From: Tom Lane
Subject: Re: Prevent out of memory errors by reducing work_mem?
Date:
Msg-id: 10644.1359124947@sss.pgh.pa.us
In reply to: Prevent out of memory errors by reducing work_mem?  (Jan Strube <js@deriva.de>)
Responses: Re: Prevent out of memory errors by reducing work_mem?  (Jan Strube <js@deriva.de>)
List: pgsql-general
Jan Strube <js@deriva.de> writes:
> I'm getting an out of memory error running the following query over 6
> tables (the *BASE* tables have over 1 million rows each) on Postgresql
> 9.1. The machine has 4GB RAM:

It looks to me like you're suffering an executor memory leak that's
probably unrelated to the hash joins as such.  The leak is in the
ExecutorState context:

> ExecutorState: 3442985408 total in 412394 blocks; 5173848 free (16
> chunks); 3437811560 used

while the subsidiary HashXYZ contexts don't look like they're going
beyond what they've been told to.

So the first question is 9.1.what?  We've fixed execution-time memory
leaks as recently as 9.1.7.
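
[Editor's note: as a quick way to answer the minor-version question above, the exact release can be checked from any session; this is a generic sketch, not part of the original message.]

```sql
-- Report the exact server version, e.g. "PostgreSQL 9.1.7 on ...";
-- minor releases such as 9.1.7 carry the leak fixes mentioned above.
SELECT version();

-- Or just the version string:
SHOW server_version;
```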

If you're on 9.1.7, or if after updating you can still reproduce the
problem, please see if you can create a self-contained test case.
My guess is it would have to do with the specific data types and
operators being used in the query, but not so much with the specific
data, so you probably could create a test case that just uses tables
filled with generated random data.
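
[Editor's note: a minimal sketch of the kind of generated-data reproducer suggested above. Table and column names here are placeholders, not taken from the original failing query; the real join conditions, data types, and operators would need to be substituted in.]

```sql
-- Two base tables of ~1 million rows filled with random data,
-- built with generate_series()/random().
CREATE TABLE base_a AS
    SELECT g AS id,
           (random() * 100000)::int AS key,
           md5(random()::text)      AS payload
    FROM generate_series(1, 1000000) AS g;

CREATE TABLE base_b AS
    SELECT g AS id,
           (random() * 100000)::int AS key,
           md5(random()::text)      AS payload
    FROM generate_series(1, 1000000) AS g;

ANALYZE base_a;
ANALYZE base_b;

-- Replace this join with the structure of the real query; per the
-- diagnosis above, the specific types and operators are the likely
-- trigger, not the particular data values.
SELECT count(*)
FROM base_a a
JOIN base_b b ON a.key = b.key;
```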

            regards, tom lane


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

