Re: is there a way to firmly cap postgres worker memory consumption?

From: Tom Lane
Subject: Re: is there a way to firmly cap postgres worker memory consumption?
Date:
Msg-id 1161.1397007136@sss.pgh.pa.us
In response to: Re: is there a way to firmly cap postgres worker memory consumption?  (Steve Kehlet <steve.kehlet@gmail.com>)
Responses: Re: is there a way to firmly cap postgres worker memory consumption?
List: pgsql-general
Steve Kehlet <steve.kehlet@gmail.com> writes:
> Thank you. For some reason I couldn't get it to trip with "ulimit -d
> 51200", but "ulimit -v 1572864" (1.5GiB) got me this in serverlog. I hope
> this is readable, if not it's also here:

Well, here's the problem:

>         ExprContext: 812638208 total in 108 blocks; 183520 free (171 chunks); 812454688 used

So something involved in expression evaluation is eating memory.
Looking at the query itself, I'd have to bet on this:

>            ARRAY_TO_STRING(ARRAY_AGG(MM.ID::CHARACTER VARYING), ',')

My guess is that this aggregation is being done across a lot more rows
than you were expecting, and the resultant array/string therefore eats
lots of memory.  You might try replacing that with COUNT(*), or even
better SUM(LENGTH(MM.ID::CHARACTER VARYING)), just to get some definitive
evidence about what the query is asking to compute.
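Roughly, that substitution would look like the following (the table name and
grouping column are only placeholders here, since the rest of your query
isn't shown; the MM alias and the ID cast are from your original):

    -- Hypothetical original aggregation (mm_table and grp are placeholders):
    SELECT grp,
           ARRAY_TO_STRING(ARRAY_AGG(MM.ID::CHARACTER VARYING), ',')
    FROM   mm_table MM
    GROUP  BY grp;

    -- Diagnostic variants: same grouping, but they only measure how much
    -- the aggregate would have to build, instead of building it.
    SELECT grp, COUNT(*)
    FROM   mm_table MM
    GROUP  BY grp;

    SELECT grp, SUM(LENGTH(MM.ID::CHARACTER VARYING))
    FROM   mm_table MM
    GROUP  BY grp;

If COUNT(*) reports far more rows per group than you expect, the join is
fanning out and the aggregated array/string would balloon accordingly.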

Meanwhile, it seems like ulimit -v would provide the safety valve
you asked for originally.  I too am confused about why -d didn't
do it, but as long as you found a variant that works ...

            regards, tom lane

