Re: Limiting memory allocation

From: Tomas Vondra
Subject: Re: Limiting memory allocation
Date:
Msg-id: 76b31a7e-2b5d-c361-a79a-05b8c00378b9@enterprisedb.com
In reply to: Re: Limiting memory allocation  (Stephen Frost <sfrost@snowman.net>)
Responses: Re: Limiting memory allocation  (Jan Wieck <jan@wi3ck.info>)
Re: Limiting memory allocation  (Robert Haas <robertmhaas@gmail.com>)
List: pgsql-hackers
On 5/20/22 21:50, Stephen Frost wrote:
> Greetings,
> 
> ...
>
>>>  How exactly this would work is unclear to me; maybe one
>>> process keeps an eye on it in an OS-specific manner,
> 
> There seems to be a lot of focus on trying to implement this as "get the
> amount of free memory from the OS and make sure we don't go over that
> limit" and that adds a lot of OS-specific logic which complicates things
> and also ignores the use-cases where an admin wishes to limit PG's
> memory usage due to other processes running on the same system.  I'll
> point out that the LD_PRELOAD library doesn't even attempt to do this,
> even though it's explicitly for Linux, but uses an environment variable
> instead.
> 
> In PG, we'd have that be a GUC that an admin is able to set and then we
> track the memory usage (perhaps per-process, perhaps using some set of
> buckets, perhaps locally per-process and then in a smaller number of
> buckets in shared memory, or something else) and fail an allocation when
> it would go over that limit, perhaps only when it's a regular user
> backend or with other conditions around it.
> 

I agree a GUC setting a memory target is a sensible starting point.
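
To make that concrete, here's a rough sketch of what the allocation
path might look like - the GUC name (max_backend_memory) and the
wrapper are purely illustrative, not an actual patch:

/*
 * Illustrative only.  A malloc() wrapper that charges each allocation
 * against a per-process counter and refuses to allocate once a
 * (hypothetical) max_backend_memory GUC would be exceeded.
 */
static int64 my_allocated_bytes = 0;  /* bytes this backend allocated */
int     max_backend_memory = 0;       /* hypothetical GUC, in MB */

static void *
limited_malloc(Size size)
{
    void   *ptr;

    if (max_backend_memory > 0 &&
        my_allocated_bytes + (int64) size >
        (int64) max_backend_memory * 1024L * 1024L)
        return NULL;    /* over budget - caller raises the usual OOM */

    ptr = malloc(size);
    if (ptr != NULL)
        my_allocated_bytes += size;

    return ptr;
}

Of course, accounting for free() and realloc() needs the request size,
which plain pointers don't carry - another argument for doing this in
the memory context code, which already knows the sizes.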

I wonder if we might eventually use this to define memory budgets. One
of the common questions I get is how to prevent users from setting
work_mem too high or running too many memory-hungry operations.
Currently there's no way to do that: we have no way to limit work_mem
values, and even if we did, the user could construct a more complex
query with more memory-hungry operations.

But I think part of the problem is also that we weren't sure what to do
after hitting a limit - should we try replanning the query with a lower
work_mem value, or what?

However, if just failing the malloc() is acceptable, maybe we could use
this mechanism to achieve that?
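
(For illustration, a limit-induced NULL could surface the same way a
regular malloc() failure does today, roughly:)

    if (ptr == NULL)
        ereport(ERROR,
                (errcode(ERRCODE_OUT_OF_MEMORY),
                 errmsg("out of memory"),
                 errdetail("Failed on request of size %zu.", size)));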

>> What would be useful is a way for Postgres to count the amount of memory
>> allocated by each backend.  This could be advantageous for giving per-backend
>> memory usage to the user, as well as for enforcing a limit on the total amount
>> of memory allocated by the backends.
> 
> I agree that this would be independently useful.
> 

Well, we already have memory accounting built into the memory context
infrastructure. It does essentially the same thing as the malloc()
wrapper, except that it does not publish the information anywhere and
it's per-context (so we have to walk the context tree recursively).
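
For reference, the recursive walk is roughly this - more or less what
MemoryContextMemAllocated() does when asked to recurse:

static Size
total_allocated(MemoryContext ctx)
{
    MemoryContext child;
    Size    total = ctx->mem_allocated;  /* accounting kept since v13 */

    for (child = ctx->firstchild; child != NULL; child = child->nextchild)
        total += total_allocated(child);

    return total;
}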

So maybe we could make this available on request somehow? Say, we'd
send a signal to the process, and it'd run MemoryContextMemAllocated()
on the top memory context and store the result somewhere.
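
A minimal sketch of that, assuming a new ProcSignal reason and a
per-backend slot in shared memory (both hypothetical), similar in
spirit to the existing PROCSIG_LOG_MEMORY_CONTEXT handling:

/* hypothetical shared-memory array, one slot per backend */
extern pg_atomic_uint64 *BackendMemAllocated;

static void
HandleMemAllocatedRequest(void)
{
    /* runs from ProcessInterrupts(), i.e. at the next safe point */
    Size    total = MemoryContextMemAllocated(TopMemoryContext, true);

    pg_atomic_write_u64(&BackendMemAllocated[MyBackendId], (uint64) total);
}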


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


