Re: repeated out of shared memory error - not related to max_locks_per_transaction

From: MichaelDBA
Subject: Re: repeated out of shared memory error - not related to max_locks_per_transaction
Date:
Msg-id: 5B51E8DC.3050606@sqlexec.com
In response to: Re: repeated out of shared memory error - not related to max_locks_per_transaction  (MichaelDBA <MichaelDBA@sqlexec.com>)
List: pgsql-admin
Here is another quote from depesz about the dangers of setting work_mem too high...

To show some perspective – let's assume you have some queries that do 4-10 Sort/Hash operations, and you have a 100 max_connections limit. This means that theoretically you can get up to 1000 x work_mem usage! (100 connections, each running the 10-sort/hash query).

The above means that while increasing work_mem is cool, you have to be sure not to create a situation where pg will try to allocate more memory than physically exists. A lot of "out of memory" errors reported to the PostgreSQL mailing lists came from users who set their work_mem to large values like 512MB or more and then ran some complex queries on a machine with a (relatively) small amount of RAM.
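The quoted arithmetic is easy to sanity-check against a live server. Here is a minimal sketch in SQL, using only standard settings and functions (current_setting, pg_size_bytes, pg_size_pretty); the factor of 10 sort/hash operations per query is the hypothetical worst case assumed in the quote, not a measured value:

    -- Rough worst-case work_mem exposure, per the quoted arithmetic:
    --   max_connections * sort/hash operations per query * work_mem
    SELECT current_setting('max_connections')::int AS max_conn,
           current_setting('work_mem')             AS work_mem,
           pg_size_pretty(
               current_setting('max_connections')::bigint
               * 10  -- assumed sort/hash operations per query (worst case from the quote)
               * pg_size_bytes(current_setting('work_mem'))
           )                                       AS theoretical_worst_case;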

Friday, July 20, 2018 9:28 AM
Wrong again, Fabio.  PostgreSQL is not coded to manage memory usage with work_mem in the way you think it is.  Here is a quote from Citus about the dangers of setting work_mem too high.

When you consume more memory than is available on your machine you can start to see out of memory errors within your Postgres logs, or in worse cases the OOM killer can start to randomly kill running processes to free up memory. An out of memory error in Postgres simply errors on the query you're running, whereas the OOM killer in Linux begins killing running processes, which in some cases might even include Postgres itself.

When you see an out of memory error you either want to increase the overall RAM on the machine itself by upgrading to a larger instance, OR you want to decrease the amount of memory that work_mem uses. Yes, you read that right: when you run out of memory it's better to decrease work_mem instead of increasing it, since that is the amount of memory that can be consumed by each process, and too many operations are leveraging up to that much memory.


https://www.citusdata.com/blog/2018/06/12/configuring-work-mem-on-postgres/
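For reference, a minimal sketch of how work_mem is typically lowered, first per session for testing and then cluster-wide; the 16MB value here is only an illustrative assumption, not a recommendation from this thread:

    -- Lower work_mem for the current session only (safe to experiment with):
    SET work_mem = '16MB';

    -- Or persist a lower cluster-wide default (requires superuser;
    -- new sessions pick it up after a configuration reload):
    ALTER SYSTEM SET work_mem = '16MB';
    SELECT pg_reload_conf();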

Regards,
Michael Vitale

Friday, July 20, 2018 9:19 AM

Nope Michael,

If 'stuff' gets spilled to disk, it does not end up in an error. Postgres will silently write a file to disk for the time being and then delete it when your operation is finished.

period.

Depending on your log settings, it might appear in the logs under 'temporary file created..'.
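The message Fabio refers to is controlled by the log_temp_files setting, which logs temporary files at or above a size threshold. A minimal sketch, with the threshold value an assumption for illustration:

    -- Log every temporary file, regardless of size (0 = no minimum;
    -- changing this in-session requires superuser):
    SET log_temp_files = 0;

    -- Spills then appear in the server log along the lines of:
    --   LOG:  temporary file: path "base/pgsql_tmp/...", size ...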


regards,

fabio pardi



On 20/07/18 15:00, MichaelDBA wrote:

Friday, July 20, 2018 9:00 AM
I do not think that is true.  Stuff only gets spilled to disk when the work_mem buffers would exceed the work_mem constraint.  They are not constrained by how much real memory is actually available, hence the memory error!  They will try to get memory even if it is not available, as long as the work_mem buffers threshold is not reached.

Regards,
Michael Vitale



Friday, July 20, 2018 8:47 AM

work_mem cannot be the cause of it, for the simple reason that if the memory needed by your query overflows work_mem, it will spill to disk.
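The spill behavior Fabio describes can be observed directly with EXPLAIN ANALYZE. A minimal sketch follows; the table name, row count, and work_mem value are made-up illustrations chosen so the sort exceeds work_mem:

    -- Force a sort larger than work_mem, then inspect how it ran:
    SET work_mem = '1MB';  -- deliberately small for the demonstration
    CREATE TEMP TABLE spill_demo AS
        SELECT g AS id, md5(g::text) AS payload
        FROM generate_series(1, 500000) AS g;

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM spill_demo ORDER BY payload;
    -- A plan line such as
    --   Sort Method: external merge  Disk: ...kB
    -- confirms the sort spilled to a temporary file instead of raising an error.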


regards,

fabio pardi



On 20/07/18 14:35, MichaelDBA wrote:

Friday, July 20, 2018 8:35 AM
Perhaps your "work_mem" setting is causing the memory problems.  Try reducing it to see if that alleviates the problem.

Regards,
Michael Vitale


