Discussion: BUG #17602: Query backend process killed because it uses up all memory.

BUG #17602: Query backend process killed because it uses up all memory.

From:
PG Bug reporting form
Date:
The following bug has been logged on the website:

Bug reference:      17602
Logged by:          alun
Email address:      higherone@gmail.com
PostgreSQL version: 12.8
Operating system:   CentOS
Description:

Hi,

I'm writing to ask for general help with troubleshooting a bug in which a
query backend process is killed because it exhausts all available memory.

The query is a simple update or insert like the one below, and sequence_id
has a btree index. The backend process uses up all the memory very quickly
(within about 10 minutes) and causes the server to restart:

update folders set sequence_id=40 where id=501;

I suspected the index was corrupt, but the issue can still be reproduced
after dropping the btree index. And when I use the pg_amcheck tool to check
the btree index, it shows no errors.
Stranger still, if I drop another GiST index on a different column of the
same table, the issue can no longer be reproduced. But I don't think the
GiST index is related to that bad update/insert SQL. Unfortunately, I don't
have a tool that can detect whether a GiST index is corrupt.
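
For illustration, a btree check of this kind would look roughly like the
following (the index and database names are placeholders, not the real
ones):

    pg_amcheck --index=folders_sequence_id_idx --heapallindexed mydb

On a 12.x server the same check can also be run directly through the
amcheck extension: SELECT bt_index_check('folders_sequence_id_idx', true);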

I cannot reproduce this issue on a clean db instance. However, in my
snapshot db with production data I can reproduce it reliably. Unfortunately,
I cannot share the snapshot db for troubleshooting, so I'm writing to the
list, hoping to get some suggestions on how to troubleshoot further.

Thanks a lot!


Re: BUG #17602: Query backend process killed because it uses up all memory.

From:
Tom Lane
Date:
PG Bug reporting form <noreply@postgresql.org> writes:
> I'm writing to ask for general help with troubleshooting a bug in which a
> query backend process is killed because it exhausts all available memory.

Try starting the server under a restrictive ulimit, so that it will
get ENOMEM failure before it bloats to the point of receiving an
OOM kill.  Aside from being more friendly to other server processes,
this should result in getting a memory consumption map dumped to
postmaster stderr when the failure is hit.  Seeing which context
is getting bloated would be pretty useful information.

On Linux I believe "ulimit -v" is the most appropriate setting to
change.

            regards, tom lane
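
A minimal sketch of that suggestion, assuming the postmaster is started by
hand with pg_ctl; the data directory path, log path, and the 4 GB cap below
are illustrative values only:

    # Cap per-process virtual memory for this shell and its children
    # (ulimit -v takes a value in kB; 4 GB here is just an example):
    ulimit -v 4194304

    # Start the postmaster so it inherits the limit; -l captures stderr,
    # which is where the memory context dump lands when ENOMEM is hit:
    pg_ctl -D /var/lib/pgsql/12/data -l /tmp/pg_oom_debug.log start

Re-running the failing update under this limit should then leave a
per-context memory map in the log, showing which context is bloating.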