Discussion: Re: [GENERAL] DELETE statement KILL backend

Re: [GENERAL] DELETE statement KILL backend

From: Florian Wunderlich
Date:
>In my experience, the problem seems to be caused by a lot of data being put
>into the database.  We are using the database to ingest real-time data 24
>hours a day/7 days a week. The data comes in about every three minutes.
>While I was not able to identify what the exact cause has been, I have
>noticed that before the problem becomes critical (before the backend
>terminates abnormally), the (number of) indexes do not correspond to the
>actual table.  That leads me to believe that the indexes do not get created
>on all occasions.  After some time, the table's internal indexes may become
>corrupted, and vacuuming does no good.
>
>When trying to fix this, I first delete the index, then recreate it, then
>vacuum.  If that doesn't work, then I drop and recreate the table, recreate
>the index, reload the data, and then vacuum the table.
>
>I would be curious to see if anyone else has had this type of problem and
>what their solutions were.
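
For reference, the recovery procedure quoted above corresponds roughly to the
following SQL sketch. The table, index, and column names, and the backup path,
are hypothetical placeholders; adjust them to your own schema:

    -- First attempt: rebuild the index, then vacuum the table
    DROP INDEX ingest_data_ts_idx;
    CREATE INDEX ingest_data_ts_idx ON ingest_data (ts);
    VACUUM ANALYZE ingest_data;

    -- If that fails: rebuild the table itself and reload the data
    DROP TABLE ingest_data;
    CREATE TABLE ingest_data (ts timestamp, value float8);
    CREATE INDEX ingest_data_ts_idx ON ingest_data (ts);
    COPY ingest_data FROM '/path/to/backup.copy';  -- placeholder path; or reload via INSERTs
    VACUUM ANALYZE ingest_data;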

Same here: we use a really big database, but in my experience it happened only
when I killed (with SIGTERM, but still) a single postgres process that was
running an "endless" query. I suspect that some internal table files are left
over in the data/base directory tree and postgres gets confused by them.
Not sure about this, though.

DROP/CREATE INDEX didn't solve this; I always did a DROP DATABASE and a
complete reload of all data, and then it worked fine again.
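
For completeness, that full-reload approach looks roughly like this. The
database and dump file names are hypothetical, and it assumes a dump was taken
beforehand with pg_dump:

    -- Connect to a different database (e.g. template1) first, then:
    DROP DATABASE ingestdb;
    CREATE DATABASE ingestdb;
    -- Restore the previously taken dump from the shell, e.g.:
    --   pg_dump ingestdb > ingestdb.sql    (before dropping)
    --   psql ingestdb < ingestdb.sql       (after recreating)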