Re: Use of inefficient index in the presence of dead tuples

From: Alexander Staubo
Subject: Re: Use of inefficient index in the presence of dead tuples
Date:
Msg-id: 2D6F40AA-1385-48AA-9F9F-DA8AA2BF30BC@purefiction.net
In response to: Re: Use of inefficient index in the presence of dead tuples  (Laurenz Albe <laurenz.albe@cybertec.at>)
Responses: Re: Use of inefficient index in the presence of dead tuples
List: pgsql-general
On 28 May 2024, at 13:02, Laurenz Albe <laurenz.albe@cybertec.at> wrote:
> ANALYZE considers only the live rows, so PostgreSQL knows that the query will
> return only few results.  So it chooses the smaller index rather than the one
> that matches the WHERE condition perfectly.
>
> Unfortunately, it has to wade through all the deleted rows, which is slow.

Sounds like the planner _should_ take the dead tuples into account. I’m surprised there are no parameters to tweak to make the planner understand that one index is more selective even though it is technically larger.

> But try to execute the query a second time, and it will be much faster.
> PostgreSQL marks the index entries as "dead" during the first execution, so the
> second execution won't have to look at the heap any more.

Of course. It’s still not _free_; it’s still trawling through many megabytes of dead data, and going through the shared buffer cache and therefore competing with other queries that need shared buffers.
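For anyone following along, the effect is visible in the buffer counts reported by EXPLAIN. A rough illustration (table, column, and predicate names here are made up for the example, not from my actual schema):

```sql
-- Run the same query twice and compare the "Buffers" line:
EXPLAIN (ANALYZE, BUFFERS)
SELECT id FROM jobs
WHERE queue = 'default'
ORDER BY created_at
LIMIT 1;

-- First execution: large "shared hit/read" numbers, because every dead
-- index entry still requires a heap visit to discover it is dead.
-- Second execution: far fewer buffers touched, because the first pass
-- marked those index entries as killed and they are skipped.
```

Either way, that first pass still has to pull all those dead heap pages through shared buffers.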

> I understand your pain, but your use case is somewhat unusual.

I don’t think rapidly updated tables are an unusual use of Postgres, nor is the problem of long-running transactions preventing dead tuple vacuuming.

> What I would consider in your place is
> a) running an explicit VACUUM after you delete lots of rows or

The rows are deleted individually. It’s just that there are many transactions doing it concurrently.

> b) using partitioning to get rid of old data

Partitioning will generate dead tuples in the original partition when tuples are moved to the other partition, so I’m not sure how that would help?

I did explore a solution which is my “plan B” — adding a “done” column, then using “UPDATE … SET done = true” rather than deleting the rows. This causes dead tuples, of course, but then adding a new index with a “… WHERE NOT done” filter fixes the problem by forcing the query to use the right index. However, with this solution, rows will still have to be deleted *sometime*, so this just delays the problem. But it would allow a “batch cleanup”: “DELETE … WHERE done; VACUUM” in one fell swoop.
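Concretely, the plan B sketch looks something like this (again, the table and column names are placeholders for the example):

```sql
-- Add a flag instead of deleting rows immediately:
ALTER TABLE jobs ADD COLUMN done boolean NOT NULL DEFAULT false;

-- Partial index so readers only ever scan live, not-yet-done rows:
CREATE INDEX CONCURRENTLY jobs_pending_idx
    ON jobs (queue, created_at)
    WHERE NOT done;

-- Workers mark rows done rather than deleting them:
UPDATE jobs SET done = true WHERE id = $1;

-- Periodic batch cleanup, the "one fell swoop":
DELETE FROM jobs WHERE done;
VACUUM jobs;
```

The partial index shrinks as rows are marked done, which is what steers the planner back to the right index even while the dead tuples from the batch DELETE are waiting for VACUUM.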



