Hi,
On 2024-05-16 12:49:00 -0400, Peter Geoghegan wrote:
> On Thu, May 16, 2024 at 12:38 PM Andres Freund <andres@anarazel.de> wrote:
> > I'm wondering if there was index processing, due to the number of tuples. And
> > if so, what type of indexes. There'd need to be something that could lead to
> > new snapshots being acquired...
>
> Did you ever see this theory of mine, about B-Tree page deletion +
> recycling? See:
>
> https://www.postgresql.org/message-id/flat/CAH2-Wz%3DzLcnZO8MqPXQLqOLY%3DCAwQhdvs5Ncg6qMb5nMAam0EA%40mail.gmail.com#d058a6d4b8c8fa7d1ff14349b3a50c3c
>
> (And related nearby emails from me.)
> It looked very much like index vacuuming was involved in some way when
> I actually had the opportunity to use gdb against an affected
> production instance that ran into the problem.
Hm, did the cases you observed that way involve parallel vacuuming? And what
index types were involved?
Melanie's reproducer works because there are catalog accesses that can trigger
a recomputation of the fuzzy horizon. For testing, the "easy" window for that
is the vac_open_indexes() call, because it happens after determining the
horizon, but before actually vacuuming.
Now I wonder if there is some code path that triggers catalog lookups during
bulk delete.
Greetings,
Andres Freund