Re: New GUC autovacuum_max_threshold ?

From: Robert Haas
Subject: Re: New GUC autovacuum_max_threshold ?
Msg-id: CA+TgmoaER+O40PA_CB8eSKm7KT37wDT+-y_mnsWXb9-Keivu1Q@mail.gmail.com
In reply to: Re: New GUC autovacuum_max_threshold ?  (Frédéric Yhuel <frederic.yhuel@dalibo.com>)
Responses: Re: New GUC autovacuum_max_threshold ?
List: pgsql-hackers
On Mon, May 13, 2024 at 11:14 AM Frédéric Yhuel
<frederic.yhuel@dalibo.com> wrote:
> FWIW, I do agree with your math. I found your demonstration convincing.
> 500000 was selected with the wet finger.

Good to know.

> Using the formula I suggested earlier:
>
> vacthresh = Min(vac_base_thresh + vac_scale_factor * reltuples,
> vac_base_thresh + vac_scale_factor * sqrt(reltuples) * 1000);
>
> your table of 2.56 billion tuples will be vacuumed if there are
> more than 10 million dead tuples (every 28 minutes).

Yeah, so that is about 50x what we do now (twice an hour vs. once a
day). While that's a lot more reasonable than the behavior that we'd
get from a 500k hard cap (every 84 seconds), I suspect it's still too
aggressive.
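
If anyone wants to re-check that arithmetic, here is a throwaway
sketch of it (just the math, not anything resembling the real
autovacuum code), assuming the stock defaults
autovacuum_vacuum_threshold = 50 and autovacuum_vacuum_scale_factor =
0.2:

#include <math.h>
#include <stdio.h>

int
main(void)
{
    double      reltuples = 2.56e9;         /* the example table */
    double      vac_base_thresh = 50;       /* autovacuum_vacuum_threshold */
    double      vac_scale_factor = 0.2;     /* autovacuum_vacuum_scale_factor */

    double      current = vac_base_thresh + vac_scale_factor * reltuples;
    double      capped = fmin(current,
                              vac_base_thresh +
                              vac_scale_factor * sqrt(reltuples) * 1000);

    printf("current trigger: %.0f dead tuples\n", current);  /* ~512 million */
    printf("capped trigger:  %.0f dead tuples\n", capped);   /* ~10.1 million */

    /* If the 512 million threshold is crossed about once a day, the
     * capped one is crossed roughly 24*60 / 51 ~= 28 minutes apart. */
    printf("ratio: ~%.0fx, i.e. every ~%.0f minutes\n",
           current / capped, 24 * 60 / (current / capped));
    return 0;
}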

I find these things much easier to reason about in gigabytes than in
time units. In that example, the table was 320GB and was getting
vacuumed after accumulating 64GB of bloat. That seems like a lot. It
means that the table can grow from 320GB all the way up until 384GB
before we even think about starting to vacuum it, and then we might
not start right away, depending on resource availability, and we may
take some time to finish, possibly considerable time, depending on the
number and size of indexes and the availability of I/O resources. So
the table might very plausibly be well above 400GB, possibly quite a
bit more, before we get done processing it. I think that's not
aggressive enough.
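
To spell out where those gigabyte numbers come from, here's the same
sort of sketch, assuming the 320GB table really does hold the 2.56
billion tuples from the example, i.e. roughly 125 bytes per heap
tuple, and ignoring indexes entirely:

#include <stdio.h>

int
main(void)
{
    double      table_gb = 320.0;
    double      reltuples = 2.56e9;
    double      bytes_per_tuple = table_gb * 1e9 / reltuples;  /* ~125 bytes */

    double      dead_at_trigger = 0.2 * reltuples;    /* default 20%: ~512M */
    double      bloat_gb = dead_at_trigger * bytes_per_tuple / 1e9;

    printf("bloat allowed before triggering: ~%.0fGB\n", bloat_gb); /* ~64GB */
    printf("table size when vacuum triggers: ~%.0fGB\n",
           table_gb + bloat_gb);                                    /* ~384GB */
    return 0;
}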

But how much would we like to push that 64GB of bloat number down for
a table of this size? I would argue that vacuuming the table when
it's only got 1GB or 2GB of bloat is excessive. Unless the system is
very lightly loaded and has no
long-running transactions at all, we're unlikely to be able to vacuum
aggressively enough to keep a 320GB table at a size of 321GB or 322GB.
Without testing or doing any research, I'm going to guess that a
realistic number is probably in the range of 10-20GB of bloat. If the
table activity is very light, we might be able to get it even lower,
like say 5GB, but the costs ramp up very quickly as you push the
vacuuming threshold down. Also, if the table accumulates X amount of
bloat during the time it takes to run one vacuum, you can never
succeed in limiting bloat to a value less than X (and probably more
like 1.5*X or 2*X or something).

So without actually trying anything (which I do think somebody should
do, and report the results), my guess is that for a 320GB table, you'd like
to multiply the vacuum frequency by a value somewhere between 3 and
10, and probably much closer to 3 than to 10. Maybe even less than 3.
Not sure exactly. Like I say, I think someone needs to try some
different workloads and database sizes and numbers of indexes, and try
to get a feeling for what actually works well in practice.
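
Put in the same units as above, a 3x-10x bump in frequency would work
out to something like the following (again, pure arithmetic under the
same assumptions as the earlier sketches):

#include <stdio.h>

int
main(void)
{
    double      bloat_now_gb = 64.0;    /* 20% of the 320GB example table */
    int         multipliers[] = {3, 5, 10};

    for (int i = 0; i < 3; i++)
        printf("%2dx more frequent -> trigger at ~%.1fGB of bloat\n",
               multipliers[i], bloat_now_gb / multipliers[i]);

    /* 3x -> ~21GB, 5x -> ~13GB, 10x -> ~6.4GB, which lines up roughly
     * with the 10-20GB guess above if we stay closer to 3x than 10x. */
    return 0;
}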

> If we want to stick with the simple formula, we should probably choose a
> very high default, maybe 100 million, as you suggested earlier.
>
> However, it would be nice to have the visibility map updated more
> frequently than every 100 million dead tuples. I wonder if this could be
> decoupled from the vacuum process?

Yes, but if a page has had any non-HOT updates, it can't become
all-visible again without vacuum. If it has had only HOT updates, then
a HOT-prune could make it all-visible. I don't think we do that
currently, but I think in theory we could.

--
Robert Haas
EDB: http://www.enterprisedb.com


