Re: Shared detoast Datum proposal

From: Robert Haas
Subject: Re: Shared detoast Datum proposal
Date:
Msg-id: CA+TgmoZaY5P=McfT7U=EL0GnNMATS8p=CktRN3MQN5=Z+Z8GuQ@mail.gmail.com
In reply to: Re: Shared detoast Datum proposal (Nikita Malakhov <hukutoc@gmail.com>)
Responses: Re: Shared detoast Datum proposal
List: pgsql-hackers
Hi,

Andy Fan asked me off-list for some feedback about this proposal. I
have hesitated to comment on it for lack of having studied the matter
in any detail, but since I've been asked for my input, here goes:

I agree that there's a problem here, but I suspect this is not the
right way to solve it. Let's compare this to something like the
syscache. Specifically, let's think about the syscache on
pg_class.relname.

In the case of the syscache on pg_class.relname, there are two reasons
why we might repeatedly look up the same values in the cache. One is
that the code might do multiple name lookups when it really ought to
do only one. Doing multiple lookups is bad for security and bad for
performance, but we have code like that in some places and some of it
is hard to fix. The other reason is that it is likely that the user
will mention the same table names repeatedly; each time they do, they
will trigger a lookup on the same name. By caching the result of the
lookup, we can make it much faster. An important point to note here is
that the queries that the user will issue in the future are unknown to
us. In a certain sense, we cannot even know whether the same table
name will be mentioned more than once in the same query: when we reach
the first instance of the table name and look it up, we do not have
any good way of knowing whether it will be mentioned again later, say
due to a self-join. Hence, the pattern of cache lookups is
fundamentally unknowable.
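
For concreteness, that lookup boils down to something like this (a
sketch modeled on get_relname_relid(); the exact cache machinery
varies a bit across versions):

#include "postgres.h"

#include "catalog/pg_class.h"
#include "utils/syscache.h"

/*
 * Sketch of a cached relation-name lookup, modeled on get_relname_relid().
 * Every mention of the same table name funnels into the same syscache
 * lookup; the cache is what keeps those repeats from scanning pg_class
 * again and again.
 */
static Oid
lookup_relname(const char *relname, Oid relnamespace)
{
    return GetSysCacheOid2(RELNAMENSP, Anum_pg_class_oid,
                           PointerGetDatum(relname),
                           ObjectIdGetDatum(relnamespace));
}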

But that's not the case in the motivating example that started this
thread. In that case, the target list includes three separate
references to the same column. We can therefore predict that if the
column value is toasted, we're going to de-TOAST it exactly three
times. If, on the other hand, the column value were mentioned only
once, it would be detoasted just once. In that latter case, which is
probably quite common, this whole cache idea seems like it ought to
just waste cycles, which makes me suspect that no matter what is done
to this patch, there will always be cases where it causes significant
regressions. In the former case, where the column reference is
repeated, it will win, but it will also hold onto the detoasted value
after the third instance of detoasting, even though there will never
be any more cache hits. Here, unlike the syscache case, the cache is
thrown away at the end of the query, so future queries can't benefit.
Even if we could find a way to preserve the cache in some cases, it's
not very likely to pay off. People are much more likely to hit the
same table more than once than to hit the same values in the same
table more than once in the same session.
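
To make the mechanism concrete: each -> in that target list is a
separate call of the underlying C function, and that function starts
by fetching its argument in detoasted form. Very roughly, it has this
shape (a simplified sketch of jsonb_object_field(); the real function
goes on to do the actual key lookup):

#include "postgres.h"

#include "fmgr.h"
#include "utils/jsonb.h"

/*
 * Simplified sketch of the jsonb "->" operator's C function.  Three
 * occurrences of big_jsonb_col->'x' in the target list mean three
 * independent calls, each doing its own detoast in PG_GETARG_JSONB_P().
 */
Datum
jsonb_object_field(PG_FUNCTION_ARGS)
{
    Jsonb      *jb = PG_GETARG_JSONB_P(0);      /* detoast happens here */
    text       *key = PG_GETARG_TEXT_PP(1);

    /* ... search jb for key and return the matching value ... */

    PG_RETURN_NULL();
}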

Suppose we had a design where, when a value got detoasted, the
detoasted value went into the place where the toasted value had come
from. Then this problem would be handled pretty much perfectly: the
detoasted value would last until the scan advanced to the next row,
and then it would be thrown out. So there would be no query-lifespan
memory leak. Now, I don't really know how this would work, because the
TOAST pointer is coming out of a slot and we can't necessarily write
one attribute of any arbitrary slot type without a bunch of extra
overhead, but maybe there's some way. If it could be done cheaply
enough, it would gain all the benefits of this proposal a lot more
cheaply and with fewer downsides. Alternatively, maybe there's a way
to notice the multiple references and introduce some kind of
intermediate slot or other holding area where the detoasted value can
be stored.
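
Just to sketch that idea (the helper below is hypothetical, and it
simply overwrites tts_values[], which is exactly the step that isn't
necessarily legitimate or cheap for an arbitrary slot type):

#include "postgres.h"

#include "access/detoast.h"
#include "executor/tuptable.h"

/*
 * Hypothetical helper: ensure attribute "attnum" of a slot is stored in
 * detoasted form, so that later references to it see the already-expanded
 * value.  The detoasted copy is palloc'd in the current memory context,
 * so if that is the per-tuple context it goes away automatically when
 * the scan advances to the next row.
 */
static void
detoast_slot_attr(TupleTableSlot *slot, int attnum)
{
    Form_pg_attribute att = TupleDescAttr(slot->tts_tupleDescriptor,
                                          attnum - 1);
    bool        isnull;
    Datum       val;

    if (att->attlen != -1)
        return;                 /* not a varlena column, nothing to do */

    val = slot_getattr(slot, attnum, &isnull);
    if (isnull || !VARATT_IS_EXTENDED(DatumGetPointer(val)))
        return;                 /* not toasted, nothing to do */

    /* Replace the TOAST pointer with the expanded value in place. */
    slot->tts_values[attnum - 1] =
        PointerGetDatum(detoast_attr((struct varlena *) DatumGetPointer(val)));
    slot->tts_isnull[attnum - 1] = false;
}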

In other words, instead of computing

big_jsonb_col->'a',  big_jsonb_col->'b',  big_jsonb_col->'c'

Maybe we ought to be trying to compute this:

big_jsonb_col=detoast(big_jsonb_col)
big_jsonb_col->'a',  big_jsonb_col->'b',  big_jsonb_col->'c'

Or perhaps this:

tmp=detoast(big_jsonb_col)
tmp->'a', tmp->'b', tmp->'c'

In still other words, a query-lifespan cache for this information
feels like it's tackling the problem at the wrong level. I do suspect
there may be query shapes where the same value gets detoasted multiple
times in ways that the proposals I just made wouldn't necessarily
catch, such as in the case of a self-join. But I think those cases are
rare. In most cases, repeated detoasting is probably very localized,
with the same exact TOAST pointer being de-TOASTed a couple of times
in quick succession, so a query-lifespan cache seems like the wrong
thing. Also, in most cases, repeated detoasting is probably quite
predictable: we could infer it from static analysis of the query. So
throwing ANY kind of cache at the problem seems like the wrong
approach unless the cache is super-cheap, which doesn't seem to be the
case with this proposal, because a cache caters to cases where we
can't know whether we're going to recompute the same result, and here,
that seems like something we could probably figure out.
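
As a rough illustration of what that static analysis could look like
(an entirely hypothetical helper: all it asks is whether a target
list mentions the same column more than once, which is the case where
forcing a single up-front detoast would pay off):

#include "postgres.h"

#include "nodes/pg_list.h"
#include "nodes/primnodes.h"
#include "optimizer/optimizer.h"        /* pull_var_clause() */

/*
 * Hypothetical helper: does this target list reference the same column
 * more than once?  If so, the repeated detoasting is statically
 * predictable, and an intermediate projection (the "tmp" idea above)
 * could detoast the value exactly once instead of caching it.
 */
static bool
targetlist_repeats_a_column(List *targetlist)
{
    List       *vars = pull_var_clause((Node *) targetlist,
                                       PVC_RECURSE_AGGREGATES |
                                       PVC_RECURSE_WINDOWFUNCS |
                                       PVC_RECURSE_PLACEHOLDERS);
    int         nvars = list_length(vars);

    for (int i = 0; i < nvars; i++)
    {
        Var        *v1 = (Var *) list_nth(vars, i);

        for (int j = i + 1; j < nvars; j++)
        {
            Var        *v2 = (Var *) list_nth(vars, j);

            if (v1->varno == v2->varno && v1->varattno == v2->varattno)
                return true;    /* same column referenced twice */
        }
    }
    return false;
}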

Because I don't see any way to avoid regressions in the common case
where detoasting isn't repeated, I think this patch is likely never
going to be committed no matter how much time anyone spends on it.

But, to repeat the disclaimer from the top, all of the above is just a
relatively uneducated opinion without any detailed study of the
matter.

...Robert


