Re: Vacuumdb Fails: Huge Tuple

From: Tom Lane
Subject: Re: Vacuumdb Fails: Huge Tuple
Date:
Msg-id 14606.1254518615@sss.pgh.pa.us
In reply to: Re: Vacuumdb Fails: Huge Tuple  (Teodor Sigaev <teodor@sigaev.ru>)
List: pgsql-general
Teodor Sigaev <teodor@sigaev.ru> writes:
> ginHeapTupleFastCollect and ginEntryInsert both check the tuple's size
> against TOAST_INDEX_TARGET, but ginHeapTupleFastCollect checks it without
> the one ItemPointer that ginEntryInsert includes. So ginHeapTupleFastCollect
> could produce a tuple 6 bytes larger than ginEntryInsert allows.
> ginEntryInsert is called during pending-list cleanup.

I applied this patch after improving the error reporting a bit --- but
I was unable to get the unpatched code to fail in vacuum as the OP
reported was happening for him.  It looks to me like the original coding
limits the tuple size to TOAST_INDEX_TARGET (512 bytes) during
collection, but checks only the much larger GinMaxItemSize limit during
final insertion.  So while this is a good cleanup, I am suspicious that
it may not actually explain the trouble report.

I notice that the complaint was about a VACUUM FULL not a plain VACUUM,
which means that the vacuum would have been moving tuples around and
hence inserting brand new index entries.  Is there any possible way that
we could extract a larger index tuple from a moved row than we had
extracted from the original version?

It would be nice to see an actual test case that makes 8.4 fail this way
...

            regards, tom lane

In the pgsql-general list, by date of posting:

Previous
From: Scott Marlowe
Date:
Message: Re: Limit of bgwriter_lru_maxpages of max. 1000?
Next
From: Tim Landscheidt
Date:
Message: Re: Procedure for feature requests?