Understanding TupleQueue impact and overheads?

From: Tom Mercha
Subject: Understanding TupleQueue impact and overheads?
Date:
Msg-id: DB6PR0201MB24552647BB3C7850A6289E24F4920@DB6PR0201MB2455.eurprd02.prod.outlook.com
Responses: Re: Understanding TupleQueue impact and overheads? (Andres Freund <andres@anarazel.de>)
List: pgsql-hackers
I have been looking at PostgreSQL's Tuple Queue 
(src/include/executor/tqueue.h), which provides functionality for queuing 
tuples between processes through shm_mq. I am still familiarising myself 
with the bigger picture and with TupleTableSlots. I can see that a copy 
(not a reference) of a HeapTuple (obtained from a TupleTableSlot, 
SPI_tuptable, etc.) can be sent to a queue using shm_mq. Another process 
can then receive these HeapTuples, probably placing them later in 
'output' TupleTableSlots.
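
To make sure I am reading the API correctly, here is a minimal sketch of 
the sender/receiver pairing as I understand it (using the tqueue.h 
signatures from around PostgreSQL 12; mq_handle, tupdesc and slot stand 
for state set up elsewhere, and error handling is omitted):

    /* Sender side: wrap the shm_mq in a DestReceiver and push slots into it. */
    DestReceiver *dest = CreateTupleQueueDestReceiver(mq_handle);

    dest->rStartup(dest, (int) CMD_SELECT, tupdesc);
    dest->receiveSlot(slot, dest);  /* copies the tuple's bytes into the queue */
    dest->rShutdown(dest);
    dest->rDestroy(dest);

    /* Receiver side: read the tuples back out of the same queue. */
    TupleQueueReader *reader = CreateTupleQueueReader(mq_handle);
    bool done;
    HeapTuple tup = TupleQueueReaderNext(reader, false, &done); /* nowait = false: block for a tuple */
    /* ... later place tup in an 'output' TupleTableSlot ... */
    DestroyTupleQueueReader(reader);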

What I am having difficulty understanding is what happens to the 
location of the HeapTuple as it moves from one TupleTableSlot to the 
other as described above. Since a reference to a physical tuple is most 
likely involved, am I incurring a disk-access overhead with each copy of 
a tuple? That would seem like a massive overhead; how can I keep such 
overheads to a minimum?
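
For reference, my (simplified) reading of tqueueReceiveSlot in 
src/backend/executor/tqueue.c is roughly the following; to my eye this 
is a pure in-memory copy, with no disk access at the point of sending 
(the real function also distinguishes a detached queue from other send 
failures):

    static bool
    tqueueReceiveSlot(TupleTableSlot *slot, DestReceiver *self)
    {
        TQueueDestReceiver *tqueue = (TQueueDestReceiver *) self;
        HeapTuple       tuple;
        shm_mq_result   result;
        bool            should_free;

        /* Materialize the tuple in local memory if it is not already. */
        tuple = ExecFetchSlotHeapTuple(slot, true, &should_free);

        /* Copy the raw tuple bytes into the shared-memory queue. */
        result = shm_mq_send(tqueue->queue, tuple->t_len, tuple->t_data, false);

        if (should_free)
            heap_freetuple(tuple);

        return (result == SHM_MQ_SUCCESS);
    }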

Furthermore, to what extent can I expect other modules to impact a 
queued HeapTuple? If some external process updates this tuple, when will 
I see the change? Would it be a possibility that the update is not 
reflected in the queued HeapTuple, yet the external process is not 
blocked or delayed from updating? In other words, something like 
operating on multiple snapshots? When does DBMS logging kick in whilst I 
am transferring a tuple from one TupleTableSlot to another?

Thanks,
Tom
