Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

From: Amit Kapila
Subject: Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions
Date:
Msg-id: CAA4eK1KMSuED6u6f9+RhmAP+5Gns6S8aWVtAfnHA55v22q_dZQ@mail.gmail.com
In reply to: Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions  (Dilip Kumar <dilipbalaut@gmail.com>)
Responses: Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions  (Dilip Kumar <dilipbalaut@gmail.com>)
List: pgsql-hackers
On Tue, Feb 4, 2020 at 11:00 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:
>
> On Tue, Jan 28, 2020 at 11:43 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
> >
> >
> > One more thing we can do is to identify whether the tuple belongs to
> > a toast relation while decoding it.  However, I think to do that we
> > need access to the relcache at that time, and that might add some
> > overhead as we need to do it for each tuple.  Can we investigate
> > what it would take to do that and whether it is better than setting
> > a bit during WAL logging?
> >
> I have done some more analysis on this, and it appears that there are
> a few problems in doing it.  Basically, once we get the confirmed
> flush location, we advance replication_slot_catalog_xmin so that
> vacuum can garbage-collect the old catalog tuples.  So the problem is
> that while we are collecting the changes in the ReorderBuffer, the
> catalog row version we need might already have been removed, and we
> might not find any relation entry for that relfilenode (because the
> relation was dropped or altered later).
>
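
For reference, the relcache-based check being discussed would look
roughly like the sketch below.  This is a minimal sketch, not the
committed implementation: change_is_toast_tuple() is a made-up helper,
while RelidByRelfilenode(), RelationIdGetRelation() and
IsToastRelation() are the existing routines such a check would rely on.
It also marks the failure mode described above, where the needed
catalog rows are already gone.

#include "postgres.h"

#include "catalog/catalog.h"
#include "storage/relfilenode.h"
#include "utils/rel.h"
#include "utils/relcache.h"
#include "utils/relfilenodemap.h"

/*
 * Sketch: classify a decoded change by mapping its relfilenode back to
 * a relation and asking the relcache whether it is a TOAST relation.
 * (Hypothetical helper, not PostgreSQL code.)
 */
static bool
change_is_toast_tuple(RelFileNode node)
{
	Oid			reloid;
	Relation	relation;
	bool		is_toast;

	/* Map (tablespace, relfilenode) back to a pg_class OID. */
	reloid = RelidByRelfilenode(node.spcNode, node.relNode);

	/*
	 * This is the failure mode: if catalog_xmin has advanced and the
	 * catalog rows describing this relfilenode are already gone
	 * (relation dropped or rewritten), the mapping fails and the tuple
	 * cannot be classified at all.
	 */
	if (!OidIsValid(reloid))
		elog(ERROR, "could not map filenode %u to relation OID",
			 node.relNode);

	relation = RelationIdGetRelation(reloid);
	if (!RelationIsValid(relation))
		elog(ERROR, "could not open relation with OID %u", reloid);

	is_toast = IsToastRelation(relation);

	RelationClose(relation);
	return is_toast;
}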

Hmm, this means the same problem can also occur while streaming the
changes.  The main reason, as I understand it, is that before decoding
the commit we don't know whether these changes have already been sent
to the subscriber (based on confirmed_flush_location/start_decoding_at).
I think it is better to skip streaming such transactions, as we can't
make the right decision about them; and since this generally happens
only for the first few transactions after a crash, it shouldn't matter
much if we serialize such transactions instead of streaming them.
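
To make the proposal concrete, a rough sketch of that decision is
below, written as if it lived in reorderbuffer.c next to the existing
spill code.  ReorderBufferMaybeStream() is a made-up name and
ReorderBufferStreamTXN() stands for the streaming entry point the patch
adds; SnapBuildXactNeedsSkip(), ReorderBufferSerializeTXN() and
txn->first_lsn already exist.

/*
 * Rough sketch only (assumed to live in reorderbuffer.c, where the
 * static spill routines are visible).  If the transaction started
 * before the point we may send data from, parts of it may already
 * have reached the subscriber, so spill it to disk instead of
 * streaming it.
 */
static void
ReorderBufferMaybeStream(ReorderBuffer *rb, ReorderBufferTXN *txn)
{
	LogicalDecodingContext *ctx = rb->private_data;
	SnapBuild  *builder = ctx->snapshot_builder;

	if (SnapBuildXactNeedsSkip(builder, txn->first_lsn))
		ReorderBufferSerializeTXN(rb, txn);		/* spill, as today */
	else
		ReorderBufferStreamTXN(rb, txn);		/* stream to subscriber */
}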

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


