Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

From: Dilip Kumar
Subject: Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions
Date:
Msg-id: CAFiTN-vZ2EHjQQaGBJEbf-NnkrFkTEj+VbjrGtrfv=ck7crd+w@mail.gmail.com
In reply to: Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions (Amit Kapila <amit.kapila16@gmail.com>)
List: pgsql-hackers
On Wed, Feb 5, 2020 at 9:27 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
>
> On Tue, Feb 4, 2020 at 11:00 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:
> >
> > On Tue, Jan 28, 2020 at 11:43 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
> > >
> > >
> > > One more thing we can do is to identify whether the tuple belongs to
> > > toast relation while decoding it.  However, I think to do that we need
> > > to have access to relcache at that time and that might add some
> > > overhead as we need to do that for each tuple.  Can we investigate
> > > what it will take to do that and if it is better than setting a bit
> > > during WAL logging.
> > >
> > I have done some more analysis on this and it appears that there are
> > a few problems in doing this.  Basically, once we get the confirmed
> > flush location, we advance the replication_slot_catalog_xmin so that
> > vacuum can garbage-collect the old tuples.  So the problem is that
> > while we are collecting the changes in the ReorderBuffer, the catalog
> > rows we need might already have been removed, and we might not find
> > any relation entry for that relfilenode (because the relation was
> > dropped or altered later).
> >
>
> Hmm, this means this can also occur while streaming the changes.  The
> main reason, as I understand it, is that before decoding the commit,
> we don't know whether these changes have already been sent to the
> subscriber (based on confirmed_flush_location/start_decoding_at).
Right.
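
For reference, a rough sketch of the lookup path in question (this is
approximately what the existing ReorderBufferCommit() code does per
change; RelidByRelfilenode(), RelationIdGetRelation() and
IsToastRelation() are existing backend functions, and the surrounding
historic-snapshot setup is omitted):

    Oid        reloid;
    Relation   relation;

    /* Map the WAL record's relfilenode back to a relation OID. */
    reloid = RelidByRelfilenode(change->data.tp.relnode.spcNode,
                                change->data.tp.relnode.relNode);

    /*
     * This is where streaming before commit can go wrong: if vacuum has
     * already removed the needed catalog rows (catalog_xmin advanced
     * past them), the mapping fails even though the change is valid.
     */
    if (reloid == InvalidOid)
        elog(ERROR, "could not map filenode \"%s\" to relation OID",
             relpathperm(change->data.tp.relnode, MAIN_FORKNUM));

    relation = RelationIdGetRelation(reloid);

    /*
     * The per-tuple toast check discussed upthread needs this relcache
     * entry too, so it has the same catalog dependency.
     */
    if (IsToastRelation(relation))
    {
        /* accumulate chunks until the main-table tuple arrives */
    }

    RelationClose(relation);

At commit-decode time, by contrast, DecodeCommit() can consult
SnapBuildXactNeedsSkip() against start_decoding_at, which is exactly
the information that is not yet available mid-transaction.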

> I think it is better to skip streaming such transactions as we can't
> make the right decision about these and as this can happen generally
> after the crash for the first few transactions, it shouldn't matter
> much if we serialize such transactions instead of streaming them.

The idea makes sense to me.
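
To make that concrete, a hypothetical sketch of the fallback (using the
patch's naming: ReorderBufferStreamTXN() is what the patch set adds,
while ReorderBufferSerializeTXN() and ReorderBufferLargestTXN() already
exist; can_stream_txn() is a made-up predicate standing in for the
"are we sure nothing was already confirmed?" test):

    static void
    ReorderBufferCheckMemoryLimit(ReorderBuffer *rb)
    {
        ReorderBufferTXN *txn;

        /* Nothing to do until we exceed the memory budget. */
        if (rb->size < logical_decoding_work_mem * 1024L)
            return;

        txn = ReorderBufferLargestTXN(rb);

        if (can_stream_txn(rb, txn))
            ReorderBufferStreamTXN(rb, txn);    /* stream to subscriber */
        else
            ReorderBufferSerializeTXN(rb, txn); /* spill to disk instead */
    }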

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com


