RE: Logical Replica ReorderBuffer Size Accounting Issues

From:        Wei Wang (Fujitsu)
Subject:     RE: Logical Replica ReorderBuffer Size Accounting Issues
Date:
Msg-id:      OS3PR01MB6275CAC41E8564D60DC66D229E769@OS3PR01MB6275.jpnprd01.prod.outlook.com
In reply to: Re: Logical Replica ReorderBuffer Size Accounting Issues (Masahiko Sawada <sawada.mshk@gmail.com>)
Responses:   Re: Logical Replica ReorderBuffer Size Accounting Issues (Masahiko Sawada <sawada.mshk@gmail.com>)
List:        pgsql-bugs
On Thu, May 9, 2023 at 9:33 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
> On Fri, Jan 13, 2023 at 8:17 PM wangw.fnst@fujitsu.com
> <wangw.fnst@fujitsu.com> wrote:
> >
> > On Thu, Jan 12, 2023 at 21:02 PM Alex Richman <alexrichman@onesignal.com> wrote:
> > > On Thu, 12 Jan 2023 at 10:44, wangw.fnst@fujitsu.com
> > > <wangw.fnst@fujitsu.com> wrote:
> > > > I think parallelism doesn't affect this problem, because a walsender
> > > > will always read the WAL serially, in order. Please let me know if I'm
> > > > missing something.
> > > I suspect it's more about getting enough changes into the WAL quickly enough
> > > for walsender to not spend any time idle. I suppose you could stack the deck
> > > towards this by first disabling the subscription, doing the updates to spool a
> > > bunch of changes in the WAL, then enabling the subscription again. Perhaps
> > > there is also some impact from the WAL records of the concurrent updates
> > > interleaving and making more work for the reorder buffer.
> > > The servers I am testing on are quite beefy, so it might be a little harder to
> > > generate sufficient load if you're testing locally on a laptop or something.
> > >
> > > > And I tried to use the table structure and UPDATE statement you described,
> > > > but unfortunately I didn't catch 1GB or otherwise unexpected (i.e., well
> > > > beyond 256MB) usage in rb->tup_context. Could you please help me confirm
> > > > my test? Here are my test details:
> > > Here are test scripts that replicate it for me: [1]
> > > This is on 15.1, installed on debian-11, running on a GCP n2-highmem-80
> > > (IceLake) w/ 24x local SSD in RAID 0.
> >
> > Thanks for the details you shared.
> >
> > Yes, I think you are right. I reproduced this problem as you suggested
> > (updating the entire table in parallel), and I can reproduce it on both
> > current HEAD and REL_15_1. The memory used in rb->tup_context can reach
> > 350MB on HEAD and 600MB on REL_15_1.
> >
> > Here are my steps to reproduce:
> > 1. Apply the attached diff patch to add some logs for confirmation.
> > 2. Use the attached reproduction script to reproduce the problem.
> > 3. Confirm the debug log that is output to the log file pub.log.
> >
> > After doing some research, I agree with the idea you mentioned before. I
> > think this problem is caused by the implementation of the 'Generational
> > allocator', or by the way we use its API.
>
> I think there are two separate issues. One is a pure memory accounting
> issue: since the reorderbuffer accounts for memory usage by calculating
> the actual tuple size etc., it includes neither the chunk header size nor
> fragmentation within blocks. So I can understand why the output of
> MemoryContextStats(rb->context) could be two or three times higher than
> logical_decoding_work_mem and doesn't match rb->size in some cases.
>
> However, that cannot explain the original issue, where the memory usage
> (reported by MemoryContextStats(rb->context)) reached 5GB in spite of
> logical_decoding_work_mem being 256MB. That seems like a memory leak bug,
> or something that ignores the memory limit.

Yes, I agree that the chunk header size and fragmentation within blocks may
cause the allocated space to be larger than the accounted space. However,
since these overheads are very small (please refer to [1] and [2]), I also
don't think they are the cause of the original issue in this thread.
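As a concrete illustration of that accounting gap, here is a hypothetical
instrumentation sketch (it is not in PostgreSQL; I am only sketching it here).
It would have to live in src/backend/replication/logical/reorderbuffer.c,
since ReorderBufferChangeSize() is a static helper there, and the tuple field
layout shown is the PG 15 one. It contrasts the size the reorderbuffer adds
to rb->size with the space the allocator actually reserves: GetMemoryChunkSpace()
includes the per-chunk header, though block-level fragmentation is still
invisible at this granularity.

/*
 * Hypothetical debug helper, to be placed in reorderbuffer.c (which already
 * includes utils/memutils.h, providing GetMemoryChunkSpace()).
 */
static void
log_accounting_gap(ReorderBufferChange *change)
{
    Size    accounted = ReorderBufferChangeSize(change);  /* what rb->size grows by */
    Size    actual = GetMemoryChunkSpace(change);         /* chunk incl. header */

    /* Tuples are separately allocated chunks in rb->tup_context; add them. */
    if (change->action == REORDER_BUFFER_CHANGE_INSERT ||
        change->action == REORDER_BUFFER_CHANGE_UPDATE ||
        change->action == REORDER_BUFFER_CHANGE_DELETE)
    {
        if (change->data.tp.oldtuple)
            actual += GetMemoryChunkSpace(change->data.tp.oldtuple);
        if (change->data.tp.newtuple)
            actual += GetMemoryChunkSpace(change->data.tp.newtuple);
    }

    elog(DEBUG1, "accounted: %zu, allocated: %zu, per-chunk overhead: %zu",
         accounted, actual, actual - accounted);
}

As the numbers in [1] and [2] suggest, the per-chunk overhead reported this
way stays small relative to the tuple data itself.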
I think the cause of the original issue in this thread is the implementation
of the 'Generational allocator'. Please consider the following scenario: the
parallel execution of different transactions produces very fragmented,
interleaved WAL records for those transactions. Later, when the walsender
serially decodes the WAL, chunks belonging to different transactions end up
stored on the same block in rb->tup_context. When a transaction ends, its
chunks on the block are only marked as free rather than actually released;
the block itself is released only once all chunks on it are free, which means
only once every transaction occupying the block has ended. As a result, the
chunks allocated by transactions that have already ended linger unreleased on
many blocks for a long time, and this issue occurs. I think this also
explains why parallel execution triggers the issue far more readily than
serial execution of transactions. Please also refer to the analysis details
of the code in [3]. (A minimal code sketch of this block-retention behavior
is included at the end of this mail.)

> My colleague Drew Callahan helped me investigate this issue, and we found
> out that in ReorderBufferIterTXNInit() we restore all changes of the top
> transaction as well as all subtransactions, but we don't respect the
> memory limit when restoring changes. It's normally not a problem, since
> each decoded change is typically not large and we restore up to 4096
> changes. However, if the transaction has many subtransactions, we restore
> 4096 changes for each subtransaction at once, temporarily consuming much
> memory. This behavior can explain the symptoms of the original issue, but
> one thing I'm unsure about is that Alex reported that replacing
> MemoryContextAlloc() in ReorderBufferGetChange() with malloc() resolved
> the issue. I think that can resolve/mitigate the former issue (the memory
> accounting issue), but IIUC it cannot for the latter problem. There might
> be other factors in this problem. Anyway, I've attached a simple
> reproducer that works as a TAP test for test_decoding.
>
> I think we need to discuss these two things separately and should deal
> with at least the latter problem.

Yes, I agree that this is another separate issue we need to discuss. (A
back-of-envelope calculation of the transient restore footprint is also
included at the end of this mail.)

[1] - https://www.postgresql.org/message-id/OS3PR01MB6275677D0DAEC1132FD0AC7F9EFE9%40OS3PR01MB6275.jpnprd01.prod.outlook.com
[2] - https://www.postgresql.org/message-id/CAMnUB3oSCM0BNK%3Do0_3PwHc1oJepgex_%3DmNAD5oY7rEjR8958w%40mail.gmail.com
[3] - https://www.postgresql.org/message-id/OS3PR01MB6275A7E5323601D59D18DB979EC29%40OS3PR01MB6275.jpnprd01.prod.outlook.com

Regards,
Wang wei
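To make the block-retention scenario above concrete, here is a minimal
sketch, assuming a PostgreSQL 15 backend environment (for example, called
from a test extension). The GenerationContextCreate() parameters mirror how
reorderbuffer.c sets up rb->tup_context in PG 15 (fixed 8MB blocks; note
that releases before PG 15 take a single block-size argument), while the
chunk size and count are arbitrary illustrative values.

#include "postgres.h"
#include "utils/memutils.h"

#define NCHUNKS 1000

static void
generation_retention_demo(void)
{
    MemoryContext gen;
    void       *txn1[NCHUNKS];
    void       *txn2[NCHUNKS];

    /* Same shape as rb->tup_context in PG 15: fixed 8MB generation blocks. */
    gen = GenerationContextCreate(CurrentMemoryContext, "demo",
                                  SLAB_LARGE_BLOCK_SIZE,
                                  SLAB_LARGE_BLOCK_SIZE,
                                  SLAB_LARGE_BLOCK_SIZE);

    /*
     * Interleave ~8MB of chunks belonging to two "transactions", the way
     * concurrently generated tuples interleave in rb->tup_context.
     */
    for (int i = 0; i < NCHUNKS; i++)
    {
        txn1[i] = MemoryContextAlloc(gen, 4096);
        txn2[i] = MemoryContextAlloc(gen, 4096);
    }

    /*
     * "Commit" txn1. Its chunks are only marked free: a generation block is
     * recycled only when *all* chunks on it are free, and every block here
     * still carries live txn2 chunks, so no block is released.
     */
    for (int i = 0; i < NCHUNKS; i++)
        pfree(txn1[i]);

    /* Still reports the full block footprint despite half the chunks freed. */
    MemoryContextStats(gen);

    MemoryContextDelete(gen);   /* releases everything */
}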
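As for the ReorderBufferIterTXNInit() behavior quoted above, here is a
self-contained back-of-envelope calculation of the transient footprint.
max_changes_in_memory = 4096 is the real constant in reorderbuffer.c; the
subtransaction count and average change size are assumptions chosen only to
show the scale.

#include <stdio.h>

int
main(void)
{
    const long long max_changes_in_memory = 4096;  /* constant in reorderbuffer.c */
    const long long nsubxacts = 512;               /* assumed: 512 serialized subtransactions */
    const long long avg_change_bytes = 1024;       /* assumed: average restored change size */

    /*
     * ReorderBufferIterTXNInit() restores one batch of up to
     * max_changes_in_memory spilled changes for the top-level transaction
     * and for each subtransaction before the merge starts consuming them,
     * and none of this is checked against logical_decoding_work_mem.
     */
    const long long transient = (nsubxacts + 1) * max_changes_in_memory * avg_change_bytes;

    /* Prints ~2.2 GB for these inputs, despite a 256MB work_mem setting. */
    printf("transient restore footprint: ~%.1f GB\n", (double) transient / 1e9);
    return 0;
}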