Discussion: VACUUM FULL deadlock with backend startup
Hi,

We encountered a deadlock involving VACUUM FULL (surprise surprise! :)) in PG 8.3.13 (and it is still not fixed in 9.0 AFAICS, although the window appears much smaller). The call spits out the following deadlock info:

ERROR: SQLSTATE 40P01: deadlock detected
DETAIL: Process 12479 waits for AccessExclusiveLock on relation 2663 of database 16384; blocked by process 14827. Process 14827 waits for AccessShareLock on relation 1259 of database 16384; blocked by process 12479.
LOCATION: DeadLockReport, deadlock.c:918

It looked familiar, so I dug up the archives and found that Tom had committed a fix for a similar deadlock via git commit 715120e7. However, this current deadlock involves an index with OID 2663, which is ClassNameNspIndexId. Clearly this is another case of locking an index directly without first taking a lock on its parent catalog. Further sleuthing revealed that the culprit is InitCatCachePhase2, which calls index_open directly during process startup.

Reproducing this is easy once you know the culprit (and excruciatingly difficult if you do not know the exact race window). I added a sleep inside InitCatCachePhase2, just before the call to index_open. Then I invoked "VACUUM FULL pg_class" from another session, halting it in gdb just before it takes the exclusive lock via try_relation_open. While a new PG backend sleeps inside InitCatCachePhase2, we let the VACUUM FULL process take the exclusive lock on pg_class and halt it again just after that. When the starting backend continues after its sleep, it takes the share lock on ClassNameNspIndexId, but then hangs trying to take a share lock on pg_class in RelationReloadIndexInfo. Now simply continue the VACUUM FULL process in gdb; it tries to take an exclusive lock on the index in order to vacuum it, which reproduces the deadlock in all its glory.

The fix is similar to Tom's earlier commit: we now lock the parent catalog before calling index_open. I tested the fix against 8.3.13; a patch against git HEAD is attached with this mail. I guess we will backpatch this? Tom's last commit was backpatched till 8.2, I think.

Regards,
Nikhils
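For illustration, here is a minimal C sketch of the locking-order idea behind the fix. The helper name and surrounding structure are assumptions made for this example; the actual patch applies the idea inside InitCatCachePhase2 itself. The point is that every code path should acquire locks in the same order VACUUM FULL does (catalog first, then index), so the lock cycle cannot form:

#include "postgres.h"
#include "access/genam.h"       /* index_open / index_close */
#include "storage/lmgr.h"       /* LockRelationOid and lock modes */
#include "utils/rel.h"          /* Relation */

/*
 * Hypothetical helper showing the fix's locking order; the real change
 * lives in InitCatCachePhase2 rather than in a separate function.
 */
static void
open_catalog_index_safely(Oid catalogOid, Oid indexOid)
{
    Relation    idesc;

    /*
     * Lock the parent catalog first.  VACUUM FULL locks the catalog
     * before touching its indexes, so matching that order prevents
     * the deadlock described above.
     */
    LockRelationOid(catalogOid, AccessShareLock);

    /* Now it is safe to open and lock the index itself. */
    idesc = index_open(indexOid, AccessShareLock);

    /* ... build catalog cache support data from the index ... */

    index_close(idesc, AccessShareLock);
}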
On Fri, Mar 18, 2011 at 11:25 PM, Nikhil Sontakke <nikhil.sontakke@enterprisedb.com> wrote: > The fix is similar to the earlier commit by Tom. I tested this fix > against 8.3.13. We lock the parent catalog now before calling > index_open. Patch against git HEAD attached with this mail. I guess we > will backpatch this? Tom's last commit was backpatched till 8.2 I > think. Is it really worth slowing down the startup sequence for every connection to avoid deadlocking against a very rare maintenance operation? -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Hi, >> The fix is similar to the earlier commit by Tom. I tested this fix >> against 8.3.13. We lock the parent catalog now before calling >> index_open. Patch against git HEAD attached with this mail. I guess we >> will backpatch this? Tom's last commit was backpatched till 8.2 I >> think. > Is it really worth slowing down the startup sequence for every > connection to avoid deadlocking against a very rare maintenance > operation? Not really a performance issue AFAICS. If the relcache init file exists, then phase 2 of the catalog cache initialization, which eventually calls the above code path, is avoided. Regards, Nikhils
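A minimal sketch of the startup gating being described, with simplified, assumed names (load_relcache_init_file_if_present is hypothetical; the real gating is spread across the relcache and catcache startup code):

#include "postgres.h"
#include "utils/syscache.h"     /* InitCatalogCachePhase2 */

/* Assumed helper standing in for the real relcache init-file load. */
extern bool load_relcache_init_file_if_present(void);

void
startup_cache_init(void)
{
    /*
     * When the relcache init file loads successfully, the phase-2
     * catcache pass that opens catalog indexes is skipped, so the
     * extra parent-catalog lock costs nothing on the common path.
     */
    if (!load_relcache_init_file_if_present())
        InitCatalogCachePhase2();
}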
On Sat, Mar 19, 2011 at 10:46 AM, Nikhil Sontakke <nikhil.sontakke@enterprisedb.com> wrote: > Not really a performance issue AFAICS. If the relcache init file exists, > then phase 2 of the catalog cache initialization, which eventually calls > the above code path, is avoided. Oh, that doesn't sound so bad, then. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Robert Haas <robertmhaas@gmail.com> writes: > On Sat, Mar 19, 2011 at 10:46 AM, Nikhil Sontakke > <nikhil.sontakke@enterprisedb.com> wrote: >> Not really a performance issue AFAICS. If the relcache init file exists, >> then phase 2 of the catalog cache initialization, which eventually calls >> the above code path, is avoided. > Oh, that doesn't sound so bad, then. I want to take a closer look at this one because I thought I'd covered all those issues in the last go-round. But right offhand Nikhil's analysis seems sane. regards, tom lane
On Mar 20, 2011, at 2:24 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote: > Robert Haas <robertmhaas@gmail.com> writes: >> On Sat, Mar 19, 2011 at 10:46 AM, Nikhil Sontakke >> <nikhil.sontakke@enterprisedb.com> wrote: >>> Not really a performance issue AFAICS. If the relcache init file exists, >>> then phase 2 of the catalog cache initialization, which eventually calls >>> the above code path, is avoided. > >> Oh, that doesn't sound so bad, then. > > I want to take a closer look at this one because I thought I'd covered > all those issues in the last go-round. Go for it. ...Robert
Nikhil Sontakke <nikhil.sontakke@enterprisedb.com> writes: > It looked familiar, so I dug up the archives and found that Tom had > committed a fix for a similar deadlock via git commit 715120e7. > However, this current deadlock involves an index with OID 2663, which > is ClassNameNspIndexId. Clearly this is another case of locking an > index directly without first taking a lock on its parent catalog. > Further sleuthing revealed that the culprit is InitCatCachePhase2, > which calls index_open directly during process startup. Patch applied, thanks! I did a bit of extra digging around the cache modules and could not find any other instances of the same problem, though I did find some places that seemed worthy of a comment about how they avoid it. regards, tom lane
> Patch applied, thanks! Thanks Tom! Regards, Nikhils