Discussion: heap_update() VM retry could break HOT?
Hi,

heap_update() retries pinning the visibility map page, as explained in the
following comments:

	/*
	 * Before locking the buffer, pin the visibility map page if it appears to
	 * be necessary.  Since we haven't got the lock yet, someone else might be
	 * in the middle of changing this, so we'll need to recheck after we have
	 * the lock.
	 */
	if (PageIsAllVisible(page))
		visibilitymap_pin(relation, block, &vmbuffer);
...
	/*
	 * If we didn't pin the visibility map page and the page has become all
	 * visible while we were busy locking the buffer, or during some
	 * subsequent window during which we had it unlocked, we'll have to unlock
	 * and re-lock, to avoid holding the buffer lock across an I/O.  That's a
	 * bit unfortunate, especially since we'll now have to recheck whether the
	 * tuple has been locked or updated under us, but hopefully it won't
	 * happen very often.
	 */
	if (vmbuffer == InvalidBuffer && PageIsAllVisible(page))
	{
		LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
		visibilitymap_pin(relation, block, &vmbuffer);
		LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
		goto l2;
	}

Unfortunately, the l2 target is after the following:

	/*
	 * If we're not updating any "key" column, we can grab a weaker lock type.
	 * This allows for more concurrency when we are running simultaneously
	 * with foreign key checks.
	 *
	 * Note that if a column gets detoasted while executing the update, but
	 * the value ends up being the same, this test will fail and we will use
	 * the stronger lock.  This is acceptable; the important case to optimize
	 * is updates that don't manipulate key columns, not those that
	 * serendipitously arrive at the same key values.
	 */
	HeapSatisfiesHOTandKeyUpdate(relation, hot_attrs, key_attrs, id_attrs,
								 &satisfies_hot, &satisfies_key,
								 &satisfies_id, &oldtup, newtup);
	if (satisfies_key)
	{
		*lockmode = LockTupleNoKeyExclusive;
		mxact_status = MultiXactStatusNoKeyUpdate;
		key_intact = true;

		/*
		 * If this is the first possibly-multixact-able operation in the
		 * current transaction, set my per-backend OldestMemberMXactId
		 * setting.  We can be certain that the transaction will never become
		 * a member of any older MultiXactIds than that.  (We have to do this
		 * even if we end up just using our own TransactionId below, since
		 * some other backend could incorporate our XID into a MultiXact
		 * immediately afterwards.)
		 */
		MultiXactIdSetOldestMember();
	}
	else
	{
		*lockmode = LockTupleExclusive;
		mxact_status = MultiXactStatusUpdate;
		key_intact = false;
	}

As far as I can see, that could mean that we perform HOT updates when not
permitted, because the tuple has been replaced since, including the pkey.
Similarly, the wrong tuple lock mode could end up being used.

Am I missing something?

- Andres
On Mon, Jul 18, 2016 at 12:47 PM, Andres Freund <andres@anarazel.de> wrote:
> As far as I can see, that could mean that we perform HOT updates when not
> permitted, because the tuple has been replaced since, including the pkey.
> Similarly, the wrong tuple lock mode could end up being used.
>
> Am I missing something?
If the to-be-updated tuple gets updated while we are retrying the vm pin, heap_update() will return HeapTupleUpdated, and the caller must then wait for the updating transaction to finish and retry the update with the new version (or fail, depending on the isolation level). Since HeapTupleSatisfiesUpdate() is called after the l2 label, the logic seems fine to me.
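To illustrate the point, here is a toy C simulation of that control flow (all names and values are hypothetical, not actual PostgreSQL code): the lock-mode decision is made once before l2, but every pass through l2 re-runs the HeapTupleSatisfiesUpdate()-style check, so a tuple replaced during the vm-pin retry is reported back to the caller rather than silently updated under the stale decision.

	#include <assert.h>
	#include <stdbool.h>
	#include <stdio.h>

	/* Hypothetical, simplified stand-ins for the real types. */
	typedef enum { TUPLE_MAY_BE_UPDATED, TUPLE_UPDATED } UpdateStatus;

	typedef struct
	{
		int xmax;		/* concurrent updater's xid, 0 if none */
	} Tuple;

	/* stand-in for HeapTupleSatisfiesUpdate(): a concurrent updater => fail */
	static UpdateStatus
	satisfies_update(const Tuple *tup)
	{
		return tup->xmax == 0 ? TUPLE_MAY_BE_UPDATED : TUPLE_UPDATED;
	}

	static UpdateStatus
	simulated_heap_update(Tuple *tup, bool *retried)
	{
		/* ... lock mode / HOT decisions would be made here, before l2 ... */
	l2:
		if (!*retried)
		{
			/*
			 * Simulate dropping the buffer lock to pin the vm page; while it
			 * is unlocked, a concurrent transaction (xid 42, made up) updates
			 * the tuple, then we retry via goto l2.
			 */
			tup->xmax = 42;
			*retried = true;
			goto l2;
		}
		/* recheck after reacquiring the lock, as heap_update() does */
		return satisfies_update(tup);
	}

	int
	main(void)
	{
		Tuple	tup = {0};
		bool	retried = false;

		/* The concurrent update is detected on the second pass through l2. */
		assert(simulated_heap_update(&tup, &retried) == TUPLE_UPDATED);
		puts("caller sees HeapTupleUpdated and must retry or fail");
		return 0;
	}

The sketch deliberately omits the stale satisfies_key/lockmode values: the point is that they are never acted upon, because the recheck fires first and the caller restarts against the new tuple version.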
Thanks,
Pavan
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services