Re: Avoid orphaned objects dependencies, take 3

From: Robert Haas
Subject: Re: Avoid orphaned objects dependencies, take 3
Date:
Msg-id: CA+TgmoYAv09Y9Y8-yOMzTCijzZeVqFrLBx6Y4+VPh0CMg476Uw@mail.gmail.com
In response to: Re: Avoid orphaned objects dependencies, take 3  (Bertrand Drouvot <bertranddrouvot.pg@gmail.com>)
Responses: Re: Avoid orphaned objects dependencies, take 3
List: pgsql-hackers
On Wed, May 22, 2024 at 6:21 AM Bertrand Drouvot
<bertranddrouvot.pg@gmail.com> wrote:
> I started initially with [1] but it was focusing on function-schema only.

Yeah, that's what I thought we would want to do. And then just extend
that to the other cases.

> Then I proposed [2] making use of a dirty snapshot when recording the dependency.
> But this approach appeared to be "scary" and it was still failing to close
> some race conditions.

The current patch still seems to be using dirty snapshots for some
reason, which struck me as a bit odd. My intuition is that if we're
relying on dirty snapshots to solve problems, we likely haven't solved
the problems correctly, which seems consistent with your statement
about "failing to close some race conditions". But I don't think I
understand the situation well enough to be sure just yet.

> Then, Tom proposed another approach in [2] which is that "creation DDL will have
> to take a lock on each referenced object that'd conflict with a lock taken by DROP".
> This is the one the current patch is trying to implement.

It's a clever idea, but I'm not sure that I like it.
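For readers joining the thread late, the race being discussed can be sketched as two concurrent sessions (object names here are made up for illustration; this is a schematic, not a literal reproduction of the failing test case):

```sql
-- Session 1: create a function in schema s. This records a pg_depend
-- entry saying the new function depends on the schema.
BEGIN;
CREATE FUNCTION s.f() RETURNS int LANGUAGE sql AS 'SELECT 1';

-- Session 2, concurrently: drop the schema. Without any interlock,
-- this can succeed because it does not see the uncommitted function.
DROP SCHEMA s;

-- Session 1 commits: pg_depend now references a schema that no
-- longer exists -- an "orphaned" dependency.
COMMIT;
```

Under the approach attributed to Tom above, the CREATE in session 1 would take a lock on schema s that conflicts with the lock DROP SCHEMA takes, so session 2 would block until session 1 commits instead of slipping through.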

> I think there are two cases here:
>
> First case: the "conflicting" lock mode is one of our own locks; then LockCheckConflicts()
> would report this as a NON conflict.
>
> Second case: the "conflicting" lock mode is NOT one of our own locks; then LockCheckConflicts()
> would report a conflict. But I have the feeling that the existing code would
> already block those sessions.
>
> Was your concern about the first case, the second case, or both?

The second case, I guess. It's bad to take a weaker lock first and
then a stronger lock on the same object later, because it can lead to
deadlocks that would have been avoided if the stronger lock had been
taken at the outset. Here, it seems like things would happen in the
other order: if we took two locks, we'd probably take the stronger
lock in the higher-level code and then the weaker lock in the
dependency code. That shouldn't break anything; it's just a bit
inefficient. My concern was really more about the maintainability of
the code. I fear that if we add code that takes heavyweight locks in
surprising places, we might later find the behavior difficult to
reason about.
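The weak-then-strong ordering described above is the dangerous one because two sessions can each acquire the weak lock and then deadlock trying to upgrade. Schematically, with a hypothetical table t (the same hazard applies to any lockable object):

```sql
-- Session 1:
BEGIN;
LOCK TABLE t IN ACCESS SHARE MODE;      -- weak lock acquired

-- Session 2, concurrently:
BEGIN;
LOCK TABLE t IN ACCESS SHARE MODE;      -- also succeeds; the modes don't conflict

-- Session 1 now upgrades:
LOCK TABLE t IN ACCESS EXCLUSIVE MODE;  -- blocks on session 2's ACCESS SHARE

-- Session 2 upgrades too:
LOCK TABLE t IN ACCESS EXCLUSIVE MODE;  -- blocks on session 1's ACCESS SHARE
-- Each session waits for the other: deadlock, which PostgreSQL's
-- deadlock detector resolves by aborting one transaction.
```

Taking the stronger lock first, as the paragraph above suggests the patch would effectively do, avoids this pattern at the cost of some redundant locking.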

Tom, what is your thought about that concern?

--
Robert Haas
EDB: http://www.enterprisedb.com


