Discussion: Abort state on duplicated PKey in transactions


Abort state on duplicated PKey in transactions

From: Haroldo Stenger
Date:
Hi dear people,

I'm not sure if this is for general or for hackers, but let's start here
;-)

We are building PostgreSQL support for a RAD tool (GeneXus).

A problem that arose is that, within a transaction, if one inserts into a
table and the PK constraint is violated, the transaction aborts and
leaves itself in the aborted state. One has to END the transaction and
start a new one. This is a problem in large transactions, where lots of
work has already been done against the database and an insertion may fail
simply because the PK already exists; the whole transaction would have to
be redone because of that one failed insertion. A solution could be to
query for the existence of the PK just before the insertion. But there is
a small window between the test and the insertion in which another
transaction could insert the same key and invalidate the existence test.
Any clever ideas on how to solve this? Using triggers maybe? Other
solutions?
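
To make the race concrete, here is a rough sketch of the check-then-insert
approach (the table and values are invented just for illustration):

    -- Illustrative table only:
    -- CREATE TABLE customers (id integer PRIMARY KEY, name text);
    BEGIN;
    -- ... lots of other work in the same transaction ...
    SELECT 1 FROM customers WHERE id = 42;
    -- If no row came back we go ahead and insert, but another transaction
    -- can commit an id = 42 row in the gap between the SELECT and the
    -- INSERT, so the INSERT can still fail with a duplicate-key error and
    -- put the whole transaction into the aborted state.
    INSERT INTO customers (id, name) VALUES (42, 'Alice');
    COMMIT;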

I'm aware that savepoints and nested transactions will be implemented in
future versions, but how to solve the problem before that starts
working?

Thanks

Regards,
Haroldo.

Re: Abort state on duplicated PKey in transactions

From: Martijn van Oosterhout
Date:
On Fri, Sep 07, 2001 at 04:29:45PM -0500, Haroldo Stenger wrote:
> I'm aware that savepoints and nested transactions will be implemented in
> future versions, but how to solve the problem before that starts
> working?

The usual way is to generate your primary keys from a sequence. Then you
never get duplicate values, so the inserts never fail.
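
A rough sketch of what I mean (the table name is invented):

    CREATE SEQUENCE customers_id_seq;

    CREATE TABLE customers (
        id   integer PRIMARY KEY DEFAULT nextval('customers_id_seq'),
        name text
    );

    -- Every insert draws the next value from the sequence, so two
    -- concurrent transactions can never pick the same key.
    INSERT INTO customers (name) VALUES ('Alice');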

This may not apply in your situation...
--
Martijn van Oosterhout <kleptog@svana.org>
http://svana.org/kleptog/
> Magnetism, electricity and motion are like a three-for-two special offer:
> if you have two of them, the third one comes free.

Re: Abort state on duplicated PKey in transactions

From: Haroldo Stenger
Date:
Martijn van Oosterhout wrote:
> The usual way is to generate your primary keys from a sequence. Then you
> never get duplicate values, so the inserts never fail.
>
> This may not apply in your situation...


Both are true, thanks. We need a layer that doesn't complain when a
failing insert is issued. In DB apps, many times a procedure, tries to
insert a record, and if it already exists, then it updates the record.
But since that pair of commands, may be late in a bigger transaction, a
failing insert shoudn't abort the transaction. So, I want to find a way
of avoiding a failing insert. Any other ideas?
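
To be concrete, the pattern I mean is roughly the following (table and
values are invented); one way to make the duplicate-key error less likely,
without savepoints, is to try the UPDATE first and only INSERT when it
touched no rows:

    -- Hypothetical table: customers (id integer PRIMARY KEY, name text)
    UPDATE customers SET name = 'Alice' WHERE id = 42;
    -- If the UPDATE reported 0 rows, the key is not there yet, so insert:
    INSERT INTO customers (id, name) VALUES (42, 'Alice');
    -- A concurrent transaction can still insert id = 42 between the two
    -- statements, so this narrows the window but does not close it.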

This is important, since in a couple of weeks we will be announcing
GeneXus support for PostgreSQL, and a lot of excitement is being
generated around this. The announcement will be made at a commercial,
entrepreneurial international event, which is a breath of fresh air for
us PostgreSQL enthusiasts.

Regards,
Haroldo.