Discussion: Transaction settings: nowait


Transaction settings: nowait

From: durumdara
Date:
Hi!

In Firebird, transactions have these settings:

SET TRANSACTION
   [NAME hostvar]
   [READ WRITE | READ ONLY]
   [ [ISOLATION LEVEL] { SNAPSHOT [TABLE STABILITY]
                         | READ COMMITTED [[NO] RECORD_VERSION] } ]
   [WAIT | NO WAIT]
   [LOCK TIMEOUT seconds]
And this is the important thing:

[WAIT | NO WAIT]  [LOCK TIMEOUT seconds]
If wait and a timeout are set, Firebird waits for the locked resource
(record) for X seconds before it shows a deadlock error.

But when you set no wait, the deadlock error is shown immediately by the
server.

I want to ask about this because I want to avoid full deadlocks.

For example: if I forget to commit, or to roll back on an exception, then all
the resources I used (updated) stay locked.

If I use nowait, the clients immediately get the error message, and they
are not sitting deafly and blindly in front of the screen, waiting for
what will happen.

So: does PGSQL have the same mechanism as nowait?

Thanks for your help:
    dd

Re: Transaction settings: nowait

From: durumdara
Date:
Hi!

On 2009.05.06. 11:54, Richard Huxton wrote:
> durumdara wrote:
>>
>> So: does PGSQL have the same mechanism as nowait?
>
> When you take a lock:
> http://www.postgresql.org/docs/8.3/interactive/sql-lock.html
> http://www.postgresql.org/docs/8.3/interactive/sql-select.html#SQL-FOR-UPDATE-SHARE
>
As I see it, these things don't help me when two transactions are
trying to use the same resource...

Yesterday I tried to test my PG website. I removed a "rollback"
instruction from the code.
The transaction started, but the insert SQL was wrong (I missed some
required fields).
So this transaction hadn't changed anything in the table, but it
remained in the "active" state because of my mistake.
Then pgAdmin froze on a field readjustment in this table.
This was a deadlock...

I don't want to see any deadlocks... I don't want to lock the tables.


>
> There is also the "statement_timeout" config setting which will
> terminate any single statement that takes longer than a set time.
> http://www.postgresql.org/docs/8.3/interactive/runtime-config-client.html#RUNTIME-CONFIG-CLIENT-STATEMENT
>
>
>
As I read this config param, it terminates statements only.
I think this means that if I have a "wrong join", or I run a very
slow query, the server can cancel and terminate it.
But if I have a transaction that remains in an open state, it could
be a source of errors (deadlocks).

Thanks for your help:
    dd


Re: Transaction settings: nowait

From: "Albe Laurenz"
Date:
durumdara wrote:
> In FireBird the transactions have these settings:
> [...]
>    [WAIT | NO WAIT]
>    [LOCK TIMEOUT seconds]
> [...]
> If I use nowait, the clients immediately get the error message, and they
> are not sitting deafly and blindly in front of the screen, waiting for
> what will happen.
>
> So: does PGSQL have the same mechanism as nowait?

I assume that you mean "locks" when you write "deadlocks".

There is no exact equivalent, but you can set the variable
statement_timeout to something other than 0 in your session
(or in your transaction).
Every statement that runs longer than that many milliseconds will then
be terminated.
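
For example (the unit suffix requires 8.2 or later; a bare number is
interpreted as milliseconds):

    -- abort any statement in this session that runs longer than 5 seconds
    SET statement_timeout = '5s';

    -- or limit only the current transaction
    BEGIN;
    SET LOCAL statement_timeout = '5s';
    -- ... statements ...
    COMMIT;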

Yours,
Laurenz Albe

Re: Transaction settings: nowait

From: Merlin Moncure
Date:
2009/5/6 Albe Laurenz <laurenz.albe@wien.gv.at>:
> durumdara wrote:
>> In FireBird the transactions have these settings:
>> [...]
>>    [WAIT | NO WAIT]
>>    [LOCK TIMEOUT seconds]
>> [...]
>> So: does PGSQL have the same mechanism as nowait?

'select ... for update' has a nowait clause, as does 'lock'.  There are
also advisory locks, using pg_try_advisory_lock.
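
A rough sketch (the table and key are invented for illustration):

    BEGIN;
    -- fail immediately instead of waiting if the row is already locked
    SELECT * FROM accounts WHERE id = 42 FOR UPDATE NOWAIT;

    -- the same idea at table level
    LOCK TABLE accounts IN EXCLUSIVE MODE NOWAIT;

    -- advisory lock: returns false instead of raising an error
    SELECT pg_try_advisory_lock(42);
    COMMIT;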

Another tactic is to send queries asynchronously and fire a cancel
based on client-driven logic.

merlin

Re: Transaction settings: nowait

From: Craig Ringer
Date:
durumdara wrote:

> If wait and a timeout are set, Firebird waits for the locked resource
> (record) for X seconds before it shows a deadlock error.
>
> But when you set no wait, the deadlock error is shown immediately by the
> server.

Waiting on a lock doesn't mean there's a deadlock. A deadlock only
occurs if two transactions are each waiting on resources the other
transaction holds.

PostgreSQL *DOES* let you control how long it waits before it checks to
see if transactions waiting on locks might be deadlocked with another
waiting transaction. See:
http://www.postgresql.org/docs/current/static/runtime-config-locks.html

However, it sounds like what you want is the ability to tell PostgreSQL
that you don't want your queries to ever wait on a lock at all, even if
it's just that you're waiting for another transaction to finish work.

If that's what you mean: I'm not aware of any way in PostgreSQL to set
lock wait timeouts at a transaction-wide level, or to set Pg to report
an error if it would have to wait for a lock.

Others seem to have wanted it enough to write patches for it, but I
don't think they got merged. See, for example:
http://archives.postgresql.org/pgsql-hackers/2004-06/msg00935.php

(Please read the discussion on that thread to get an idea of some of the
problems with the approach you appear to be requesting - if I'm reading
your post correctly.)

Anyway, it *IS* possible, as others have noted, to use
    SELECT ... FOR { UPDATE | SHARE } NOWAIT
and/or
    LOCK TABLE ... NOWAIT
before issuing work. However, this generally means you need to know what
tuples your queries will touch, including tuples accessed by triggers
and the like, before you issue your query.

It's also really clumsy, since you often WANT queries to wait on locks
at least briefly, otherwise you'll get intermittent errors from queries
that're operating normally just because another query that happened to
run concurrently briefly locked something the first query wanted.

I must say that I personally can't really see the use of a
transaction-wide lock wait timeout. Sure, most applications have to be
prepared to re-issue queries anyway due to transient errors of one sort
or another, and it's not unreasonable to want to be able to detect the
difference between a query blocking on a lock, and a query that's just
taking a long time to do work ... but I don't think that aborting the
transaction/statement with an error is the right way to do that.

First: In PostgreSQL an error from a transaction results in rollback of
the whole transaction. This means that if you wanted to re-try after a
lock wait timeout, you'd need to re-issue the whole transaction, or
you'd need to use savepoints before each statement to give you
statement-level retry. Both are clumsy and inefficient. It could also
easily result in situations where the same group of transactions keep on
re-trying and fighting over the same locks over and over; you'd waste a
lot of CPU and I/O repeating work pointlessly, when if you just let the
transaction wait on the lock everything would go just fine.
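
A sketch of that statement-level retry pattern (table and column names
invented):

    BEGIN;
    SAVEPOINT before_update;
    UPDATE accounts SET balance = balance - 10 WHERE id = 42;
    -- on a lock-timeout error the client would issue
    --     ROLLBACK TO SAVEPOINT before_update;
    -- and then retry the UPDATE
    RELEASE SAVEPOINT before_update;
    COMMIT;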

In other words, it'd be ugly. I'm also not really sure how much
difference it makes in practice WHY a statement is taking a long time,
only that it is. Who cares whether you're waiting on a lock held by
another transaction, or whether another transaction is using so much
memory and disk I/O bandwidth that your query is taking forever?

If you really did care about lock waits specifically, it might almost be
nicer to be able to have the server send asynchronous NOTICE-level
messages informing the application that the query is blocked on a lock -
eg "NOTICE: lock_wait_timeout exceeded waiting for SHARE lock on oid
44123". That way the client app would know what was happening, but the
query wouldn't get interrupted unless the app intentionally issued a
query cancel request.

Of course, that requires more sophisticated app programming and database
interface driver use than just issuing an error on lock wait (you have
to process messages, and you have to be prepared to do a query cancel
request via another connection, probably from another thread), but it's
a lot cleaner and nicer.

Personally in these situations I just make sure my apps are
multi-threaded with all database work done in a worker thread and
controlled by posting events back and forth. That way my UI can keep the
user informed and keep on updating while the query runs. The UI can also
make decisions about what to do with excessively long-running queries.
In the app I'm presently working on, for example, I pop up an infinite
progress indicator (busy-bar) after a short delay, and I permit the user
to cancel a long-running transaction if they don't want to keep waiting
(after all, I have to be able to handle transaction aborts for other
reasons anyway, so why not?).

In theory, if I was worried about excessive lock wait times, I could
even use the `pg_catalog.pg_locks' relation to figure out whether it was
locking issues that were causing the delay. I don't care why queries take
a while, though, only that they do.
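
Something along these lines, for instance, lists lock requests that are
stuck waiting:

    -- lock requests that have not been granted yet
    SELECT locktype, relation::regclass, mode, pid
    FROM pg_catalog.pg_locks
    WHERE NOT granted;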

So ... personally, I don't think a lock timeout is a particularly good
or useful idea. Basically all the use cases are handled (IMO more
cleanly) by statement_timeout, SELECT ... FOR {SHARE|UPDATE} NOWAIT,
LOCK TABLE ... NOWAIT, and pg_cancel_backend(...).

> For example: if I forget to commit, or to roll back on an exception, then
> all the resources I used (updated) stay locked.

Yes - that's an application bug.

--
Craig Ringer

Re: Transaction settings: nowait

From: Yasser Idris
Date:
Dude, all that you wrote makes sense. You're only missing one thing;
consider the following scenario that you already replied to:

>> For example: if I forget to commit, or to roll back on an exception, then
>> all the resources I used (updated) stay locked.

>Yes - that's an application bug.

Even if it's an application bug, resources shouldn't be locked indefinitely.
What if there are other clients that access this DB? And beyond that, what
if the same problem arises because of a network failure during a transaction
that issued some locks? What's the solution then? Should the resources be
locked forever until someone finds out and kills the process manually?
Although the odds of this scenario might look unlikely, the
results are extremely inconvenient. That's why those features (lock timeout,
transaction timeout) are there in every other DBMS. It's not about not
wanting your client to wait long before aborting a statement; it's more
about telling your stupid server to abort a transaction or release a lock
after a non-graceful abort (without commit, rollback, or release lock) from
the client side, whether it was the programmer's fault or any other reason.





Re: Transaction settings: nowait

From: Scott Marlowe
Date:
On Thu, May 28, 2009 at 9:52 PM, Yasser Idris <luckyasser@gmail.com> wrote:
>
> Dude, all that you wrote makes sense. You're only missing one thing;
> consider the following scenario that you already replied to:
>
>>> For example: if I forget to commit, or to roll back on an exception, then
>>> all the resources I used (updated) stay locked.
>
>>Yes - that's an application bug.
>
> Even if it's an application bug, resources shouldn't be locked indefinitely.

But some people run very long running transactions on purpose.  Should
we break their applications to make you happy?

> What if there are other clients that access this DB? And beyond that, what
> if the same problem arises because of a network failure during a transaction
> that issued some locks? What's the solution then? Should the resources be
> locked forever until someone finds out and kills the process manually?

No, if the network connection is lost, then the TCP keepalive timeout
will kick in and the process will eventually get killed off.

> Although the odds of this scenario might look unlikely, the
> results are extremely inconvenient. That's why those features (lock timeout,
> transaction timeout) are there in every other DBMS. It's not about not
> wanting your client to wait long before aborting a statement; it's more
> about telling your stupid server to abort a transaction or release a lock
> after a non-graceful abort (without commit, rollback, or release lock) from
> the client side, whether it was the programmer's fault or any other reason.

I agree that those would be useful settings / features to have.  But
there are other ways of handling this situation.  One of the most
common is to use a firewall between your db and the other servers that
times out idle connections, which will result in pg seeing the lost
connection after the tcp keepalive timeout and killing the session.
It's not perfect, but it does work.

Re: Transaction settings: nowait

From: Scott Marlowe
Date:
On Fri, May 29, 2009 at 10:18 AM, Y W <luckyasser@gmail.com> wrote:
> Are you referring to connection settings? Because if you do, the closest
> things I can find for the tcp_keepalive_timeout you're mentioning are
> tcp_keepalives_idle, tcp_keepalives_interval, and tcp_keepalives_count.
> And apart from the fact that they're ignored for Unix-socket connections,
> these are for terminating the client connection from the client side and
> not from the server side.

Yes those are what I'm referring to, and they apply to the server side
as well.  They only terminate DEAD connections.  I.e. the client dies,
and x seconds later the connection is closed on the server and the
backend terminated.
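
The server-side knobs live in postgresql.conf (0 means "use the OS
default"; the values here are only illustrative):

    tcp_keepalives_idle = 60      # seconds of idleness before the first probe
    tcp_keepalives_interval = 10  # seconds between probes
    tcp_keepalives_count = 5      # lost probes before the connection is dropped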

> Otherwise I don't know what you are referring to. I
> want to be able to terminate idle connections from the server side; if
> there was a TCP failure with a client which resulted in an unfinished
> transaction and a locked resource, how can you do that?

tcp keepalive does that.

Re: Transaction settings: nowait

From: Scott Marlowe
Date:
On Fri, May 29, 2009 at 10:55 AM, Y W <luckyasser@gmail.com> wrote:
> But according to the documentation, aren't they ignored if Postgres is
> hosted on a Unix/Linux system?

They work fine on Unix, as long as the user has the option to set
their own timeouts.  If not, then set them at the OS level (for Linux
see /etc/sysctl.conf).
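
On Linux that means something like the following in /etc/sysctl.conf
(illustrative values; apply with `sysctl -p`):

    net.ipv4.tcp_keepalive_time = 60
    net.ipv4.tcp_keepalive_intvl = 10
    net.ipv4.tcp_keepalive_probes = 5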

> And are you pretty sure that connections are terminated on the server
> side, which will result in releasing any locked resources from that
> failed transaction?

Yep, tested and confirmed behaviour on RHEL4 and RHEL5 in production.
I set the system-level tcp_keepalive settings in those two instances.

Re: Transaction settings: nowait

From: Scott Marlowe
Date:
Correct.  Also, if the client is still "alive" they have no effect either.

On Fri, May 29, 2009 at 11:01 AM, Y W <luckyasser@gmail.com> wrote:
> I'm sorry, that was my bad; they're only ignored when using Unix-domain
> sockets to connect instead of TCP/IP.



--
When fascism comes to America, it will be intolerance sold as diversity.