Thread: deadlocks in postgresql 7.2.1

deadlocks in postgresql 7.2.1

From: Philipp Reisner
Date:
Hi,

We use postgresql 7.2.1 (actually the Debian binary package version
7.2.1-2woody2).
Our database load has more queries (SELECTs) than INSERTs/DELETEs, but we
have a severe problem with the write operations.

Once in a while (about 3 times a day) one or more INSERTs/DELETEs simply
go into the "waiting" state and block the whole database. The only way
out is to terminate the client connection (i.e. to abort the blocked
INSERT/DELETE query).

Further investigation with ps -e -o wchan... showed that the backend
process was simply sleeping in "semop".

Output of ps:

   762 ?        S      0:00 /usr/lib/postgresql/bin/postmaster
   764 ?        S      0:00 postgres: stats buffer process
   765 ?        S      0:00 postgres: stats collector process
24872 ?        S      0:00 postgres: sd sd 10.2.2.6 idle in transaction
24873 ?        R     68:01 postgres: sd sd 10.2.2.6 SELECT
24932 ?        S      3:09 postgres: sd sd 10.2.2.6 idle in transaction
24943 ?        R      3:02 postgres: sd sd 10.2.2.6 SELECT
25004 ?        S      0:01 postgres: sd sd 10.2.1.5 idle in transaction
21226 ?        S      0:00 postgres: sd sd 10.2.1.5 idle in transaction
21228 ?        S      0:00 postgres: sd sd 10.2.1.5 idle in transaction
21229 ?        S      0:00 postgres: sd sd 10.2.1.5 idle in transaction
21230 ?        S      0:00 postgres: sd sd 10.2.1.5 idle in transaction
21231 ?        S      0:01 postgres: sd sd 10.2.1.5 idle in transaction
21232 ?        S      0:00 postgres: sd sd 10.2.1.5 idle in transaction
21227 ?        S      0:01 postgres: sd sd 10.2.1.5 INSERT waiting
21391 pts/1    S      0:00 grep postg

   762 ?        S      0:00 /usr/lib/postgresql/bin/postmaster
   764 ?        S      0:00 postgres: stats buffer process
   765 ?        S      0:00 postgres: stats collector process
24872 ?        S      0:00 postgres: sd sd 10.2.2.6 idle in transaction
24873 ?        S     68:02 postgres: sd sd 10.2.2.6 idle in transaction
24932 ?        S      3:09 postgres: sd sd 10.2.2.6 idle in transaction
24943 ?        S      3:04 postgres: sd sd 10.2.2.6 idle in transaction
25004 ?        S      0:01 postgres: sd sd 10.2.1.5 idle in transaction
21226 ?        S      0:00 postgres: sd sd 10.2.1.5 idle in transaction
21228 ?        S      0:00 postgres: sd sd 10.2.1.5 idle in transaction
21229 ?        S      0:00 postgres: sd sd 10.2.1.5 idle in transaction
21230 ?        S      0:00 postgres: sd sd 10.2.1.5 idle in transaction
21231 ?        S      0:01 postgres: sd sd 10.2.1.5 idle in transaction
21232 ?        S      0:00 postgres: sd sd 10.2.1.5 idle in transaction
21227 ?        S      0:01 postgres: sd sd 10.2.1.5 INSERT waiting

   762 ?        S      0:00 /usr/lib/postgresql/bin/postmaster
   764 ?        S      0:00 postgres: stats buffer process
   765 ?        S      0:00 postgres: stats collector process
10291 ?        S      0:05 postgres: sd sd 10.2.1.5 idle in transaction
  4303 ?        S      0:00 postgres: sd sd 10.2.2.6 idle in transaction
  4304 ?        S     40:43 postgres: sd sd 10.2.2.6 idle in transaction
  4370 ?        S      2:54 postgres: sd sd 10.2.2.6 idle in transaction
  4384 ?        S      3:12 postgres: sd sd 10.2.2.6 idle in transaction
19961 ?        S      4:32 postgres: sd sd 10.2.1.5 idle in transaction
19962 ?        S      4:40 postgres: sd sd 10.2.1.5 idle in transaction
19963 ?        S      4:24 postgres: sd sd 10.2.1.5 DELETE waiting
19964 ?        S      4:58 postgres: sd sd 10.2.1.5 idle in transaction
19965 ?        S      4:18 postgres: sd sd 10.2.1.5 idle in transaction
19966 ?        S      3:49 postgres: sd sd 10.2.1.5 idle in transaction
  5574 ?        S      2:50 postgres: sd sd 10.2.1.5 idle in transaction
  5575 ?        S      2:54 postgres: sd sd 10.2.1.5 idle in transaction
  8744 pts/0    S      0:00 /usr/lib/postgresql/bin/psql -d sd
  8751 ?        S      0:00 postgres: sd sd [local] idle
16606 ?        S      0:03 postgres: sd sd 10.2.1.5 idle in transaction


To get rid of the problem we tried to upgrade to 7.3.3. But see next
mail for our experiences with 7.3.3.

-Philipp
--
: Dipl-Ing Philipp Reisner                      Tel +43-1-8178292-50 :
: LINBIT Information Technologies GmbH          Fax +43-1-8178292-82 :
: Schönbrunnerstr 244, 1120 Vienna, Austria    http://www.linbit.com :

Re: deadlocks in postgresql 7.2.1

From: Peter Eisentraut
Date:
Philipp Reisner writes:

> Once in a while (about 3 times a day) one or more INSERTS/DELETES simply
> go into the "waiting" state, and block the whole database. The only way
> out is to terminate the client connection (i.e. to abort the blocked
> INSERT/DELETE query)
>
> Further investigation with ps -e -o wchan... showed that the backend
> process was simply sleeping in "semop".
>
> Output of ps:
>
>    762 ?        S      0:00 /usr/lib/postgresql/bin/postmaster
>    764 ?        S      0:00 postgres: stats buffer process
>    765 ?        S      0:00 postgres: stats collector process
> 24872 ?        S      0:00 postgres: sd sd 10.2.2.6 idle in transaction
> 24873 ?        R     68:01 postgres: sd sd 10.2.2.6 SELECT
> 24932 ?        S      3:09 postgres: sd sd 10.2.2.6 idle in transaction
> 24943 ?        R      3:02 postgres: sd sd 10.2.2.6 SELECT
> 25004 ?        S      0:01 postgres: sd sd 10.2.1.5 idle in transaction
[snip]

All these "idle in transaction" sessions have unfinished transactions that
are probably holding locks that the INSERT is waiting for.  If you
constantly have loads of "idle in transaction" sessions, you need to fix
your application.

In 7.3 there is a system table called pg_locks that you can use to
investigate locks.  I don't believe there was one in 7.2.
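
For example, something along these lines lists the lock requests that
have not been granted yet (just a sketch from memory, so double-check
the column names on your installation):

    SELECT l.pid, l.mode, l.granted, r.relname
    FROM pg_locks l LEFT JOIN pg_class r ON r.oid = l.relation
    WHERE NOT l.granted;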

--
Peter Eisentraut   peter_e@gmx.net

Re: deadlocks in postgresql 7.2.1

From: Philipp Reisner
Date:
On Monday, 28 July 2003 at 11:41, Peter Eisentraut wrote:
> Philipp Reisner writes:
> > Once in a while (about 3 times a day) one or more INSERTS/DELETES simply
> > go into the "waiting" state, and block the whole database. The only way
> > out is to terminate the client connection (i.e. to abort the blocked
> > INSERT/DELETE query)
> >
> > Further investigation with ps -e -o wchan... showed that the backend
> > process was simply sleeping in "semop".
> >
> > Output of ps:
> >
> >    762 ?        S      0:00 /usr/lib/postgresql/bin/postmaster
> >    764 ?        S      0:00 postgres: stats buffer process
> >    765 ?        S      0:00 postgres: stats collector process
> > 24872 ?        S      0:00 postgres: sd sd 10.2.2.6 idle in transaction
> > 24873 ?        R     68:01 postgres: sd sd 10.2.2.6 SELECT
> > 24932 ?        S      3:09 postgres: sd sd 10.2.2.6 idle in transaction
> > 24943 ?        R      3:02 postgres: sd sd 10.2.2.6 SELECT
> > 25004 ?        S      0:01 postgres: sd sd 10.2.1.5 idle in transaction
>
> [snip]
>
> All these "idle in transaction" sessions have unfinished transactions that
> are probably holding locks that the INSERT is waiting for.  If you
> constantly have loads of "idle in transaction" sessions, you need to fix
> your application.
>
> In 7.3 there is a system table called pg_locks that you can use to
> investigate locks.  I don't believe there was one in 7.2.

[ Sorry about this off-topic posting ]

Thanks a lot for the reply!

The application uses the JDBC driver with autocommit turned off,
using explicit commit and rollback. Do you know if the JDBC driver just
starts a new transaction as soon as the last one was ended with commit/
rollback?

BTW, the development system is still on postgres 7.3.3. This is
the pg_locks table:

 relation | database | transaction |  pid  |      mode       | granted
----------+----------+-------------+-------+-----------------+---------
  6520806 |  6520640 |             | 20988 | AccessShareLock | t
          |          |       63435 | 21154 | ExclusiveLock   | t
  6521098 |  6520640 |             | 20988 | AccessShareLock | t
  6521041 |  6520640 |             | 20988 | AccessShareLock | t
  6520884 |  6520640 |             | 20988 | AccessShareLock | t
  6520894 |  6520640 |             | 20988 | AccessShareLock | t
          |          |       63442 | 21153 | ExclusiveLock   | t
          |          |       57548 | 21140 | ExclusiveLock   | t
  6520810 |  6520640 |             | 20988 | AccessShareLock | t
          |          |       63434 | 21160 | ExclusiveLock   | t
          |          |       63110 | 21109 | ExclusiveLock   | t
  6520833 |  6520640 |             | 20988 | AccessShareLock | t
          |          |       57837 | 21096 | ExclusiveLock   | t
  6521078 |  6520640 |             | 20988 | AccessShareLock | t
          |          |       63437 | 21156 | ExclusiveLock   | t
  6520702 |  6520640 |             | 20988 | AccessShareLock | t
          |          |       63436 | 21155 | ExclusiveLock   | t
          |          |       63438 | 21157 | ExclusiveLock   | t
          |          |       63440 | 21159 | ExclusiveLock   | t
  6520814 |  6520640 |             | 20988 | AccessShareLock | t
  6520899 |  6520640 |             | 20988 | AccessShareLock | t
  6520652 |  6520640 |             | 20988 | AccessShareLock | t
          |          |       57826 | 20987 | ExclusiveLock   | t
          |          |       63439 | 21158 | ExclusiveLock   | t
  6521092 |  6520640 |             | 20988 | AccessShareLock | t
    16757 |  6520640 |             | 21153 | AccessShareLock | t
          |          |       57535 | 20988 | ExclusiveLock   | t
(27 rows)

-Philipp
--
: Dipl-Ing Philipp Reisner                      Tel +43-1-8178292-50 :
: LINBIT Information Technologies GmbH          Fax +43-1-8178292-82 :
: Schönbrunnerstr 244, 1120 Vienna, Austria    http://www.linbit.com :

Re: deadlocks in postgresql 7.2.1

From: Tom Lane
Date:
Philipp Reisner <philipp.reisner@linbit.com> writes:

>    762 ?        S      0:00 /usr/lib/postgresql/bin/postmaster
>    764 ?        S      0:00 postgres: stats buffer process
>    765 ?        S      0:00 postgres: stats collector process
> 24872 ?        S      0:00 postgres: sd sd 10.2.2.6 idle in transaction
> 24873 ?        R     68:01 postgres: sd sd 10.2.2.6 SELECT
> 24932 ?        S      3:09 postgres: sd sd 10.2.2.6 idle in transaction
> 24943 ?        R      3:02 postgres: sd sd 10.2.2.6 SELECT
> 25004 ?        S      0:01 postgres: sd sd 10.2.1.5 idle in transaction
> 21226 ?        S      0:00 postgres: sd sd 10.2.1.5 idle in transaction
> 21228 ?        S      0:00 postgres: sd sd 10.2.1.5 idle in transaction
> 21229 ?        S      0:00 postgres: sd sd 10.2.1.5 idle in transaction
> 21230 ?        S      0:00 postgres: sd sd 10.2.1.5 idle in transaction
> 21231 ?        S      0:01 postgres: sd sd 10.2.1.5 idle in transaction
> 21232 ?        S      0:00 postgres: sd sd 10.2.1.5 idle in transaction
> 21227 ?        S      0:01 postgres: sd sd 10.2.1.5 INSERT waiting
> 21391 pts/1    S      0:00 grep postg

I see no reason to think this is an undetected deadlock.  The INSERT
is probably waiting for a lock that is held by one of your idle
transactions.  You need to fix your client logic to not hold
transactions open for long periods.

(Now that you've moved up to 7.3, you could look at pg_locks to see just
which idle transaction is blocking the INSERT.)
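
A rough way to pair each waiting backend with the backends holding a
granted lock on the same relation or transaction (a sketch only; it
assumes the blocked request shows up in pg_locks with granted = f):

    SELECT w.pid AS waiting_pid, w.mode AS waiting_mode,
           h.pid AS holding_pid, h.mode AS holding_mode
    FROM pg_locks w
    JOIN pg_locks h
      ON (w.relation = h.relation OR w.transaction = h.transaction)
     AND h.granted
    WHERE NOT w.granted
      AND w.pid <> h.pid;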

            regards, tom lane

Re: deadlocks in postgresql 7.2.1

From: Dmitry Tkach
Date:
>
>
>
>Thanks a lot for the reply!
>
>The application uses the JDBC driver with autocommit turned off,
>using explicit commit and rollback. Do you know if the JDBC driver just
>starts a new transaction as soon as the last one was ended with commit/
>rollback?
>
>
Yes, it does.
You have to explicitly call setAutoCommit(true) instead of commit() (or you
can commit() and *then* setAutoCommit(true) if you want; the result is
the same).
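
In terms of what reaches the server, the difference looks roughly like
this (only a sketch; the Calls table is just the example from this
thread, objID = 42 is an arbitrary row, and the exact statements depend
on the driver version):

    -- autocommit off: the driver opens the next transaction right away,
    -- so the connection shows up as "idle in transaction" and holds any
    -- locks taken by later statements until the next commit/rollback
    begin;
    delete from Calls where objID = 42;
    commit;
    begin;   -- started immediately after the commit

    -- autocommit on: each statement is its own transaction, nothing
    -- stays open between calls
    delete from Calls where objID = 42;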

I hope it helps...

Dima

Deadlock in PostgreSQL 7.3.4

From: Philipp Reisner
Date:
Hi,

In the meantime we were able to upgrade to PostgreSQL 7.3.4 plus Tom's patch
to datetime.c [see the 'Postgresql 7.3.3 crashing on query' thread].

Now if the application issues the same delete statement concurrently on
two connections, both block forever.

Please correct me if I am wrong, but should not one of the statements
succeed and the other simply fail?

Aug 18 10:34:25 nut1 postgres[4934]: [44389] LOG:
                     query: delete from Calls where objID = 2268645

Aug 18 10:34:25 nut1 postgres[4933]: [44071] LOG:
                     query: delete from Calls where objID = 2268645


Architecture: i386, OS: Debian

And here is the full info:

sd=> SELECT l.mode, l.granted, l.pid, l.transaction, d.datname, r.relname FROM pg_locks l, pg_database d, pg_class r
WHERE d.oid=l.database AND r.oid=l.relation;
       mode       | granted |  pid  | transaction | datname |           relname
------------------+---------+-------+-------------+---------+------------------------------
 AccessShareLock  | t       | 26836 |             | sd      | pg_locks
 AccessShareLock  | t       | 26836 |             | sd      | pg_class
 AccessShareLock  | t       | 22486 |             | sd      | callactivities
 AccessShareLock  | t       |  4935 |             | sd      | callactivities
 AccessShareLock  | t       | 22487 |             | sd      | callactivities
 AccessShareLock  | t       | 22486 |             | sd      | bppersons
 AccessShareLock  | t       | 22487 |             | sd      | bppersons
 AccessShareLock  | t       |  4935 |             | sd      | bppersons
 AccessShareLock  | t       |  4935 |             | sd      | priorities
 AccessShareLock  | t       | 22487 |             | sd      | priorities
 AccessShareLock  | t       | 22486 |             | sd      | priorities
 AccessShareLock  | t       | 22487 |             | sd      | callstates
 AccessShareLock  | t       |  4935 |             | sd      | callstates
 AccessShareLock  | t       | 22486 |             | sd      | callstates
 AccessShareLock  | t       |  4935 |             | sd      | queues
 AccessShareLock  | t       | 22487 |             | sd      | queues
 AccessShareLock  | t       | 22486 |             | sd      | queues
 AccessShareLock  | t       | 22486 |             | sd      | timezones
 AccessShareLock  | t       |  4935 |             | sd      | timezones
 AccessShareLock  | t       | 22487 |             | sd      | timezones
 AccessShareLock  | t       | 22486 |             | sd      | calls
 RowExclusiveLock | t       |  4934 |             | sd      | calls
 AccessShareLock  | t       |  4934 |             | sd      | calls
 AccessShareLock  | t       |  4935 |             | sd      | calls
 RowExclusiveLock | t       |  4933 |             | sd      | calls
 AccessShareLock  | t       |  4933 |             | sd      | calls
 AccessShareLock  | t       | 22487 |             | sd      | calls
 AccessShareLock  | t       | 22487 |             | sd      | bporganizations
 AccessShareLock  | t       | 22486 |             | sd      | bporganizations
 AccessShareLock  | t       |  4935 |             | sd      | bporganizations
 AccessShareLock  | t       | 22487 |             | sd      | problemtypes
 AccessShareLock  | t       | 22486 |             | sd      | problemtypes
 AccessShareLock  | t       |  4935 |             | sd      | problemtypes
 AccessShareLock  | t       |  4935 |             | sd      | contractelements
 AccessShareLock  | t       | 22486 |             | sd      | contractelements
 AccessShareLock  | t       | 22487 |             | sd      | contractelements
 AccessShareLock  | t       |  4935 |             | sd      | contracts
 AccessShareLock  | t       | 22487 |             | sd      | contracts
 AccessShareLock  | t       | 22486 |             | sd      | contracts
 AccessShareLock  | t       | 22487 |             | sd      | bpartners
 AccessShareLock  | t       |  4935 |             | sd      | bpartners
 AccessShareLock  | t       | 22486 |             | sd      | bpartners
 AccessShareLock  | t       | 22487 |             | sd      | callcalculations
 AccessShareLock  | t       | 22486 |             | sd      | callcalculations
 AccessShareLock  | t       |  4935 |             | sd      | callcalculations
 AccessShareLock  | t       | 22487 |             | sd      | chargingtypes
 AccessShareLock  | t       | 22486 |             | sd      | chargingtypes
 AccessShareLock  | t       |  4935 |             | sd      | chargingtypes
 AccessShareLock  | t       | 22487 |             | sd      | dictpropertysethasproperties
 AccessShareLock  | t       |  4935 |             | sd      | dictpropertysethasproperties
 AccessShareLock  | t       | 22486 |             | sd      | dictpropertysethasproperties
 AccessShareLock  | t       | 22487 |             | sd      | concretepropertysets
 AccessShareLock  | t       |  4935 |             | sd      | concretepropertysets
 AccessShareLock  | t       | 22486 |             | sd      | concretepropertysets
 AccessShareLock  | t       | 22486 |             | sd      | concretepropertyvalues
 AccessShareLock  | t       |  4935 |             | sd      | concretepropertyvalues
 AccessShareLock  | t       | 22487 |             | sd      | concretepropertyvalues
 AccessShareLock  | t       |  4935 |             | sd      | dictproperties
 AccessShareLock  | t       | 22486 |             | sd      | dictproperties
 AccessShareLock  | t       | 22487 |             | sd      | dictproperties
 AccessShareLock  | t       |  4934 |             | sd      | calls_pkey
(61 rows)

 3936 ?        S      0:00 postgres: stats buffer process
 3937 ?        S      0:00 postgres: stats collector process
 4555 ?        S     69:38 postgres: sd sd 10.2.2.6 idle
 4556 ?        S      0:07 postgres: sd sd 10.2.2.6 idle
 4621 ?        S     11:19 postgres: sd sd 10.2.2.6 idle
 4632 ?        S     10:08 postgres: sd sd 10.2.2.6 idle
 4764 ?        S      0:04 postgres: sd sd 10.2.1.5 idle
 4933 ?        S      5:20 postgres: sd sd 10.2.1.5 idle in transaction
 4934 ?        S      5:08 postgres: sd sd 10.2.1.5 DELETE waiting
 4935 ?        S      6:00 postgres: sd sd 10.2.1.5 idle in transaction
 4936 ?        S      4:54 postgres: sd sd 10.2.1.5 idle
 4937 ?        S      5:30 postgres: sd sd 10.2.1.5 idle
 4938 ?        S      6:17 postgres: sd sd 10.2.1.5 idle
 4939 ?        S      6:23 postgres: sd sd 10.2.1.5 idle
22486 ?        S      1:40 postgres: sd sd 10.2.1.5 idle in transaction
22487 ?        S      1:54 postgres: sd sd 10.2.1.5 idle in transaction
24211 pts/2    S      0:00 /usr/lib/postgresql/bin/psql -d sd
24218 ?        S      0:00 postgres: sd sd [local] idle
26753 ?        S      0:00 postgres: sd sd 10.2.1.5 idle

Aug 18 10:34:25 nut1 postgres[4934]: [44383-1] LOG:  query: select  objID, ID, CustCallID, SPCallID, cel_objid,
con_objid,ccu_objid, csp_objid, csf_objid, fro_objid, 
Aug 18 10:34:25 nut1 postgres[4934]: [44383-2]  to1_objid, to2_objid, CallerSLA, CallOpenTime, CallOpenTimeUTC,
CallResponseTime,CallResponseTimeUTC, CallCloseTime, 
Aug 18 10:34:25 nut1 postgres[4934]: [44383-3]  CallCloseTimeUTC, CallReceiveTime, CallReceiveTimeUTC, CallSendTime,
CallSendTimeUTC,ProblemStartTime, ProblemStartTimeUTC, 
Aug 18 10:34:25 nut1 postgres[4934]: [44383-4]  EndOfResponseTime, EndOfResponseTimeUTC, RemainingResponseMinutes,
EndOfRecoveryTime,EndOfRecoveryTimeUTC, 
Aug 18 10:34:25 nut1 postgres[4934]: [44383-5]  RemainingRecoveryMinutes, CallStateChangeTime, CallStateChangeTimeUTC,
CallAcknowledgeTime,CallAcknowledgeTimeUTC, 
Aug 18 10:34:25 nut1 postgres[4934]: [44383-6]  CallerShortName, CallerFirstName, CallerLastName, CallerTitle,
CallerSalutation,CallerTel, CallerTel2, CallerMobileTel, 
Aug 18 10:34:25 nut1 postgres[4934]: [44383-7]  CallerFax, CallerLocationZip, CallerLocationCity, CallerLocationStreet,
CallerLocationCountry,CallerLocation, CallerEMail, 
Aug 18 10:34:25 nut1 postgres[4934]: [44383-8]  CallerPIN, CallerDescription, CallerDepartment, CallerSign,
MainCompManfacturer,MainCompType, MainCompModel, MainComponent, 
Aug 18 10:34:25 nut1 postgres[4934]: [44383-9]  MainCompName, MainCompSerNr, MainCompInvNr, MainCompSerNrProv,
MainCompOpSys,MainCompLocationZip, MainCompLocationCity, 
Aug 18 10:34:25 nut1 postgres[4934]: [44383-10]  MainCompLocationStreet, MainCompLocationCountry, MainCompLocation,
MainCompLocationTel,MainCompDescription, 
Aug 18 10:34:25 nut1 postgres[4934]: [44383-11]  MainCompLocationName, pro_objid, pcu_objid, psp_objid, psf_objid,
pgl_objid,sev_objid, fat_objid, cst_objid, pst_objid, 
Aug 18 10:34:25 nut1 postgres[4934]: [44383-12]  cac_objid, ema_objid, InternalState, CallProvedFlag, ShortDescription,
Description,Remarks, Diagnosis, edi_objid, EditTime, 
Aug 18 10:34:25 nut1 postgres[4934]: [44383-13]  EditTimeUTC, csc_objid, css_objid, csu_objid, cal_objid,
SubCompManfacturer,SubCompType, SubCompModel, SubComponent, 
Aug 18 10:34:25 nut1 postgres[4934]: [44383-14]  SubCompName, SubCompSerNr, SubCompInvNr, SubCompSerNrProv,
SubCompOpSys,SubCompLocationZip, SubCompLocationCity, 
Aug 18 10:34:25 nut1 postgres[4934]: [44383-15]  SubCompLocationStreet, SubCompLocationCountry, SubCompLocation,
SubCompLocationTel,SubCompDescription, SubCompLocationName, 
Aug 18 10:34:25 nut1 postgres[4934]: [44383-16]  SysSpecField1, SysSpecField2, SysSpecField3, SysSpecField4,
SysSpecField5,SysSpecField6, SysSpecField7, SysSpecField8, 
Aug 18 10:34:25 nut1 postgres[4934]: [44383-17]  SysSpecField9, SysSpecField10, XML_Info, CHDShortName, CHDFirstName,
CHDLastName,CHDTitle, CHDSalutation, CHDTel, CHDTel2, 
Aug 18 10:34:25 nut1 postgres[4934]: [44383-18]  CHDMobileTel, CHDFax, CHDLocationZip, CHDLocationCity,
CHDLocationStreet,CHDLocationCountry, CHDLocation, CHDEMail, CHDPIN, 
Aug 18 10:34:25 nut1 postgres[4934]: [44383-19]  CHDDescription, CHDDepartment, CHDSign, CCPShortName, CCPFirstName,
CCPLastName,CCPTitle, CCPSalutation, CCPTel, CCPTel2, 
Aug 18 10:34:25 nut1 postgres[4934]: [44383-20]  CCPMobileTel, CCPFax, CCPLocationZip, CCPLocationCity,
CCPLocationStreet,CCPLocationCountry, CCPLocation, CCPEMail, CCPPIN, 
Aug 18 10:34:25 nut1 postgres[4934]: [44383-21]  CCPDescription, CCPDepartment, CCPSign, bpr_objid, SlaResponseMinutes,
SlaRecoveryMinutes,TotalResponseMinutes, 
Aug 18 10:34:25 nut1 postgres[4934]: [44383-22]  TotalRecoveryMinutes, TotalAcknowledgeMinutes, Zone, unspsc,
dev_objid,tzo_objid, isTestCall, que_objid, tec_objid from Calls 
Aug 18 10:34:25 nut1 postgres[4934]: [44383-23]  where ID = 2266388
Aug 18 10:34:25 nut1 postgres[4934]: [44384] LOG:  duration: 0.007914 sec
Aug 18 10:34:25 nut1 postgres[4934]: [44385] LOG:  query: select  objID, ID, ShortName, Name, TZID, UTCRawOffset from
TimeZones where objID = 1000040000 
Aug 18 10:34:25 nut1 postgres[4934]: [44386] LOG:  duration: 0.001898 sec
Aug 18 10:34:25 nut1 postgres[4934]: [44387] LOG:  query: begin;
Aug 18 10:34:25 nut1 postgres[4934]: [44388] LOG:  duration: 0.000117 sec
Aug 18 10:34:25 nut1 postgres[4934]: [44389] LOG:  query: delete from Calls where objID = 2268645
Aug 18 10:39:18 nut1 postgres[4934]: [44390] LOG:  duration: 292.649273 sec
Aug 18 10:39:18 nut1 postgres[4934]: [44391] LOG:  pq_recvbuf: unexpected EOF on client connection


Aug 18 10:34:25 nut1 postgres[4933]: [44067-1] LOG:  query: select  objID, ID, CustCallID, SPCallID, cel_objid,
con_objid,ccu_objid, csp_objid, csf_objid, fro_objid, 
Aug 18 10:34:25 nut1 postgres[4933]: [44067-2]  to1_objid, to2_objid, CallerSLA, CallOpenTime, CallOpenTimeUTC,
CallResponseTime,CallResponseTimeUTC, CallCloseTime, 
Aug 18 10:34:25 nut1 postgres[4933]: [44067-3]  CallCloseTimeUTC, CallReceiveTime, CallReceiveTimeUTC, CallSendTime,
CallSendTimeUTC,ProblemStartTime, ProblemStartTimeUTC, 
Aug 18 10:34:25 nut1 postgres[4933]: [44067-4]  EndOfResponseTime, EndOfResponseTimeUTC, RemainingResponseMinutes,
EndOfRecoveryTime,EndOfRecoveryTimeUTC, 
Aug 18 10:34:25 nut1 postgres[4933]: [44067-5]  RemainingRecoveryMinutes, CallStateChangeTime, CallStateChangeTimeUTC,
CallAcknowledgeTime,CallAcknowledgeTimeUTC, 
Aug 18 10:34:25 nut1 postgres[4933]: [44067-6]  CallerShortName, CallerFirstName, CallerLastName, CallerTitle,
CallerSalutation,CallerTel, CallerTel2, CallerMobileTel, 
Aug 18 10:34:25 nut1 postgres[4933]: [44067-7]  CallerFax, CallerLocationZip, CallerLocationCity, CallerLocationStreet,
CallerLocationCountry,CallerLocation, CallerEMail, 
Aug 18 10:34:25 nut1 postgres[4933]: [44067-8]  CallerPIN, CallerDescription, CallerDepartment, CallerSign,
MainCompManfacturer,MainCompType, MainCompModel, MainComponent, 
Aug 18 10:34:25 nut1 postgres[4933]: [44067-9]  MainCompName, MainCompSerNr, MainCompInvNr, MainCompSerNrProv,
MainCompOpSys,MainCompLocationZip, MainCompLocationCity, 
Aug 18 10:34:25 nut1 postgres[4933]: [44067-10]  MainCompLocationStreet, MainCompLocationCountry, MainCompLocation,
MainCompLocationTel,MainCompDescription, 
Aug 18 10:34:25 nut1 postgres[4933]: [44067-11]  MainCompLocationName, pro_objid, pcu_objid, psp_objid, psf_objid,
pgl_objid,sev_objid, fat_objid, cst_objid, pst_objid, 
Aug 18 10:34:25 nut1 postgres[4933]: [44067-12]  cac_objid, ema_objid, InternalState, CallProvedFlag, ShortDescription,
Description,Remarks, Diagnosis, edi_objid, EditTime, 
Aug 18 10:34:25 nut1 postgres[4933]: [44067-13]  EditTimeUTC, csc_objid, css_objid, csu_objid, cal_objid,
SubCompManfacturer,SubCompType, SubCompModel, SubComponent, 
Aug 18 10:34:25 nut1 postgres[4933]: [44067-14]  SubCompName, SubCompSerNr, SubCompInvNr, SubCompSerNrProv,
SubCompOpSys,SubCompLocationZip, SubCompLocationCity, 
Aug 18 10:34:25 nut1 postgres[4933]: [44067-15]  SubCompLocationStreet, SubCompLocationCountry, SubCompLocation,
SubCompLocationTel,SubCompDescription, SubCompLocationName, 
Aug 18 10:34:25 nut1 postgres[4933]: [44067-16]  SysSpecField1, SysSpecField2, SysSpecField3, SysSpecField4,
SysSpecField5,SysSpecField6, SysSpecField7, SysSpecField8, 
Aug 18 10:34:25 nut1 postgres[4933]: [44067-17]  SysSpecField9, SysSpecField10, XML_Info, CHDShortName, CHDFirstName,
CHDLastName,CHDTitle, CHDSalutation, CHDTel, CHDTel2, 
Aug 18 10:34:25 nut1 postgres[4933]: [44067-18]  CHDMobileTel, CHDFax, CHDLocationZip, CHDLocationCity,
CHDLocationStreet,CHDLocationCountry, CHDLocation, CHDEMail, CHDPIN, 
Aug 18 10:34:25 nut1 postgres[4933]: [44067-19]  CHDDescription, CHDDepartment, CHDSign, CCPShortName, CCPFirstName,
CCPLastName,CCPTitle, CCPSalutation, CCPTel, CCPTel2, 
Aug 18 10:34:25 nut1 postgres[4933]: [44067-20]  CCPMobileTel, CCPFax, CCPLocationZip, CCPLocationCity,
CCPLocationStreet,CCPLocationCountry, CCPLocation, CCPEMail, CCPPIN, 
Aug 18 10:34:25 nut1 postgres[4933]: [44067-21]  CCPDescription, CCPDepartment, CCPSign, bpr_objid, SlaResponseMinutes,
SlaRecoveryMinutes,TotalResponseMinutes, 
Aug 18 10:34:25 nut1 postgres[4933]: [44067-22]  TotalRecoveryMinutes, TotalAcknowledgeMinutes, Zone, unspsc,
dev_objid,tzo_objid, isTestCall, que_objid, tec_objid from Calls 
Aug 18 10:34:25 nut1 postgres[4933]: [44067-23]  where ID = 2265302
Aug 18 10:34:25 nut1 postgres[4933]: [44068] LOG:  duration: 0.008038 sec
Aug 18 10:34:25 nut1 postgres[4933]: [44069] LOG:  query: begin;
Aug 18 10:34:25 nut1 postgres[4933]: [44070] LOG:  duration: 0.000121 sec
Aug 18 10:34:25 nut1 postgres[4933]: [44071] LOG:  query: delete from Calls where objID = 2268645
Aug 18 10:34:25 nut1 postgres[4933]: [44072] LOG:  duration: 0.000675 sec
Aug 18 10:39:18 nut1 postgres[4933]: [44073] LOG:  pq_recvbuf: unexpected EOF on client connection

-Philipp
--
: Dipl-Ing Philipp Reisner                      Tel +43-1-8178292-50 :
: LINBIT Information Technologies GmbH          Fax +43-1-8178292-82 :
: Schönbrunnerstr 244, 1120 Vienna, Austria    http://www.linbit.com :

Re: Deadlock in PostgreSQL 7.3.4

From: Tom Lane
Date:
Philipp Reisner <philipp.reisner@linbit.com> writes:
> Now if the applications issues one delete statement concurrently on
> two connections both block forever.

> Please correct me if I am wrong, but should not one of the statements
> succeed and the other simply fail ?

> Aug 18 10:34:25 nut1 postgres[4934]: [44389] LOG:
>                      query: delete from Calls where objID = 2268645

> Aug 18 10:34:25 nut1 postgres[4933]: [44071] LOG:
>                      query: delete from Calls where objID = 2268645

>  4933 ?        S      5:20 postgres: sd sd 10.2.1.5 idle in transaction
>  4934 ?        S      5:08 postgres: sd sd 10.2.1.5 DELETE waiting

I see no deadlock.  4933 is waiting for its client to issue another
command.  4934 is waiting to see if it can delete the row.
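
You can reproduce the same behaviour by hand in two psql sessions
(a sketch, using the row from the log above; default read committed
isolation assumed):

    -- session A
    begin;
    delete from Calls where objID = 2268645;  -- deletes the row, keeps it locked

    -- session B, while A's transaction is still open
    begin;
    delete from Calls where objID = 2268645;  -- blocks here; ps shows "DELETE waiting"

    -- session A
    commit;  -- B's DELETE now returns "DELETE 0", the row is already gone;
             -- if A had rolled back instead, B would delete the row itself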

            regards, tom lane

Re: Deadlock in PostgreSQL 7.3.4

From: Philipp Reisner
Date:
On Monday, 18 August 2003 at 15:38, Tom Lane wrote:
> Philipp Reisner <philipp.reisner@linbit.com> writes:
> > Now if the applications issues one delete statement concurrently on
> > two connections both block forever.
> >
> > Please correct me if I am wrong, but should not one of the statements
> > succeed and the other simply fail ?
> >
> > Aug 18 10:34:25 nut1 postgres[4934]: [44389] LOG:
> >                      query: delete from Calls where objID = 2268645
> >
> > Aug 18 10:34:25 nut1 postgres[4933]: [44071] LOG:
> >                      query: delete from Calls where objID = 2268645
> >
> >  4933 ?        S      5:20 postgres: sd sd 10.2.1.5 idle in transaction
> >  4934 ?        S      5:08 postgres: sd sd 10.2.1.5 DELETE waiting
>
> I see no deadlock.  4933 is waiting for its client to issue another
> command.  4934 is waiting to see if it can delete the row.
>
>             regards, tom lane

Hi Tom,

Right, thanks for pointing it out to me. It seems to be somewhere in
the application code or PostgreSQL's JDBC driver.

We are finally able to reproduce the "lockup" and will continue
to work out whether it is in the app or in the JDBC driver.

-Philipp
--
: Dipl-Ing Philipp Reisner                      Tel +43-1-8178292-50 :
: LINBIT Information Technologies GmbH          Fax +43-1-8178292-82 :
: Schönbrunnerstr 244, 1120 Vienna, Austria    http://www.linbit.com :

Re: Deadlock in PostgreSQL 7.3.4

From: Philipp Reisner
Date:
On Tuesday, 19 August 2003 at 11:52, Philipp Reisner wrote:
> Am Montag, 18. August 2003 15:38 schrieb Tom Lane:
> > Philipp Reisner <philipp.reisner@linbit.com> writes:
> > > Now if the applications issues one delete statement concurrently on
> > > two connections both block forever.
> > >
> > > Please correct me if I am wrong, but should not one of the statements
> > > succeed and the other simply fail ?
> > >
> > > Aug 18 10:34:25 nut1 postgres[4934]: [44389] LOG:
> > >                      query: delete from Calls where objID = 2268645
> > >
> > > Aug 18 10:34:25 nut1 postgres[4933]: [44071] LOG:
> > >                      query: delete from Calls where objID = 2268645
> > >
> > >  4933 ?        S      5:20 postgres: sd sd 10.2.1.5 idle in transaction
> > >  4934 ?        S      5:08 postgres: sd sd 10.2.1.5 DELETE waiting
> >
> > I see no deadlock.  4933 is waiting for its client to issue another
> > command.  4934 is waiting to see if it can delete the row.
> >
> >             regards, tom lane
>
> Hi Tom,
>
> Right, thanks for pointing it out to me. It seems to be somewhere in
> the application code or PostgreSQL's JDBC driver.
>
> We are finally able to reproduce the "lockup" and will continue
> to work out whether it is in the app or in the JDBC driver.
>
> -Philipp

Just to close this thread: it turned out that the lockup was
triggered by an old version of the PostgreSQL JDBC driver (it was from
the 7.2.1 days of PostgreSQL).

Updating to a recent version finally solved the issue.

-Philipp