Thread: Help..Help...


Help..Help...

From: Murali Mohan Kasetty
Date: 13 November 2002
Hi All,

We are using PostgreSQL 7.2.

We are running two processes accessing the same table using JDBC. Both
processes update records in the same table, and the same rows are never
updated by the two processes at the same time.

When the processes are run separately, each takes X seconds. But when we
run them together, the total time taken is worse than 2X.

Is it possible that there is contention occurring while the records are
being written? Has anybody experienced a similar problem? What is the LOCK
mechanism used by PostgreSQL?


Any help would be greatly appreciated.

Thanks in advance,
Murali




Re: Help..Help...

From: "Shridhar Daithankar"
On 13 Nov 2002 at 19:14, Murali Mohan Kasetty wrote:

> We are running two processes accessing the same table using JDBC. Both
> processes update records in the same table, and the same rows are never
> updated by the two processes at the same time.
>
> When the processes are run separately, each takes X seconds. But when we
> run them together, the total time taken is worse than 2X.

Each UPDATE generates dead tuples, which cause a performance slowdown. Run
VACUUM ANALYZE in the background, concurrently with your processes, so that
the space held by the dead tuples becomes available for reuse.

>
> Is it possible that there is contention occurring while the records are
> being written? Has anybody experienced a similar problem?

I am sure that's not the case. Are you doing rapid updates? As a rule of
thumb, run VACUUM ANALYZE after every 1000 or so updates to keep
performance at its best, and tune this figure to suit your needs.
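
A minimal sketch of that, assuming a hypothetical table named "accounts";
run it from a separate connection (e.g. a cron job or a background thread)
after every batch of roughly 1000 updates:

    -- Reclaim the space held by dead tuples and refresh the planner's
    -- statistics. In 7.2 the plain (non-FULL) VACUUM no longer takes an
    -- exclusive lock, so the updating processes can keep working:
    VACUUM ANALYZE accounts;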

> What is the LOCK mechanism used by PostgreSQL?

Read up on MVCC (Multi-Version Concurrency Control); it's documented in the
PostgreSQL manual.
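
A minimal illustration of what MVCC means in practice, on a hypothetical
table t(id, val): readers are never blocked by writers; only two writers
touching the same row block each other.

    -- Session A:
    BEGIN;
    UPDATE t SET val = 1 WHERE id = 1;   -- row id = 1 is now write-locked

    -- Session B, concurrently:
    SELECT val FROM t WHERE id = 1;      -- returns the old value at once;
                                         -- the read does not block
    UPDATE t SET val = 2 WHERE id = 1;   -- this one DOES block until A ends

    -- Session A:
    COMMIT;                              -- B's update now proceeds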

HTH

Bye
 Shridhar

--
mixed emotions: Watching a bus-load of lawyers plunge off a cliff. With
five empty seats.


Re: Help..Help...

From: Csaba Nagy
Hi there,

This could be caused by the foreign key locking mechanism used by Postgres.
Do you have foreign keys defined on that table? Do the newly inserted rows
point to the same row in the referenced table? If yes, that's the cause.
Only one process at a time can insert, because the referenced row is locked
exclusively, and therefore every other process trying to insert a row
referencing the same foreign key into that table has to wait until the
first transaction finishes. Ditto for updates which change rows referencing
the same foreign key (a sketch follows below).
Postgres foreign keys are also deadlock-prone because of this locking
mechanism.
Hopefully this will be improved soon (there is some work being done toward
it).
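
A minimal sketch of the effect (table names are hypothetical). In 7.2/7.3
the referential integrity trigger effectively takes an exclusive row lock
on the referenced parent row, much like a SELECT ... FOR UPDATE:

    CREATE TABLE parent (id int PRIMARY KEY);
    CREATE TABLE child  (id int PRIMARY KEY,
                         parent_id int REFERENCES parent (id));
    INSERT INTO parent VALUES (1);

    -- Session A:
    BEGIN;
    INSERT INTO child VALUES (1, 1);
    -- the FK check locks parent row id = 1, roughly equivalent to
    -- SELECT 1 FROM parent WHERE id = 1 FOR UPDATE;

    -- Session B, concurrently:
    BEGIN;
    INSERT INTO child VALUES (2, 1);     -- blocks until A commits or
                                         -- aborts, even though the two
                                         -- child rows are different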

Cheers,
Csaba.



Re: Help..Help...

From: Savita
Date: 14 November 2002
Hi Csaba,

We do have a foreign key defined on that table. So what is the solution to
this? Is there any setting required to avoid this locking problem?

Please help us to solve this problem.


--
Best Regards
- Savita
----------------------------------------------------
Hewlett Packard (India)
+91 80 2051288 (Phone)
847 1288 (HP Telnet)
----------------------------------------------------



Re: Help..Help...

From: Csaba Nagy
In my experience, the only full fix is to get rid of the foreign key
constraint and live with the risk of having inconsistent data... if that is
acceptable. You can compensate with careful coding and explicit checks if
you're just starting your project (in my case I inherited an Oracle
project); a sketch follows below.
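
A minimal sketch of such an explicit check, reusing the hypothetical
parent/child tables from the earlier example. Note that without extra
locking it is race-prone, which is exactly the trade-off described above:

    -- no FK constraint on child.parent_id; the application checks instead:
    SELECT 1 FROM parent WHERE id = 1;
    -- insert only if the row above was found; a concurrent delete of the
    -- parent between the two statements would go undetected:
    INSERT INTO child VALUES (3, 1);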
There is another possibility: define the foreign key as "DEFERRABLE
INITIALLY DEFERRED" (shown below), but this approach has a different
drawback: the foreign key constraint is verified only at the end of the
transaction, and if it fails, the whole transaction is rolled back.
So if you have long transactions, they could run all the way to the end and
then fail for a missing foreign key which could have been detected at the
beginning...
Also, the place where the transaction fails is the "commit" command, so
your code has to be prepared to handle a failure of the commit command
(which my code was not...).
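
A minimal sketch of that declaration, again on the hypothetical tables from
the earlier example (assuming child was created without the inline
REFERENCES clause):

    ALTER TABLE child
        ADD CONSTRAINT child_parent_fk
        FOREIGN KEY (parent_id) REFERENCES parent (id)
        DEFERRABLE INITIALLY DEFERRED;
    -- the FK check, and the lock it takes on the parent row, now runs
    -- only at COMMIT, so the referenced row is locked briefly at the end
    -- of the transaction instead of for its whole duration; the price is
    -- that a violation surfaces only when COMMIT fails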
I will start to lobby for the implementation of shared row-level locks,
which would be the right type of lock for foreign keys.
No idea, though, how complex that would be - I'm new to Postgres myself,
and I'm not a C programmer, so I can't implement it.
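
For illustration only, this is roughly what such a shared lock would look
like; the SELECT ... FOR SHARE syntax below does not exist in 7.2/7.3 and
is shown merely to make the idea concrete:

    BEGIN;
    SELECT 1 FROM parent WHERE id = 1 FOR SHARE;
    -- other transactions could take the same shared lock concurrently;
    -- only a writer to that parent row would be blocked, so concurrent
    -- inserts of children referencing the same parent would not serialize
    COMMIT;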
I've also received a patch from Stephan Szabo addressing foreign key locks,
which I haven't had time to test yet. It's for the 7.3b3 version; see the
attachment.

Cheers,
Csaba.






Attachments