Discussion: Triggers and scalability in high transaction tables.


Triggers and scalability in high transaction tables.

From: Tim Uckun
Date:
I want to write a trigger which runs semi-complicated code after each insert.  I have done some reading, and from what I can gather this could cause problems because after-insert triggers "don't spill to the disk" and can cause queue problems.  Many people suggest LISTEN/NOTIFY, but that's not going to help me because my daemons could be offline and I would lose records.

I have two questions.

There are some hints out there that it could be possible to do asynchronous triggers based on dblink but I haven't seen any documentation or examples of this.   Is there a writeup someplace about this?
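
Something along these lines is what I imagine, though I have not tested it, and the connection name, connection string, and do_heavy_processing() function below are just placeholders:

-- rough, untested sketch of an "asynchronous" trigger using dblink
CREATE EXTENSION IF NOT EXISTS dblink;

CREATE OR REPLACE FUNCTION async_after_insert() RETURNS trigger AS $$
BEGIN
    -- open a loopback connection once per session; the name is arbitrary
    IF 'worker_conn' <> ALL (coalesce(dblink_get_connections(), '{}'::text[])) THEN
        PERFORM dblink_connect('worker_conn', 'dbname=mydb');
    END IF;
    -- send the slow call without waiting for its result
    PERFORM dblink_send_query('worker_conn',
        format('SELECT do_heavy_processing(%L)', NEW.id));
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER t_async AFTER INSERT ON my_table
    FOR EACH ROW EXECUTE PROCEDURE async_after_insert();

My worry is that dblink_send_query fails if the previous query on that connection is still running, so work could be dropped silently, which is part of why I am asking whether there is a proper writeup.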

Secondly I had the idea of "partitioning" the trigger processing by partitioning the table and then putting a trigger on each child table.  This way theoretically I could be running the triggers in parallel.  Is my presumption correct here?  If I only have one table the trigger calls get queued up one at a time but if I partition my table into N tables I am running N triggers simultaneously?

Thanks.

Re: Triggers and scalability in high transaction tables.

From: Merlin Moncure
Date:
On Thu, Feb 26, 2015 at 3:54 PM, Tim Uckun <timuckun@gmail.com> wrote:
> I want to write a trigger which runs semi-complicated code after each
> insert.  I have done some reading and from what I can gather this could
> cause problems because after insert triggers "don't spill to the disk" and
> can cause queue problems.   Many people suggest LISTEN NOTIFY but that's not
> going to help me because my daemons could be offline and I would lose
> records.
>
> I have two questions.
>
> There are some hints out there that it could be possible to do asynchronous
> triggers based on dblink but I haven't seen any documentation or examples of
> this.   Is there a writeup someplace about this?
>
> Secondly I had the idea of "partitioning" the trigger processing by
> partitioning the table and then putting a trigger on each child table.  This
> way theoretically I could be running the triggers in parallel.  Is my
> presumption correct here?  If I only have one table the trigger calls get
> queued up one at a time but if I partition my table into N tables I am
> running N triggers simultaneously?

I would strongly advise you not to put complex processing in triggers
if at all possible.  Instead have the insert operation write a record
into another table which forms a queue of work to do.  That queue can
then be walked by another process which accumulates the work and takes
appropriate action (and, since you are out of the context of the
operation at hand, can be threaded etc).
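
A minimal sketch of that pattern (all table and function names here are just illustrative):

CREATE TABLE work_queue (
    id         bigserial PRIMARY KEY,
    payload_id bigint      NOT NULL,              -- the row that needs processing
    created_at timestamptz NOT NULL DEFAULT now()
);

CREATE OR REPLACE FUNCTION enqueue_work() RETURNS trigger AS $$
BEGIN
    -- the trigger stays trivial: it only records that work exists
    INSERT INTO work_queue (payload_id) VALUES (NEW.id);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER t_enqueue AFTER INSERT ON my_table
    FOR EACH ROW EXECUTE PROCEDURE enqueue_work();

-- an external worker then drains the queue in batches, roughly:
--   BEGIN;
--   SELECT * FROM work_queue ORDER BY id LIMIT 100 FOR UPDATE;
--   ... do the heavy processing, in as many worker threads as you like ...
--   DELETE FROM work_queue WHERE id = ANY (array_of_processed_ids);
--   COMMIT;

That way the insert path only pays for one extra row write, and all the expensive work happens outside the inserting transaction.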

merlin


Re: Triggers and scalability in high transaction tables.

From: Jerry Sievers
Date:
Tim Uckun <timuckun@gmail.com> writes:

> I want to write a trigger which runs semi-complicated code after each insert.  I have done some reading and from
> what I can gather this could cause problems because after insert triggers "don't spill to the disk" and can cause
> queue problems.   Many people suggest LISTEN NOTIFY but that's not going to help me because my daemons could be
> offline and I would lose records.
>
> I have two questions.
>
> There are some hints out there that it could be possible to do asynchronous triggers based on dblink but I haven't
> seen any documentation or examples of this.   Is there a writeup someplace about this?
>
> Secondly I had the idea of "partitioning" the trigger processing by
> partitioning the table and then putting a trigger on each child
> table.  This way theoretically I could be running the triggers
> in parallel.  Is my presumption correct here?  If I only
> have one table the trigger calls get queued up one at a time but if I
> partition my table into N tables I am running N triggers
> simultaneously?
>
False on both counts.

There is nothing to prevent concurrent firing of the same trigger on the
same table given multi-session concurrent inserts.

And there is nothing to prevent contention-related single-threading of any
triggers, however they fire, if the code they run results in lock
contention with other sessions.

Just like two or more sessions trying to update the same row, you are
going to single-thread around such an operation whether you like it or not.
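
As an illustration (hypothetical table, two concurrent sessions):

-- session 1:
BEGIN;
UPDATE counters SET n = n + 1 WHERE id = 1;   -- takes the row lock

-- session 2, at the same time:
BEGIN;
UPDATE counters SET n = n + 1 WHERE id = 1;   -- blocks until session 1 commits

If your triggers on N partitions all end up updating a shared row like that,
they serialize on that lock no matter how many tables are involved.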

You need to tell us a lot more about your problem and what the triggers
do.


> Thanks.
>

--
Jerry Sievers
Postgres DBA/Development Consulting
e: postgres.consulting@comcast.net
p: 312.241.7800


Re: Triggers and scalability in high transaction tables.

From: John R Pierce
Date:
On 2/26/2015 2:03 PM, Merlin Moncure wrote:
> I would strongly advise you not to put complex processing in triggers
> if at all possible.  Instead have the insert operation write a record
> into another table which forms a queue of work to do.  That queue can
> then be walked by another process which accumulates the work and takes
> appropriate action (and, since you are out of the context of the
> operation at hand, can be threaded etc).

I 2nd this emotion...  that's exactly what we do with our complex
background processing.



--
john r pierce                                      37N 122W
somewhere on the middle of the left coast



Re: Triggers and scalability in high transaction tables.

From: Tim Uckun
Date:
I just want to make sure I understood correctly.

All the triggers fired on a given connection run in that connection's single backend process and execute serially, no matter how many tables are firing triggers.

If this is correct then yes I guess I have to create a queue of some sort and process them via an external process.  Thanks.

On Fri, Feb 27, 2015 at 11:12 AM, Jerry Sievers <gsievers19@comcast.net> wrote:
Tim Uckun <timuckun@gmail.com> writes:

> I want to write a trigger which runs semi-complicated code after each insert.  I have done some reading and from what I can gather this could cause problems because
> after insert triggers "don't spill to the disk" and can cause queue problems.   Many people suggest LISTEN NOTIFY but that's not going to help me because my daemons
> could be offline and I would lose records. 
>
> I have two questions.
>
> There are some hints out there that it could be possible to do asynchronous triggers based on dblink but I haven't seen any documentation or examples of this.   Is
> there a writeup someplace about this?
>
> Secondly I had the idea of "partitioning" the trigger processing by
> partitioning the table and then putting a trigger on each child
> table.  This way theoretically I could be running the triggers
> in parallel.  Is my presumption correct here?  If I only
> have one table the trigger calls get queued up one at a time but if I
> partition my table into N tables I am running N triggers
> simultaneously?
>
False on both counts.

Nothing to prevent concurrent firing of same trigger on same table given
multi session concurrent insert.

Nothing to prevent contention related single-threading of any triggers
firing for whatever reason if  the code they are running  will result in
lock contention with other sessions.

Just like 2 or more sessions trying to update the same row,  you are
going to single thread around such an operation like it or not.

You need to tell us a lot more about your problem and what the triggers
do.


> Thanks.
>

--
Jerry Sievers
Postgres DBA/Development Consulting
e: postgres.consulting@comcast.net
p: 312.241.7800