Discussion: transaction and triggers
Hi all. I'm puzzled again. Just thinking: as I'm having fun trying to make my own replication system, I'm stuck in this situation.

Consider a simple table with a unique index on the `id' field, and a function that will fail, such as

    insert into test (id) values (1);
    insert into test (id) values (1);

This will fail and the transaction will be rolled back, but since the basis of my replication system is row-level triggers, the trigger will be executed the first time the insert is called. I would like to be able to stack the triggers in some way, so that they are fired only after a successful execution of the whole function.

I'm also reading about the NOTIFY/LISTEN mechanism and the rule system as a workaround for this, but the fact is that there is a lot of client code, and it would take a big amount of time to change it.

Any suggestions? Thanks!

Gerardo
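[A minimal sketch of the scenario being described; the log table and trigger names are illustrative, not Gerardo's actual code:]

    -- table with a unique constraint on id
    create table test (id integer unique);

    -- local table the replication trigger writes to
    create table test_log (id integer, logged_at timestamptz default now());

    create function log_test() returns trigger as $$
    begin
        insert into test_log (id) values (new.id);
        return new;
    end;
    $$ language plpgsql;

    create trigger test_replication
        after insert on test
        for each row execute procedure log_test();

    begin;
    insert into test (id) values (1);  -- trigger fires; a row goes into test_log
    insert into test (id) values (1);  -- unique violation: the transaction aborts
    commit;                            -- reported as ROLLBACK, since the transaction failed

    select * from test_log;            -- empty: the trigger's insert was undone as well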
2008/1/18, Gerardo Herzig <gherzig@fmed.uba.ar>:
> This will fail and the transaction will be rolled back, but since the
> basis of my replication system is row-level triggers, the trigger will
> be executed the first time the insert is called [...]

If the transaction is rolled back, changes made by your trigger to the local database will also be canceled. Unless you manipulate remote databases, you have no problem.

Any changes made to remote databases, for example if you call some dblink functions, are not transactional, and will not be rolled back. In that case you have to rethink your design, as there is no "ON COMMIT" trigger (yet?).

-- Filip Rembiałkowski
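[A quick illustration of the dblink caveat Filip mentions; dblink is a contrib module that must be installed first, and the connection string and replica host here are made up:]

    begin;
    insert into test (id) values (1);                        -- local, transactional
    select dblink_exec('host=replica dbname=mydb',
                       'insert into test (id) values (1)');  -- remote: commits immediately
    insert into test (id) values (1);                        -- unique violation aborts locally
    commit;  -- local insert is gone, but the remote row stays: the copies now disagree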
Filip Rembiałkowski wrote:
> If the transaction is rolled back, changes made by your trigger to
> the local database will also be canceled.

Right. But today that trigger does some other work, which includes writing some files to disk, so there is my problem. Crap, I guess I will have to review the main logic.

Thanks!
Gerardo
Gerardo Herzig wrote:
> Right. But today that trigger does some other work, which includes
> writing some files to disk, so there is my problem.

Probably it's better to move the actual file writing to an external listener process: the transaction only does a NOTIFY, which is certain to be delivered only when the transaction commits. So if it aborts, no spurious write occurs.

-- 
Alvaro Herrera                  http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
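[A minimal sketch of Alvaro's suggestion; the channel name and queue table are illustrative. Note that NOTIFY payloads only exist from PostgreSQL 9.0 on, so on older releases the listener re-reads a queue table instead:]

    -- the trigger queues the work in a table and signals a listener
    create table file_queue (id serial primary key, payload text);

    create function queue_file_write() returns trigger as $$
    begin
        insert into file_queue (payload) values (new.id::text);
        notify file_writer;  -- delivered only if the transaction commits
        return new;
    end;
    $$ language plpgsql;

    create trigger test_file_write
        after insert on test
        for each row execute procedure queue_file_write();

    -- the external process simply runs:
    --   LISTEN file_writer;
    -- and, on each notification, drains file_queue and writes the files.
    -- An aborted transaction never notifies, so no spurious file is written.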
Alvaro Herrera wrote:
> Probably it's better to move the actual file writing to an external
> listener process: the transaction only does a NOTIFY, which is certain
> to be delivered only when the transaction commits.

Mmmhmm, sounds good... I will give it a try on Monday. Now it's beer time :)

Thanks all.
Gerardo
On Fri, 18 Jan 2008 12:16:04 -0300
Gerardo Herzig <gherzig@fmed.uba.ar> wrote:
> Right. But today that trigger does some other work, which includes
> writing some files to disk, so there is my problem.

I built a replication system that syncs up dozens of systems in a multi-master environment spanning multiple continents in almost real time, and it works flawlessly, so don't give up hope. It is doable. I can't give you the code because it was written under contract and based heavily on our specific business requirements, but I can give you a few pointers.

You have discovered the basic problem of trying to replicate in full real time. You'll probably have to give up on that. Instead, focus on making updates to the local database. Create a replication table or tables that you update with triggers. Basically this needs to be a log of every change to the database, in a structured way.

Once you have the replication table(s) you can create external programs that connect to the master and update the slave. In the slave you can track the last ID that completed. Do the insert/update/delete in a transaction so that you have a guarantee that your database is up to date to a very specific point. Note that you can have multiple slaves in this scenario and, in fact, the slaves can have slaves using the exact same scheme, giving you a hierarchy.

If you need multi-master you just need another process to feed your local changes up to the master. This is not just a matter of making the master a slave, though; if you do that you get into a feedback loop.

Also, if you need multi-master, you have to think about your sequencing. If you need unique IDs on some tables you will have to think about setting up ranges of sequences based on server, or have a central sequence server. We used a combination of both, as well as specifying that certain tables could only be inserted to on one system. Of course, this system doesn't need to be the same as the top of the hierarchy and, in fact, different tables can have different generator systems.

Hope this gets you started. There's still lots of gotchas on the way. (A sketch of the change-log table follows this message.)

-- 
D'Arcy J.M. Cain <darcy@druid.net>         |  Democracy is three wolves
http://www.druid.net/darcy/                |  and a sheep voting on
+1 416 425 1212     (DoD#0082)    (eNTP)   |  what's for dinner.
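[A sketch of the change-log approach D'Arcy describes; the table, column, and trigger names are my own illustration, not his code. tg_table_name needs PostgreSQL 8.2+ (older releases call it tg_relname):]

    -- one append-only log of every change, in commit order
    create table replication_log (
        id        bigserial primary key,  -- slaves track the last id applied
        tablename text      not null,
        operation text      not null,     -- 'I', 'U' or 'D'
        row_id    integer   not null,     -- pk of the affected row
        changed   timestamptz default now()
    );

    create function log_change() returns trigger as $$
    begin
        if tg_op = 'DELETE' then
            insert into replication_log (tablename, operation, row_id)
                values (tg_table_name, 'D', old.id);
            return old;
        else
            insert into replication_log (tablename, operation, row_id)
                values (tg_table_name, substr(tg_op, 1, 1), new.id);
            return new;
        end if;
    end;
    $$ language plpgsql;

    create trigger test_log_change
        after insert or update or delete on test
        for each row execute procedure log_change();

    -- a slave-side sync process then repeats, inside a transaction:
    --   select * from replication_log where id > :last_applied order by id;
    --   ...apply each change, then remember the new last id.

    -- for the multi-master sequencing point, one arrangement (again,
    -- hypothetical) is a disjoint id range per server, e.g. on server B:
    --   alter sequence test_id_seq minvalue 1000000 maxvalue 1999999
    --       restart with 1000000;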
D'Arcy J.M. Cain wrote:
> I built a replication system that syncs up dozens of systems in a
> multi-master environment spanning multiple continents in almost real
> time, and it works flawlessly, so don't give up hope.

And I'm trying with 3 virtual machines... this is embarrassing :)

> You have discovered the basic problem of trying to replicate in full
> real time. You'll probably have to give up on that. Instead, focus on
> making updates to the local database. Create a replication table or
> tables that you update with triggers.

Crap. That was my first approach! I later chose the immediate file writing, trying to minimize the changes that would be lost in case of a primary system crash. I guess I will come back to it.

> Also, if you need multi-master, you have to think about your
> sequencing.

What I want to do is something like this: if the master fails, a piece of software would change the conf files (which indicate who's the master, the slaves, and so on), so that one of the slaves takes the master's place. Since these are common PCs, when the real master comes back to life it has to be re-synced, and then take its place as the master again. I'm thinking of something as simple as possible (since I'm not a senior programmer); something like an IP address change could do the trick.

> Hope this gets you started. There's still lots of gotchas on the way.

Oh yes, I'm expecting so much fun!!!! Thanks for sharing your knowledge with us!! Mammoth Replicator, Slony-I, feel the fear! :)

Thanks again, D'Arcy!
Gerardo