Can Postgres Not Do This Safely ?!?

From: Karl Pickett
Subject: Can Postgres Not Do This Safely ?!?
Date:
Msg-id: AANLkTi=rRkR-EgPkMLTVVukJV3X-eUCbk8fkva6X0HZY@mail.gmail.com
Replies: Re: Can Postgres Not Do This Safely ?!?  (Peter Geoghegan <peter.geoghegan86@gmail.com>)
         Re: Can Postgres Not Do This Safely ?!?  (Craig Ringer <craig@postnewspapers.com.au>)
         Re: Can Postgres Not Do This Safely ?!?  (Adrian Klaver <adrian.klaver@gmail.com>)
         Re: Can Postgres Not Do This Safely ?!?  (Merlin Moncure <mmoncure@gmail.com>)
List: pgsql-general
Hello Postgres Hackers,

We have a simple 'event log' table that is insert-only (written by
multiple concurrent clients).  It has an integer primary key.  We want
to run incremental queries against this table every 5 minutes or so,
i.e. "select * from events where id > LAST_ID_I_GOT", to insert into a
separate reporting database.  The problem is that this simple approach
has a race that will forever skip events that were still uncommitted
at poll time.  For example, if id 5000 was committed sooner than 4999,
and we read up to 5000, we will never go back and get 4999 when it
finally commits.  How can we solve this?  Basically it's a phantom-row
problem, but one that spans transactions.
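The race above can be sketched as a timeline (the ids and table are from the example; the session interleaving is hypothetical):

```sql
-- Session A: BEGIN; INSERT INTO events ...;  -- assigned id 4999, not yet committed
-- Session B: BEGIN; INSERT INTO events ...;  -- assigned id 5000
-- Session B: COMMIT;
-- Poller:    SELECT * FROM events WHERE id > 4998;
--            -- sees only id 5000 (4999 is still invisible to its snapshot),
--            -- so it records LAST_ID_I_GOT = 5000
-- Session A: COMMIT;                         -- id 4999 is now committed...
-- Poller:    SELECT * FROM events WHERE id > 5000;
--            -- ...but every later poll starts past it, so 4999 is skipped forever
```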

I looked at checking the internal 'xmin' column, but the docs say that
is a 32-bit value, while something like 'txid_current_snapshot'
returns a 64-bit value.  I don't get it.  All I want is to make sure I
skip over any rows that are newer than the oldest currently running
transaction.  Has nobody else run into this before?
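One hedged sketch of a snapshot-based approach, assuming a hypothetical `txid` column is added to the table: `txid_current()`, `txid_current_snapshot()` and `txid_snapshot_xmin()` are the epoch-extended 64-bit txid functions, so unlike the raw 32-bit `xmin` they do not wrap.

```sql
-- Hypothetical schema: record the writing transaction's 64-bit txid with each row.
CREATE TABLE events (
    id      bigserial PRIMARY KEY,
    txid    bigint NOT NULL DEFAULT txid_current(),
    payload text
);

-- Each poll, instead of tracking the last id seen, record the oldest
-- transaction that could still be in flight:
SELECT txid_snapshot_xmin(txid_current_snapshot()) AS safe_xmin;
-- (save safe_xmin for the next poll)

-- Then read only rows whose writing transaction is guaranteed finished:
SELECT *
FROM events
WHERE txid >= :last_safe_xmin   -- safe_xmin saved by the previous poll
  AND txid <  :safe_xmin;       -- rows at or after this may still be in flight
```

Because `txid_snapshot_xmin()` returns the oldest transaction still in progress, every txid below it is finished, so consecutive poll windows neither overlap nor skip a late-committing row. This is only a sketch: aborted transactions leave no rows, and the reporting-side load should still be idempotent as a safety net.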

Thank you very much.

--
Karl Pickett
