Re: duplicate key errors in log file

From: Jim Nasby
Subject: Re: duplicate key errors in log file
Date:
Msg-id: 564CB6F3.2090105@BlueTreble.com
In reply to: duplicate key errors in log file (anj patnaik <patna73@gmail.com>)
Responses: Re: duplicate key errors in log file (Jeff Janes <jeff.janes@gmail.com>)
List: pgsql-general
On 11/17/15 5:33 PM, anj patnaik wrote:
> The pg log files apparently log error lines every time a user inserts a
> duplicate. I implemented a composite primary key and then when I see the
> exception in my client app I update the row with the recent data.
>
> however, I don't want the log file to fill out with these error messages
> since it's handled by the client.
>
> is there a way to stop logging certain messages?
>
> Also do any of you use any options to cause log files not to fill up the
> disk over time?

Not really. You could do something like SET log_min_messages = PANIC for
that statement, but then you won't get a log for any other errors.

In any case, the real issue is that you shouldn't do this in the client.
I'll bet $1 that your code has race conditions. Even if you got rid of
those, the overhead of the back-and-forth with the database is huge
compared to doing this in the database.

So really you should create a plpgsql function along the lines of Example 40-2 at
http://www.postgresql.org/docs/9.4/static/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING
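For context, that docs example traps the unique-key failure inside the database and retries, so nothing ever reaches the client or the log as an error. A sketch in that style (the table `db` and columns `a`, `b` are placeholder names, as in the docs; adapt to your composite key):

```sql
-- Placeholder schema: swap in your own table and composite key.
CREATE TABLE db (a INT PRIMARY KEY, b TEXT);

CREATE FUNCTION merge_db(key INT, data TEXT) RETURNS VOID AS
$$
BEGIN
    LOOP
        -- First try to update the existing row.
        UPDATE db SET b = data WHERE a = key;
        IF found THEN
            RETURN;
        END IF;
        -- Row not there, so try to insert it. If another session
        -- inserts the same key concurrently, we get a unique-key
        -- failure, which we trap and handle by looping back to
        -- the UPDATE.
        BEGIN
            INSERT INTO db (a, b) VALUES (key, data);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- Do nothing; loop and try the UPDATE again.
        END;
    END LOOP;
END;
$$
LANGUAGE plpgsql;
```

Because the exception is caught inside the function, it never surfaces as a server-log ERROR line, and there is no client round trip per retry. (On 9.5+ a single `INSERT ... ON CONFLICT DO UPDATE` statement does the same job.)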
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com

