Re: My Experiment of PG crash when dealing with huge amount of data

From: Michael Paquier
Subject: Re: My Experiment of PG crash when dealing with huge amount of data
Date:
Msg-id: CAB7nPqSuLujfaF6rNHDSqiA8b_rr0reSi_mS+PenS6ueS1emvw@mail.gmail.com
In response to: My Experiment of PG crash when dealing with huge amount of data (高健 <luckyjackgao@gmail.com>)
List: pgsql-general
On Fri, Aug 30, 2013 at 6:10 PM, 高健 <luckyjackgao@gmail.com> wrote:
> In log, I can see the following:
> LOG:  background writer process (PID 3221) was terminated by signal 9:
> Killed
Assuming that no user on your server killed this process manually, and
that no maintenance task you implemented did so, this looks like the
Linux OOM killer reacting to memory overcommit. Have a look here for
more details:
http://www.postgresql.org/docs/current/static/kernel-resources.html#LINUX-MEMORY-OVERCOMMIT
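As an illustration, here is a minimal sketch of the strict-overcommit
strategy that page describes, assuming a typical Linux box; the exact
settings are workload-dependent, so treat this as an example only:

    # as root: switch from heuristic overcommit to strict accounting,
    # so allocations fail with ENOMEM instead of the OOM killer firing
    sysctl -w vm.overcommit_memory=2
    # persist the setting across reboots
    echo "vm.overcommit_memory = 2" >> /etc/sysctl.conf

With strict accounting you may also need to tune vm.overcommit_ratio,
as discussed on the docs page above.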
So check dmesg to confirm that; if the OOM killer is indeed the cause,
you can use one of the strategies described in the docs. Also, since
you have been doing a bulk INSERT, you should temporarily increase
checkpoint_segments to reduce the pressure on the background writer by
making checkpoints less frequent. This will also make your data load
faster.
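To confirm the OOM kill in the kernel log, something like the following
usually works (the exact message wording varies between kernel
versions):

    dmesg | grep -i -e "out of memory" -e "killed process"

And a sketch of the checkpoint_segments change in postgresql.conf; the
value 32 here is only an example to tune against your load, and the
parameter only needs a reload (e.g. pg_ctl reload, or SELECT
pg_reload_conf() from a session), not a restart:

    # postgresql.conf -- fewer, larger checkpoints during the bulk load
    checkpoint_segments = 32        # default is 3

Remember to set it back once the load is done.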
--
Michael

