Re: Size estimation of postgres core files

From: Jeremy Finzel
Subject: Re: Size estimation of postgres core files
Date:
Msg-id: CAMa1XUgd79VBYFHs=uTQB+YkXQjw-YxEQtB=USwL4BDPAVJD4g@mail.gmail.com
In response to: Re: Size estimation of postgres core files (Andrew Gierth <andrew@tao11.riddles.org.uk>)
Responses: Re: Size estimation of postgres core files ("Peter J. Holzer" <hjp-pgsql@hjp.at>)
List: pgsql-general
> It doesn't write out all of RAM, only the amount in use by the
> particular backend that crashed (plus all the shared segments attached
> by that backend, including the main shared_buffers, unless you disable
> that as previously mentioned).
>
> And yes, it can take a long time to generate a large core file.
>
> --
> Andrew (irc:RhodiumToad)

Based on Alvaro's response, I thought it was reasonably possible that the core file *could* include nearly all of RAM, which was my original question.  If shared_buffers is, say, 50G and the OS has 1T of RAM, shared_buffers is only a small portion of that.  But really my question is what we should reasonably assume is possible, meaning how much space I should provision for a volume so that it can hold the core dump in case of a crash.  The time taken to write the core file would definitely be a concern if it could indeed be that large.

Could someone provide more information on exactly how to set that coredump_filter?
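
For context, here is a rough sketch of what I think is meant, pieced together from the core(5) man page; it assumes a Linux host, and the data directory path below is only an example, so please correct me if this is not the right approach:

#!/usr/bin/env python3
"""Sketch: exclude shared memory from PostgreSQL core dumps on Linux.

Assumes /proc/<pid>/coredump_filter is the per-process bit mask described
in core(5), and that the postmaster PID can be read from postmaster.pid
in the data directory (example path below). The filter is inherited across
fork(), so setting it on the postmaster makes backends started afterwards
dump without the shared segments; already-running backends would need to
be updated individually.
"""

PG_DATA = "/var/lib/postgresql/data"            # example path, adjust to your cluster
POSTMASTER_PID_FILE = f"{PG_DATA}/postmaster.pid"

# Bits from core(5):
#   0x01 bit 0: anonymous private mappings (backend-local memory)
#   0x02 bit 1: anonymous shared mappings  (typically the main shared_buffers segment)
#   0x10 bit 4: ELF headers
#   0x20 bit 5: private huge pages
# The usual default is 0x33; clearing bit 1 leaves 0x31, so the dump should
# contain mostly the crashing backend's own memory rather than shared_buffers.
FILTER_WITHOUT_SHARED = 0x31

with open(POSTMASTER_PID_FILE) as f:
    postmaster_pid = int(f.readline().strip())  # first line of postmaster.pid is the PID

# Writing this file requires root or the same user the postmaster runs as.
with open(f"/proc/{postmaster_pid}/coredump_filter", "w") as f:
    f.write(f"{FILTER_WITHOUT_SHARED:#x}\n")

print(f"coredump_filter of postmaster {postmaster_pid} set to {FILTER_WITHOUT_SHARED:#x}")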

We are looking to enable core dumps to aid debugging in case of unexpected crashes, and we are wondering whether there are any general recommendations for balancing the costs and benefits of doing so.

Thank you!
Jeremy
