Re: Limit of bgwriter_lru_maxpages of max. 1000?

From  Greg Smith
Subject  Re: Limit of bgwriter_lru_maxpages of max. 1000?
Date
Msg-id  alpine.GSO.2.01.0910021610130.13300@westnet.com
In response to  Re: Limit of bgwriter_lru_maxpages of max. 1000?  (Gerhard Wiesinger <lists@wiesinger.com>)
Responses  Re: Limit of bgwriter_lru_maxpages of max. 1000?  (Scott Marlowe <scott.marlowe@gmail.com>)
List  pgsql-general
On Fri, 2 Oct 2009, Gerhard Wiesinger wrote:

> In my experience flushing I/O as soon as possible is the best solution.

That's what everyone assumes, but detailed benchmarks of PostgreSQL don't
actually support that view given how the database operates.  We went
through a lot of work in 8.3 on optimizing the database as a whole system,
and that work disproved some of the theories about what would work well
here.

What happens if you're really aggressive about writing blocks out as soon
as they're dirty is that you waste a lot of I/O on things that just get
dirty again later.  Since checkpoint time is the only period where blocks
*must* get written, the approach that worked the best for reducing
checkpoint spikes was to spread the checkpoint writes out over a very wide
period.  The only remaining work that made sense for the background writer
was to tightly focus its I/O on blocks that are about to be evicted anyway
due to low usage.
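If you want to see how that split is playing out on your system, here's a
rough sketch (assuming 8.3 or later, where pg_stat_bgwriter has these
columns) of how to check which path dirty buffers are being written through:

  SELECT buffers_checkpoint,   -- written by the checkpoint process
         buffers_clean,        -- written by the background writer LRU scan
         buffers_backend,      -- written directly by backends; ideally small
         maxwritten_clean      -- times the bgwriter scan stopped at its maxpages limit
  FROM pg_stat_bgwriter;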

In most cases where people think they need more I/O from the background
writer, what they actually need is to increase checkpoint_segments,
checkpoint_completion_target, and checkpoint_timeout in order to spread
the checkpoint I/O out over a longer period.  The stats you provided
suggest this is working exactly as intended.
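Just as a sketch rather than a recommendation for your hardware, the sort
of changes I mean look like this in postgresql.conf; the right values
depend entirely on your workload and how much crash recovery time you can
tolerate:

  checkpoint_segments = 32             # default 3; more WAL between checkpoints
  checkpoint_timeout = 15min           # default 5min; checkpoint less often
  checkpoint_completion_target = 0.9   # default 0.5; spread writes over more of the interval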

As far as work to improve the status quo goes, IMHO the next thing to
work on is getting the fsync calls made at checkpoint time spread more
intelligently over the whole period.  That's got a better payback than
trying to make the background writer more aggressive, which is basically
a doomed cause.

> So I'd like to do some tests with new statistics. Any fast way to reset
> statistics for all databases for pg_stat_bgwriter?

No, that's an open TODO item I keep meaning to fix; we lost that
capability at one point.  What I do is create a table that looks just like
it, but with a time stamp, and save snapshots to that table.  Then a view
on top can generate just the deltas between two samples to show activity
during that time.  It's handy to have such a history anyway.
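For example, something along these lines works; the table and view names
here are just ones I made up, and the lag() approach needs 8.4 for window
functions (on 8.3 you'd compute the delta with a self-join instead):

  CREATE TABLE bgwriter_snapshot AS
      SELECT current_timestamp AS snapshot_time, * FROM pg_stat_bgwriter;

  -- Run this periodically (cron or similar) to accumulate history:
  INSERT INTO bgwriter_snapshot
      SELECT current_timestamp, * FROM pg_stat_bgwriter;

  -- Deltas between consecutive snapshots:
  CREATE VIEW bgwriter_deltas AS
      SELECT snapshot_time,
             snapshot_time - lag(snapshot_time) OVER w AS elapsed,
             buffers_checkpoint - lag(buffers_checkpoint) OVER w AS buffers_checkpoint,
             buffers_clean - lag(buffers_clean) OVER w AS buffers_clean,
             buffers_backend - lag(buffers_backend) OVER w AS buffers_backend
      FROM bgwriter_snapshot
      WINDOW w AS (ORDER BY snapshot_time);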

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD
