Re: Optimizer Bug issue

From: Greg Stark
Subject: Re: Optimizer Bug issue
Date:
Msg-id: 87u0y4nt0o.fsf@stark.xeocode.com
In reply to: Optimizer Bug issue  ("Ismail Kizir" <ikizir@tumgazeteler.com>)
List: pgsql-hackers
"Ismail Kizir" <ikizir@tumgazeteler.com> writes:

> I have a database of 20 tables, ~1GB total size. My biggest table contains
> ~270,000 newspaper articles from Turkish journals. I am actually working on
> a "fulltext search" program of my own.

How much RAM does the machine have? Have you already executed the query and
are now repeating it? If so, the entire data set is probably cached in RAM,
and that won't be the long-term average as your data set grows.
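
For example, one quick way to check (a sketch only; the "articles" table and
the WHERE clause below are hypothetical stand-ins for the real schema) is to
time the same query twice from a freshly started server:

    -- The first run pays for the actual disk reads; the second is served
    -- largely from the kernel and shared-buffer caches.
    EXPLAIN ANALYZE SELECT count(*) FROM articles WHERE title LIKE '%ankara%';

If the second run is dramatically faster, the timings being compared reflect
fully cached behaviour rather than what will happen once the data no longer
fits in memory.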

The numbers there are appropriate for a database where the data being fetched
cannot all fit in RAM and isn't all pre-cached. There are also scenarios that
the cost-estimation algorithms the optimizer uses simply don't capture;
tweaking the parameters to correct for these cases would cause other queries
to be handled even worse.
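
As an illustration (a sketch, not a recommendation: random_page_cost is
presumably one of the numbers in question, and the table and value are
hypothetical), the sensitivity of plan choice to these settings can be probed
per session without editing the configuration file:

    -- Lowering the assumed cost of a random page fetch makes index scans
    -- look cheaper relative to sequential scans, for this session only.
    SET random_page_cost = 2;   -- the shipped default is 4
    EXPLAIN SELECT * FROM articles WHERE article_id = 12345;
    RESET random_page_cost;

Which is exactly the trap described above: a value that rescues this query
gets applied to every other query too.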

If anything, the penalty for random disk access has increased over the years.
My desktop is about 100 times faster than my 486 router, but the hard drive in
the 486 is only about 10x slower than the hard drive in the desktop. And the
ratio of seek times is probably even smaller.


There is a parameter, effective_cache_size, which is supposed to help
Postgres take into account the likelihood that the data will already be in
cache. How exactly does it affect planning? Perhaps this parameter needs to
have much more impact on the resulting plans, at least for databases that are
small relative to it.
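
As a rough sketch of how to probe that (the parameter is real; the table,
value, and query are hypothetical), effective_cache_size can also be set per
session, so its influence, or lack of it, on a particular plan is easy to
observe:

    -- Tell the planner to assume about 1GB of the database is likely to be
    -- found in cache already; as far as I know this mainly feeds into the
    -- estimated cost of index scans.
    SET effective_cache_size = '1GB';
    EXPLAIN SELECT * FROM articles WHERE article_id = 12345;
    RESET effective_cache_size;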

-- 
greg


