Re: 600 million rows of data. Bad hardware or need partitioning?

From: David Rowley
Subject: Re: 600 million rows of data. Bad hardware or need partitioning?
Date:
Msg-id: CAApHDvqt53Q9NSzffOGZGx=6qzutmDS1ssYsUDuZiGQskaEEMw@mail.gmail.com
In response to: Re: 600 million rows of data. Bad hardware or need partitioning?  (Arya F <arya6000@gmail.com>)
Responses: Re: 600 million rows of data. Bad hardware or need partitioning?  (Arya F <arya6000@gmail.com>)
List: pgsql-performance
On Mon, 4 May 2020 at 15:52, Arya F <arya6000@gmail.com> wrote:
>
> On Sun, May 3, 2020 at 11:46 PM Michael Lewis <mlewis@entrata.com> wrote:
> >
> > What kinds of storage (ssd or old 5400 rpm)? What else is this machine running?
>
> Not an SSD, but an old 1TB 7200 RPM HDD
>
> > What configs have been customized such as work_mem or random_page_cost?
>
> work_mem = 2403kB
> random_page_cost = 1.1

How long does it take if you first do:

SET enable_nestloop TO off;

If you find it's faster, then you most likely have random_page_cost set
unrealistically low. In fact, I'd say it's very unlikely that a nested
loop join will be a win in this case when random pages must be read
from a mechanical disk, but by all means, try disabling it with the
above command and see for yourself.
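
As a minimal sketch of that test in a single session (the placeholder
query is illustrative; substitute the actual slow statement):

-- session-local only; does not persist beyond this connection
SET enable_nestloop TO off;

-- re-run the slow statement to see the chosen plan and actual timings
EXPLAIN (ANALYZE, BUFFERS) <your slow query here>;

-- restore the default when done
RESET enable_nestloop;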

If you set random_page_cost so low to solve some other performance
problem, then you may wish to look at the effective_cache_size setting.
Having that set to something realistic should allow indexes to be used
more in situations where they're unlikely to require as much random I/O
from the disk.
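
As a rough sketch only (the 8GB figure is an assumption about the
machine, not a recommendation; a common rule of thumb is to set
effective_cache_size to around 50-75% of RAM):

-- assumes roughly 8GB of RAM in the machine; adjust to suit
ALTER SYSTEM SET effective_cache_size = '6GB';

-- the default of 4.0 is usually more realistic for a 7200 RPM disk
ALTER SYSTEM SET random_page_cost = 4.0;

-- reload so the new values take effect without a restart
SELECT pg_reload_conf();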

David


