Re: 8K recordsize bad on ZFS?

From: Josh Berkus
Subject: Re: 8K recordsize bad on ZFS?
Date:
Msg-id: 4BE877BE.8000908@agliodbs.com
In response to: Re: 8K recordsize bad on ZFS?  (Greg Stark <gsstark@mit.edu>)
List: pgsql-performance
> That still is consistent with it being caused by the files being
> discontiguous. Copying them moved all the blocks to be contiguous and
> sequential on disk and might have had the same effect even if you had
> left the settings at 8kB blocks. You described it as "overloading the
> array/drives with commands" which is probably accurate but sounds less
> exotic if you say "the files were fragmented, causing lots of seeks, so
> we saturated the drives' iops capacity". How many iops were
> you doing before and after anyways?

Don't know.  This was a client system, and once we got the target
numbers, they stopped wanting me to run tests on it.  :-(

Note that this was a brand-new system, so there wasn't much time for
fragmentation to occur.
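For rough intuition about why fragmented 8 kB reads can saturate an array, here is a minimal back-of-the-envelope sketch. The throughput target and per-spindle iops figure are illustrative assumptions, not numbers from this thread:

    # Back-of-the-envelope: iops needed to sustain a given read
    # throughput at a given effective I/O size.
    # All inputs below are hypothetical.

    def required_iops(throughput_mb_s: float, io_size_kb: float) -> float:
        """Iops needed to move throughput_mb_s at io_size_kb per operation."""
        return throughput_mb_s * 1024 / io_size_kb

    target_mb_s = 100        # assumed target scan throughput
    disk_random_iops = 150   # assumed iops per 7200 rpm spindle

    for io_kb in (8, 128):   # 8 kB vs 128 kB ZFS recordsize
        iops = required_iops(target_mb_s, io_kb)
        spindles = iops / disk_random_iops
        print(f"{io_kb:>4} kB records: {iops:>7.0f} iops "
              f"(~{spindles:.0f} spindles if reads are fully random)")

    # With contiguous files the drives stream sequentially instead of
    # seeking, so the iops ceiling largely stops mattering -- which is
    # consistent with copying the files "fixing" the problem.

Under those assumed numbers, fully random 8 kB reads at 100 MB/s would need ~12,800 iops (dozens of spindles), while 128 kB records need only ~800; that is the sense in which fragmentation can saturate the drives' iops capacity.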

--
                                  -- Josh Berkus
                                     PostgreSQL Experts Inc.
                                     http://www.pgexperts.com

In pgsql-performance by date:

Previous
From: Greg Stark
Date:
Message: Re: 8K recordsize bad on ZFS?
Next
From: "Carlo Stonebanks"
Date:
Message: Function scan/Index scan to nested loop