Re: Raid 10 chunksize

From: Mark Kirkwood
Subject: Re: Raid 10 chunksize
Date:
Msg-id: 49CB07DE.4040102@paradise.net.nz
In reply to: Re: Raid 10 chunksize (Stef Telford <stef@ummon.com>)
Responses: Re: Raid 10 chunksize (Scott Carey <scott@richrelevance.com>)
List: pgsql-performance
Stef Telford wrote:
>
> Hello Mark,
>     Okay, so, take all of this with a pinch of salt, but, I have the
> same config (pretty much) as you, with checkpoint_segments raised to
> 192. The 'test' database server is a Q8300, 8GB RAM, 2 x 7200rpm SATA
> drives on the motherboard controller, which I then striped together
> with LVM: lvcreate -n data_lv -i 2 -I 64 mylv -L 60G (expandable
> under lvm2). That gives me a stripe size of 64KB. Running pgbench
> with the same scaling factors:
>
> starting vacuum...end.
> transaction type: TPC-B (sort of)
> scaling factor: 100
> number of clients: 24
> number of transactions per client: 12000
> number of transactions actually processed: 288000/288000
> tps = 1398.907206 (including connections establishing)
> tps = 1399.233785 (excluding connections establishing)
>
>     It's also running ext4dev, but this is the 'playground' server,
> not the real iron (and I dread to do that on the real iron). In short,
> I think that the chunksize/stripesize is killing you. Personally, I
> would go for 64 or 128 .. that's just my 2c .. feel free to
> ignore/scorn/laugh as applicable ;)
>
>
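For reference, Stef's two-disk stripe (quoted above) can be reproduced along these lines. This is a sketch, not a transcript of his setup: the device names, the volume-group name, and the filesystem step are my assumptions; only the lvcreate stripe parameters come from his message.

```shell
# Assumed devices and VG name -- adjust to your hardware.
pvcreate /dev/sda2 /dev/sdb2            # mark both SATA disks for LVM use
vgcreate myvg /dev/sda2 /dev/sdb2       # one volume group spanning both disks
# 2-way stripe (-i 2) with a 64KB stripe size (-I 64), as in the quoted command
lvcreate -n data_lv -i 2 -I 64 -L 60G myvg
mkfs.ext4 /dev/myvg/data_lv             # Stef used ext4dev; plain ext4 shown here
```

The `-I` value is the per-disk stripe size in kilobytes, so sequential I/O alternates between the two disks every 64KB.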
Stef - I suspect that your (quite high) tps figure is because your SATA
disks are not honoring the fsync() request for each commit. SCSI/SAS
disks tend, by default, to flush their cache on fsync; ATA/SATA disks
tend not to. Some filesystems (e.g. xfs) will try to work around this
with write barrier support, but that depends on the disk firmware.
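This is easy to probe empirically: a drive that genuinely flushes its cache on fsync() is bounded by rotational latency, so a 7200rpm disk cannot complete much more than ~120 synchronous commits per second. The sketch below is my illustration (not part of the thread); the file path and iteration count are arbitrary choices.

```python
import os
import time
import tempfile

def fsync_rate(path, iterations=200):
    """Time a tight write+fsync loop on `path`; return syncs per second."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        start = time.time()
        for _ in range(iterations):
            os.write(fd, b"x")
            os.fsync(fd)          # ask the OS (and, we hope, the disk) to flush
        elapsed = time.time() - start
    finally:
        os.close(fd)
        os.unlink(path)
    return iterations / elapsed

if __name__ == "__main__":
    probe = os.path.join(tempfile.gettempdir(), "fsync_probe.dat")
    print(f"{fsync_rate(probe):.0f} fsyncs/sec")
```

On a single 7200rpm disk, a result in the low hundreds is consistent with honest flushing; thousands of fsyncs per second suggest the drive's write cache is absorbing the flush, and a power loss could lose "committed" transactions.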

Thanks for your reply!

Mark
