Discussion: Bad performance in bulky updates


Bad performance in bulky updates

From:
"Carlos H. Reimer"
Date:
Hi,
 
We are having very bad performance issues on one of our customers' PostgreSQL servers and would like some suggestions for improving the performance of bulk updates.
 
When, for example, all rows of one of the biggest tables are updated, processing takes about 30 minutes. If we drop all indexes (21) and leave only the primary key index, the same update takes 2 minutes.
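One workaround we could test would be to drop the secondary indexes just before the bulk update and recreate them afterwards, since rebuilding an index once is usually cheaper than maintaining it for every updated row. A rough sketch, with purely hypothetical table, column, and index names:

    -- Hypothetical sketch: "big_table", "price", "col1", "col2" and the index
    -- names are placeholders, not names from the real schema.
    -- Note: the transaction holds exclusive locks on the table throughout.
    BEGIN;
    DROP INDEX idx_big_table_col1;
    DROP INDEX idx_big_table_col2;
    -- ... drop the remaining secondary indexes, keeping the primary key ...

    UPDATE big_table SET price = price * 1.1;   -- the bulk update (placeholder)

    CREATE INDEX idx_big_table_col1 ON big_table (col1);
    CREATE INDEX idx_big_table_col2 ON big_table (col2);
    COMMIT;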
 
As far as I can see there is no CPU or memory bottleneck, but sar indicates 50% iowait while those updates are running.
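To check where the reads are going during one of these updates, something like the following could be run against the standard statistics views (sketch; "big_table" is a placeholder, and pg_stat_reset() requires superuser):

    -- Clear the statistics counters, run the bulk UPDATE, then see how many
    -- blocks were read per index of the table in question.
    SELECT pg_stat_reset();

    -- ... run the bulk UPDATE here ...

    SELECT indexrelname, idx_blks_read, idx_blks_hit
      FROM pg_statio_user_indexes
     WHERE relname = 'big_table'
     ORDER BY idx_blks_read DESC;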
 
What I'm suggesting, in priority order:
 
a) add two more SCSI disks and move index data to the new drives
b) add a SCSI controller with RAID support and move the existing disks to the new controller
c) add another disk and move pg_xlog there
 
Server configuration data:
http://www.opendb.com.br/servidor.htm
 
Are there any other suggestions to improve the performance of those updates?
 
Thank you in advance!
 
Carlos

Re: Bad performance in bulky updates

From:
Tom Lane
Date:
"Carlos H. Reimer" <carlos.reimer@opendb.com.br> writes:
> When, for example, all rows of one of the biggest tables are updated,
> processing takes about 30 minutes. If we drop all indexes (21) and leave
> only the primary key index, the same update takes 2 minutes.

21 indexes??

If update performance is important then you should try to economize on
indexes.  Do you have evidence that each of those indexes is worth its
update costs?
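A quick way to gather that evidence is to look in pg_stat_user_indexes for indexes that are never scanned (a sketch; it assumes the statistics collector is running, and it won't flag indexes that exist only to enforce constraints):

    -- Indexes with no scans since the statistics were last reset are prime
    -- candidates for removal.
    SELECT schemaname, relname, indexrelname, idx_scan
      FROM pg_stat_user_indexes
     WHERE idx_scan = 0
     ORDER BY relname, indexrelname;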

            regards, tom lane

Re: Bad performance in bulky updates

From:
Richard Broersma Jr
Date:
> Is there any other suggestion to improve the performance of those updates?

Perhaps turn on query duration logging to see if any particular queries are giving you grief.
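The relevant setting is log_min_duration_statement; a sketch (the 250 ms threshold is just an example):

    -- In postgresql.conf:  log_min_duration_statement = 250   (milliseconds)
    -- or, for a quick test, a superuser can set it for the current session:
    SET log_min_duration_statement = 250;
    SHOW log_min_duration_statement;   -- verify the active value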

Other than that, I would repost this email on the performance list, as you will get better responses from that list. :o)

Regards,

Richard Broersma Jr.