Discussion: Query runs slow
Hi all,
I have a query like this:

select subkategori, produkid, namabarang,
       sum(keluar) as ttlkeluar, sum(ttlmodal) as ttlmodal
from (
    select subkategori, kodebarang as produkid, namabarang, keluar,
           tbltransaksi.modal * keluar as ttlmodal,
           case
               when tbltransaksi.discount <= 100
                   then keluar * (harga - (discount / 100 * harga))
               when tbltransaksi.discount > 100
                   then keluar * (harga - discount)
           end as jumlah
    from tblpenjualan
    join tbltransaksi on tblpenjualan.id = tbltransaksi.jualid
    join tblproduk on tbltransaksi.kodebarang = tblproduk.produkid
    join tblsubkategori on tblproduk.subkategoriid = tblsubkategori.tblsubkategoriid
    join tblkategori on tblkategori.kategoriid = tblsubkategori.kategoriid
    where tblpenjualan.tanggal between '01/01/13' and '31/12/13'
) as dt
group by subkategori, produkid, namabarang
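As a side note, the outer query never references the computed "jumlah" column, so the derived table adds nothing. A sketch of the same query flattened (not tested against the real schema; the table aliases are mine):

```sql
-- Same aggregation without the unused CASE expression or the subquery.
select s.subkategori,
       t.kodebarang            as produkid,
       p.namabarang,
       sum(t.keluar)           as ttlkeluar,
       sum(t.modal * t.keluar) as ttlmodal
from tblpenjualan j
join tbltransaksi t   on j.id = t.jualid
join tblproduk p      on t.kodebarang = p.produkid
join tblsubkategori s on p.subkategoriid = s.tblsubkategoriid
join tblkategori k    on k.kategoriid = s.kategoriid
where j.tanggal between date '2013-01-01' and date '2013-12-31'
group by s.subkategori, t.kodebarang, p.namabarang;
```

This also replaces the ambiguous '01/01/13' literals with unambiguous ISO dates, which is what the planner resolved them to in the plan below.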
This query collects the sales records for a specific date range and sums the quantities per product id. Here is the EXPLAIN VERBOSE output:
GroupAggregate  (cost=190773.38..209827.25 rows=692868 width=65)
  Output: tblsubkategori.subkategori, tbltransaksi.kodebarang, tblproduk.namabarang, sum(tbltransaksi.keluar), sum((tbltransaksi.modal * tbltransaksi.keluar))
  ->  Sort  (cost=190773.38..192505.55 rows=692868 width=65)
        Output: tblsubkategori.subkategori, tbltransaksi.kodebarang, tblproduk.namabarang, tbltransaksi.keluar, tbltransaksi.modal
        Sort Key: tblsubkategori.subkategori, tbltransaksi.kodebarang, tblproduk.namabarang
        ->  Hash Join  (cost=5123.14..69083.49 rows=692868 width=65)
              Output: tblsubkategori.subkategori, tbltransaksi.kodebarang, tblproduk.namabarang, tbltransaksi.keluar, tbltransaksi.modal
              Hash Cond: ((tbltransaksi.kodebarang)::text = (tblproduk.produkid)::text)
              ->  Merge Join  (cost=0.77..42032.84 rows=692868 width=23)
                    Output: tbltransaksi.kodebarang, tbltransaksi.keluar, tbltransaksi.modal
                    Merge Cond: (tblpenjualan.id = tbltransaksi.jualid)
                    ->  Index Scan using tblpenjualan_pkey on public.tblpenjualan  (cost=0.29..6662.34 rows=155847 width=4)
                          Output: tblpenjualan.id, tblpenjualan.tanggal, tblpenjualan.noinvoice, tblpenjualan.customer, tblpenjualan.bayar, tblpenjualan.jenis, tblpenjualan.jumlah, tblpenjualan.keterangan, tblpenjualan.jam, tblpenjualan.kassa, tblpenjualan.jatuhtempo, tblpenjualan.cetak, tblpenjualan.modifyby, tblpenjualan.createby, tblpenjualan.sales, tblpenjualan.mesinedc, tblpenjualan.void
                          Filter: ((tblpenjualan.tanggal >= '2013-01-01'::date) AND (tblpenjualan.tanggal <= '2013-12-31'::date))
                    ->  Index Scan using tbltransaksi_idx4 on public.tbltransaksi  (cost=0.42..26320.16 rows=692890 width=27)
                          Output: tbltransaksi.id, tbltransaksi.tanggal, tbltransaksi.kodebarang, tbltransaksi.masuk, tbltransaksi.keluar, tbltransaksi.satuan, tbltransaksi.keterangan, tbltransaksi.jenis, tbltransaksi.harga, tbltransaksi.discount, tbltransaksi.jualid, tbltransaksi.beliid, tbltransaksi.mutasiid, tbltransaksi.nobukti, tbltransaksi.customerid, tbltransaksi.modal, tbltransaksi.awalid, tbltransaksi.terimabrgid, tbltransaksi.opnameid, tbltransaksi.returjualid, tbltransaksi.returbeliid
              ->  Hash  (cost=3259.85..3259.85 rows=83642 width=55)
                    Output: tblproduk.namabarang, tblproduk.produkid, tblsubkategori.subkategori
                    ->  Hash Join  (cost=5.35..3259.85 rows=83642 width=55)
                          Output: tblproduk.namabarang, tblproduk.produkid, tblsubkategori.subkategori
                          Hash Cond: ((tblproduk.subkategoriid)::text = (tblsubkategori.tblsubkategoriid)::text)
                          ->  Seq Scan on public.tblproduk  (cost=0.00..2104.42 rows=83642 width=45)
                                Output: tblproduk.produkid, tblproduk.namabarang, tblproduk.hargajual, tblproduk.subkategoriid, tblproduk.createby, tblproduk.kodepromo, tblproduk.satuan, tblproduk.foto, tblproduk.pajak, tblproduk.listingfee, tblproduk.supplierid, tblproduk.modifyby, tblproduk.qtygrosir, tblproduk.hargagrosir, tblproduk.diskonjual, tblproduk.modal
                          ->  Hash  (cost=4.23..4.23 rows=90 width=17)
                                Output: tblsubkategori.subkategori, tblsubkategori.tblsubkategoriid
                                ->  Hash Join  (cost=1.09..4.23 rows=90 width=17)
                                      Output: tblsubkategori.subkategori, tblsubkategori.tblsubkategoriid
                                      Hash Cond: ((tblsubkategori.kategoriid)::text = (tblkategori.kategoriid)::text)
                                      ->  Seq Scan on public.tblsubkategori  (cost=0.00..1.90 rows=90 width=21)
                                            Output: tblsubkategori.tblsubkategoriid, tblsubkategori.subkategori, tblsubkategori.kategoriid
                                      ->  Hash  (cost=1.04..1.04 rows=4 width=38)
                                            Output: tblkategori.kategoriid
                                            ->  Seq Scan on public.tblkategori  (cost=0.00..1.04 rows=4 width=38)
                                                  Output: tblkategori.kategoriid
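One thing visible in the plan: the date range on tblpenjualan.tanggal is applied as a Filter while walking the whole primary-key index (all 155,847 rows). A hypothetical index on the date column (my suggestion, not something from the thread) would let the planner fetch only the rows in range instead:

```sql
-- Hypothetical: lets the planner use an index range scan on tanggal
-- rather than filtering every row of tblpenjualan_pkey.
create index tblpenjualan_tanggal_idx on tblpenjualan (tanggal);
```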
Thanks for any suggestions.
Hengky Lie <hengkyliwandouw@gmail.com> wrote:

> this query takes long time to process. It takes around 48 seconds
> to calculate about 690 thousand record.
>
> Is there any way to make calculation faster ?

Quite possibly -- that's about 70 microseconds per row, and even
fairly complex queries can often do better than that. You didn't
provide enough information to allow people to help you very
effectively. Please read this page:

http://wiki.postgresql.org/wiki/SlowQueryQuestions

I suggest that you post to the pgsql-performance list with more
detail, as suggested on that page.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
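For the record, Kevin's per-row figure checks out; the arithmetic can be verified from any SQL prompt:

```sql
-- 48 seconds over ~690,000 rows:
select round(48.0 / 690000 * 1e6) as microseconds_per_row;  -- 70
```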
Dear Kevin,

After reading the link you gave me, changing shared_buffers to 25% (512MB)
of available RAM and effective_cache_size to 1500MB (about 75% of
available RAM) makes the query run very fast. Postgres only needs 1.8
seconds to display the result.

Thanks a lot!

On Nov 24, 2013, at 11:21 PM, Kevin Grittner wrote:
> Hengky Lie <hengkyliwandouw@gmail.com> wrote:
>
>> this query takes long time to process. It takes around 48 seconds
>> to calculate about 690 thousand record.
>>
>> Is there any way to make calculation faster ?
>
> Quite possibly -- that's about 70 microseconds per row, and even
> fairly complex queries can often do better than that. You didn't
> provide enough information to allow people to help you very
> effectively. Please read this page:
>
> http://wiki.postgresql.org/wiki/SlowQueryQuestions
>
> I suggest that you post to the pgsql-performance list with more
> detail, as suggested on that page.
>
> --
> Kevin Grittner
> EDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
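The settings the thread landed on would look like this in postgresql.conf (assuming a machine with roughly 2 GB of RAM, as the 25%/75% figures imply):

```
shared_buffers = 512MB          # ~25% of RAM; takes effect after a server restart
effective_cache_size = 1500MB   # ~75% of RAM; a planner estimate, not an allocation
```

Note that effective_cache_size only influences the planner's cost estimates; unlike shared_buffers, it reserves no memory, so a reload is enough for it to take effect.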
Hengky Liwandouw <hengkyliwandouw@gmail.com> wrote:
> On Nov 24, 2013, at 11:21 PM, Kevin Grittner wrote:
>> Hengky Lie <hengkyliwandouw@gmail.com> wrote:
>>
>>> this query takes long time to process. It takes around 48
>>> seconds to calculate about 690 thousand record.
>>>
>>> Is there any way to make calculation faster ?
>>
>> Quite possibly -- that's about 70 microseconds per row, and even
>> fairly complex queries can often do better than that.
>
> After reading the link you gave to me, changing shared_buffers to
> 25% (512MB) of available RAM and effective_cache_size to 1500MB
> (about 75% of available RAM) make the query runs very fast.
> Postgres only need 1.8 second to display the result.

That's about 2.6 microseconds per row. Given the complexity of the
query, it might be hard to improve on that. A simple tablescan that
returns all rows generally takes 1 to 2 microseconds per row on the
hardware I generally use.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company