Discussion: how would you speed up this long query?
select
    sub_query_1.pid,
    sub_query_1.tit,
    sub_query_1.num,
    sub_query_3.cid,
    sub_query_3.id,
    sub_query_3.c,
    sub_query_3.s,
    sub_query_3.z,
    sub_query_3.cy,
    sub_query_3.cd,
    sub_query_3.cr,
    org.id as org__id,
    org.pid as org__pid,
    org.open,
    org.cid as org__cid,
    z0.zcg
from
    (select proj.pid, proj.tit, proj.num
     from proj,
          (select org.pid from org where org.open = 'Y') as sub_1
     where proj.pid = sub_1.pid) as sub_query_1,
    (select detail.cid, detail.id, detail.c, detail.s,
            detail.z, detail.cy, detail.cd, detail.cr
     from detail,
          (select org.id from org where org.open = 'Y') as sub_3
     where detail.id = sub_3.id) as sub_query_3,
    org,
    z0
where
    sub_query_1.pid = org.pid and
    sub_query_3.id = org.id and
    sub_query_3.z = z0.zcg
group by
    z0.zcg,
    sub_query_1.pid, sub_query_1.tit, sub_query_1.num,
    sub_query_3.cid, sub_query_3.id, sub_query_3.c, sub_query_3.s,
    sub_query_3.z, sub_query_3.cy, sub_query_3.cd, sub_query_3.cr,
    org.id, org.pid, org.open, org.cid
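For reference: the open-org filter is applied three times (once inside each derived table, then again implicitly via the outer join to org), and the GROUP BY with no aggregate functions only de-duplicates rows. A rough sketch of an equivalent formulation with explicit joins, assuming pid and id each identify a single org row (if org is not unique on those columns, the two forms can return different results):

```sql
-- Sketch only: collapses the three scans of org into one,
-- and replaces the aggregate-free GROUP BY with DISTINCT.
select distinct
       proj.pid, proj.tit, proj.num,
       detail.cid, detail.id, detail.c, detail.s,
       detail.z, detail.cy, detail.cd, detail.cr,
       org.id  as org__id,
       org.pid as org__pid,
       org.open,
       org.cid as org__cid,
       z0.zcg
from   org
       join proj   on proj.pid = org.pid
       join detail on detail.id = org.id
       join z0     on z0.zcg  = detail.z
where  org.open = 'Y';
```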
On 28/03/15 10:10, zach cruise wrote:
> [query quoted above, snipped]

Version of PostgreSQL? Operating system? Hardware configuration?
Indexes? EXPLAIN ANALYZE output? Anything else that might be relevant?
What have you already done to investigate?

Cheers,
Gavin
> Version of PostgreSQL?
9.3

> Operating system?
win

> Hardware configuration?
8 gb ram. takes about 7000 ms to retrieve about 7000 rows.

max_connections = 200
shared_buffers = 512MB
effective_cache_size = 6GB
work_mem = 13107kB
maintenance_work_mem = 512MB
checkpoint_segments = 32
checkpoint_completion_target = 0.7
wal_buffers = 16MB
default_statistics_target = 100

> Indexes?
no

> Anything else that might be relevant?
no

> What have you already done to investigate?
moved subquery from "where" to "from" to evaluate once instead of once per row

> EXPLAIN ANALYZE output?
would help if you can help us understand what's going on here:

Group  (cost=5520.89..6335.03 rows=18092 width=199) (actual time=3864.186..4402.447 rows=5512 loops=1)
  ->  Sort  (cost=5520.89..5566.12 rows=18092 width=199) (actual time=3864.171..4146.725 rows=97141 loops=1)
        Sort Key: z0.zcg, proj.pid, proj.tit, proj.num, detail.cid, detail.id, det (...)
        Sort Method: external merge  Disk: 21648kB
        ->  Hash Join  (cost=3541.48..4241.51 rows=18092 width=199) (actual time=254.216..432.629 rows=97141 loops=1)
              Hash Cond: (org.id = detail.id)
              ->  Hash Join  (cost=752.72..1036.45 rows=4955 width=109) (actual time=64.492..86.822 rows=4977 loops=1)
                    Hash Cond: (org.pid = proj.pid)
                    ->  Seq Scan on org  (cost=0.00..196.82 rows=4982 width=26) (actual time=0.024..6.199 rows=4982 loops=1)
                    ->  Hash  (cost=702.97..702.97 rows=3980 width=91) (actual time=64.439..64.439 rows=3973 loops=1)
                          Buckets: 1024  Batches: 1  Memory Usage: 465kB
                          ->  Hash Join  (cost=424.04..702.97 rows=3980 width=91) (actual time=20.994..52.773 rows=3973 loops=1)
                                Hash Cond: (org_1.pid = proj.pid)
                                ->  Seq Scan on org org_1  (cost=0.00..209.28 rows=3980 width=8) (actual time=0.016..10.815 rows=3980 loops=1)
                                      Filter: ((open)::text = 'Y'::text)
                                      Rows Removed by Filter: 1002
                                ->  Hash  (cost=374.02..374.02 rows=4002 width=83) (actual time=20.950..20.950 rows=4002 loops=1)
                                      Buckets: 1024  Batches: 1  Memory Usage: 424kB
                                      ->  Seq Scan on proj  (cost=0.00..374.02 rows=4002 width=83) (actual time=0.010..9.810 rows=4002 loops=1)
              ->  Hash  (cost=2716.44..2716.44 rows=5786 width=98) (actual time=189.677..189.677 rows=4959 loops=1)
                    Buckets: 1024  Batches: 1  Memory Usage: 629kB
                    ->  Hash Join  (cost=2369.71..2716.44 rows=5786 width=98) (actual time=169.635..182.956 rows=4959 loops=1)
                          Hash Cond: (org_2.id = detail.id)
                          ->  Seq Scan on org org_2  (cost=0.00..209.28 rows=3980 width=8) (actual time=0.015..4.194 rows=3980 loops=1)
                                Filter: ((open)::text = 'Y'::text)
                                Rows Removed by Filter: 1002
                          ->  Hash  (cost=2340.92..2340.92 rows=2303 width=90) (actual time=169.596..169.596 rows=1964 loops=1)
                                Buckets: 1024  Batches: 1  Memory Usage: 224kB
                                ->  Hash Join  (cost=2069.93..2340.92 rows=2303 width=90) (actual time=159.126..166.937 rows=1964 loops=1)
                                      Hash Cond: ((detail.z)::text = (z0.zcg)::text)
                                      ->  Seq Scan on detail  (cost=0.00..199.03 rows=2303 width=52) (actual time=0.009..2.152 rows=2303 loops=1)
                                      ->  Hash  (cost=1538.30..1538.30 rows=42530 width=38) (actual time=159.070..159.070 rows=42530 loops=1)
                                            Buckets: 8192  Batches: 1  Memory Usage: 2451kB
                                            ->  Seq Scan on z0  (cost=0.00..1538.30 rows=42530 width=38) (actual time=0.010..82.125 rows=42530 loops=1)
Total runtime: 4414.655 ms
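Since no indexes exist, the usual candidates would be the join columns. At these row counts the hash joins are already cheap (the big cost in the plan is the sort), so these may not help much, but as a sketch (column types and uniqueness are unknown, so these are hypothetical definitions):

```sql
-- Hypothetical indexes on the join columns used by the query above.
create index on org (pid);
create index on org (id);
create index on proj (pid);
create index on detail (id);
create index on detail (z);
create index on z0 (zcg);
-- Refresh statistics afterwards so the planner can use them.
analyze;
```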
-----Original Message-----
From: pgsql-general-owner@postgresql.org [mailto:pgsql-general-owner@postgresql.org] On Behalf Of zach cruise
Sent: Tuesday, March 31, 2015 2:56 PM
To: Gavin Flower
Cc: PostgreSQL
Subject: Re: [GENERAL] how would you speed up this long query?

> [quoted message and EXPLAIN ANALYZE output snipped]

--

Didn't see replies to this message, so...

Your query spends most of the time on sorting:

  Sort Method: external merge  Disk: 21648kB

and it doesn't fit in memory. Try increasing work_mem somewhat, to 50MB. You could do it for this particular connection only, if you don't want to change it for the whole server.

Regards,
Igor Neyman
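Igor's per-connection suggestion can be applied like this (the 50MB figure is his estimate; anything comfortably above the ~21MB spilled to disk should let the sort run in memory):

```sql
-- Raise work_mem for this session only; other connections are unaffected.
SET work_mem = '50MB';
-- ... run the query ...
RESET work_mem;

-- Or scope it to a single transaction with SET LOCAL:
BEGIN;
SET LOCAL work_mem = '50MB';
-- ... run the query ...
COMMIT;  -- work_mem reverts automatically at transaction end
```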