Discussion: Slow select
I have a table with a column of character varying(100). There are about
150,000,000 rows in the table. The index was created as:

  CREATE INDEX idx_stringv
    ON bn_stringvalue
    USING btree
    (lower(value::text));

I'm trying to execute queries like 'select * from stringvalue where
value=lower(?)'. Making 1000 selects takes about 4-5 min. I did VACUUM
and ANALYZE on this table and checked that the query plan uses the
index. What can I do to make it faster?

Thanks in advance,
Yulia

--
View this message in context: http://old.nabble.com/Slow-select-tp26810673p26810673.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.
show us explain select * ....

--
GJ
On Wed, Dec 16, 2009 at 04:56:16AM -0800, yuliada wrote:
> I have a table with column of character varying(100). There are about
> 150.000.000 rows in a table. Index was created as
>
> CREATE INDEX idx_stringv
>   ON bn_stringvalue
>   USING btree
>   (lower(value::text));
>
> I'm trying to execute queries like 'select * from stringvalue where
> value=lower(?)'.

Wouldn't this be "lower(value) = lower(?)"?

> Making 1000 selects takes about 4-5 min.

So each query is taking approx 300ms? How much data does each one
return?

> I did vacuum and analyze on this table and checked that query plan
> uses index. What can I do to make it faster?

How about combining all 1000 selects into one? Maybe something like:

  SELECT * FROM stringvalue WHERE lower(value) = ANY (ARRAY ['a','b','c']);

--
Sam  http://samason.me.uk/
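[A sketch of the batched form Sam suggests, with the key list bound as a
single array parameter rather than spelled out inline. The $1
placeholder and the idea of binding a text[] from the client are
illustrative; both sides of the comparison use lower(...) so the
expression index idx_stringv still applies.]

```sql
-- One round trip instead of 1000 separate SELECTs.
-- $1 is bound to a text[] of the lower-cased lookup keys,
-- e.g. ARRAY['esr', 'foo', 'bar', ...].
SELECT *
FROM bn_stringvalue
WHERE lower(value::text) = ANY ($1);
```

The result set is the union of all 1000 lookups; if the client needs to
know which key matched each row, it can group on lower(value::text)
after fetching.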
Sam Mason wrote:
> Wouldn't this be "lower(value) = lower(?)"?

Yes, I use it as "lower(value) = lower(?)"; I typed an inaccurate
example.

Sam Mason wrote:
> So each query is taking approx 300ms? How much data does each one
> return?

No more than 1000 rows.

Sam Mason wrote:
> How about combining all 1000 selects into one?

I can't combine these selects into one, I need to run them one after
another.

Grzegorz Jaśkiewicz wrote:
> show us explain select * ....

"Bitmap Heap Scan on bn_stringvalue v  (cost=228.40..8688.70 rows=2172 width=90) (actual time=1129.767..1781.403 rows=104 loops=1)"
"  Recheck Cond: (lower((value)::text) = 'esr'::text)"
"  ->  Bitmap Index Scan on idx_stringv  (cost=0.00..227.86 rows=2172 width=0) (actual time=1107.974..1107.974 rows=104 loops=1)"
"        Index Cond: (lower((value)::text) = 'esr'::text)"
"Total runtime: 1781.566 ms"

Thanks
If I search for something which is not in the db, like
'dfsgsdfgsdfgdsfg', it always works fast. I suspect that the speed
depends on the number of rows returned, but I don't know exactly...
On Wed, Dec 16, 2009 at 05:18:12PM -0800, yuliada wrote:
> Sam Mason wrote:
> > How about combining all 1000 selects into one?
>
> I can't combine these selects into one, I need to run them one after
> another.

Hum, difficult. What other information is in the row that you need
back? Can you turn the table structure around somehow so that the
"value" is the primary key and hence only a single row needs to be
found each time? Other than that, I think you just need faster disks.

> "Bitmap Heap Scan on bn_stringvalue v  (cost=228.40..8688.70 rows=2172 width=90) (actual time=1129.767..1781.403 rows=104 loops=1)"
> "  Recheck Cond: (lower((value)::text) = 'esr'::text)"
> "  ->  Bitmap Index Scan on idx_stringv  (cost=0.00..227.86 rows=2172 width=0) (actual time=1107.974..1107.974 rows=104 loops=1)"
> "        Index Cond: (lower((value)::text) = 'esr'::text)"
> "Total runtime: 1781.566 ms"

It looks like it's doing reasonable things. I assume you've got a
single disk servicing this; 1781 / (104*2) = 8ms average seek time.
Clustering on "value" may help, but it's going to take a while. Its
value depends on how common this operation is compared to other ones.

--
Sam  http://samason.me.uk/
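[Sam's clustering suggestion as a sketch. Note that CLUSTER rewrites
the whole table under an exclusive lock, so on ~150M rows it will take
a while and block other access; the payoff is that the ~100 matching
rows per key land on adjacent heap pages instead of costing one random
seek each.]

```sql
-- Physically reorder the table by the expression index.
CLUSTER idx_stringv ON bn_stringvalue;      -- syntax up to 8.3
-- or, from 8.4 on:
-- CLUSTER bn_stringvalue USING idx_stringv;

-- Refresh planner statistics after the rewrite.
ANALYZE bn_stringvalue;
```

CLUSTER is a one-shot operation: later inserts are not kept in index
order, so it would need re-running periodically if the table churns.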