I have an index question. My table has 800K rows, and I'm doing a basic query on an indexed integer column that takes over 2 seconds to complete because the planner is ignoring the index for some reason. Any ideas as to why it's ignoring the index? I'm using PostgreSQL 8.0.2.
SELECT count(*) FROM purchase_order_items WHERE expected_quantity > '0'
EXPLAIN ANALYZE reveals that it's not using the index...
Aggregate (cost=22695.28..22695.28 rows=1 width=0) (actual time=2205.688..2205.724 rows=1 loops=1)
-> Seq Scan on purchase_order_items (cost=0.00..21978.08 rows=286882 width=0) (actual time=0.535..2184.405 rows=7458 loops=1)
Filter: (expected_quantity > 0)
Total runtime: 2207.203 ms
However, if I use the "SET ENABLE_SEQSCAN TO OFF" trick, then it does use the index and is much faster.
SET ENABLE_SEQSCAN TO OFF;
EXPLAIN ANALYZE SELECT count(*) FROM purchase_order_items WHERE expected_quantity > '0'
Aggregate (cost=1050659.46..1050659.46 rows=1 width=0) (actual time=137.393..137.441 rows=1 loops=1)
-> Index Scan using purchase_order_items_expected_quantity_idx on purchase_order_items (cost=0.00..1049942.25 rows=286882 width=0) (actual time=0.756..119.990 rows=7458 loops=1)
Index Cond: (expected_quantity > 0)
Total runtime: 139.185 ms
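If I did end up relying on the override, I assume I'd want to scope it to just this one statement rather than the whole session, something like this (untested sketch):

```sql
BEGIN;
-- SET LOCAL only lasts until the end of the current transaction,
-- so other queries on this connection keep normal planner behavior.
SET LOCAL enable_seqscan = off;
SELECT count(*) FROM purchase_order_items WHERE expected_quantity > 0;
COMMIT;
```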
I could understand if this were a really complex query and the planner got confused, but this is such a simple query. One thing I do notice is that the planner estimates 286882 matching rows while only 7458 actually match, so maybe its statistics are off? Is it OK to use "SET ENABLE_SEQSCAN TO OFF;" in production code? Is there another solution?
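Given how far the row estimate (286882) is from the actual count (7458), I was wondering whether just refreshing the statistics would be enough. This is what I'd try first (untested):

```sql
-- Refresh the planner's statistics for this table.
ANALYZE purchase_order_items;

-- If the default sample is too small (I think 8.0 defaults to a
-- statistics target of 10), raise it for this column and re-analyze.
ALTER TABLE purchase_order_items
    ALTER COLUMN expected_quantity SET STATISTICS 100;
ANALYZE purchase_order_items;
```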
Thanks!
------------------------------
-- Table Definition --
CREATE TABLE purchase_order_items (
    id                 serial NOT NULL,
    purchase_order_id  integer,
    manufacturer_id    integer,
    quantity           integer,
    product_name       character varying(16),
    short_description  character varying(60),
    expected_quantity  integer,
    received_quantity  integer,
    "position"         real,
    created_at         timestamp without time zone DEFAULT now(),
    updated_at         timestamp without time zone
);
-- Index --
CREATE INDEX purchase_order_items_expected_quantity_idx ON purchase_order_items USING btree (expected_quantity);
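Since only a small fraction of the rows have expected_quantity > 0, I also wondered whether a partial index would be a better fit than the full one above. If I have the syntax right, it would look like this (the index name is just a placeholder):

```sql
-- Indexes only the rows the query actually wants, so the index stays
-- small and should look cheap to the planner for this exact predicate.
CREATE INDEX purchase_order_items_pending_idx
    ON purchase_order_items (expected_quantity)
    WHERE expected_quantity > 0;
```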