Discussion: Is this a planner bug?


Is this a planner bug?

From: Torsten Förtsch
Date:

Hi,

I got this plan:

Limit  (cost=0.00..1.12 rows=1 width=0)
   ->  Seq Scan on fmb  (cost=0.00..6964734.35 rows=6237993 width=0)
         Filter: ...

The table has ~80,000,000 rows. So, the filter, according to the plan,
filters out >90% of the rows. Although the cost for the first row to
come out of the seqscan might be 0, the cost for the first row to pass
the filter and, hence, to hit the limit node is probably higher.

Thanks,
Torsten


Re: Is this a planner bug?

From: Pavel Stehule
Date:

Hello

what is your effective_cache_size in postgresql.conf?

What is random_page_cost and seq_page_cost?
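
You can check the current values with, for example:

    SHOW effective_cache_size;
    SHOW random_page_cost;
    SHOW seq_page_cost;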

Regards

Pavel

2014-04-22 14:10 GMT+02:00 Torsten Förtsch <torsten.foertsch@gmx.net>:
Hi,

I got this plan:

Limit  (cost=0.00..1.12 rows=1 width=0)
   ->  Seq Scan on fmb  (cost=0.00..6964734.35 rows=6237993 width=0)
         Filter: ...

The table has ~80,000,000 rows. So, the filter, according to the plan,
filters out >90% of the rows. Although the cost for the first row to
come out of the seqscan might be 0, the cost for the first row to pass
the filter and, hence, to hit the limit node is probably higher.

Thanks,
Torsten



Re: Is this a planner bug?

From: Torsten Förtsch
Date:

On 22/04/14 14:24, Pavel Stehule wrote:
> what is your effective_cache_size in postgresql.conf?
>
> What is random_page_cost and seq_page_cost?
>

8GB, 4, 1

But I am not asking about how to get a different plan or how to optimize
the query. I know that.

What I'm asking is the following. Assume a node without any filter has a
startup cost C1, a total cost C2, and produces N rows. Now a filter is
applied which passes M rows through. Then the startup cost for the node
*with* the filter applied should be different from C1, because a certain
number of rows at the beginning are filtered out, right?

I think the startup cost should be something like

  C1 + (C2+N*F-C1)*M/N   or   C1 + 0.5*(C2+N*F-C1)*M/N

where F is the cost to apply the filter to one row.

On average only one out of N/M rows matches the filter. So we need to
fetch N/M rows to produce the first row out of the filter. Now, you can
argue that we don't know where in that set the first matching row is. On
average it would probably be in the middle. That's where the 0.5 comes from.

I certainly got it wrong somewhere. But I think you got the idea.

If not the seqscan node, then at least the limit node should have a
startup cost >0 (depending on where the filter is taken into account). In
my case the startup cost for the limit node should be somewhere between
250000 and 300000.
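
Plugging the plan's numbers into the second formula, with C1 = 0,
C2 = 6964734.35, M = 6237993, N = 80000000, and ignoring the unknown F
(just a rough sketch of the arithmetic):

    SELECT 0 + 0.5 * (6964734.35 - 0) * 6237993 / 80000000.0;
    -- => about 271500, i.e. within that 250000..300000 range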

Torsten


Re: Is this a planner bug?

From: Albe Laurenz
Date:

> Torsten Förtsch wrote:
>>> I got this plan:
>>>
>>> Limit  (cost=0.00..1.12 rows=1 width=0)
>>>    ->  Seq Scan on fmb  (cost=0.00..6964734.35 rows=6237993 width=0)
>>>          Filter: ...
>>>
>>> The table has ~80,000,000 rows. So, the filter, according to the plan,
>>> filters out >90% of the rows. Although the cost for the first row to
>>> come out of the seqscan might be 0, the cost for the first row to pass
>>> the filter and, hence, to hit the limit node is probably higher.

>> what is your effective_cache_size in postgresql.conf?
>>
>> What is random_page_cost and seq_page_cost?

> 8GB, 4, 1

Could you run EXPLAIN ANALYZE for the query with enable_seqscan
on and off?  I'd be curious
a) if the index can be used
b) if it can be used, if that is actually cheaper
c) how the planner's estimates compare with reality.
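
Something along these lines would do (a sketch only, since the actual
query text is elided above):

    SET enable_seqscan = off;
    EXPLAIN ANALYZE <query>;   -- forces an index path if one is usable
    RESET enable_seqscan;
    EXPLAIN ANALYZE <query>;   -- the planner's own choice, for comparison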

Yours,
Laurenz Albe

Re: Is this a planner bug?

From: Tom Lane
Date:

Torsten Förtsch <torsten.foertsch@gmx.net> writes:
> What I'm asking is the following. Assume a node without any filter has a
> startup cost C1, a total cost C2, and produces N rows. Now a filter is
> applied which passes M rows through. Then the startup cost for the node
> *with* the filter applied should be different from C1, because a certain
> number of rows at the beginning are filtered out, right?

No.  The model is that startup cost is what's expended before the scan can
start, and then the run cost (total_cost - startup_cost) is expended while
scanning.  Applying a filter increases the run cost and also reduces the
number of rows returned, but that's got nothing to do with startup cost.

As a comparison point, imagine an index scan that has a filter condition
in addition to the indexable condition (which let's assume selects
multiple rows).  The startup cost for such a plan corresponds to the index
descent costs.  The run cost corresponds to scanning the index entries
matching the indexable condition, fetching the heap rows, and applying the
filter condition.

Or in other words, time to get the first result row is not just startup
cost; it's startup cost plus run_cost/N, if the plan is estimated to
return N rows altogether.
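
You can check that against the plan you posted: the Limit's 1.12 is just
this formula applied to the seqscan's estimates (arithmetic sketch):

    -- startup_cost + run_cost / N, with run_cost = 6964734.35 - 0.00
    SELECT 0.00 + 6964734.35 / 6237993;   -- => about 1.12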

            regards, tom lane


Re: Is this a planner bug?

From: Torsten Förtsch
Date:

On 22/04/14 16:39, Albe Laurenz wrote:
> Could you run EXPLAIN ANALYZE for the query with enable_seqscan
> on and off?  I'd be curious
> a) if the index can be used
> b) if it can be used, if that is actually cheaper
> c) how the planner estimates compare with reality.
>

Using the index:

Limit  (cost=0.57..2.95 rows=1 width=0)
       (actual time=0.095..0.095 rows=1 loops=1)
   ->  Index Scan ... (cost=0.57..14857285.83 rows=6240539 width=0)
                      (actual time=0.095..0.095 rows=1 loops=1)
         Index Cond:...
         Filter: ...
         Rows Removed by Filter: 4
 Total runtime: 0.147 ms


seq scan:

Limit  (cost=0.00..1.12 rows=1 width=0)
       (actual time=0.943..0.944 rows=1 loops=1)
   ->  Seq Scan ...  (cost=0.00..6967622.77 rows=6240580 width=0)
                     (actual time=0.940..0.940 rows=1 loops=1)
         Filter: ...
         Rows Removed by Filter: 215
 Total runtime: 0.997 ms

In these cases everything comes from the cache. When I first tried the
query it used a seq scan and took several seconds. In that case only
setting random_page_cost lower than seq_page_cost would make the planner
use the index.
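
That is, only something like this tipped it over (the value is purely
illustrative):

    SET random_page_cost = 0.5;   -- i.e. below seq_page_cost = 1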


I think if we had separate filter nodes, just like Sort nodes, it would
be clearer that the startup cost of a seq scan with a filter cannot be 0.

Torsten


Re: Is this a planner bug?

From: Torsten Förtsch
Date:

On 22/04/14 16:45, Tom Lane wrote:
> No.  The model is that startup cost is what's expended before the scan can
> start, and then the run cost (total_cost - startup_cost) is expended while
> scanning.  Applying a filter increases the run cost and also reduces the
> number of rows returned, but that's got nothing to do with startup cost.
>
> As a comparison point, imagine an index scan that has a filter condition
> in addition to the indexable condition (which let's assume selects
> multiple rows).  The startup cost for such a plan corresponds to the index
> descent costs.  The run cost corresponds to scanning the index entries
> matching the indexable condition, fetching the heap rows, and applying the
> filter condition.
>
> Or in other words, time to get the first result row is not just startup
> cost; it's startup cost plus run_cost/N, if the plan is estimated to
> return N rows altogether.

Ok, I understand that's the way the model is.

The point is that, especially in the presence of a LIMIT 1, there is a
difference between a seq scan that has to fetch a few tens of MB to find
the first and only row and an index scan that has to process perhaps a
few kB. And in this case even setting random_page_cost = seq_page_cost
didn't help.

If that query were part of a larger one, I wouldn't want to fiddle with
the cost parameters to get one part of the query fast only to sacrifice
performance in another part.

Torsten


Re: Is this a planner bug?

From: Tom Lane
Date:

Torsten Förtsch <torsten.foertsch@gmx.net> writes:
> Using the index:

> Limit  (cost=0.57..2.95 rows=1 width=0)
>        (actual time=0.095..0.095 rows=1 loops=1)
>    ->  Index Scan ... (cost=0.57..14857285.83 rows=6240539 width=0)
>                       (actual time=0.095..0.095 rows=1 loops=1)
>          Index Cond:...
>          Filter: ...
>          Rows Removed by Filter: 4
>  Total runtime: 0.147 ms


> seq scan:

> Limit  (cost=0.00..1.12 rows=1 width=0)
>        (actual time=0.943..0.944 rows=1 loops=1)
>    ->  Seq Scan ...  (cost=0.00..6967622.77 rows=6240580 width=0)
>                      (actual time=0.940..0.940 rows=1 loops=1)
>          Filter: ...
>          Rows Removed by Filter: 215
>  Total runtime: 0.997 ms

So the real question here is whether 215 rows skipped to find the first
matching row is more or less than the statistical expectation.

You said the table has 80M rows, so the planner evidently thinks the
filter has selectivity 6240580/80M or about 1/13, so it would have been
expecting the scan to find a match after about 13 rows on average.  Having
to scan 215 rows is thus considerably more than it was guessing.  If we
had statistics that would allow a better guess at where the first matching
row is, then indeed this would be a planner bug --- but it's not a bug,
it's just a limitation of the available statistical data.
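
Spelling out that arithmetic (a quick sketch):

    SELECT 6240580 / 80000000.0;   -- selectivity ≈ 0.078, about 1/13
    SELECT 80000000 / 6240580.0;   -- ≈ 12.8 rows expected before a match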

What's more interesting is that the index scan only had to skip 4 rows
to get a match.  Is it getting unduly lucky rather than unduly unlucky?

There have been some discussions of intentionally penalizing the estimates
for small-LIMIT plans, so that we assume worse-than-random placement of
the first few matching rows.  That would kick up the estimate for this
seqscan all right, but I'm unsure that it wouldn't kick up the estimate
for the indexscan just as much.

            regards, tom lane