Thread: optimizer and type question


optimizer and type question

From: Erik Riedel
Date:
[last week aggregation, this week, the optimizer]

I have a somewhat general optimizer question/problem that I would like
to get some input on - i.e. I'd like to know what is "supposed" to
work here and what I should be expecting.  Sadly, I think the patch
for this is more involved than my last message.

Using my favorite table these days:

Table    = lineitem
+------------------------+----------------------------------+-------+
|              Field     |              Type                | Length|
+------------------------+----------------------------------+-------+
| l_orderkey             | int4 not null                    |     4 |
| l_partkey              | int4 not null                    |     4 |
| l_suppkey              | int4 not null                    |     4 |
| l_linenumber           | int4 not null                    |     4 |
| l_quantity             | float4 not null                  |     4 |
| l_extendedprice        | float4 not null                  |     4 |
| l_discount             | float4 not null                  |     4 |
| l_tax                  | float4 not null                  |     4 |
| l_returnflag           | char() not null                  |     1 |
| l_linestatus           | char() not null                  |     1 |
| l_shipdate             | date                             |     4 |
| l_commitdate           | date                             |     4 |
| l_receiptdate          | date                             |     4 |
| l_shipinstruct         | char() not null                  |    25 |
| l_shipmode             | char() not null                  |    10 |
| l_comment              | char() not null                  |    44 |
+------------------------+----------------------------------+-------+
Index:    lineitem_index_

and the query:

--
-- Query 1
--
explain select l_returnflag, l_linestatus, sum(l_quantity) as sum_qty, 
sum(l_extendedprice) as sum_base_price, 
sum(l_extendedprice*(1-l_discount)) as sum_disc_price, 
sum(l_extendedprice*(1-l_discount)*(1+l_tax)) as sum_charge, 
avg(l_quantity) as avg_qty, avg(l_extendedprice) as avg_price, 
avg(l_discount) as avg_disc, count(*) as count_order 
from lineitem 
where l_shipdate <= '1998-09-02'::date 
group by l_returnflag, l_linestatus 
order by l_returnflag, l_linestatus;


Note that I have eliminated the date calculation in my query of last
week and manually replaced it with a constant (since this wasn't
happening automatically - but let's not worry about that for now).
And this is only an explain, since we only care about the optimizer here.  So we get:

Sort  (cost=34467.88 size=0 width=0)
  ->  Aggregate  (cost=34467.88 size=0 width=0)
        ->  Group  (cost=34467.88 size=0 width=0)
              ->  Sort  (cost=34467.88 size=0 width=0)
                    ->  Seq Scan on lineitem  (cost=34467.88 size=200191 width=44)

So let's think about the selectivity that is being chosen for the
seq scan (the where l_shipdate <= '1998-09-02').

It turns out the optimizer is choosing "33%", even though the real answer
is somewhere in the 90+% range (that's how the query is designed).  So why does
it do that?

It turns out that selectivity in this case is determined via
plancat::restriction_selectivity(), which calls into function OID = 103
(intltsel) for operator OID = 1096 (date "<=") on relation OID = 18663
(my lineitem).

This all follows because of the description of 1096 (date "<=") in
pg_operator.  Looking at local1_template1.bki.source near line 1754
shows:

insert OID = 1096 ( "<=" PGUID 0 <...> date_le intltsel intltjoinsel )

where we see that indeed, it thinks "intltsel" is the right function
to use for "oprrest" in the case of dates.

Question 1 - is intltsel the right thing for selectivity on dates?

Hope someone is still with me.

So now we're running selfuncs::intltsel() where we make a further call
to selfuncs::gethilokey().  The job of gethilokey is to determine the
min and max values of a particular attribute in the table, which will
then be used with the constant in my where clause to estimate the
selectivity.  It is going to search the pg_statistic relation with
three key values:

Anum_pg_statistic_starelid     18663  (lineitem)
Anum_pg_statistic_staattnum       11  (l_shipdate)
Anum_pg_statistic_staop         1096  (date "<=")

this finds no tuples in pg_statistic.  Why is that?  The only nearby
tuple in pg_statistic is:

starelid|staattnum|staop|stalokey        |stahikey       
--------+---------+-----+----------------+----------------
   18663|       11|    0|01-02-1992      |12-01-1998

and the reason the query doesn't match anything?  Because 1096 != 0.
But why is it 0 in pg_statistic?  Statistics are determined near line
1844 in vacuum.c (assuming a 'vacuum analyze' run at some point)
            i = 0;
            values[i++] = (Datum) relid;            /* 1 */
            values[i++] = (Datum) attp->attnum;     /* 2 */
====>       values[i++] = (Datum) InvalidOid;       /* 3 */
            fmgr_info(stats->outfunc, &out_function);
            out_string = <...min...>
            values[i++] = (Datum) fmgr(F_TEXTIN, out_string);
            pfree(out_string);
            out_string = <...max...>
            values[i++] = (Datum) fmgr(F_TEXTIN, out_string);
            pfree(out_string);
            stup = heap_formtuple(sd->rd_att, values, nulls);
 

the "offending" line is setting the staop to InvalidOid (i.e. 0).

Question 2 - is this right?  Is the intent for 0 to serve as a
"wildcard", or should it be inserting an entry for each operation
individually?

In the "wildcard" case, gethilokey() should allow a match for

Anum_pg_statistic_staop         0

instead of requiring the more restrictive 1096.  In the current code,
what happens next is gethilokey() returns "not found" and intltsel()
returns the default 1/3 which I see in the resultant query plan (size
= 200191 is 1/3 of the number of lineitem tuples).
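
To make the arithmetic explicit (the total tuple count below is only
inferred from the numbers above, not measured against the table):

/* Back-of-the-envelope check of the size estimate above: the plan's
 * "size" is just ntuples * selectivity.  The ntuples value is inferred
 * from the statement that 200191 is 1/3 of lineitem, not measured. */
#include <stdio.h>

int main(void)
{
    double ntuples     = 600573.0;   /* inferred: 3 * 200191 */
    double default_sel = 1.0 / 3.0;  /* intltsel()'s fallback when no statistics are found */

    printf("estimated size = %.0f\n", ntuples * default_sel);   /* prints 200191 */
    return 0;
}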

Question 3 - is there any inherent reason it couldn't get this right?
The statistics in the table cover 1992 to 1998, so the '1998-09-02' date
should give 90-some% selectivity, a much better guess than 33%.

Doesn't make a difference for this particular query, of course,
because the seq scan must proceed anyhow, but it could easily affect
other queries where selectivities matter (and it affects the
modifications I am trying to test in the optimizer to be "smarter"
about selectivities - my overall context is to understand/improve the
behavior that the underlying storage system sees from queries like this).

OK, so let's say we treat 0 as a "wildcard" and stop checking for
1096.  Now we let gethilokey() return the two dates from the statistic
table.  The immediate next thing that intltsel() does, near line 122
in selfuncs.c, is call atol() on the strings from gethilokey().  And
guess what it comes up with?

low = 1
high = 12

because it calls atol() on '01-02-1992' and '12-01-1998'.  This
clearly isn't right, it should get some large integer that includes
the year and day in the result.  Then it should compare reasonably
with my constant from the where clause and give a decent selectivity
value.  This leads to a re-visit of Question 1.
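
As a standalone illustration of the failure mode (this is not the backend
code itself, just a sketch of what atol() does with these strings):

/* atol() parses only the leading digits, so the stored date strings
 * collapse to their month numbers - exactly the low/high values above. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *lowchar  = "01-02-1992";   /* stalokey as stored by vacuum */
    const char *highchar = "12-01-1998";   /* stahikey */

    printf("low = %ld, high = %ld\n", atol(lowchar), atol(highchar));   /* low = 1, high = 12 */
    return 0;
}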

Question 4 - should date "<=" use a dateltsel() function instead of
intltsel() as oprrest?

If anyone is still with me, could you tell me if this makes sense, or
if there is some other location where the appropriate type conversion
could take place so that intltsel() gets something reasonable when it
does the atol() calls?

Could someone also give me a sense for how far out-of-whack the whole
current selectivity-handling structure is?  It seems that most of the
operators in pg_operator actually use intltsel() and would have
type-specific problems like the one described above.  Or is the problem in the
way attribute values are stored in pg_statistic by vacuum analyze?  Or
is there another layer where type conversion belongs?

Phew.  Enough typing, hope someone can follow this and address at
least some of the questions.

Thanks.

Erik Riedel
Carnegie Mellon University
www.cs.cmu.edu/~riedel



Re: [HACKERS] optimizer and type question

From: Taral
Date:
On Mon, 22 Mar 1999, you wrote:
>Question 1 - is intltsel the right thing for selectivity on dates?

I think so... dates are really special integers.

>Question 2 - is this right?  Is the intent for 0 to serve as a
>"wildcard", or should it be inserting an entry for each operation
>individually?

This looks wrong... but I'm not proficient enough to know.

>Question 3 - is there any inherent reason it couldn't get this right?
>The statistic is in the table 1992 to 1998, so the '1998-09-02' date
>should be 90-some% selectivity, a much better guess than 33%.

I would imagine that the 33% is just the default that results from the lack of a statistics match.

>OK, so let's say we treat 0 as a "wildcard" and stop checking for
>1096.  Now we let gethilokey() return the two dates from the statistic
>table.  The immediate next thing that intltsel() does, near line 122
>in selfuncs.c, is call atol() on the strings from gethilokey().  And
>guess what it comes up with?

This is ridiculous... why does gethilokey() return a string for a field that is
internally stored as an integer?

*sigh* Just more questions...

Taral


Re: [HACKERS] optimizer and type question

From: Tom Lane
Date:
Erik Riedel <riedel+@CMU.EDU> writes:
> [ optimizer doesn't find relevant pg_statistic entry ]

It's clearly a bug that the selectivity code is not finding this tuple.
If your analysis is correct, then selectivity estimation has *never*
worked properly, or at least not in recent memory :-(.  Yipes.
Bruce and I found a bunch of other problems in the optimizer recently,
so it doesn't faze me to assume that this is broken too.

> the "offending" line is setting the staop to InvalidOid (i.e. 0).
> Question 2 - is this right?  Is the intent for 0 to serve as a
> "wildcard",

My thought is that what the staop column ought to be is the OID of the
comparison function that was used to determine the sort order of the
column.  Without a sort op the lowest and highest keys in the column are
not well defined, so it makes no sense to assert "these are the lowest
and highest values" without providing the sort op that determined that.
(For sufficiently complex data types one could reasonably have multiple
ordering operators.  A crude example is sorting on "circumference" and
"area" for polygons.)  But typically the sort op will be the "<"
operator for the column data type.

So, the vacuum code is definitely broken --- it's not storing the sort
op that it used.  The code in gethilokey might be broken too, depending
on how it is producing the operator it's trying to match against the
tuple.  For example, if the actual operator in the query is any of
< <= > >= on int4, then int4lt ought to be used to probe the pg_statistic
table.  I'm not sure if we have adequate info in pg_operator or pg_type
to let the optimizer code determine the right thing to probe with :-(

> The immediate next thing that intltsel() does, near lines 122
> in selfuncs.c is call atol() on the strings from gethilokey().  And
> guess what it comes up with?
> low = 1
> high = 12
> because it calls atol() on '01-02-1992' and '12-01-1998'.  This
> clearly isn't right, it should get some large integer that includes
> the year and day in the result.  Then it should compare reasonably
> with my constant from the where clause and give a decent selectivity
> value.  This leads to a re-visit of Question 1.
> Question 4 - should date "<=" use a dateltsel() function instead of
> intltsel() as oprrest?

This is clearly busted as well.  I'm not sure that creating dateltsel()
is the right fix, however, because if you go down that path then every
single datatype needs its own selectivity function; that's more than we
need.

What we really want here is to be able to map datatype values into
some sort of numeric range so that we can compute what fraction of the
low-key-to-high-key range is on each side of the probe value (the
constant taken from the query).  This general concept will apply to
many scalar types, so what we want is a type-specific mapping function
and a less-specific fraction-computing-function.  Offhand I'd say that
we want intltsel() and floatltsel(), plus conversion routines that can
produce either int4 or float8 from a data type as seems appropriate.
Anything that couldn't map to one or the other would have to supply its
own selectivity function.
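
As a rough illustration of what such conversion routines might look like
(hypothetical names, float8 flavor only; a real date conversion would of
course reuse the backend's own date arithmetic):

/* Hypothetical per-type conversion routines of the kind described above:
 * each maps a column value onto a double so that a single shared
 * range-based selectivity routine can do the rest. */
double
int4_to_selec_value(int value)
{
    return (double) value;                  /* trivial for integers */
}

double
date_to_selec_value(int year, int month, int day)
{
    /* crude day count; only relative ordering and spacing matter here */
    return year * 365.25 + month * 30.4 + day;
}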

> Or is the problem in the
> way attribute values are stored in pg_statistic by vacuum analyze?

Looks like it converts the low and high values to text and stores them
that way.  Ugly as can be :-( but I'm not sure there is a good
alternative.  We have no "wild card" column type AFAIK, which is what
these columns of pg_statistic would have to be to allow storage of
unconverted min and max values.

I think you've found a can of worms here.  Congratulations ;-)
        regards, tom lane


Re: [HACKERS] optimizer and type question

From: Bruce Momjian
Date:
> Erik Riedel <riedel+@CMU.EDU> writes:
> > [ optimizer doesn't find relevant pg_statistic entry ]
> 
> It's clearly a bug that the selectivity code is not finding this tuple.
> If your analysis is correct, then selectivity estimation has *never*
> worked properly, or at least not in recent memory :-(.  Yipes.
> Bruce and I found a bunch of other problems in the optimizer recently,
> so it doesn't faze me to assume that this is broken too.

Yes.  Originally, pg_statistic was always empty, and there was no
pg_attribute.attdisbursion.

I added proper pg_attribute.attdisbursion processing.  In fact, our TODO
list has (you can see it on our web page under documentation, or in
/doc/TODO):
* update pg_statistic table to remove operator column

What I did not realize is that the selectivity code was still addressing
that column.  We either have to populate it properly or throw it away.
The good thing is that we only use "<" and ">" to compute min/max, so we
really don't need that operator column, and I don't know what I would
put in there anyway.

I realized "<" optimization processing was probably pretty broken, so
this is no surprise.

What we really need is some way to determine how far the requested value
is from the min/max values.  With int, we just do (val-min)/(max-min). 
That works, but how do we do that for types that don't support division?
Strings come to mind in this case.  Maybe we should support strings too,
and convert all other types to string representation to do the
comparison, though things like the date type will fail badly.

My guess is that 1/3 is a pretty good estimate for these types.  Perhaps
we should just get int types and float8 types to work, and punt on the
rest.

> I think you've found a can of worms here.  Congratulations ;-)

I can ditto that.

--
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026


Re: [HACKERS] optimizer and type question

From: Erik Riedel
Date:
OK, building on your high-level explanation, I am attaching a patch that
attempts to do something "better" than the current code.  Note that I
have only tested this with the date type and my particular query.  I
haven't run it through the regression, so consider it "proof of concept"
at best.  Although hopefully it will serve my purposes.

> My thought is that what the staop column ought to be is the OID of the
> comparison function that was used to determine the sort order of the
> column.  Without a sort op the lowest and highest keys in the column are
> not well defined, so it makes no sense to assert "these are the lowest
> and highest values" without providing the sort op that determined that.
>
> (For sufficiently complex data types one could reasonably have multiple
> ordering operators.  A crude example is sorting on "circumference" and
> "area" for polygons.)  But typically the sort op will be the "<"
> operator for the column data type.
>  
I changed vacuum.c to do exactly that.  oid of the lt sort op.

> So, the vacuum code is definitely broken --- it's not storing the sort
> op that it used.  The code in gethilokey might be broken too, depending
> on how it is producing the operator it's trying to match against the
> tuple.  For example, if the actual operator in the query is any of
> < <= > >= on int4, then int4lt ought to be used to probe the pg_statistic
> table.  I'm not sure if we have adequate info in pg_operator or pg_type
> to let the optimizer code determine the right thing to probe with :-(
>  
This indeed seems like a bigger problem.  I thought about somehow using
type-matching from the sort op and the actual operator in the query - if
both the left and right type match, then consider them the same for
purposes of this probe.  That seemed complicated, so I punted in my
example - it just does the search with relid and attnum and assumes that
this returns only one tuple.  This works in my case (maybe in all cases,
because of the way vacuum is currently written - ?).

> What we really want here is to be able to map datatype values into
> some sort of numeric range so that we can compute what fraction of the
> low-key-to-high-key range is on each side of the probe value (the
> constant taken from the query).  This general concept will apply to
> many scalar types, so what we want is a type-specific mapping function
> and a less-specific fraction-computing-function.  Offhand I'd say that
> we want intltsel() and floatltsel(), plus conversion routines that can
> produce either int4 or float8 from a data type as seems appropriate.
> Anything that couldn't map to one or the other would have to supply its
> own selectivity function.
>  
This is what my example does: it uses the stored sort op to get the
type and then uses typinput to convert from the string to an int4.

Then it puts the int4 back into string format, because that's what the
callers were expecting.

It seems to work for my particular query.  I now get:

(selfuncs) gethilokey() obj 18663 attr 11 opid 1096 (ignored)
(selfuncs) gethilokey() found op 1087 in pg_proc
(selfuncs) gethilokey() found type 1082 in pg_type
(selfuncs) gethilokey() going to use 1084 to convert type 1082
(selfuncs) gethilokey() have low -2921 high -396
(selfuncs) intltsel() high -396 low -2921 val -486
(plancat) restriction_selectivity() for func 103 op 1096 rel 18663 attr
11 const -486 flag 3 returns 0.964356
NOTICE:  QUERY PLAN:

Sort  (cost=34467.88 size=0 width=0)
  ->  Aggregate  (cost=34467.88 size=0 width=0)
        ->  Group  (cost=34467.88 size=0 width=0)
              ->  Sort  (cost=34467.88 size=0 width=0)
                    ->  Seq Scan on lineitem  (cost=34467.88 size=579166 width=44)
 

including my printfs, which exist in the patch as well.

Selectivity is now the expected 96% and the size estimate for the seq
scan is much closer to correct.
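
For anyone checking the number: assuming intltsel() does its usual
(val - low) / (high - low) interpolation over the converted values, the
0.964356 falls out directly from the day counts in the debug output:

/* Check of the 0.964356 selectivity above, using the int4 day counts
 * printed by the patched gethilokey(). */
#include <stdio.h>

int main(void)
{
    long low  = -2921;   /* '01-02-1992' after conversion through date's typinput */
    long high = -396;    /* '12-01-1998' */
    long val  = -486;    /* the '1998-09-02' probe constant */

    printf("selectivity = %f\n", (double) (val - low) / (double) (high - low));   /* 0.964356 */
    return 0;
}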

Again, not tested with anything besides date, so caveat not-tested.

Hope this helps.

Erik

----------------------[optimizer_fix.sh]------------------------

#! /bin/sh
# This is a shell archive, meaning:
# 1. Remove everything above the #! /bin/sh line.
# 2. Save the resulting text in a file.
# 3. Execute the file with /bin/sh (not csh) to create:
#    selfuncs.c.diff
#    vacuum.c.diff
# This archive created: Mon Mar 22 22:58:14 1999
export PATH; PATH=/bin:/usr/bin:$PATH
if test -f 'selfuncs.c.diff'
then
echo shar: "will not over-write existing file 'selfuncs.c.diff'"
else
cat << \SHAR_EOF > 'selfuncs.c.diff'
*** /afs/ece.cmu.edu/project/lcs/lcs-004/er1p/postgres/611/src/backend/utils/adt/selfuncs.c    Thu Mar 11 23:59:35 1999
--- /afs/ece.cmu.edu/project/lcs/lcs-004/er1p/postgres/615/src/backend/utils/adt/selfuncs.c    Mon Mar 22 22:57:25 1999
***************
*** 32,37 ****
--- 32,40 ----
  #include "utils/lsyscache.h"    /* for get_oprrest() */
  #include "catalog/pg_statistic.h"
  
+ #include "catalog/pg_proc.h"    /* for Form_pg_proc */
+ #include "catalog/pg_type.h"    /* for Form_pg_type */
+ 
  /* N is not a valid var/constant or relation id */
  #define NONVALUE(N)        ((N) == -1)
  
***************
*** 103,110 ****
                  bottom;
  
      result = (float64) palloc(sizeof(float64data));
!     if (NONVALUE(attno) || NONVALUE(relid))
          *result = 1.0 / 3;
      else
      {
          /* XXX            val = atol(value); */
--- 106,114 ----
                  bottom;
  
      result = (float64) palloc(sizeof(float64data));
!     if (NONVALUE(attno) || NONVALUE(relid)) {
          *result = 1.0 / 3;
+     }
      else
      {
          /* XXX            val = atol(value); */
***************
*** 117,130 ****
          }
          high = atol(highchar);
          low = atol(lowchar);
          if ((flag & SEL_RIGHT && val < low) ||
              (!(flag & SEL_RIGHT) && val > high))
          {
              float32data nvals;
  
              nvals = getattdisbursion(relid, (int) attno);
!             if (nvals == 0)
                  *result = 1.0 / 3.0;
              else
              {
                  *result = 3.0 * (float64data) nvals;
--- 121,136 ----
          }
          high = atol(highchar);
          low = atol(lowchar);
+         printf("(selfuncs) intltsel() high %d low %d val %d\n",high,low,val);
          if ((flag & SEL_RIGHT && val < low) ||
              (!(flag & SEL_RIGHT) && val > high))
          {
              float32data nvals;
  
              nvals = getattdisbursion(relid, (int) attno);
!             if (nvals == 0) {
                  *result = 1.0 / 3.0;
+             }
              else
              {
                  *result = 3.0 * (float64data) nvals;
***************
*** 336,341 ****
--- 342,353 ----
  {
      Relation    rel;
      HeapScanDesc scan;
+     /* this assumes there is only one row in the statistics table for any particular */
+     /* relid, attnum pair - could be more complicated if staop is also used.         */
+     /* at the moment, if there are multiple rows, this code ends up picking the      */
+     /* "first" one                                                          - er1p   */
+     /* the actual "ignoring" is done in the call to heap_beginscan() below, where    */
+     /* we only mention 2 of the 3 keys in this array                        - er1p   */
      static ScanKeyData key[3] = {
          {0, Anum_pg_statistic_starelid, F_OIDEQ, {0, 0, F_OIDEQ}},
          {0, Anum_pg_statistic_staattnum, F_INT2EQ, {0, 0, F_INT2EQ}},
 
***************
*** 344,355 ****
      bool        isnull;
      HeapTuple    tuple;
  
      rel = heap_openr(StatisticRelationName);
  
      key[0].sk_argument = ObjectIdGetDatum(relid);
      key[1].sk_argument = Int16GetDatum((int16) attnum);
      key[2].sk_argument = ObjectIdGetDatum(opid);
  
!     scan = heap_beginscan(rel, 0, SnapshotNow, 3, key);
      tuple = heap_getnext(scan, 0);
      if (!HeapTupleIsValid(tuple))
      {
--- 356,377 ----
      bool        isnull;
      HeapTuple    tuple;
  
+     HeapTuple tup;
+     Form_pg_proc proc;
+     Form_pg_type typ;
+     Oid which_op;
+     Oid which_type;
+     int32 low_value;
+     int32 high_value;
+ 
      rel = heap_openr(StatisticRelationName);
  
      key[0].sk_argument = ObjectIdGetDatum(relid);
      key[1].sk_argument = Int16GetDatum((int16) attnum);
      key[2].sk_argument = ObjectIdGetDatum(opid);
  
!     printf("(selfuncs) gethilokey() obj %d attr %d opid %d (ignored)\n",
!            key[0].sk_argument,key[1].sk_argument,key[2].sk_argument);
!     scan = heap_beginscan(rel, 0, SnapshotNow, 2, key);
      tuple = heap_getnext(scan, 0);
      if (!HeapTupleIsValid(tuple))
      {
***************
*** 376,383 ****
--- 398,461 ----
                                  &isnull));
      if (isnull)
          elog(DEBUG, "gethilokey: low key is null");
+ 
      heap_endscan(scan);
      heap_close(rel);
+ 
+     /* now we deal with type conversion issues                                      */
+     /* when intltsel() calls this routine (who knows what other callers might do)   */
+     /* it assumes that it can call atol() on the strings and then use integer       */
+     /* comparison from there.  what we are going to do here, then, is try to use    */
+     /* the type information from Anum_pg_statistic_staop to convert the high        */
+     /* and low values                                                       - er1p  */
+ 
+     /* WARNING: this code has only been tested with the date type and has NOT       */
+     /* been regression tested.  consider it "sample" code of what might be the      */
+     /* right kind of thing to do                                            - er1p  */
+ 
+     /* get the 'op' from pg_statistic and look it up in pg_proc */
+     which_op = heap_getattr(tuple,
+                 Anum_pg_statistic_staop,
+                 RelationGetDescr(rel),
+                 &isnull);
+     if (InvalidOid == which_op) {
+       /* ignore all this stuff, try conversion only if we have a valid staop */
+       /* note that there is an accompanying change to 'vacuum analyze' that  */
+       /* gets this set to something useful.                                  */
+     } else {
+       /* staop looks valid, so let's see what we can do about conversion */
+       tup = SearchSysCacheTuple(PROOID, ObjectIdGetDatum(which_op), 0, 0, 0);
+       if (!HeapTupleIsValid(tup)) {
+         elog(ERROR, "selfuncs: unable to find op in pg_proc %d", which_op);
+       }
+       printf("(selfuncs) gethilokey() found op %d in pg_proc\n",which_op);
+ 
+       /* use that to determine the type of stahikey and stalokey via pg_type */
+       proc = (Form_pg_proc) GETSTRUCT(tup);
+       which_type = proc->proargtypes[0]; /* XXX - use left and right separately? */
+       tup = SearchSysCacheTuple(TYPOID, ObjectIdGetDatum(which_type), 0, 0, 0);
+       if (!HeapTupleIsValid(tup)) {
+         elog(ERROR, "selfuncs: unable to find type in pg_type %d", which_type);
+       }
+       printf("(selfuncs) gethilokey() found type %d in pg_type\n",which_type);
+ 
+       /* and use that type to get the conversion function to int4 */
+       typ = (Form_pg_type) GETSTRUCT(tup);
+       printf("(selfuncs) gethilokey() going to use %d to convert type %d\n",typ->typinput,which_type);
+ 
+       /* and convert the low and high strings */
+       low_value = (int32) fmgr(typ->typinput, *low, -1);
+       high_value = (int32) fmgr(typ->typinput, *high, -1);
+       printf("(selfuncs) gethilokey() have low %d high %d\n",low_value,high_value);
+ 
+       /* now we have int4's, which we put back into strings because that's what our  */
+       /* callers (intltsel() at least) expect                                - er1p  */
+       pfree(*low); pfree(*high); /* let's not leak the old strings */
+       *low = int4out(low_value);
+       *high = int4out(high_value);
+ 
+       /* XXX - this probably leaks the two tups we got from SearchSysCacheTuple() - er1p */
+     }
  }
  
  float64
SHAR_EOF
fi
if test -f 'vacuum.c.diff'
then
echo shar: "will not over-write existing file 'vacuum.c.diff'"
else
cat << \SHAR_EOF > 'vacuum.c.diff'
*** /afs/ece.cmu.edu/project/lcs/lcs-004/er1p/postgres/611/src/backend/commands/vacuum.c    Thu Mar 11 23:59:09 1999
--- /afs/ece.cmu.edu/project/lcs/lcs-004/er1p/postgres/615/src/backend/commands/vacuum.c    Mon Mar 22 21:23:15 1999
***************
*** 1842,1848 ****
                      i = 0;
                      values[i++] = (Datum) relid;        /* 1 */
                      values[i++] = (Datum) attp->attnum; /* 2 */
  
!                     values[i++] = (Datum) InvalidOid;    /* 3 */
                      fmgr_info(stats->outfunc, &out_function);
                      out_string = (*fmgr_faddr(&out_function)) (stats->min,
                                                                 stats->attr->atttypid);
                      values[i++] = (Datum) fmgr(F_TEXTIN, out_string);
--- 1842,1848 ----
                      i = 0;
                      values[i++] = (Datum) relid;        /* 1 */
                      values[i++] = (Datum) attp->attnum; /* 2 */
  
!                     values[i++] = (Datum) stats->f_cmplt.fn_oid;    /* 3 */ /* get the '<' oid, instead of 'invalid' - er1p */
                      fmgr_info(stats->outfunc, &out_function);
                      out_string = (*fmgr_faddr(&out_function)) (stats->min,
                                                                 stats->attr->atttypid);
                      values[i++] = (Datum) fmgr(F_TEXTIN, out_string);
SHAR_EOF
fi
exit 0
#    End of shell archive



Re: [HACKERS] optimizer and type question

From: Bruce Momjian
Date:
Erik, if you can, please stick around and keep digging into the code.

I am working on fixing the memory allocation problems you had with
expressions right now.

You are obviously coming up with some good ideas.  And, like Tom Lane and
me, you are also in Pennsylvania.


--
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026


Re: [HACKERS] optimizer and type question

From: Tom Lane
Date:
Erik Riedel <riedel+@CMU.EDU> writes:
> OK, building on your high-level explanation, I am attaching a patch that
> attempts to do something "better" than the current code.  Note that I
> have only tested this with the date type and my particular query.

Glad to see you working on this.  I don't like the details of your
patch too much though ;-).  Here are some suggestions for making it
better.

1. I think just removing staop from the lookup in gethilokey is OK for
now, though I'm dubious about Bruce's thought that we could delete that
field entirely.  As you observe, vacuum will not currently put more
than one tuple for a column into pg_statistic, so we can just do the
lookup with relid and attno and leave it at that.  But I think we ought
to leave the field there, with the idea that vacuum might someday
compute more than one statistic for a data column.  Fixing vacuum to
put its sort op into the field is a good idea in the meantime.

2. The type conversion you're doing in gethilokey is a mess; I think
what you ought to make it do is simply the inbound conversion of the
string from pg_statistic into the internal representation for the
column's datatype, and return that value as a Datum.  It also needs
a cleaner success/failure return convention --- this business with
"n" return is ridiculously type-specific.  Also, the best and easiest
way to find the type to convert to is to look up the column type in
the info for the given relid, not search pg_proc with the staop value.
(I'm not sure that will even work, since there are pg_proc entries
with wildcard argument types.)

3. The atol() calls currently found in intltsel are a type-specific
cheat on what is conceptually a two-step process:
 * Convert the string stored in pg_statistic back to the internal
   form for the column data type.
 * Generate a numeric representation of the data value that can be
   used as an estimate of the range of values in the table.
 
The second step is trivial for integers, which may obscure the fact
that there are two steps involved, but nonetheless there are.  If
you think about applying selectivity logic to strings, say, it
becomes clear that the second step is a necessary component of the
process.  Furthermore, the second step must also be applied to the
probe value that's being passed into the selectivity operator.
(The probe value is already in internal form, of course; but it is
not necessarily in a useful numeric form.)

We can do the first of these steps by applying the appropriate "XXXin"
conversion function for the column data type, as you have done.  The
interesting question is how to do the second one.  A really clean
solution would require adding a column to pg_type that points to a
function that will do the appropriate conversion.  I'd be inclined to
make all of these functions return "double" (float8) and just have one
top-level selectivity routine for all data types that can use
range-based selectivity logic.
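
A minimal sketch of that single top-level routine, assuming the per-type
mapping to double has already happened (the name and signature are
illustrative, not an existing backend function):

/* One shared range-based selectivity routine: given the column's low/high
 * statistics and the probe constant, all mapped to double, return the
 * estimated fraction of rows satisfying "col <= probe". */
double
range_selectivity_le(double low, double high, double probe)
{
    double frac;

    if (high <= low)
        return 1.0 / 3.0;       /* unusable range: fall back to the old default */

    frac = (probe - low) / (high - low);
    if (frac < 0.0)
        frac = 0.0;             /* probe below the observed minimum */
    else if (frac > 1.0)
        frac = 1.0;             /* probe above the observed maximum */
    return frac;
}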

We could probably hack something together that would not use an explicit
conversion function for each data type, but instead would rely on
type-specific assumptions inside the selectivity routines.  We'd need many
more selectivity routines though (at least one for each of int, float4,
float8, and text data types) so I'm not sure we'd really save any work
compared to doing it right.

BTW, now that I look at this issue it's real clear that the selectivity
entries in pg_operator are horribly broken.  The intltsel/intgtsel
selectivity routines are currently applied to 32 distinct data types:

regression=> select distinct typname,oprleft from pg_operator, pg_type
regression-> where pg_type.oid = oprleft
regression-> and oprrest in (103,104);
typname  |oprleft
---------+-------
_aclitem |   1034
abstime  |    702
bool     |     16
box      |    603
bpchar   |   1042
char     |     18
cidr     |    650
circle   |    718
date     |   1082
datetime |   1184
float4   |    700
float8   |    701
inet     |    869
int2     |     21
int4     |     23
int8     |     20
line     |    628
lseg     |    601
macaddr  |    829
money    |    790
name     |     19
numeric  |   1700
oid      |     26
oid8     |     30
path     |    602
point    |    600
polygon  |    604
text     |     25
time     |   1083
timespan |   1186
timestamp|   1296
varchar  |   1043
(32 rows)

many of which are very obviously not compatible with integer for *any*
purpose.  It looks to me like a lot of data types were added to
pg_operator just by copy-and-paste, without paying attention to whether
the selectivity routines were actually correct for the data type.

As the code stands today, the bogus entries don't matter because
gethilokey always fails, so we always get 1/3 as the selectivity
estimate for any comparison operator (except = and != of course).
I had actually noticed that fact and assumed that it was supposed
to work that way :-(.  But, clearly, there is code in here that
is *trying* to be smarter.

As soon as we fix gethilokey so that it can succeed, we will start
getting essentially-random selectivity estimates for those data types
that aren't actually binary-compatible with integer.  That will not do;
we have to do something about the issue.
        regards, tom lane


Re: [HACKERS] optimizer and type question

From: Tom Lane
Date:
Bruce Momjian <maillist@candle.pha.pa.us> writes:
> What we really need is some way to determine how far the requested value
> is from the min/max values.  With int, we just do (val-min)/(max-min). 
> That works, but how do we do that for types that don't support division.
> Strings come to mind in this case.

What I'm envisioning is that we still apply the (val-min)/(max-min)
logic, but apply it to numeric values that are produced in a
type-dependent way.

For ints and floats the conversion is trivial, of course.

For strings, the first thing that comes to mind is to return 0 for a
null string and the value of the first byte for a non-null string.
This would give you one-part-in-256 selectivity which is plenty good
enough for what the selectivity code needs to do.  (Actually, it's
only that good if the strings' first bytes are pretty well spread out.
If you have a table containing English words, for example, you might
only get about one part in 26 this way, since the first bytes will
probably only run from A to Z.  Might be better to use the first two
characters of the string to compute the selectivity representation.)
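
A sketch of that string mapping (first two bytes treated as a base-256
number; purely illustrative):

/* Map a string onto a double whose ordering roughly follows the string's
 * sort order, using the first two bytes.  Null/empty strings map to 0. */
double
string_to_selec_value(const char *s)
{
    unsigned char c0, c1;

    if (s == NULL || s[0] == '\0')
        return 0.0;
    c0 = (unsigned char) s[0];
    c1 = (s[1] != '\0') ? (unsigned char) s[1] : 0;
    return (double) (c0 * 256 + c1);    /* one part in 65536 at best */
}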

In general, you can apply this logic as long as you can come up with
some numerical approximation to the data type's sorting order.  It
doesn't have to be exact.
        regards, tom lane


longer-term optimizer musings

From: Erik Riedel
Date:
[The changes discussed below are more longer-term issues than my two
previous posts, which is why I am not attaching the patch for these
changes.  My hope is that the description of this experiment will
serve as input to whoever works on the optimizer in the future.

This should be considered a long-term "think about" and "probably
ignore as too specific" optimization.  You have been warned.  I just
wanted to brain-dump this, in the chance that it is useful somehow in
the future.   - Erik ]

Context: cost and size estimates inside the optimizer, particularly
Aggregate/Group nodes.

Basic idea is as follows: we look at the min and max values for all
the GROUP BY attributes and if they have a small dynamic range then we
_know_ the result of the aggregation will have a small number of rows
(bounded by the range of these attributes).  This information is then
used to make better cost estimates during optimization.

e.g. if we have two attributes 'a' and 'b' and we know (from
pg_statistics) that all 'a' values are in the range 25 to 35 and all
'b' values are in the range 75 to 100 then a GROUP BY on columns 'a'
and 'b' cannot result in more than 250 [ (100 - 75) * (35 - 25) = 25 *
10 ] rows, no matter how many rows are input.  We might be wrong if
the statistics are out of date, but we are just doing size/cost
estimation, so that's ok.  Since aggregations are often very selective
(i.e. much smaller outputs than inputs), this can make a big
difference further up the tree.

Gory details are as follows:

Table    = lineitem
+------------------------+----------------------------------+-------+
|              Field     |              Type                | Length|
+------------------------+----------------------------------+-------+
| l_orderkey             | int4 not null                    |     4 |
| l_partkey              | int4 not null                    |     4 |
| l_suppkey              | int4 not null                    |     4 |
| l_linenumber           | int4 not null                    |     4 |
| l_quantity             | float4 not null                  |     4 |
| l_extendedprice        | float4 not null                  |     4 |
| l_discount             | float4 not null                  |     4 |
| l_tax                  | float4 not null                  |     4 |
| l_returnflag           | char() not null                  |     1 |
| l_linestatus           | char() not null                  |     1 |
| l_shipdate             | date                             |     4 |
| l_commitdate           | date                             |     4 |
| l_receiptdate          | date                             |     4 |
| l_shipinstruct         | char() not null                  |    25 |
| l_shipmode             | char() not null                  |    10 |
| l_comment              | char() not null                  |    44 |
+------------------------+----------------------------------+-------+
Index:    lineitem_index_

--
-- Query 1
--
select l_returnflag, l_linestatus, sum(l_quantity) as sum_qty, 
sum(l_extendedprice) as sum_base_price, 
sum(l_extendedprice*(1-l_discount)) as sum_disc_price, 
sum(l_extendedprice*(1-l_discount)*(1+l_tax)) as sum_charge, 
avg(l_quantity) as avg_qty, avg(l_extendedprice) as avg_price, 
avg(l_discount) as avg_disc, count(*) as count_order 
from lineitem 
where l_shipdate <= '1998-09-02'
group by l_returnflag, l_linestatus 
order by l_returnflag, l_linestatus;

For our favorite query, the current code (with my selectivity fixes of
yesterday, but they aren't crucial to this discussion) produces the
original plan:

Sort  (cost=35623.88 size=0 width=0)
  ->  Aggregate  (cost=35623.88 size=0 width=0)
        ->  Group  (cost=35623.88 size=0 width=0)
              ->  Sort  (cost=35623.88 size=0 width=0)
                    ->  Seq Scan on lineitem  (cost=35623.88 size=579166 width=60)
 

[Problem 1 - no cost estimates are done for sort nodes, and costs are
not passed up the query tree for Sorts, Groups, or Aggregates]

Now, let's say we have it actually add a cost for the sorts and pass
the size and width up the tree as we go:

Sort  (cost=713817.69 size=579166 width=60)
  ->  Aggregate  (cost=374720.78 size=579166 width=60)
        ->  Group  (cost=374720.78 size=579166 width=60)
              ->  Sort  (cost=374720.78 size=579166 width=60)
                    ->  Seq Scan on lineitem  (cost=35623.88 size=579166 width=60)
 

we do this by adding several bits of code.  For sort nodes near line
470 of planmain.c, something like:
 sortplan->plan.cost = subplan->cost +
      cost_sort(XXX, subplan->plan_size, subplan->plan_width, false);

 /* sorting doesn't change size or width */
 sortplan->plan.plan_size = subplan->plan_size;
 sortplan->plan.plan_width = subplan->plan_width;

and for group nodes near line 480 of planmain.c, something like:
 /* pretty easy, right?  the size and width will not change    */
 /* from the sort node, so we can just carry those along       */
 /* and group has (practically) no cost, so just keep that too */
 grpplan->plan.cost = sortplan->plan.cost;
 grpplan->plan.plan_size = sortplan->plan.plan_size;
 grpplan->plan.plan_width = sortplan->plan.plan_width;

and something similar for the other sort nodes near line 410 of
planner.c, and a similar pass along of the cost values from the group
node through the aggregation node near line 260 of planner.c

And, of course, we'd have to check hash nodes, join nodes and all the
others to do the right thing, but I only looked at the ones necessary
for my query.

Adding all those parts, we now have the plan:

Sort  (cost=713817.69 size=579166 width=60)
  ->  Aggregate  (cost=374720.78 size=579166 width=60)
        ->  Group  (cost=374720.78 size=579166 width=60)
              ->  Sort  (cost=374720.78 size=579166 width=60)
                    ->  Seq Scan on lineitem  (cost=35623.88 size=579166 width=60)
 

which actually includes the cost of the sorts and passes the size and
width up the tree.

[End discussion of Problem 1 - no sort costs or cost-passing]

[Craziness starts here]

[Problem 2 - the Aggregate node is way over-estimated by assuming the
output is as large as the input, but how can you do better?           ]

It turns out the above plan estimate isn't very satisfactory, because
the Aggregate actually significantly reduces the size and the
estimation of the final Sort is way higher than it needs to be.  But
how can we estimate what happens at an Aggregation node?

For general types in the GROUP BY, there is probably not much we can
do, but if we can somehow "know" that the GROUP BY columns have a
limited dynamic range - e.g. the char(1) fields in my lineitem, or
integer fields with small ranges - then we should be able to do some
"back of the envelope" estimation and get a better idea of costs
higher up in the plan.  A quick look shows that this would be
applicable in 5 of the 17 queries from TPC-D that I am looking at, not
huge applicability, but not single-query specific either.  What are
the "normal" or "expected" types and columns for all the GROUP BYs out
there - I don't know.

Anyway, so we do the estimation described above and get to the much
more accurate plan:

Sort  (cost=374734.87 size=153 width=60)
  ->  Aggregate  (cost=374720.78 size=153 width=60)
        ->  Group  (cost=374720.78 size=579166 width=60)
              ->  Sort  (cost=374720.78 size=579166 width=60)
                    ->  Seq Scan on lineitem  (cost=35623.88 size=579166 width=60)
 

this works because pg_statistic knows that l_returnflag has at most 17
possible values (min 'A' to max 'R') and l_linestatus has at most 9
possible values (min 'F' to max 'O').  In fact, l_returnflag and
l_linestatus are categorical and have only 3 possible values each, but
the estimate of 153 (17 * 9) is much better than the 579166 that the
node must otherwise assume, and much closer to the actual result,
which has only 4 rows.

I get the above plan by further modification after line 260 of
planner.c.  I am not attaching the patch because it is ugly in spots
and I just did "proof of concept", full implementation of this idea
would require much more thinking about generality.

Basically I go through the target list of the agg node and identify
all the Var expressions.  I then figure out what columns in the
original relations these refer to.  I then look up the range of the
attributes in each of these columns (using the min and max values for
the attributes obtained via gethilokey() as discussed in my last post
and doing the "strings map to ASCII value of their first letter" that
Tom suggested).  I multiply all these values together (i.e. the cross
product of all possible combinations that the GROUP BY on all n
columns could produce - call this x).  The size of the agg result is
then the smaller of this value and the input size (it can't be bigger than
the input, and if there are only x unique combinations of the GROUP BY
attributes, then there will never be more than x rows in the output of
the GROUP BY).

I do this very conservatively: if I can't get statistics for one of
the columns, then I forget it and go with the old estimate (or one could
go with an estimate of 0, if one were feeling optimistic.  It is not
clear what a "good" default estimate is here; maybe the 1/3 that is
used for selectivities would be as good as anything else).
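
A sketch of the estimate just described (the lo/hi values are assumed to
come from a gethilokey()-style lookup; the product uses (max - min), as in
the 17 * 9 example above, and is done in double so it cannot overflow):

#include <stdbool.h>

/* Estimate the output size of a GROUP BY: multiply the per-column ranges
 * of the grouping attributes and cap the result at the input size. */
double
estimate_group_size(double input_rows, int ncols,
                    const double *lo, const double *hi, const bool *have_stats)
{
    double distinct = 1.0;
    int    i;

    for (i = 0; i < ncols; i++)
    {
        if (!have_stats[i] || hi[i] <= lo[i])
            return input_rows;          /* no usable statistics: keep the old estimate */
        distinct *= hi[i] - lo[i];      /* possible values for this column */
    }
    return (distinct < input_rows) ? distinct : input_rows;   /* never more than the input */
}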

[End discussion of Problem 2 - aggregate/group estimation]

As with my previous posts, this is most likely not a general solution,
it's just an idea that works (very well) for the query I am looking
at, and has some general applicability.  I am sure that the above
ignores a number of "bigger picture" issues, but it does help the
particular query I care about.

Also note that none of this actually speeds up even my query, it only
makes the optimizer estimate much closer to the actual query cost
(which is what I care about for the work I am doing).

Maybe this will be of help in any future work on the optimizer.  Maybe
it is simply the rantings of a lunatic.

Enjoy.

Erik Riedel
Carnegie Mellon University
www.cs.cmu.edu/~riedel



Re: [HACKERS] longer-term optimizer musings

From: Bruce Momjian
Date:
> As with my previous posts, this is most likely not a general solution,
> it's just an idea that works (very well) for the query I am looking
> at, and has some general applicability.  I am sure that the above
> ignores a number of "bigger picture" issues, but it does help the
> particular query I care about.
> 
> Also note that none of this actually speeds up even my query, it only
> makes the optimizer estimate much closer to the actual query cost
> (which is what I care about for the work I am doing).
> 
> Maybe this will be of help in any future work on the optimizer.  Maybe
> it is simply the rantings of a lunatic.

Interesting.  The problem I see is that trying to do a char(20) column
with min(A) and max(B) can have 256^19 possible unique values from A to
B, so it kind of kills many general cases.  Floats have the same
problem.

A nice general fix would be to assume GROUP BY/AGG returns only 10% of
the existing rows.  I don't even know if an Aggregate without a group by
knows it only returns one row.  Oops, I guess not:
    test=> explain select max(relpages) from pg_class;
    NOTICE:  QUERY PLAN:

    Aggregate  (cost=2.58 size=0 width=0)
      ->  Seq Scan on pg_class  (cost=2.58 size=48 width=4)
 
Basically, there are some major issues with this optimizer.  Only in pre
6.5 have we really dug into it and cleaned up some glaring problems. 
Problems that were so bad that, if I had known how bad they were, I would
certainly have started digging in there sooner.

We have even general cases that are not being handled as well as they
should be.  We just fixed a bug where "col = -3" was never using an
index, because -3 was being parsed as prefix "-" with an operand of 3,
and the index code can only handle constants.

Yes, we have some major things that need cleaning.  I have updated
optimizer/README to better explain what is happening in there, and have
renamed many of the structures/variables to be clearer.  I hope it
helps someone, someday.

So I guess I am saying that your ideas are good, but we need to walk
before we can run with this optimizer.

I am not saying the optimizer is terrible, just that it is complex, and
has not had the kind of code maintenance it needs.

--
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026


Re: [HACKERS] longer-term optimizer musings

From: Bruce Momjian
Date:
> Interesting.  The problem I see is that trying to do a char(20) column
> with min(A) and max(B) can have 256^19 possible unique values from A to
> B, so it kind of kills many general cases.  Floats have the same
> problem.
> 
> A nice general fix would be to assume GROUP BY/AGG returns only 10% of
> the existing rows.  I don't even know if an Aggregate without a group by
> knows it only returns one row.  Oops, I guess not:
> 
>     test=> explain select max(relpages) from pg_class;
>     NOTICE:  QUERY PLAN:
>     
>     Aggregate  (cost=2.58 size=0 width=0)
>       ->  Seq Scan on pg_class  (cost=2.58 size=48 width=4)
>     
> Basically, there are some major issues with this optimizer.  Only in pre
> 6.5 have we really dug into it and cleaned up some glaring problems. 
> Problems that were so bad that, if I had known how bad they were, I would
> certainly have started digging in there sooner.
> 
> We have even general cases that are not being handled as well as they
> should be.  We just fixed a bug where "col = -3" was never using an
> index, because -3 was being parsed as prefix "-" with an operand of 3,
> and the index code can only handle constants.
> 
> Yes, we have some major things that need cleaning.  I have updated
> optimizer/README to better explain what is happening in there, and have
> renamed many of the structures/variables to be clearer.  I hope it
> helps someone, someday.
> 
> So I guess I am saying that your ideas are good, but we need to walk
> before we can run with this optimizer.
> 
> I am not saying the optimizer is terrible, just that it is complex, and
> has not had the kind of code maintenance it needs.

Also, let me not discourage you.

We are just learning about the optimizer, and we welcome any ideas that you
may have.  I also enjoy discussing the issues, because it gives me a
sounding-board for future coding.

Keep it up.

--
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026


Re: [HACKERS] longer-term optimizer musings

From: Erik Riedel
Date:
> Interesting.  The problem I see is that trying to do a char(20) column
> with min(A) and max(B) can have 256^19 possible unique values from A to
> B, so it kind of kills many general cases.  Floats have the same
> problem.
>  
Right, in most general cases, there isn't much you can do.

Although, if this seemed like an important thing, one could imagine an
extension to 'vacuum analyze' and pg_statistic that tried to track the
number of unique values while it finds the min and max.  Maybe it could track
some fixed number (10?) of unique attr values and stop searching once it
exceeds 10 different values (or maybe some tiny fraction of the tuples
in the relation, whatever gives a decent balance of memory and CPU at
analyze time).  Basically to find out if it might be .01% instead of the
10% default you suggest below.
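
A sketch of that kind of bounded tracking (illustrative only; a real
version would live inside vacuum's per-attribute statistics gathering):

#define MAXTRACK 10

typedef struct
{
    int  ndistinct;             /* -1 once we have given up counting */
    long values[MAXTRACK];      /* the distinct values seen so far */
} DistinctTracker;

/* Feed one value through the tracker; once more than MAXTRACK distinct
 * values have been seen, stop tracking and just report "many". */
void
tracker_add(DistinctTracker *t, long value)
{
    int i;

    if (t->ndistinct < 0)
        return;                         /* already gave up */
    for (i = 0; i < t->ndistinct; i++)
        if (t->values[i] == value)
            return;                     /* seen this value before */
    if (t->ndistinct == MAXTRACK)
        t->ndistinct = -1;              /* too many distinct values to track */
    else
        t->values[t->ndistinct++] = value;
}

A caller would zero-initialize the struct, push every value through
tracker_add() during the analyze scan, and store either the exact count or
a "many" marker alongside the min/max in pg_statistic.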

This would work for a database that tracks all the CDs owned by "Bob"
and "Alice" even with char(20) first names.  For floats, it wouldn't be
very good for prices at Tiffany's, but should work pretty well for the
Everything's $1 store.

> A nice general fix would be to assume GROUP BY/AGG returns only 10% of
> the existing rows.  I don't even know if an Aggregate without a group by
> knows it only returns one row.  Oops, I guess not:
>  
>         test=> explain select max(relpages) from pg_class;
>         NOTICE:  QUERY PLAN:
>         
>         Aggregate  (cost=2.58 size=0 width=0)
>           ->  Seq Scan on pg_class  (cost=2.58 size=48 width=4)
>         
Yup, this would be easy to add (both the 10% and 1 for non-group aggs). 
The current code just passes along the cost and zeros the size and width
in all Sort, Group, and Aggregate nodes (this was the issue flagged as
Problem 1 in my message - and I tried to give line numbers where that
would have to be fixed).  Note that cost_sort() seems to work reasonably
enough, but has this non-obvious "sortkeys" argument that it does
nothing with.

> So I guess I am saying that your ideas are good, but we need to walk
> before we can run with this optimizer.
>  
Understood.  I am not discouraged and will continue throwing these
things out as I see them and think I have a reasonable explanation.

Erik



Re: [HACKERS] longer-term optimizer musings

From: Tom Lane
Date:
Erik Riedel <riedel+@CMU.EDU> writes:
> This should be considered a long-term "think about" and "probably
> ignore as too specific" optimization.

Not at all --- it looks like useful stuff to work on.

As Bruce pointed out, the current generation of Postgres developers
don't understand the optimizer very well.  (Bruce and I have both been
digging into it a little, but we certainly welcome anyone else who
wants to study it.)  The optimizer has been suffering from software rot
for several releases now, and as far as I can tell there were a lot of
things that it never really did right in the first place.  So take what
you see with a grain of salt.

> [Problem 1 - no cost estimates are done for sort nodes, and costs are
> not passed up the query tree for Sorts, Groups, or Aggregates]

These things probably need to be fixed.  I have noticed that there are
places where the code does not bother to fill in estimates, for example
in a hash join the hash subnode never gets filled in, but it probably
doesn't matter as long as the top hash node does get filled in.  The
important thing is to propagate reasonable estimates upwards.

> [Problem 2 - the Aggregate node is way over-estimated by assuming the
> output is as large as the input, but how can you do better?           ]

An excellent point.  Bruce's idea of a default 10% estimate seems
reasonable to me (and of course, recognize the non-group-by case).

> [ get column min/max values and ] multiply all these values together

You'd have to watch out for integer overflow in this calculation ---
would be safer to do it in floating point I think.  A more serious
issue is how do you know what the granularity of the type is.  For
example, with a float8 column the min and max values might be 1 and 4,
but that doesn't entitle you to assume that there are only 4 values.
You could really only apply this optimization to int, bool, and char(1)
columns, I think.  Of course, there are plenty of those out there, so
it might still be worth doing.

> Also note that none of this actually speeds up even my query, it only
> makes the optimizer estimate much closer to the actual query cost
> (which is what I care about for the work I am doing).

Well, that could result in a more intelligently chosen plan further up
the tree, so it *could* lead to a faster query.  However this would
only be true if there were important choices to be made at higher tree
levels.  I suppose you would have to be looking at a subselect involving
GROUP BY for this to really make much difference in practice.
        regards, tom lane


Re: [HACKERS] longer-term optimizer musings

From: Erik Riedel
Date:
> > Also note that none of this actually speeds up even my query, it only
> > makes the optimizer estimate much closer to the actual query cost
> > (which is what I care about for the work I am doing).
>  
> Well, that could result in a more intelligently chosen plan further up
> the tree, so it *could* lead to a faster query.  However this would
> only be true if there were important choices to be made at higher tree
> levels.  I suppose you would have to be looking at a subselect involving
> GROUP BY for this to really make much difference in practice.
>  
Right, if there are still choices higher up.  In particular, the case
that I was looking at was the possible combination of Aggregation and
Sort nodes that I'd mentioned before.  Having the proper estimate at
that point would tell you if it were worthwhile doing the aggregation
(or duplicate elimination) while sorting, which could save lots of
memory and writing/reading of run files for an out-of-core mergesort.
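
A sketch of what "aggregate while sorting" means here (GroupEntry and the
in-memory run are illustrative; the point is that anything spilled to a run
file never holds more than one entry per group seen so far):

#include <string.h>

typedef struct
{
    char   key[8];      /* grouping columns, e.g. l_returnflag + l_linestatus */
    double sum;         /* one running aggregate, for illustration */
    long   count;
} GroupEntry;

/* Insert into a sorted in-memory run, folding tuples with an equal group
 * key into the existing partial aggregate instead of storing a new entry.
 * Assumes the run array has room for one more entry. */
void
sorted_insert_with_agg(GroupEntry *run, int *nentries, const GroupEntry *in)
{
    int i;

    for (i = 0; i < *nentries; i++)
    {
        int cmp = memcmp(in->key, run[i].key, sizeof(in->key));

        if (cmp == 0)
        {
            run[i].sum   += in->sum;    /* same group: combine, don't insert */
            run[i].count += in->count;
            return;
        }
        if (cmp < 0)
            break;                      /* found the insertion point */
    }
    memmove(&run[i + 1], &run[i], (*nentries - i) * sizeof(GroupEntry));
    run[i] = *in;
    (*nentries)++;
}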

While I'm at it, I should note that this combination of aggr and sort is
not my invention by a long shot.  The paper:

"Fast Algorithms for Universal Quantification in Large Databases"

referenced at:

http://www.cse.ogi.edu/DISC/projects/ereq/papers/graefe-papers.html

claims this as the "obvious optimization" and provides pointers to the
(by now ancient) papers that discuss both this and the combination of
aggregation and hashing, which should be even cheaper in general. 
Section 2.2, page 10.

I guess those things just never found their way into Stonebraker's
version of the code.  Maybe they are of interest in the future.

Erik