Re: Shouldn't we have a way to avoid "risky" plans?

From Claudio Freire
Subject Re: Shouldn't we have a way to avoid "risky" plans?
Date
Msg-id AANLkTimvSPfL_xBUNS-fw9xuXh7gcYQoUy6UbZtZWAC5@mail.gmail.com
In reply to Re: Shouldn't we have a way to avoid "risky" plans?  (Nathan Boley <npboley@gmail.com>)
Responses Re: Shouldn't we have a way to avoid "risky" plans?  (Jim Nasby <jim@nasby.net>)
List pgsql-performance
On Thu, Mar 24, 2011 at 5:30 PM, Nathan Boley <npboley@gmail.com> wrote:
> Another approach, that hasn't been suggested yet, is some Bayesian
> update method. There, rather than calculating a specific parameter
> value ( like ndistinct ), you try to store the entire distribution and
> choose the plan that minimizes cost averaged over all of the possible
> parameter values.

I've done similar stuff for work; you don't have to go all the way to
storing complete probability distributions - usually a simple
likelihood range is enough.

In essence, instead of having a scalar MLE for plan cost, you
implement a "ranged" estimator that estimates the most likely range
of plan costs, with a mean and a standard deviation from that mean.

This essentially gives you a risk value, since risky plans will have
a very large standard deviation relative to their mean cost.
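
To make that concrete, here's a toy sketch (made-up names, nothing to
do with the actual planner code) of what such a ranged estimate could
look like, with variances simply added when combining independent
node estimates:

    /* Toy sketch of a "ranged" cost estimate: mean plus standard
     * deviation instead of a single scalar.  Hypothetical names,
     * not planner code. */
    #include <math.h>
    #include <stdio.h>

    typedef struct
    {
        double mean;    /* expected cost */
        double stddev;  /* spread of the estimate */
    } CostRange;

    /* Combine two independent estimates (e.g. two plan nodes):
     * means add, variances add. */
    static CostRange
    cost_range_add(CostRange a, CostRange b)
    {
        CostRange r;

        r.mean = a.mean + b.mean;
        r.stddev = sqrt(a.stddev * a.stddev + b.stddev * b.stddev);
        return r;
    }

    int
    main(void)
    {
        CostRange scan = {100.0, 5.0};   /* stable estimate */
        CostRange join = {120.0, 300.0}; /* "risky": huge spread */
        CostRange total = cost_range_add(scan, join);

        printf("total: mean=%.1f stddev=%.1f\n", total.mean, total.stddev);
        /* A stddev that dwarfs the mean is the risk signal. */
        return 0;
    }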

> Also, ( not that I have even close to the experience / expertise to
> make this claim - so take this with a grain of salt ) it seems that
> the code changes would be substantial but pretty straightforward and
> easy to localize. Rather than passing a selectivity, pass a pair of
> arrays with selectivities and probabilities.

If you approximate the probability distributions as I outlined
above, it's even simpler. Approximate, but simpler - and since you
retain the original cost estimates in the form of mean cost values,
you can easily tune the GEQO to behave as it currently does (ignore
variance) or with a safety margin (account for variance).
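
Again just a toy sketch, with a made-up risk_factor knob (think of it
as a hypothetical GUC): risk_factor = 0 reproduces the current
mean-only comparison, anything above that adds a safety margin:

    /* Toy sketch of choosing between plans with an optional safety
     * margin.  Hypothetical names, not planner code. */
    #include <stdio.h>

    typedef struct
    {
        double mean;
        double stddev;
    } CostRange;

    static double
    risk_adjusted_cost(CostRange c, double risk_factor)
    {
        return c.mean + risk_factor * c.stddev;
    }

    int
    main(void)
    {
        CostRange nestloop = {90.0, 400.0}; /* cheap on paper, risky */
        CostRange hashjoin = {110.0, 10.0}; /* a bit dearer, stable  */
        double risk_factor = 1.0;           /* 0.0 = current behaviour */

        /* With risk_factor = 0 the nestloop wins (90 < 110); with a
         * safety margin the stable hashjoin wins (490 > 120). */
        if (risk_adjusted_cost(nestloop, risk_factor) <
            risk_adjusted_cost(hashjoin, risk_factor))
            printf("pick nestloop\n");
        else
            printf("pick hashjoin\n");
        return 0;
    }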


About issues like these being uncommon - I disagree.

I routinely have to work around query inefficiencies because GEQO
does something odd - and since postgres gives me too few tools to
tweak plans (increase statistics, use subqueries, rephrase joins;
there was no direct tool before CTEs, which are rather new), it
becomes an art form, and a very unpredictable one, as well as an
administrative burden. Out of the blue, statistics change, queries
that used to work fine start to perform poorly, and sites go down.

If GEQO could detect unsafe plans and work around them automatically,
it would be a major improvement.

Granted, though, this should be approached with care.
