Discussion: Problem analyzing performance of query


Problem analyzing performance of query

From:
Greg Stark
Date:
I have a query that is taking too long when run from a larger plpgsql function
(40-50s). However, when I EXPLAIN ANALYZE it under psql it runs fine (4-5s).
This is with the same parameters, and I've even tried embedding the parameters
inside a subquery to avoid letting the planner see any more info than normal.

Is there any way to ask the server what plan it's using when it's actually
executing the query in production, rather than trying to feed it the same
query later in another context and hope it gets the same result?

--
greg

Re: Problem analyzing performance of query

From:
Tom Lane
Date:
Greg Stark <gsstark@mit.edu> writes:
> Is there any way to ask the server what plan it's using when it's actually
> executing the query in production, rather than trying to feed it the same
> query later in another context and hope it gets the same result?

From the planner's point of view, a plpgsql query involving plpgsql
variables is a parameterized query, which is the same as a PREPAREd
query with parameters.  So for instance

    create function foo(int) ...
    ...
       select ... where keycol = $1;
    ...

looks the same as

    PREPARE q(int) AS select ... where keycol = $1;

and you can investigate the plan for this with

    EXPLAIN [ANALYZE] EXECUTE q(42);
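
Spelled out in full, a minimal self-contained sketch of the same technique
(the table, data and parameter value here are made up for illustration, not
taken from the original function):

    -- throwaway table, just to have something to plan against
    CREATE TEMP TABLE demo (keycol int PRIMARY KEY, payload text);
    INSERT INTO demo SELECT g, 'row ' || g::text FROM generate_series(1, 100000) g;
    ANALYZE demo;

    -- same shape as the query inside the plpgsql function
    PREPARE q(int) AS SELECT * FROM demo WHERE keycol = $1;

    -- shows the parameterized plan the function would also get
    EXPLAIN ANALYZE EXECUTE q(42);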

Clear?  It'd be nice to have more infrastructure for debugging plpgsql
code, but so far no one's got round to building any :-(

            regards, tom lane

Re: Problem analyzing performance of query

From:
Greg Stark
Date:
Tom Lane <tgl@sss.pgh.pa.us> writes:

> Clear?  It'd be nice to have more infrastructure for debugging plpgsql
> code, but so far no one's got round to building any :-(

Thank you. Very useful.

Though one of my worries was that plpgsql was holding onto a cached plan.
What might be useful, not just for plpgsql but for debugging other applications,
would be a "trace" facility like Oracle's. It could be as simple as a GUC
variable that caused all queries to run under EXPLAIN ANALYZE and dumped the
results somewhere for later analysis.
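
(For what it's worth, later PostgreSQL releases grew a contrib module,
auto_explain, that does roughly this; a minimal per-session sketch, assuming
the module is installed and you have the rights to load it:

    -- log an EXPLAIN ANALYZE plan for every statement in this session
    LOAD 'auto_explain';
    SET auto_explain.log_min_duration = 0;   -- 0 = log all statements
    SET auto_explain.log_analyze = on;       -- include actual run times

The plans end up in the server log rather than being returned to the client.)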


As it happens, I've narrowed the problem down and it wasn't related to planning
at all. I was running EXPLAIN ANALYZE on only the SELECT portion of the query,
but the full query was actually an "INSERT INTO foo () (SELECT ...)".
Apparently the INSERT side of it was taking 90% of the time.
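
One way to see that directly is to EXPLAIN ANALYZE the whole INSERT ... SELECT
rather than just the SELECT, inside a transaction that is rolled back so the
cache table isn't actually modified. A rough sketch, with the column list and
source relation made up:

    BEGIN;
    EXPLAIN ANALYZE
    INSERT INTO foo (key, ids)
    SELECT s.key, s.ids
    FROM   some_source_of_denormalized_rows s;
    ROLLBACK;   -- discard the rows inserted during the measurement

The total runtime reported then includes the index-maintenance cost of the
INSERT itself.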

Dropping a few gist indexes I was experimenting with has sped it up tenfold.

Now I'm wondering whether this is the inevitable slowdown of maintaining these
indexes or whether something else was wrong. The gist indexes are not something
I'm using currently, but I may need them in the future to deal with larger
datasets.

This table is a pre-generated "cache" of denormalized data; it's periodically
truncated and regenerated. Perhaps each time I truncated it, the gist indexes
accumulated dead pages?
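
If the per-row index maintenance is what hurts, one common pattern for a
periodically regenerated cache table is to drop the gist index before the bulk
load and rebuild it afterwards in one pass. A rough sketch with made-up names,
assuming the intarray contrib module provides gist__intbig_ops:

    BEGIN;
    DROP INDEX IF EXISTS foo_ids_gist;      -- skip per-row gist maintenance
    TRUNCATE foo;
    INSERT INTO foo (key, ids)
    SELECT s.key, s.ids
    FROM   some_source_of_denormalized_rows s;
    CREATE INDEX foo_ids_gist ON foo
        USING gist (ids gist__intbig_ops);  -- rebuild once, over the full data set
    COMMIT;

Rebuilding the index once over the finished table is usually much cheaper than
maintaining it row by row during the load.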

Most rows are quite small, but there are a few large rows (large integer
arrays); perhaps they caused the gist indexes using gist__intbig_ops to do lots
of extra work? How do I tell if the rows are getting TOASTed?
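
One way to check, at least on versions that have the size functions, is to see
whether the table has a TOAST table and how much data lives in it. A minimal
sketch, again using the table name foo from above:

    -- does foo have a TOAST table, and how big is it compared to the heap?
    SELECT c.relname,
           t.relname                                          AS toast_table,
           pg_size_pretty(pg_relation_size(c.oid::regclass))  AS heap_size,
           pg_size_pretty(pg_relation_size(t.oid::regclass))  AS toast_size
    FROM   pg_class c
    LEFT   JOIN pg_class t ON t.oid = c.reltoastrelid
    WHERE  c.relname = 'foo';

A non-trivial toast_size means some of those large integer arrays are indeed
being stored out of line.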

I did run this under profiling, and the gprof output didn't seem to point the
finger at the gist functions. A significant amount of time was spent in them,
to be sure, but not 90%. That's what initially led me astray, or I would have
checked this much earlier.

Flat profile:

Each sample counts as 0.01 seconds.
  %   cumulative   self              self     total
 time   seconds   seconds    calls   s/call   s/call  name
 14.66     14.54    14.54  6327634     0.00     0.00  FunctionCall3
  6.42     20.91     6.37    96792     0.00     0.00  gistchoose
  4.43     25.31     4.40       26     0.17     0.17  ArrayGetNItems
  3.59     28.87     3.56   450164     0.00     0.00  MemoryContextAllocZero
  3.38     32.22     3.35  9505045     0.00     0.00  AllocSetFreeIndex
  2.93     35.12     2.91   181680     0.00     0.00  ExecMakeFunctionResultNoSets
  2.41     37.51     2.39  6431187     0.00     0.00  gistdentryinit
  2.28     39.77     2.26  5320723     0.00     0.00  AllocSetAlloc
  2.24     41.99     2.22  3294701     0.00     0.00  tas
  2.12     44.09     2.10   671001     0.00     0.00  hash_search
  1.71     45.79     1.70  1538836     0.00     0.00  _bt_compare
  1.47     47.25     1.46  1098578     0.00     0.00  nocachegetattr
  1.41     48.65     1.40  1478208     0.00     0.00  ExecEvalVar
  1.40     50.04     1.39   597373     0.00     0.00  StrategyBufferLookup
  1.29     51.31     1.28   769755     0.00     0.00  comparetup_heap
  1.14     52.44     1.13  5548263     0.00     0.00  gistpenalty
  1.11     53.55     1.11    28058     0.00     0.00  XLogInsert
  1.08     54.62     1.07  6687057     0.00     0.00  FunctionCall1
  1.06     55.67     1.05   641646     0.00     0.00  hash_any
  1.06     56.72     1.05    53305     0.00     0.00  gistlayerinsert
  1.04     57.75     1.04   475840     0.00     0.00  gistDeCompressAtt
  1.04     58.78     1.03   991639     0.00     0.00  AllocSetReset
  1.04     59.81     1.03   295968     0.00     0.00  ExecTargetList


[That's 60% of the time already]



--
greg