Kris Jurka <books@ejurka.com> writes:
> This actually is the problem. It works as three separate statements, but
> fails as one. The server doesn't seem to recognize the SET when other
> commands come in before Sync.
[ reads some code... ] The problem is that postgres.c only inspects
StatementTimeout when start_xact_command starts a transaction command,
and the placement of finish_xact_command calls is such that that's
not going to happen until after Sync. So the upshot is that the
"SET statement_timeout" isn't going to take effect until after Sync
(or after a transaction-control command, but there are none in your
example).
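To illustrate (a toy Python sketch of the behavior described above, not actual backend code; the function names are just mimicking postgres.c): the backend inspects statement_timeout only when a new transaction command starts, and in the extended protocol that doesn't happen again until after Sync, so a SET issued mid-batch is invisible to the Executes that follow it in the same batch.

```python
# Toy model of the described backend behavior (NOT real PostgreSQL code).
# The timeout GUC is snapshotted only in start_xact_command(), and
# finish_xact_command() effectively runs only at Sync.

class ToyBackend:
    def __init__(self):
        self.guc_statement_timeout = 0   # value written by SET
        self.active_timeout = None       # value actually armed for queries
        self.in_xact_command = False
        self.log = []

    def start_xact_command(self):
        if not self.in_xact_command:
            self.in_xact_command = True
            # statement_timeout is inspected only here
            self.active_timeout = self.guc_statement_timeout

    def finish_xact_command(self):
        self.in_xact_command = False

    def execute(self, stmt):
        self.start_xact_command()
        if stmt.startswith("SET statement_timeout"):
            self.guc_statement_timeout = int(stmt.split("=")[1])
        else:
            self.log.append((stmt, self.active_timeout))
        # note: no finish_xact_command() here -- that happens at Sync

    def sync(self):
        self.finish_xact_command()

be = ToyBackend()
# One batch: SET followed by a query, then Sync (mirrors the failing case)
be.execute("SET statement_timeout=1000")
be.execute("SELECT pg_sleep(10)")
be.sync()
print(be.log[-1])  # ('SELECT pg_sleep(10)', 0) -- old timeout still in force

# After Sync, the next message starts a new transaction command,
# so the SET finally takes effect:
be.execute("SELECT pg_sleep(10)")
be.sync()
print(be.log[-1])  # ('SELECT pg_sleep(10)', 1000)
```

Running the SET as its own batch (terminated by Sync) before the queries avoids the problem, which is essentially the workaround suggested below.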
This suggests that the statement_timeout stuff is being done in the wrong
place. I'm not sure exactly what the right places would be for the
V3 protocol, though. What exactly would you expect statement_timeout to
cover in a Parse/Bind/Execute world --- especially if those aren't
issued in a purely sequential fashion?
A very simple definition would be that each Parse, Bind, or Execute
action is independently constrained by statement_timeout, but that would
act significantly differently from the simple-query case if planning
takes long enough to be a factor. (Bear in mind that planning can
include constant-folding of user-defined functions, so at least in some
cases you can imagine people would want statement_timeout to constrain
planning.) Also that would imply three times as many timer
enable/disable kernel calls, which might be an annoying amount of
overhead.
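To put a rough number on that overhead (hypothetical arithmetic, assuming each independently timed operation costs one timer-enable and one timer-disable kernel call, as with setitimer()):

```python
# Back-of-the-envelope tally of timer kernel calls per statement.
# Assumption (not measured): one enable + one disable call per timed span.
calls_per_span = 2

# Simple-query protocol: the whole statement is one timed span.
simple_query_calls = 1 * calls_per_span

# Extended protocol, timing Parse, Bind, and Execute independently:
extended_calls = 3 * calls_per_span

print(simple_query_calls, extended_calls)  # 2 6
```

So a client issuing many prepared-statement executions would pay three timed spans per statement instead of one, which is where the "three times as many" figure comes from.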
Anyway the short-term answer for Markus is "don't do it that way".
We ought to think about making the backend's behavior more consistent,
though.
regards, tom lane