Tom Lane wrote:
: Nathan Barnett <nbarnett@centuries.com> writes:
: > UPDATE pages SET createdtime = NOW();
:
: > Is there a reason why this would take up all of the memory??
:
: The now() function invocation leaks memory ... only a dozen or so bytes
: per invocation, but that adds up over millions of rows :-(. In 7.0.*
: the memory isn't recovered until end of statement. 7.1 fixes this by
: recovering temporary memory after each tuple.
As far as I can see, it is not that simple :-(
For UPDATE -- maybe, but not for SELECT.
While a SELECT is executing, Postgres (7.0.3) allocates as much memory
as it needs to hold the full result set of the query. For
select * from some_big_table;
this can sometimes be more than all the physical memory + swap that
exists. :-(
In the general case I cannot forbid users from running this (and
similar) queries.
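(Where the query itself can be rewritten, one partial workaround --
just a sketch, assuming the memory really goes into materializing the
whole result set at once, and reusing some_big_table from above -- is
to read the result through a cursor in batches inside a transaction
block, which 7.0 requires for cursors:)

BEGIN;
DECLARE big_cur CURSOR FOR SELECT * FROM some_big_table;
-- repeat the FETCH until it returns no more rows:
FETCH 1000 FROM big_cur;
CLOSE big_cur;
COMMIT;

But this only helps for queries I control, not for arbitrary queries
that users type in themselves.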
Question: Can I tell the postmaster (or a postgres backend) not to use
more than some amount of memory (per backend or for all running
backends in total -- no difference to me), and, when that limit is
exceeded, to switch to temporary files, or simply roll back the
transaction and close the connection if temporary files are not
possible? (Yes, I mean that bringing down one postgres process is
cheaper than bringing down or hanging the whole machine.)
Any ideas/workarounds?
--
Andrew W. Nosenko (awn@bcs.zp.ua)