On Wed, 2006-09-06 at 16:06, Sriram Dandapani wrote:
> Curious why autovacuum does not handle this problem. Here are my
> settings
>
> max_fsm_pages = 2000000
>
> autovacuum = on # enable autovacuum
>
> autovacuum_naptime = 300             # time between autovacuum runs, in secs
>
> autovacuum_vacuum_threshold = 10000  # min # of tuple updates before vacuum
> autovacuum_analyze_threshold = 1500  # min # of tuple updates before analyze
> autovacuum_vacuum_scale_factor = 0.4 # fraction of rel size before vacuum
> autovacuum_analyze_scale_factor = 0.2 # fraction of rel size before analyze
> autovacuum_vacuum_cost_delay = 100
>
> autovacuum_vacuum_cost_limit = 1000
>
>
> The database has a constant inflow of about 50-100G of data a day,
> which gets deleted after 2 days (this cycle keeps repeating). There
> are historical tables that grow at a rate of 2G-4G a day.
The most common cause of these problems is a long-standing transaction
that never gets closed. Look for connections to the database(s) (any of
them) that are never committed or rolled back. A single "idle in
transaction" connection from one program is enough to cause this,
because vacuum cannot reclaim dead tuples that are still visible to the
oldest open transaction.
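One way to hunt for those connections is to look at pg_stat_activity.
A sketch, assuming an 8.x-era server where idle transactions are
reported in the current_query column (newer releases renamed these
columns to pid and state):

```sql
-- List backends sitting "idle in transaction", oldest first.
-- procpid / current_query are the 8.x column names; on 9.2+ use
-- pid and WHERE state = 'idle in transaction' instead.
SELECT procpid, datname, usename, query_start
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction'
ORDER BY query_start;
```

If this shows a connection whose query_start is hours or days old,
that's the transaction holding vacuum back.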
If you're doing your updates and/or deletes individually instead of in
transactional batches, that will just make it worse.
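To illustrate the batching point, a sketch (the table and column names
here are made up for the example):

```sql
-- Row-at-a-time: each statement is its own transaction, so you pay
-- per-row commit overhead and generate more churn for vacuum.
DELETE FROM log_data WHERE id = 1;
DELETE FROM log_data WHERE id = 2;

-- Batched: one transaction, one commit, for the whole expiry pass.
BEGIN;
DELETE FROM log_data WHERE created < now() - interval '2 days';
COMMIT;
```

Either way, the deleted rows only become reclaimable once no open
transaction can still see them, which is why the idle-in-transaction
check above matters more than the batching.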