On 07/23/2017 12:03 PM, Joshua D. Drake wrote:
> As you can see even with aggressive vacuuming, over a period of 36 hours
> life gets increasingly miserable.
>
> The largest table is:
>
> postgres=# select
> pg_size_pretty(pg_total_relation_size('bmsql_order_line'));
> pg_size_pretty
> ----------------
> 148 GB
> (1 row)
>
[snip]
> With the PK being
>
> postgres=# select
> pg_size_pretty(pg_relation_size('bmsql_order_line_pkey'));
> pg_size_pretty
> ----------------
> 48 GB
> (1 row)
>
> I tried to see how much data we are dealing with here:
-hackers,
I cleaned up the table with VACUUM FULL and ended up with the following:
postgres=# select
pg_size_pretty(pg_total_relation_size('bmsql_order_line'));
 pg_size_pretty
----------------
 118 GB
(1 row)

postgres=# select
pg_size_pretty(pg_relation_size('bmsql_order_line_pkey'));
 pg_size_pretty
----------------
 27 GB
(1 row)
Does this suggest that we don't have a cleanup problem but a
fragmentation problem (or both, at least for the index)? Having an index
that bloats to almost twice its cleaned-up size surely isn't that uncommon.
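One possible way to quantify the index-side question (a sketch, assuming the pgstattuple contrib extension is available on this build):

```
postgres=# CREATE EXTENSION IF NOT EXISTS pgstattuple;
postgres=# SELECT avg_leaf_density, leaf_fragmentation
           FROM pgstatindex('bmsql_order_line_pkey');
```

If avg_leaf_density is low and leaf_fragmentation is high before the VACUUM FULL, that would point at page-level fragmentation rather than dead tuples, and a plain REINDEX of the PK should recover most of that 48 GB -> 27 GB difference without rewriting the whole 148 GB table.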
Thanks in advance,
JD
--
Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc
PostgreSQL Centered full stack support, consulting and development.
Advocate: @amplifypostgres || Learn: https://pgconf.us
***** Unless otherwise stated, opinions are my own. *****