Re: db size

From: PFC
Subject: Re: db size
Date:
Msg-id: op.t9lf4tvrcigqcu@apollo13.peufeu.com
In reply to: db size  (Adrian Moisey <adrian@careerjunction.co.za>)
Responses: Re: db size  (Adrian Moisey <adrian@careerjunction.co.za>)
List: pgsql-performance
> Hi
>
> We currently have a 16CPU 32GB box running postgres 8.2.
>
> When I do a pg_dump with the following parameters "/usr/bin/pg_dump -E
> UTF8 -F c -b" I get a file of 14GB in size.
>
> But the database is 110GB in size on the disk.  Why the big difference
> in size?  Does this have anything to do with performance?

    I have a 2GB database, which dumps to a 340 MB file...
    Two reasons:

    - I have lots of big, fat, but very necessary indexes (not included in the dump; see the query below)
    - The dump is compressed with gzip, which works really well on database data.
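
    Just as a rough sketch (it should work as is on 8.2; the schema filter only skips the system catalogs), this shows how the on-disk space splits between table data and the indexes that the dump doesn't carry:

    -- Rough split of on-disk space: heap files vs. everything else
    -- (indexes + TOAST).  Indexes are not stored in the dump at all,
    -- they are rebuilt from scratch on restore.
    SELECT pg_size_pretty(sum(pg_relation_size(c.oid))::bigint)        AS table_data,
           pg_size_pretty(sum(pg_total_relation_size(c.oid)
                              - pg_relation_size(c.oid))::bigint)      AS indexes_and_toast,
           pg_size_pretty(pg_database_size(current_database()))        AS whole_database
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relkind = 'r'
      AND n.nspname NOT IN ('pg_catalog', 'information_schema');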

    If you suspect your tables or indexes are bloated, restore your dump to a test box.
    Use fsync=off during the restore; you don't care about integrity on the test box.
    Restoring on a separate box also avoids slowing down your production database.
    Then look at the size of the restored database.
    If it is much smaller than your production database, then you have bloat.
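
    Per table, running something like this on both the production box and the restored copy makes the bloated ones stand out (again just a sketch; should work as is on 8.2):

    -- Tables that are much bigger on production than on the fresh restore
    -- are the ones carrying dead-tuple or index bloat.
    SELECT n.nspname, c.relname,
           pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relkind = 'r'
      AND n.nspname NOT IN ('pg_catalog', 'information_schema')
    ORDER BY pg_total_relation_size(c.oid) DESC
    LIMIT 20;
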
    Then it's time to CLUSTER, REINDEX, or VACUUM FULL (your choice) the bloated tables, and make a note to vacuum them more often (and perhaps tune autovacuum); see the sketch below.
    Judicious use of CLUSTER on a small but very frequently updated table can also be a very good option.
    Upgrading to 8.3 and its new HOT feature is also a good idea.
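
    As a sketch of the maintenance step (table and index names here are made up, and all of these block normal use of the table while they run, so pick a quiet moment):

    -- Rewrite the table in index order and rebuild its indexes (8.2 syntax):
    CLUSTER bloated_table_pkey ON bloated_table;
    -- ...or just rebuild the indexes:
    REINDEX TABLE bloated_table;
    -- ...or compact the heap in place (can be very slow on large tables):
    VACUUM FULL bloated_table;
    -- Afterwards, frequent plain vacuums keep the bloat from coming back:
    VACUUM ANALYZE bloated_table;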
