pg_dump resulting in excessive memory use by postmaster process

From: Ben
Subject: pg_dump resulting in excessive memory use by postmaster process
Date:
Msg-id: 00ed01cece6f$da9a4c80$8fcee580$@champsoftware.com
Responses: Re: pg_dump resulting in excessive memory use by postmaster process  (Stephen Frost <sfrost@snowman.net>)
List: pgsql-general

We’ve been using pg_dump and pg_restore for many years now and they have always worked well for us. However, we are currently undertaking a major db architecture change to partition our tenant data into separate Postgres schemas instead of storing all data in the public schema. When we attempt a pg_dump with the --schema-only option against the new db architecture, the postmaster process consumes massive amounts of memory until it is killed off (6GB+ used by a single postmaster process). Here are the details:
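
For reference, the dump in question is essentially just a plain schema-only dump, something along these lines (the output format, file name, and database name here are placeholders rather than the exact invocation):

    pg_dump --schema-only --format=custom --file=schema_only.dump our_database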


PG version: 9.2.5

Total number of postgres schemas: 248  

Total number of relations across all schemas: 53,154
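
(For context, counts like these can be pulled from the system catalogs with queries along the following lines; the filters excluding the built-in schemas are only approximate:)

    -- user-defined schemas
    SELECT count(*)
      FROM pg_namespace
     WHERE nspname NOT LIKE 'pg_%'
       AND nspname <> 'information_schema';

    -- relations across those schemas
    SELECT count(*)
      FROM pg_class c
      JOIN pg_namespace n ON n.oid = c.relnamespace
     WHERE n.nspname NOT LIKE 'pg_%'
       AND n.nspname <> 'information_schema';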


If I perform a --schema-only dump on a DB where there are only 11 schemas, it succeeds with a dump file that is only 7.5 MB in size. All of our tenant schemas contain the exact same relations, so things should scale roughly linearly as more tenant schemas are added.
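
(A quick sanity check that every tenant schema really does contain the same set of relations is to count them per schema, for example:)

    SELECT n.nspname, count(*) AS relations
      FROM pg_class c
      JOIN pg_namespace n ON n.oid = c.relnamespace
     WHERE n.nspname NOT LIKE 'pg_%'
       AND n.nspname <> 'information_schema'
     GROUP BY n.nspname
     ORDER BY n.nspname;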


I should also mention that when performing these dumps there is absolutely no other DB activity occurring. Do you have any ideas why the excessive memory growth in the postmaster process is occurring when doing a pg_dump?
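
In case anyone wants to reproduce this, the growth is easy to watch from the OS side while the dump runs. The backend pid serving the pg_dump connection can be found with

    SELECT pid, usename, query FROM pg_stat_activity;

and its resident set size then watched with something like (12345 standing in for that pid)

    ps -o pid,rss,vsz,cmd -p 12345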

