pg_dump --binary-upgrade out of memory

From: Anton Glushakov
Subject: pg_dump --binary-upgrade out of memory
Date:
Msg-id: CAHnOmaeCZN0tOEfmu3VOLjDujfY2jbS0HaEfwTboY1knhfwzvQ@mail.gmail.com
Replies: Re: pg_dump --binary-upgrade out of memory (Tom Lane <tgl@sss.pgh.pa.us>)
         AW: pg_dump --binary-upgrade out of memory ("Dischner, Anton" <Anton.Dischner@med.uni-muenchen.de>)
List: pgsql-admin
Hi.
I ran into a problem upgrading my instance (14 -> 15) with pg_upgrade:
the utility crashed with an out-of-memory error.
After some investigation I found that this happens during the schema export performed by pg_dump.

I then tried to perform a schema-only dump manually with the --binary-upgrade option and also got an out-of-memory error.
Digging a little deeper, I discovered a rather large number of large objects in the database (pg_largeobject is 10 GB and pg_largeobject_metadata is 1 GB, roughly 31 million rows).
I was able to reproduce the problem on a clean server simply by inserting some synthetic rows into pg_largeobject_metadata:

$ insert into pg_largeobject_metadata (select i, 16390 from generate_series(107659, 34274365) as i);

$ pg_dump --binary-upgrade --format=custom -d mydb -s -f tmp.dmp

After 1-2 minutes it fails with "out of memory" (I tried on servers with 4 GB and 8 GB of RAM).
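To keep further test runs from pressuring the whole server, the reproduction above can also be run with a capped address space, so pg_dump fails fast with "out of memory" instead of exhausting the machine. A sketch (the 2 GB limit is an arbitrary choice of mine, not something pg_upgrade sets):

```shell
# Limit the virtual memory of this shell and its children.
# ulimit -v takes the limit in kilobytes: 2 GB = 2 * 1024 * 1024 KB.
ulimit -v 2097152

# Same schema-only, binary-upgrade-mode dump as before; with the cap in
# place it hits the out-of-memory failure quickly and predictably.
pg_dump --binary-upgrade --format=custom -d mydb -s -f tmp.dmp
```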

Is this perhaps a bug? And how can I perform the upgrade?
