Previously, I estimated that processing 1 million rows from pg_largeobject_metadata with pg_dump requires about 750 MB of memory (RSS as reported by ps, on AlmaLinux 8). But the running time of the process is frustrating: it took about 40 minutes.
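For reference, a minimal way to take that kind of measurement (a sketch, assuming only one pg_dump process is running on the host, so pgrep matches a single PID) is to poll the RSS column from ps while the dump runs:

$ while pgrep -x pg_dump > /dev/null; do ps -o rss= -p "$(pgrep -x pg_dump)"; sleep 5; done

This prints the resident set size in kilobytes every 5 seconds until pg_dump exits.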
- I ran into a problem upgrading my instance (14 -> 15) via pg_upgrade.
- The utility crashed with an out-of-memory error.
- After researching a bit, I found that this happens at the schema export stage performed by pg_dump.
- I then tried to perform the schema dump manually with the --binary-upgrade option and also got an out-of-memory error (the exact command is shown after this list).
- Digging a little deeper, I discovered quite a large number of large objects (blobs) in the database: pg_largeobject is about 10GB and pg_largeobject_metadata about 1GB (~31 million rows); see the size checks below.
- I was able to reproduce the problem on a clean server by simply putting some synthetic rows into pg_largeobject_metadata:
  insert into pg_largeobject_metadata (select i,16390 from generate_series(107659,34274365) as i);
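The manual schema dump I mean above looked roughly like this (mydb is a placeholder for the actual database name; --binary-upgrade is the same mode pg_upgrade uses internally, which is why the manual run reproduces its behaviour):

$ pg_dump --schema-only --binary-upgrade mydb > /dev/null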
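The size and row-count figures mentioned above can be checked with a couple of catalog queries (the exact numbers will of course differ per database):

select pg_size_pretty(pg_total_relation_size('pg_largeobject'));
select pg_size_pretty(pg_total_relation_size('pg_largeobject_metadata'));
select count(*) from pg_largeobject_metadata;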
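With those synthetic rows in place, rerunning the schema-only --binary-upgrade dump above (while watching RSS, as in the first snippet) shows the same memory growth. Note that writing directly into a system catalog like this requires a superuser connection and is only reasonable on a disposable test instance.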