Re: pg_upgrade failing for 200+ million Large Objects

From Andrew Dunstan
Subject Re: pg_upgrade failing for 200+ million Large Objects
Date
Msg-id b9aca8be-c579-0206-b2a6-8579b8027090@dunslane.net
In reply to Re: pg_upgrade failing for 200+ million Large Objects  (Jan Wieck <jan@wi3ck.info>)
List pgsql-hackers
On 3/20/21 12:39 AM, Jan Wieck wrote:
> On 3/8/21 11:58 AM, Tom Lane wrote:
>> The answer up to now has been "raise max_locks_per_transaction enough
>> so you don't see the failure".  Having now consumed a little more
>> caffeine, I remember that that works in pg_upgrade scenarios too,
>> since the user can fiddle with the target cluster's postgresql.conf
>> before starting pg_upgrade.
>>
>> So it seems like the path of least resistance is
>>
>> (a) make pg_upgrade use --single-transaction when calling pg_restore
>>
>> (b) document (better) how to get around too-many-locks failures.
>
> That would first require fixing how pg_upgrade creates the
> databases. It uses "pg_restore --create", which is mutually exclusive
> with --single-transaction because we cannot create a database inside
> of a transaction. Along the way pg_upgrade also mangles
> pg_database.datdba (all databases are owned by postgres after an
> upgrade); I will submit a separate patch for that, as I consider it a
> bug in itself.
>
> All that aside, the entire approach doesn't scale.
>
> With a hacked-up pg_upgrade that does "createdb" first and then calls
> pg_restore with --single-transaction, I can upgrade 1M large objects with
>     max_locks_per_transaction = 5300
>     max_connections = 100
> which contradicts the docs. I need to find out where that math went off
> the rails, because that config should only have room for 530,000 locks,
> not 1M. The same test fails with max_locks_per_transaction = 5200.
>
> But this would mean that one has to set max_locks_per_transaction in
> postgresql.conf to something like 530,000 at 100 max_connections in
> order to actually run a successful upgrade of 100M large objects. That
> config requires 26GB of memory just for locks. Add to that the memory
> pg_restore needs to load the entire TOC before restoring even a single
> object.
>
> Not going to work. But tests are still ongoing ...
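
(A back-of-the-envelope check of the figures above: the shared lock
table is sized for max_locks_per_transaction * (max_connections +
max_prepared_transactions) objects, which is where 5300 * 100 = 530,000
comes from, and the configuration Jan describes for 100M large objects
works out to roughly

    530,000 * (100 + 0) = 53,000,000 lock table slots
    53,000,000 slots * ~500 bytes/slot ≈ 26 GB

The ~500 bytes per slot is my own rough allowance for the LOCK and
PROCLOCK hash entries, not a measured number, but it lines up with the
26GB figure above.)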



I thought Tom's suggestion upthread:


> Would it be sane to have the backend not bother to
> take any locks in binary-upgrade mode?


was interesting. Could we do that on the restore side? After all, what
are we locking against in binary upgrade mode?
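
For concreteness, the sort of thing I'm imagining is below -- purely an
untested sketch, assuming it would live near the top of
LockAcquireExtended() in src/backend/storage/lmgr/lock.c and key off the
existing IsBinaryUpgrade flag (from miscadmin.h). Whether skipping the
shared lock table there is actually safe is exactly the question:

    /*
     * Hypothetical sketch only: in binary-upgrade mode, don't record
     * object locks in the shared lock table at all, on the theory that
     * pg_upgrade guarantees no concurrent sessions are doing DDL.
     * Callers that inspect the returned LOCALLOCK via locallockp would
     * need handling too; this ignores that for brevity.
     */
    if (IsBinaryUpgrade && locktag->locktag_type == LOCKTAG_OBJECT)
        return LOCKACQUIRE_OK;

If something along those lines is viable, the restore side wouldn't need
an enormous lock table just to get the large objects across.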


cheers


andrew


--
Andrew Dunstan
EDB: https://www.enterprisedb.com



