Re: Upgrade from PG12 to PG

From: Scott Ribe
Subject: Re: Upgrade from PG12 to PG
Date:
Msg-id: F6998B6F-11FA-4A9F-954B-E4388B835483@elevated-dev.com
In response to: Re: Upgrade from PG12 to PG  (Jef Mortelle <jefmortelle@gmail.com>)
Responses: Re: Upgrade from PG12 to PG  (Jef Mortelle <jefmortelle@gmail.com>)
List: pgsql-admin
> On Jul 20, 2023, at 11:05 AM, Jef Mortelle <jefmortelle@gmail.com> wrote:
>
> so, yes pg_upgrade starts a pg_dump session,

Only for the schema, which you can see in the output you posted.
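What pg_upgrade runs internally is roughly the following schema-only dump (a sketch; the database name and file paths are illustrative, not from the original thread):

```shell
# pg_upgrade dumps only cluster globals and per-database schema,
# never table data -- roughly equivalent to:
pg_dumpall --globals-only -f globals.sql
pg_dump --schema-only --binary-upgrade --quote-all-identifiers \
        --format=custom -f mydb_schema.dump mydb
```

Because no data is dumped, the time spent here scales with the number of schema objects (including large-object metadata), not with the 1TB of table data.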

> Server is a VM server, my VM has 64GB SuseSLES  attached to a SAN with SSD disk (Hp3Par)

VM + SAN can perform well, or can introduce all sorts of issues: noisy neighbors, poor VM drivers, a SAN that is only fast for large sequential writes, etc.

> On Jul 20, 2023, at 11:22 AM, Ron <ronljohnsonjr@gmail.com> wrote:
>
> Note also that there's a known issue with pg_upgrade and millions of Large Objects (not bytea or text, but lo_*
columns).


Good to know, but it would be weird to have millions of large objects in a 1TB database. (Then again, I found an old post about 3M large objects taking 5.5GB...)
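You can check whether that known issue even applies by counting the large objects in the database (the database name here is illustrative):

```shell
# Each large object has one row in the pg_largeobject_metadata catalog,
# so this counts them without scanning the (potentially huge) data.
psql -d mydb -Atc "SELECT count(*) FROM pg_largeobject_metadata;"
```

If the count is in the millions, the slow phase is likely the per-object schema restore, not the data itself.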

Try:
  time a run of that pg_dump command, then time a run of pg_restore of the schema-only dump
  time a file copy of the db to a location on the SAN -- the purpose is not to produce a usable backup, but rather to check IO throughput
  use the --link option on pg_upgrade
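The three suggestions above might look like this in practice (a sketch; all paths, versions, and database names are assumptions, so adjust them to your installation):

```shell
# 1. Time the schema-only dump, then time restoring it:
time pg_dump --schema-only -Fc -f schema.dump mydb
time pg_restore --schema-only -d mydb_new schema.dump

# 2. Time a raw copy of the data directory onto the SAN -- not a
#    usable backup, just a rough IO-throughput check:
time cp -a /var/lib/pgsql/12/data /san/throughput_test/

# 3. Run pg_upgrade with --link, which hard-links data files into the
#    new cluster instead of copying them (fast, but the old cluster
#    must not be started afterwards):
pg_upgrade --link \
    --old-datadir /var/lib/pgsql/12/data \
    --new-datadir /var/lib/pgsql/15/data \
    --old-bindir  /usr/pgsql-12/bin \
    --new-bindir  /usr/pgsql-15/bin
```

If step 1 is slow while step 2 shows good throughput, the bottleneck is the schema (e.g. large-object metadata) rather than the storage.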

Searching on this subject turns up some posts about slow restore of large objects under much older versions of PG--not sure if any of it still applies.

Finally, given the earlier confusion between text and large objects, your apparent belief that text columns correlated to large objects, and that text could hold more data than varchar, it's worth asking: do you actually need large objects at all? (Is this even under your control?)

