Re: Postgres storage migration

From Ron Johnson
Subject Re: Postgres storage migration
Date
Msg-id CANzqJaB1crxv5ueGvCf_e_s=kSCG7VZUHm2gi6ncVxa+SEHXNQ@mail.gmail.com
In response to Re: Postgres storage migration  (Rajesh Kumar <rajeshkumar.dba09@gmail.com>)
List pgsql-admin
You're not being very helpful to the people who are trying to help you.

This works for me every time:
Source:
pg_dumpall -gvf ${BackupDir}/globals.sql 2> /dev/null
pg_dump -j ${Threads} -Z${ZLvl} -v -C -Fd --file=${SrcBackupDir}/$DB $DB 2> ${DB}_pgdump.log
Destination:
(Edit globals.sql and apply only the required statements to the target instance.)
pg_restore -v --jobs=$Threads --clean --create -Fd --dbname=postgres ${TargetBackupDir}/$DB
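(For anyone following along, here is a minimal end-to-end sketch of the source-side steps above; the variable values — Threads, ZLvl, BackupDir, and the database list — are assumptions you would tune for your own hardware:)

#!/usr/bin/env bash
set -euo pipefail

# Assumed values -- adjust for your environment.
Threads=4                       # parallel dump jobs (-j)
ZLvl=6                          # compression level (-Z)
BackupDir=/backups/$(date +%F)  # where globals.sql and the dump directories go
DBs="mydb"                      # space-separated list of databases to dump

mkdir -p "$BackupDir"

# Roles and tablespaces are cluster-wide, so dump them separately with -g.
pg_dumpall -g -v -f "$BackupDir/globals.sql"

# Directory-format (-Fd) dump per database; -C lets pg_restore --create
# recreate the database on the destination.
for DB in $DBs; do
    pg_dump -j "$Threads" -Z"$ZLvl" -v -C -Fd \
        --file="$BackupDir/$DB" "$DB" 2> "$BackupDir/${DB}_pgdump.log"
done

On the destination you would then apply the needed statements from globals.sql and run the pg_restore command shown above once per database.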

On Fri, Dec 8, 2023 at 11:31 PM Rajesh Kumar <rajeshkumar.dba09@gmail.com> wrote:
We are facing an error while restoring. So, I used pg_dumpall to dump only the schema and restored the schema first. Then I am dumping the data.
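(For clarity, a hedged sketch of the two-step workflow being described here; the database name mydb and the file names are assumed:)

# Schema (and globals) first, as plain SQL, then data only in directory format.
pg_dumpall -s -f schema.sql
psql -d postgres -f schema.sql            # run on the target cluster
pg_dump -a -j2 -Fd -f mydb_data mydb      # data-only, parallel, directory format
pg_restore -a -j2 -d mydb mydb_data       # run on the target cluster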

On Sat, 9 Dec 2023, 01:30 Ron Johnson, <ronljohnsonjr@gmail.com> wrote:
In --format=d mode, --file= is the directory where the data files are stored, not a regular file.

This works for me:
pg_dump -j${Threads} -Z${ZLvl} -v -C -Fd --file=${BackupDir}/$DB $DB 2> ${DB}_pgdump.log
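(If it helps to see what -Fd actually produces, a quick check on the resulting directory — the path /backups/mydb is just an assumed example:)

# --file= names a directory; pg_dump creates toc.dat plus one
# compressed data file per table inside it.
ls /backups/mydb

# pg_restore can read the dump's table of contents without restoring anything.
pg_restore -l /backups/mydb | head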

(Also, why are you just dumping the data if you need to migrate the whole database?)

On Fri, Dec 8, 2023 at 2:05 PM Rajesh Kumar <rajeshkumar.dba09@gmail.com> wrote:
pg_dump -a -v -j2 -Z0 -Fd -d dbname -f dumpfilename

On Fri, 8 Dec, 2023, 5:04 PM Ron Johnson, <ronljohnsonjr@gmail.com> wrote:
On Fri, Dec 8, 2023 at 4:44 AM Rajesh Kumar <rajeshkumar.dba09@gmail.com> wrote:
Hi

We are using an OpenShift environment with Postgres version 15.2. We want to change storage from Ceph to local storage, so the storage team made local storage the default. I have now created a new cluster with the same Postgres version and plan to take a backup from the old cluster and restore it into the new one. The database size is 100 GB; the server has 24 GB of RAM and 2 CPUs.

My question is, is there a combination of pg_dump and pg_restore that takes less time to complete the task?

Last time it took more than 8 hours. We took a schema-only dump using pg_dumpall, then a data-only backup using pg_dump in directory format.

8 hours is really slow for just a 100 GB database. What exact command are you using?

Paul Smith is right, though: just shut down the instance, copy the files, and start up the instance with the new "-D" location.
(You will need to edit postgresql.conf if it defines file locations.)
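(A rough sketch of that file-level copy, assuming you can stop the cluster and that the old and new data directories are /var/lib/postgresql/ceph-data and /var/lib/postgresql/local-data — both paths are made up here; under OpenShift the copy would happen between the PVCs backing the pods:)

# Stop the old instance cleanly so the data directory is consistent.
pg_ctl -D /var/lib/postgresql/ceph-data stop -m fast

# Copy the whole data directory; rsync -a preserves permissions and
# can be re-run if the first pass is interrupted.
rsync -a /var/lib/postgresql/ceph-data/ /var/lib/postgresql/local-data/

# Start the instance pointing at the new location.
pg_ctl -D /var/lib/postgresql/local-data start

For 100 GB this is typically much faster than dump and restore, since it is a straight binary copy.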
