Re: Moving data from huge table slow, min() query on indexed column taking 38s

From Lorusso Domenico
Subject Re: Moving data from huge table slow, min() query on indexed column taking 38s
Date
Msg-id CAJMpnG4SwAVULPbWazSBezTMoCwyS8+2AmWeUgxKu5uz8dQOfg@mail.gmail.com
In response to Re: Moving data from huge table slow, min() query on indexed column taking 38s  (Dimitrios Apostolou <jimis@gmx.net>)
List pgsql-general
Hello Dimitris,
in the end, a table is a file plus many additional features, and those features consume resources.

Some DBMSs (like Oracle or DB2) let us disable those features so that we can copy directly between tables; otherwise (and often also on Oracle and DB2) the best approach is to use an export, because export/import utilities are highly optimized for this job.
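
For illustration, a minimal sketch of that route in PostgreSQL, using the table names that appear later in the thread and a placeholder file path (the server must be able to read and write that path):

    -- export the source table to a flat file on the server...
    COPY tbl_legacy TO '/tmp/tbl_legacy.csv' (FORMAT csv);
    -- ...then bulk-load the file into the target table
    COPY tbl FROM '/tmp/tbl_legacy.csv' (FORMAT csv);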

Anyway, when you do this kind of work as a DBA you must block the database, or at least the table.
Don't try to reorganize a schema or database while users are connected.
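
As a minimal sketch (table names as elsewhere in the thread), the whole move could run inside a single transaction that takes the locks up front:

    BEGIN;
    -- nobody can read or write either table until COMMIT
    LOCK TABLE tbl_legacy, tbl IN ACCESS EXCLUSIVE MODE;
    -- ... move the data here ...
    COMMIT;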

On Mon, 10 Jul 2023 at 17:58, Dimitrios Apostolou <jimis@gmx.net> wrote:
Thanks! I have completed the transfer by taking down the database and
working exclusively on it, but I still wonder how one could do it in
production without exclusive locks. The loop of small DELETE...INSERT
transactions I posted in the parent post bloats the table fast.

The link you posted contains very useful info; I was not expecting that
the constraints could blow up the memory like that. Comments from me:

Disabling and then re-enabling the foreign key constraints is easily done
with ALTER TABLE.
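
For illustration, a sketch of what that can look like (the constraint, column and referenced-table names are made-up placeholders):

    -- drop the foreign key before the bulk load...
    ALTER TABLE tbl DROP CONSTRAINT tbl_parent_id_fkey;
    -- ...and re-add it afterwards; NOT VALID plus a separate VALIDATE
    -- avoids re-checking the existing rows under a heavy lock
    ALTER TABLE tbl ADD CONSTRAINT tbl_parent_id_fkey
        FOREIGN KEY (parent_id) REFERENCES parent (id) NOT VALID;
    ALTER TABLE tbl VALIDATE CONSTRAINT tbl_parent_id_fkey;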

Unfortunately the same doesn't seem to be possible for indices. One has to
create the table without indices and then create the indices separately.
With such a process there is a risk of ending up with non-identical
table schemas.
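
A minimal sketch of that process (the index and column names are placeholders):

    -- copy column definitions, defaults and CHECK constraints, but no indexes
    CREATE TABLE tbl (LIKE tbl_legacy INCLUDING DEFAULTS INCLUDING CONSTRAINTS);
    INSERT INTO tbl SELECT * FROM tbl_legacy;
    -- recreate each index separately after the load
    CREATE INDEX tbl_some_col_idx ON tbl (some_col);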

By the way, with COPY one must use an intermediate file, right? There is
no way to COPY from table to table directly?


Thanks,
Dimitris

On Thu, 6 Jul 2023, Lorusso Domenico wrote:

> Hello,
> this is a standard problem during bulk copy.
>
> Here are some suggestions; for example, disable indexes.
>
> The main issues are related to indexes, lock escalation and log writing.
> In other DBMSs you would turn logging off for the table, but PostgreSQL does not seem to have this feature.
>
> Anyway, taking an explicit exclusive lock on the table should prevent lock escalation.
>
> So: disable the indexes on the target table,
> lock both tables exclusively,
> insert the data,
> truncate the old table (a sketch follows below).
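>
> A minimal sketch of those steps (index and column names are placeholders, table names
> as in the example further down; PostgreSQL has no supported way to disable an index,
> so the sketch drops and recreates it):
>
>    DROP INDEX tbl_some_col_idx;                     -- "disable" the index by dropping it
>    BEGIN;
>    LOCK TABLE tbl_legacy, tbl IN ACCESS EXCLUSIVE MODE;
>    INSERT INTO tbl SELECT * FROM tbl_legacy;
>    TRUNCATE tbl_legacy;                             -- empty the old table in the same transaction
>    COMMIT;
>    CREATE INDEX tbl_some_col_idx ON tbl (some_col); -- recreate the index afterwards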
>
> If this doesn't work, you can consider using the COPY command.
>
>
>
>
>
>
>
> On Thu, 6 Jul 2023 at 18:12, Dimitrios Apostolou <jimis@gmx.net> wrote:
>       On Thu, 6 Jul 2023, Dimitrios Apostolou wrote:
>
>       > + First I chose the method to DELETE...INSERT everything.
>
>       Just to clarify, the query looked more like
>
>          WITH rows AS ( DELETE FROM tbl_legacy RETURNING * )
>            INSERT INTO tbl
>              SELECT * FROM rows;
>
>       >   I noticed that the postgres process was growing without bounds, up to
>       >   the point that it consumed almost all of swap space and I had to kill
>       >   the command. Any idea why that happens?
>
>       Also note that my settings for work_mem, temp_buffers, shared_buffers etc
>       are all well below the RAM size and postgres has never shown unbounded
>       growth again. Postgres version is 15.2.
>
>
>       Dimitris
>
>
>
>
>
> --
> Domenico L.
>
> to amaze for half an hour, a history book is enough;
> I tried to learn the Treccani by heart... [F.d.A.]
>
>


--
Domenico L.

to amaze for half an hour, a history book is enough;
I tried to learn the Treccani by heart... [F.d.A.]
