Re: batch insertion

From: Adrian Klaver
Subject: Re: batch insertion
Date:
Msg-id: 52195811.4020702@gmail.com
In reply to: batch insertion  (Korisk <korisk@yandex.ru>)
List: pgsql-general
On 08/24/2013 05:15 PM, Korisk wrote:
> Hi!
> I want to quickly insert a lot of data into the db (in the form of triplets). The data is formed dynamically, so "COPY" is not suitable.
> I tried batch insert like this:
>
> insert into triplets values (1,1,1);
> insert into triplets values (1,1,1), (3,2,5), (4,5,5);
> ...
> insert into triplets values (1,1,1), (3,2,5), (4,5,5) .... ;
>
> The more triplets I use, the quicker the operation is.
> With preparation it looks like this:
>
> res = PQprepare(conn, "qu", "insert into triplets values ($1::bigint, $2::bigint, $3::float);",3, NULL);
> ...
> res = PQprepare(conn, "qu", "insert into triplets values ($1::bigint, $2::bigint, $3::float), ($4::bigint,
> $5::bigint, $6::float), ($7::bigint, $8::bigint, $9::float), ($10::bigint, $11::bigint, $12::float);", 12, NULL);
> ...
>
> The question:
> Is there any way to prepare a query with any number of triplets without constructing such a long string?

Others have already posted about using COPY, so I will go another route.
The increased speed you see is probably a result of more data being
included in each transaction. From your example it is not clear whether
you are batching your INSERTs. If not, that is another way to go:
prepare your statement, then loop through your data in batches, where
each batch is wrapped between a BEGIN and a COMMIT. This is one of the
reasons COPY is so fast: the data is loaded inside a single transaction.
Unfortunately I do not use libpq, so I cannot provide an example.
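For illustration only, a minimal libpq sketch of that batching pattern
might look like the following. The table layout (triplets with bigint,
bigint, float columns), the connection string, the batch size, and the
generated values are all assumptions, and error handling is abbreviated:

/* Sketch: prepared single-row INSERT, executed in batches wrapped in
 * BEGIN/COMMIT so each transaction covers many rows.
 * Assumes: table triplets(a bigint, b bigint, c float), dbname=test. */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");   /* placeholder conninfo */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* Prepare the single-row INSERT once, reuse it for every row. */
    PGresult *res = PQprepare(conn, "qu",
        "insert into triplets values ($1::bigint, $2::bigint, $3::float)",
        3, NULL);
    PQclear(res);

    const int batch_size = 1000;
    const int total_rows = 10000;   /* stand-in for dynamically generated data */

    for (int i = 0; i < total_rows; i += batch_size) {
        res = PQexec(conn, "BEGIN");
        PQclear(res);

        for (int j = i; j < i + batch_size && j < total_rows; j++) {
            char a[32], b[32], c[64];
            snprintf(a, sizeof a, "%d", j);          /* made-up values */
            snprintf(b, sizeof b, "%d", j * 2);
            snprintf(c, sizeof c, "%f", j * 0.5);
            const char *params[3] = { a, b, c };

            res = PQexecPrepared(conn, "qu", 3, params, NULL, NULL, 0);
            if (PQresultStatus(res) != PGRES_COMMAND_OK)
                fprintf(stderr, "insert failed: %s", PQerrorMessage(conn));
            PQclear(res);
        }

        /* One COMMIT per batch keeps per-transaction overhead low. */
        res = PQexec(conn, "COMMIT");
        PQclear(res);
    }

    PQfinish(conn);
    return 0;
}

The same idea works with the multi-row VALUES form from the question;
grouping rows per transaction is what removes the per-INSERT commit cost.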

>
> Thank you.
>
>


--
Adrian Klaver
adrian.klaver@gmail.com

