Re: hundreds of millions row dBs

From: Tom Lane
Subject: Re: hundreds of millions row dBs
Date:
Msg-id: 13956.1104816742@sss.pgh.pa.us
In response to: Re: hundreds of millions row dBs  ("Guy Rouillier" <guyr@masergy.com>)
List: pgsql-general
"Guy Rouillier" <guyr@masergy.com> writes:
> Greer, Doug wrote:
>> I am interested in using Postgresql for a dB of hundreds of
>> millions of rows in several tables.  The COPY command seems to be way
>> too slow.  Is there any bulk import program similar to Oracle's SQL
>> loader for Postgresql? Sincerely,

> We're getting about 64 million rows inserted in about 1.5 hrs into a
> table with a multiple-column primary key - that's the only index.
> That seems pretty good to me - SQL Loader takes about 4 hrs to do the
> same job.

If you're talking about loading into an initially empty database, it's
worth a try to load into bare tables and then create indexes and add
foreign key constraints.  Index build and FK checking are both
significantly faster as "bulk" operations than "incremental".  Don't
forget to pump up sort_mem as much as you can stand in the backend doing
such chores, too.
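The load order Tom describes can be sketched roughly as follows. The table, file path, and index names here are hypothetical, and `sort_mem` is the PostgreSQL 7.x-era setting he mentions (later releases split it into `work_mem` and `maintenance_work_mem`):

```sql
-- Sketch only: hypothetical names, assuming an initially empty database.
SET sort_mem = 262144;              -- be generous for the index-build sorts (KB)

CREATE TABLE events (
    device_id integer   NOT NULL,
    ts        timestamp NOT NULL,
    value     numeric
);                                  -- bare table: no indexes, no FKs yet

COPY events FROM '/tmp/events.dat'; -- bulk load first

-- Then build the index and add the FK as bulk operations:
CREATE INDEX events_device_ts_idx ON events (device_id, ts);
ALTER TABLE events
    ADD FOREIGN KEY (device_id) REFERENCES devices (id);
```

The point is the ordering: a single bulk index build and a single set-oriented FK validation are much cheaper than maintaining the index and checking the constraint row by row during the COPY.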

I have heard of people who would actually drop and recreate indexes
and/or FKs when adding a lot of data to an existing table.
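For an existing table, that drop-and-recreate pattern might look like this (again a sketch with hypothetical index and constraint names; check yours with \d in psql):

```sql
-- Sketch only: drop the maintenance overhead, load, then rebuild in bulk.
DROP INDEX events_device_ts_idx;
ALTER TABLE events DROP CONSTRAINT events_device_id_fkey;

COPY events FROM '/tmp/more_events.dat';

CREATE INDEX events_device_ts_idx ON events (device_id, ts);
ALTER TABLE events
    ADD CONSTRAINT events_device_id_fkey
    FOREIGN KEY (device_id) REFERENCES devices (id);
```

Note that while the index and constraint are dropped, concurrent queries lose them too, so this is only sensible during a maintenance window.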

            regards, tom lane

In the pgsql-general list, by date sent:

Previous
From: "Guy Rouillier"
Date:
Message: Re: hundreds of millions row dBs
Next
From: David Teran
Date:
Message: changing column from int4 to int8, what happens with indexes?