Re: Pushing PostgreSQL to the Limit (urgent!)

From: Curt Sampson
Subject: Re: Pushing PostgreSQL to the Limit (urgent!)
Date:
Msg-id: Pine.NEB.4.44.0207161808030.465-100000@angelic.cynic.net
In reply to: Re: Pushing PostgreSQL to the Limit (urgent!)  (Chris Albertson <chrisalbertson90278@yahoo.com>)
Responses: Re: Pushing PostgreSQL to the Limit (urgent!)  (Chris Albertson <chrisalbertson90278@yahoo.com>)
List: pgsql-general
> --- Paulo Henrique Baptista de Oliveira
> <baptista@linuxsolutions.com.br> wrote:
>
> >     I will need to insert 30 M (million) records per month (about
> > 1 million per day), which over a year is about 400 million records.
> >     Can pgsql support this? On what machine?

Yes. A reasonably powerful PC with at least two nice fast IDE drives
should do the trick. I recommend you buy such a machine, set up
postgres, and start experimenting. It will probably take a couple
of weeks of work to figure out how to make your application run
efficiently.

On Mon, 15 Jul 2002, Chris Albertson wrote:
>
> I have a similar application.  I am storing astronomical data
> from a set of automated cameras.  The data just floods in
> forever.  I can see a billion rows in the future.
> I find that I _can_ keep up using only modest hardware IF I use
> "COPY" and not "INSERT" to input the data.  "COPY" is much, much
> faster.  Also, indexes help SELECT speed but hurt COPY/INSERT
> speed, so you need to balance.
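To illustrate the COPY-vs-INSERT point, here is a minimal sketch of batching rows into PostgreSQL's COPY text format so a whole load goes through one COPY statement instead of one INSERT per row. The `readings` table, column names, and the psycopg2 driver mentioned in the comment are assumptions for illustration, not anything from this thread:

```python
import io

def rows_to_copy_payload(rows):
    """Serialize rows into PostgreSQL COPY text format: one line per
    row, fields separated by tabs, NULL written as \\N. The whole
    batch can then be streamed in a single COPY ... FROM STDIN."""
    buf = io.StringIO()
    for row in rows:
        fields = ["\\N" if v is None else str(v) for v in row]
        buf.write("\t".join(fields) + "\n")
    buf.seek(0)
    return buf

# With a live connection (e.g. psycopg2's copy_expert), the buffer
# would be loaded roughly as:
#   cur.copy_expert("COPY readings (id, value) FROM STDIN", buf)
# where "readings" is a hypothetical table, not one from the thread.
payload = rows_to_copy_payload([(1, 42.0), (2, None)])
print(payload.read())
```

The win comes from parsing and planning one statement for the whole batch rather than round-tripping per row.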

Right. You may find it worthwhile to drop the indexes, import the
data, and rebuild them afterwards, rather than importing with the
indexes in place, if you're not running queries at the same time. Or
maybe partial indexes could help....

cjs
--
Curt Sampson  <cjs@cynic.net>   +81 90 7737 2974   http://www.netbsd.org
    Don't you know, in this new Dark Age, we're all light.  --XTC

