Slow Inserts Leads To Unable To Dump

From: Frank Morton
Subject: Slow Inserts Leads To Unable To Dump
Date:
Msg-id: 041c01be9ae7$668ea5a0$8355e5ce@base2inc.com
In reply to: Having problem retrieving huge? table  (Ana Roizen <aroizen@sinectis.com.ar>)
Responses: Re: [SQL] Slow Inserts Leads To Unable To Dump
List: pgsql-sql
Remember the thread last week about slow inserts? I still
have more to do, but basically I ended up waiting about 7
DAYS to insert 150,000 rows into a single table.
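
In case it helps to see what I mean: my load is basically row-at-a-time
INSERTs, each committed on its own. Would wrapping them all in a single
transaction, roughly like this, make a real difference? (The table and
values below are just placeholders, not my real schema.)

    -- placeholder table, only to make the example self-contained
    CREATE TABLE mytable (id integer, val text);

    -- one transaction around the whole batch instead of one commit per row
    BEGIN;
    INSERT INTO mytable VALUES (1, 'first row');
    INSERT INTO mytable VALUES (2, 'second row');
    -- ... and so on for the rest of the rows ...
    COMMIT;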

Now that that is done, I thought I should dump the database
before doing anything more, so I don't lose those 7 days if I
mess up. After processing for 18 HOURS doing "pg_dump -d", it ran
out of memory and quit. Are there tools to do this differently,
that is, not requiring the system to do a SELECT on the
whole table first? Just dump it? (I did try pg_dump without
-d but stopped it after an hour or so.)
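
Is a plain COPY of the table the right way to "just dump it"?
Something like this is what I have in mind (the table name and path
are placeholders, and I understand COPY writes a server-side file
as the postgres superuser):

    -- write the table's rows straight out to a file, no INSERT statements
    COPY mytable TO '/tmp/mytable.copy';

As I understand it, pg_dump without -d puts COPY data in the dump
instead of one INSERT per row, which is why I will let it run
longer next time.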

I am getting concerned that I can't really use PostgreSQL
for large databases. Sure, some have built large databases,
but have any of you ever had to dump and restore them?
What happens when an upgrade requires that? I think some
have reported 1,000,000-row databases. By my calculation,
if you could get it to dump at all, my database would take about
46 DAYS to reload at that size (1,000,000 rows at the 150,000 rows
per 7 days I am seeing). Maybe "copy" will help, but right now
I'm more concerned about being totally unable to "dump."
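
By "copy" I mean reloading from a file with something like this
(again, placeholder names), since that should be much faster than
replaying 150,000 individual INSERT statements:

    -- bulk-load the rows back from the file written by COPY ... TO
    COPY mytable FROM '/tmp/mytable.copy';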

I really like PostgreSQL and think everyone working on it is doing
great work. But I need to hear some comments from all of you, either
confirming what I am seeing or telling me I am missing something.

Thanks to all, Frank
