Re: a back up question

From: Carl Karsten
Subject: Re: a back up question
Date:
Msg-id: CADmzSSi0jTJ05pEyQPd0Jz=7pczwxGPVwr9dRNBwG7MBLHp=Wg@mail.gmail.com
In reply to: a back up question (Martin Mueller <martinmueller@northwestern.edu>)
Responses: Re: a back up question (Alvaro Herrera <alvherre@alvh.no-ip.org>)
List: pgsql-general
Nothing wrong with lots of tables and data.

Don't impose any constraints on your problem that you don't need.

For example, what are you backing up to? $400 for a 1 TB SSD, or $80 for a 2 TB USB 3 spinning disk.

If you are backing up while the database is being updated, you need to make sure updates are queued until the backup is done. Don't mess with that process. Personally, I would assume the database is always being updated and plan for that.
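For reference, a minimal sketch of the kind of pg_dump invocations I have in mind (the database name and output paths below are placeholders, not from the original thread). pg_dump reads from a single transaction snapshot, so it stays consistent even while the database is being written to:

    # whole-database dump in directory format, 4 parallel workers
    # ("mydb" and /backup/... are just example names)
    pg_dump -Fd -j 4 -f /backup/mydb.dir mydb

    # or dump a group of large tables into their own archive (custom format)
    pg_dump -Fc -t big_table_1 -t big_table_2 -f /backup/big_tables.dump mydb

Either archive restores with pg_restore, which also accepts -j to parallelize the restore.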




On Tue, Dec 5, 2017 at 3:52 PM, Martin Mueller <martinmueller@northwestern.edu> wrote:

Are there rules of thumb for deciding when you can dump a whole database and when you’d be better off dumping groups of tables? I have a database that has around 100 tables, some of them quite large, and right now the data directory is well over 100GB. My hunch is that I should divide and conquer, but I don’t have a clear sense of what counts as “too big” these days. Nor do I have a clear sense of whether the constraints have to do with overall size, the number of tables, or machine memory (my machine has 32GB of memory).

Is 10GB a good practical limit to keep in mind?

--
Carl K
