Re: Backup history file should be replicated in Streaming Replication?

From: Dimitri Fontaine
Subject: Re: Backup history file should be replicated in Streaming Replication?
Date:
Msg-id: A5420689-4CCF-475C-83B4-671416E95F0B@hi-media.com
In reply to: Re: Backup history file should be replicated in Streaming Replication?  (Heikki Linnakangas <heikki.linnakangas@enterprisedb.com>)
Responses: Re: Backup history file should be replicated in Streaming Replication?  (Alvaro Herrera <alvherre@commandprompt.com>)
List: pgsql-hackers
Hi,

On 18 Dec 2009, at 19:21, Heikki Linnakangas wrote:
> On Fri, Dec 18, 2009 at 12:22 PM, Florian Pflug <fgp.phlo.org@gmail.com> wrote:
>>> I'd prefer if the slave could automatically fetch a new base backup if it
>>> falls behind too far to catch up with the available logs. That way, old logs
>>> don't start piling up on the server if a slave goes offline for a long time.

Well, I did propose considering a state machine with clear transitions for such problems a while ago, and I think my
remarks still apply: http://www.mail-archive.com/pgsql-hackers@postgresql.org/msg131511.html

Sorry for the non-archives.postgresql.org link; I couldn't find the mail there.

> Yeah, for small databases, it's probably a better tradeoff. The problem
> with keeping WAL around in the master indefinitely is that you will
> eventually run out of disk space if the standby disappears for too long.

I'd vote for having a setting on the master for how long to keep WAL. If a slave loses sync and then comes back, either
you still have the required WAL and it goes back to catchup, or you don't and it's back to the base/init dance.
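
To make that concrete, here is a minimal sketch in C of the recycling rule I have in mind; the wal_keep_segments name
and the helper are made up for illustration, not existing code:

    /* Sketch only: a hypothetical master-side WAL retention rule.  The
     * "wal_keep_segments" setting and this helper are illustrative names,
     * not actual PostgreSQL code. */
    #include <stdbool.h>
    #include <stdint.h>

    typedef uint64_t XLogSegNo;          /* logical WAL segment number */

    static int wal_keep_segments = 128;  /* hypothetical GUC: segments to retain */

    /* May this old segment be recycled, given the newest segment written? */
    static bool
    can_recycle_segment(XLogSegNo candidate, XLogSegNo newest)
    {
        /* Keep the last wal_keep_segments segments around so a slave that
         * lost its connection briefly can still catch up from the master
         * instead of falling back to a new base backup. */
        if (newest < (XLogSegNo) wal_keep_segments)
            return false;                /* not enough history yet */
        return candidate < newest - (XLogSegNo) wal_keep_segments;
    }

A time-based setting would work the same way, just comparing against the age of the segment instead of a count.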

Maybe you want to add a control on the slave requiring explicit DBA action before it goes back to taking a base backup
from the master, though, as that backup could be provided from a nightly PITR backup rather than the live server.

>> but it's almost certainly much harder
>> to implement.  In particular, there's no hard and fast rule for
>> figuring out when you've dropped so far behind that resnapping the
>> whole thing is faster than replaying the WAL bit by bit.
>
> I'd imagine that you take a new base backup only if you have to, ie. the
> old WAL files the slave needs have already been deleted from the master.

Well, consider that a slave can be in one of these states: base, init, setup, catchup, sync. What you just said then
reduces to saying which transitions you can make without resorting to a base backup, and I don't see that many as soon
as the last sync point is no longer available on the master.
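
Roughly, and only as a sketch of my own reading (the state names come from the list above, the transition rule is not
an implementation):

    /* Sketch: the slave states discussed above, and where a slave that has
     * fallen behind goes next.  Illustrative only. */
    #include <stdbool.h>

    typedef enum
    {
        STANDBY_BASE,      /* taking / restoring a base backup */
        STANDBY_INIT,
        STANDBY_SETUP,
        STANDBY_CATCHUP,   /* replaying WAL still available on the master */
        STANDBY_SYNC       /* caught up with the master */
    } StandbyState;

    /* If the WAL the slave needs is still on the master it drops back to
     * catchup; if not, the only way forward is a fresh base backup. */
    static StandbyState
    next_state_after_falling_behind(bool wal_still_available)
    {
        return wal_still_available ? STANDBY_CATCHUP : STANDBY_BASE;
    }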

>> I think (as I did/do with Hot Standby) that the most important thing
>> here is to get to a point where we have a reasonably good feature that
>> is of some use, and commit it. It will probably have some annoying
>> limitations; we can remove those later.  I have a feeling that what we
>> have right now is going to be non-robust in the face of network
>> breaks, but that is a problem that can be fixed by a future patch.
>
> Agreed. About a year ago, I was vocal about not relying on the file
> based shipping, but I don't have a problem with relying on it as an
> intermediate step, until we add the other options. It's robust as it is,
> if you set up WAL archiving.

I think I'd like to have the feature that a slave never pretends it's in sync, or soon to be, when it clearly isn't.
For the asynchronous case we can live with it; as soon as we're talking synchronous, you really want the master to skip
any not-in-sync slave at COMMIT. To be even clearer: a slave that is not in sync is NOT a slave as far as synchronous
replication is concerned.
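
In code terms, the commit-time rule I'm arguing for looks something like this; a sketch under my own assumptions, not
what any patch currently does:

    /* Sketch: at COMMIT time on a synchronous master, only genuinely
     * in-sync slaves count.  Structure and names are illustrative. */
    #include <stdbool.h>

    typedef struct
    {
        bool in_sync;   /* is this slave fully caught up right now? */
        bool acked;     /* has it acknowledged this commit's WAL position? */
    } StandbyInfo;

    /* A commit is durable once every in-sync slave has acknowledged it.
     * A slave that is not in sync is simply not a synchronous slave, so
     * it is skipped rather than allowed to hold the commit hostage. */
    static bool
    commit_is_durable(const StandbyInfo *slaves, int nslaves)
    {
        int i;

        for (i = 0; i < nslaves; i++)
        {
            if (!slaves[i].in_sync)
                continue;            /* not in sync: not a sync slave at all */
            if (!slaves[i].acked)
                return false;        /* still waiting on a real sync slave */
        }
        return true;
    }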

Regards,
--
dim


