Discussion: Skip checkpoint on promoting from streaming replication


Skip checkpoint on promoting from streaming replication

From: Kyotaro HORIGUCHI
Date:
Hello,

I have a problem with promotion from hot standby: the exclusive
checkpoint at the end of recovery delays completion of the promotion.

This checkpoint is a "shutdown checkpoint" by convention, in relation
to the TLI increment, according to the comment shown below. I take
"shutdown checkpoint" to mean an exclusive checkpoint - in other
words, a checkpoint during which no WAL can be inserted.

>      * one. This is not particularly critical, but since we may be
>      * assigning a new TLI, using a shutdown checkpoint allows us to have
>      * the rule that TLI only changes in shutdown checkpoints, which
>      * allows some extra error checking in xlog_redo.

Relying on this, I suppose we can omit it if the latest checkpoint
has been taken recently enough that crash recovery remains possible
afterwards. That condition can be secured by my other patch, for
checkpoint_segments on the standby.

After applying this patch, the checkpoint after archive recovery near
the end of StartupXLOG() is skipped under the following conditions:

- The WAL receiver has been launched at some point (checked with WalRcvStarted()).

- XLogCheckpointNeeded(), called against replayEndRecPtr, reports that no checkpoint is needed.

What do you think about this?


This patch needs WalRcvStarted(), introduced by my other patch:

http://archives.postgresql.org/pgsql-hackers/2012-06/msg00287.php

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center

== My e-mail address has been changed since Apr. 1, 2012.
diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index 0f2678c..48c0cf6 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -6905,9 +6905,41 @@ StartupXLOG(void)
          * allows some extra error checking in xlog_redo.
          */
         if (bgwriterLaunched)
-            RequestCheckpoint(CHECKPOINT_END_OF_RECOVERY |
-                              CHECKPOINT_IMMEDIATE |
-                              CHECKPOINT_WAIT);
+        {
+            bool do_checkpoint = true;
+
+            if (WalRcvStarted())
+            {
+                /*
+                 * This shutdown checkpoint on promotion retards failover
+                 * completion. In spite of the rule for TLI and shutdown
+                 * checkpoints mentioned above, we want to skip this
+                 * checkpoint, securing recoverability via crash recovery
+                 * after this point.
+                 */
+                uint32 replayEndId = 0;
+                uint32 replayEndSeg = 0;
+                XLogRecPtr replayEndRecPtr;
+                /* use volatile pointer to prevent code rearrangement */
+                volatile XLogCtlData *xlogctl = XLogCtl;
+
+                SpinLockAcquire(&xlogctl->info_lck);
+                replayEndRecPtr = xlogctl->replayEndRecPtr;
+                SpinLockRelease(&xlogctl->info_lck);
+                XLByteToSeg(replayEndRecPtr, replayEndId, replayEndSeg);
+                if (!XLogCheckpointNeeded(replayEndId, replayEndSeg))
+                {
+                    do_checkpoint = false;
+                    ereport(LOG,
+                            (errmsg("Checkpoint on recovery end was skipped")));
+                }
+            }
+
+            if (do_checkpoint)
+                RequestCheckpoint(CHECKPOINT_END_OF_RECOVERY |
+                                  CHECKPOINT_IMMEDIATE |
+                                  CHECKPOINT_WAIT);
+        }
         else
             CreateCheckPoint(CHECKPOINT_END_OF_RECOVERY | CHECKPOINT_IMMEDIATE);

Re: Skip checkpoint on promoting from streaming replication

From: Simon Riggs
Date:
On 8 June 2012 09:22, Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote:

> I have a problem with promotion from hot standby: the exclusive
> checkpoint at the end of recovery delays completion of the promotion.

Agreed, we have that problem.

> Relying on this, I suppose we can omit it if the latest checkpoint
> has been taken recently enough that crash recovery remains possible
> afterwards.

I don't see any reason to special case this. If a checkpoint has no
work to do, then it will go very quickly. Why seek to speed it up even
further?

> That condition can be secured by my other patch, for
> checkpoint_segments on the standby.

More frequent checkpoints are very unlikely to secure a condition that
no checkpoint at all is required at failover.

Making a change that has a negative effect for everybody, in the hope
of sometimes improving performance for something that is already fast
doesn't seem a good trade off to me.

Regrettably, the line of thought explained here does not seem useful to me.

As you know, I was working on avoiding shutdown checkpoints completely
myself. You are welcome to work on the approach Fujii and I discussed.

--
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: Skip checkpoint on promoting from streaming replication

From: Kyotaro HORIGUCHI
Date:
Hello, sorry for my vague understanding.

> > Relying on this, I suppose we can omit it if the latest checkpoint
> > has been taken recently enough that crash recovery remains possible
> > afterwards.
> 
> I don't see any reason to special case this. If a checkpoint has no
> work to do, then it will go very quickly. Why seek to speed it up even
> further?

I want the standby to start serving as soon as possible on failover
in an HA cluster, even if the gain is only a few seconds.

> > That condition can be secured by my other patch, for
> > checkpoint_segments on the standby.
> 
> More frequent checkpoints are very unlikely to secure a condition that
> no checkpoint at all is required at failover.

I understand that the checkpoint at the end of recovery is
indispensable to ensure that crash recovery is possible afterward.
Putting aside the convention tying the TLI increment to a shutdown
checkpoint, the shutdown checkpoint there seems omittable to me if
(and not 'only if', I suppose) crash recovery is already possible at
that point.

The shutdown checkpoint itself seems dispensable to me, but, taking
the TLI convention into consideration, I admit I am not fully
convinced.


> Making a change that has a negative effect for everybody, in the hope
> of sometimes improving performance for something that is already fast
> doesn't seem a good trade off to me.

Hmm.. I suppose the negative effect you've pointed out is the
possible slowdown of the hot standby caused by the extra checkpoints
being discussed in another thread - is that correct? Could you accept
this kind of modification if it could be turned off by, say, a GUC?
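
For illustration, such a knob might look like the following boolean GUC
entry in guc.c - a sketch only; the name skip_promotion_checkpoint is
hypothetical and not part of any posted patch:

/* Hypothetical GUC -- a sketch, not part of any posted patch. */
static bool skip_promotion_checkpoint = false;

/* Entry for the ConfigureNamesBool[] table in guc.c */
{
    {"skip_promotion_checkpoint", PGC_POSTMASTER, WAL_SETTINGS,
        gettext_noop("Skips the end-of-recovery checkpoint on promotion "
                     "when crash recovery is already guaranteed."),
        NULL
    },
    &skip_promotion_checkpoint,
    false,
    NULL, NULL, NULL
},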

> Regrettably, the line of thought explained here does not seem useful to me.
> 
> As you know, I was working on avoiding shutdown checkpoints completely
> myself. You are welcome to work on the approach Fujii and I discussed.

Sorry, I'm afraid I've failed to find that discussion. Could you
give me a pointer to it? Of course I'd be very happy if the
checkpoints could be completely avoided with that approach.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center

== My e-mail address has been changed since Apr. 1, 2012.


Re: Skip checkpoint on promoting from streaming replication

From: Simon Riggs
Date:
On 12 June 2012 03:38, Kyotaro HORIGUCHI
<horiguchi.kyotaro@lab.ntt.co.jp> wrote:
> Hello, sorry for my vague understanding.
>
>> > Relying on this, I suppose we can omit it if the latest checkpoint
>> > has been taken recently enough that crash recovery remains possible
>> > afterwards.
>>
>> I don't see any reason to special case this. If a checkpoint has no
>> work to do, then it will go very quickly. Why seek to speed it up even
>> further?
>
> I want the standby to start serving as soon as possible on failover
> in an HA cluster, even if the gain is only a few seconds.

Please implement a prototype and measure how many seconds we are discussing.


>> > That condition can be secured by my other patch, for
>> > checkpoint_segments on the standby.
>>
>> More frequent checkpoints are very unlikely to secure a condition that
>> no checkpoint at all is required at failover.
>
> I understand that the checkpoint at the end of recovery is
> indispensable to ensure that crash recovery is possible afterward.
> Putting aside the convention tying the TLI increment to a shutdown
> checkpoint, the shutdown checkpoint there seems omittable to me if
> (and not 'only if', I suppose) crash recovery is already possible at
> that point.
>
> The shutdown checkpoint itself seems dispensable to me, but, taking
> the TLI convention into consideration, I admit I am not fully
> convinced.
>
>
>> Making a change that has a negative effect for everybody, in the hope
>> of sometimes improving performance for something that is already fast
>> doesn't seem a good trade off to me.
>
> Hmm.. I suppose the negative effect you've pointed out is the
> possible slowdown of the hot standby caused by the extra checkpoints
> being discussed in another thread - is that correct? Could you accept
> this kind of modification if it could be turned off by, say, a GUC?


This proposal is for a performance enhancement. We normally require
some proof that the enhancement is real and that it doesn't have a
poor effect on people not using it. Please make measurements.

It's easy to force more frequent checkpoints if you wish, so please
compare against the case of more frequent checkpoints.


>> Regrettably, the line of thought explained here does not seem useful to me.
>>
>> As you know, I was working on avoiding shutdown checkpoints completely
>> myself. You are welcome to work on the approach Fujii and I discussed.
>
> Sorry, I'm afraid I've failed to find that discussion. Could you
> give me a pointer to it? Of course I'd be very happy if the
> checkpoints could be completely avoided with that approach.

Discussion on a patch submitted to the January 2012 CommitFest to
reduce failover time.

--
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: Skip checkpoint on promoting from streaming replication

From: Kyotaro HORIGUCHI
Date:
Hello, thank you for pointing me to the previous discussion. I'll
look into it now.

> > I want the standby to start serving as soon as possible on failover
> > in an HA cluster, even if the gain is only a few seconds.
> 
> Please implement a prototype and measure how many seconds we
> are discussing.

I'm sorry to have omitted the measurement data. (Some of it may have
been shown in the previous discussion, though.)

Our previous measurement of failover with PostgreSQL 9.1 + Pacemaker
under some workload showed that the shutdown checkpoint takes 8
seconds out of 42 seconds of total failover time (about 20%).

OS        : RHEL6.1-64
DBMS      : PostgreSQL 9.1.1
HA        : pacemaker-1.0.11-1.2.2 x64
Repl      : sync
Workload  : master : pgbench / scale factor = 100 (aprx. 1.5GB)
            standby: none (warm standby)

shared_buffers      = 2.5GB
wal_buffers         = 4MB
checkpoint_segments = 300
checkpoint_timeout  = 15min
checkpoint_completion_target = 0.7
archive_mode        = on

WAL segment consumption was about 310 segments / 15 min under the
conditions above (roughly 310 x 16MB, i.e. about 5GB of WAL per 15
minutes, so checkpoint_segments = 300 triggers checkpoints at about
the same pace as checkpoint_timeout).

> This proposal is for a performance enhancement. We normally require
> some proof that the enhancement is real and that it doesn't have a
> poor effect on people not using it. Please make measurements.

In the benchmark above, the extra load from more frequent checkpoints
(matching the master's) was not a problem. On the other hand, failover
time is expected to shorten from 42 seconds to 34 seconds by omitting
the shutdown checkpoint. (But I have not measured that yet.)

> Discussion on a patch submitted to the January 2012 CommitFest to
> reduce failover time.

Thank you, and I'm sorry for missing it. I've found those discussions
and will read them now.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center

== My e-mail address has been changed since Apr. 1, 2012.


Re: Skip checkpoint on promoting from streaming replication

From: Kyotaro HORIGUCHI
Date:
Hello, this is the new version of the patch.

Your patch introduced a new WAL record type, XLOG_END_OF_RECOVERY, to
mark the point where the TLI changes. But I think that information is
already stored in the history files and ready to use in the current
code.

I looked into your first patch and looked over the discussion on it,
and found that my understanding - that a TLI switch can be handled by
crash recovery as well as by archive recovery - was half wrong. The
correct half is that crash recovery can handle it if we properly set
TimeLineID in StartupXLOG().

To achieve this, I added a new pg_control field, 'latestTLI' (a
better name is welcome), and made it always track the latest TLI,
independently of checkpoints. The recovery target in StartupXLOG() is
then set from this field. Additionally, in the previous patch I
checked only checkpoint intervals, but as you said this had no
effect, because pg_xlog already preserves as many WAL files as crash
recovery requires, as I knew...


The new patch seems to handle a TLI change with no checkpoint
following it correctly, and archive recovery and PITR also seem to
work correctly. A test script for the former case is attached too.

The new patch consists of two parts. These should perhaps be treated
as two separate patches:

1. Allow_TLI_Increment_without_Checkpoint_20120618.patch

   Removes the assumption, following from the 'convention', that the
   TLI should be incremented only on a shutdown checkpoint. This
   actually seems to cause no problem, as the comment says ("This is
   not particularly critical").

2. Skip_Checkpoint_on_Promotion_20120618.patch

   Skips the checkpoint if the redo record can be read in place.

3. Test script for the TLI increment patch.

   This only shows how the patch was tested. The point is to create a
   TLI increment point not followed by any kind of checkpoint.
   pg_controldata shows the following after running this test script
   ("Latest timeline ID" is the new field):

    > pg_control version number:            923
    > Database cluster state:               in production
   !> Latest timeline ID:                   2
    > Latest checkpoint location:           0/2000058
    > Prior checkpoint location:            0/2000058
    > Latest checkpoint's REDO location:    0/2000020
   !> Latest checkpoint's TimeLineID:       1

   We will see this change as follows after crash recovery:

    > Latest timeline ID:                   2
    > Latest checkpoint location:           0/54D9918
    > Prior checkpoint location:            0/2000058
    > Latest checkpoint's REDO location:    0/54D9918
    > Latest checkpoint's TimeLineID:       2

   Then we should see two 'ABCDE...' rows and two 'VWXYZ...' rows in
   the table after the crash recovery.

What do you think about this?

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center

== My e-mail address has been changed since Apr. 1, 2012.
diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index 0d68760..70b4972 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -5276,6 +5276,7 @@ BootStrapXLOG(void)
     ControlFile->system_identifier = sysidentifier;
     ControlFile->state = DB_SHUTDOWNED;
     ControlFile->time = checkPoint.time;
+    ControlFile->latestTLI = ThisTimeLineID;
     ControlFile->checkPoint = checkPoint.redo;
     ControlFile->checkPointCopy = checkPoint;
 
@@ -6083,7 +6084,7 @@ StartupXLOG(void)
      * Initialize on the assumption we want to recover to the same timeline
      * that's active according to pg_control.
      */
-    recoveryTargetTLI = ControlFile->checkPointCopy.ThisTimeLineID;
+    recoveryTargetTLI = ControlFile->latestTLI;
 
     /*
      * Check for recovery control file, and if so set up state for offline
@@ -6100,11 +6101,11 @@ StartupXLOG(void)
      * timeline.
      */
     if (!list_member_int(expectedTLIs,
-                         (int) ControlFile->checkPointCopy.ThisTimeLineID))
+                         (int) ControlFile->latestTLI))
         ereport(FATAL,
                 (errmsg("requested timeline %u is not a child of database system timeline %u",
                         recoveryTargetTLI,
-                        ControlFile->checkPointCopy.ThisTimeLineID)));
+                        ControlFile->latestTLI)));
 
     /*
      * Save the selected recovery target timeline ID and
@@ -6791,9 +6792,12 @@ StartupXLOG(void)
      *
      * In a normal crash recovery, we can just extend the timeline we were in.
      */
+    ThisTimeLineID = findNewestTimeLine(recoveryTargetTLI);
+
     if (InArchiveRecovery)
     {
-        ThisTimeLineID = findNewestTimeLine(recoveryTargetTLI) + 1;
+        ThisTimeLineID++;
         ereport(LOG,
                 (errmsg("selected new timeline ID: %u", ThisTimeLineID)));
         writeTimeLineHistory(ThisTimeLineID, recoveryTargetTLI,
 
@@ -6946,6 +6950,7 @@ StartupXLOG(void)
     LWLockAcquire(ControlFileLock, LW_EXCLUSIVE);
     ControlFile->state = DB_IN_PRODUCTION;
     ControlFile->time = (pg_time_t) time(NULL);
+    ControlFile->latestTLI = ThisTimeLineID;
     UpdateControlFile();
     LWLockRelease(ControlFileLock);
 
@@ -8710,12 +8715,6 @@ xlog_redo(XLogRecPtr lsn, XLogRecord *record)
             SpinLockRelease(&xlogctl->info_lck);
         }
 
-        /* TLI should not change in an on-line checkpoint */
-        if (checkPoint.ThisTimeLineID != ThisTimeLineID)
-            ereport(PANIC,
-                    (errmsg("unexpected timeline ID %u (should be %u) in checkpoint record",
-                            checkPoint.ThisTimeLineID, ThisTimeLineID)));
-
         RecoveryRestartPoint(&checkPoint);
     }
     else if (info == XLOG_NOOP)
diff --git a/src/bin/pg_controldata/pg_controldata.c b/src/bin/pg_controldata/pg_controldata.c
index 38c263c..7f2cdb8 100644
--- a/src/bin/pg_controldata/pg_controldata.c
+++ b/src/bin/pg_controldata/pg_controldata.c
@@ -192,6 +192,8 @@ main(int argc, char *argv[])
            dbState(ControlFile.state));
     printf(_("pg_control last modified:             %s\n"),
            pgctime_str);
+    printf(_("Latest timeline ID:                   %d\n"),
+           ControlFile.latestTLI);
     printf(_("Latest checkpoint location:           %X/%X\n"),
            ControlFile.checkPoint.xlogid,
            ControlFile.checkPoint.xrecoff);
 
diff --git a/src/include/catalog/pg_control.h b/src/include/catalog/pg_control.h
index 5cff396..c78d483 100644
--- a/src/include/catalog/pg_control.h
+++ b/src/include/catalog/pg_control.h
@@ -21,7 +21,7 @@
 
 /* Version identifier for this pg_control format */
-#define PG_CONTROL_VERSION    922
+#define PG_CONTROL_VERSION    923
 
 /*
  * Body of CheckPoint XLOG records.  This is declared here because we keep
@@ -116,6 +116,7 @@ typedef struct ControlFileData
      */
     DBState        state;            /* see enum above */
     pg_time_t   time;            /* time stamp of last pg_control update */
+    TimeLineID  latestTLI;      /* latest TLI we reached */
     XLogRecPtr    checkPoint;        /* last check point record ptr */
     XLogRecPtr    prevCheckPoint; /* previous check point record ptr */
diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index 70b4972..574ecfb 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -6914,9 +6914,17 @@ StartupXLOG(void)
          * allows some extra error checking in xlog_redo.
          */
         if (bgwriterLaunched)
-            RequestCheckpoint(CHECKPOINT_END_OF_RECOVERY |
-                              CHECKPOINT_IMMEDIATE |
-                              CHECKPOINT_WAIT);
+        {
+            checkPointLoc = ControlFile->prevCheckPoint;
+            record = ReadCheckpointRecord(checkPointLoc, 2);
+            if (record != NULL)
+                ereport(LOG,
+                        (errmsg("Checkpoint on recovery end was skipped")));
+            else
+                RequestCheckpoint(CHECKPOINT_END_OF_RECOVERY |
+                                  CHECKPOINT_IMMEDIATE |
+                                  CHECKPOINT_WAIT);
+        }
         else
             CreateCheckPoint(CHECKPOINT_END_OF_RECOVERY | CHECKPOINT_IMMEDIATE);
#! /bin/sh
export PGDATA1=/ext/horiguti/pgdata1
export PGDATA2=/ext/horiguti/pgdata2
echo Shutting down servers
pg_ctl -D $PGDATA2 stop -m i
pg_ctl -D $PGDATA1 stop -m i
sleep 5
echo Remove old clusters
rm -rf $PGDATA1 $PGDATA2 /tmp/hoge /tmp/hoge1
echo Creating master database cluster
initdb -D $PGDATA1 --no-locale --encoding=utf8
cp ~/work_repl/mast_conf/* $PGDATA1
echo Starting master
pg_ctl -D $PGDATA1 start
sleep 5
echo Taking base backup for slave
pg_basebackup -h /tmp -p 5432 -D $PGDATA2 -X stream
cp ~/work_repl/repl_conf/* $PGDATA2
echo Done, starting slave
pg_controldata $PGDATA2 > ~/control_01_before_slave_start
pg_ctl -D $PGDATA2 start
sleep 5
pg_controldata $PGDATA2 > ~/control_02_after_slave_start
echo creating database.
createdb $USER
echo Advancing WAL
psql -h /tmp -p 5432 -c "create table foo (a text)";
psql -h /tmp -p 5432 -c "insert into foo (select repeat('abcde', 1000) from generate_series(1, 200000)); delete from
foo;"
psql -h /tmp -p 5432 -c "insert into foo (select repeat('ABCDE', 10) from generate_series(1, 2));"
pg_controldata $PGDATA2 > ~/control_03_WAL_proceeded
echo Promoting slave
pg_ctl -D $PGDATA2 promote
sleep 5
pg_controldata $PGDATA2 > ~/control_04_After_promoted
echo "Killing PostgreSQL's without taking checkpoint"
psql -h /tmp -p 5433 -c "insert into foo (select repeat('VWXYZ', 10) from generate_series(1, 2));"
killall -9 postgres
pg_controldata $PGDATA2 > ~/control_05_Killed_without_checkpoint
rm -f /tmp/hoge /tmp/hoge1
echo DONE

Re: Skip checkpoint on promoting from streaming replication

From: Fujii Masao
Date:
On Mon, Jun 18, 2012 at 5:42 PM, Kyotaro HORIGUCHI
<horiguchi.kyotaro@lab.ntt.co.jp> wrote:
> What do you think about this?

What happens if the server skips an end-of-recovery checkpoint, is promoted to
the master, runs some write transactions, crashes, and restarts automatically
before it completes a checkpoint? In this case, the server needs to do crash
recovery from the last checkpoint record with the old timeline ID to the latest
WAL record with the new timeline ID. How does crash recovery recover across the
timeline change?

Regards,

-- 
Fujii Masao


Re: Skip checkpoint on promoting from streaming replication

From: Kyotaro HORIGUCHI
Date:
Thank you.

> What happens if the server skips an end-of-recovery checkpoint,
> is promoted to the master, runs some write transactions,
> crashes, and restarts automatically before it completes a
> checkpoint? In this case, the server needs to do crash recovery
> from the last checkpoint record with the old timeline ID to the
> latest WAL record with the new timeline ID. How does crash
> recovery recover across the timeline change?

Basically it is the same as archive recovery, as far as I saw. It is
already implemented to work that way.

After this patch is applied, StartupXLOG() gets its recoveryTargetTLI
from the new field latestTLI in the control file instead of from the
latest checkpoint. The latest checkpoint record still carries its TLI
and WAL location as before, but its TLI no longer has a significant
meaning in the recovery sequence.

Consider the following case:

      |seg 1     | seg 2    |
TLI 1 |...c......|....000000|
         C           P  X
TLI 2            |........00|

* C - checkpoint, P - promotion, X - crash just after here

This shows a situation where the latest checkpoint (restartpoint) was
taken at TLI=1/SEG=1/OFF=4, promotion happened at TLI=1/SEG=2/OFF=5,
and the server crashed just after TLI=2/SEG=2/OFF=8. Promotion itself
inserts no WAL records but creates a copy of the current segment for
the new TLI; the file for TLI=2/SEG=1 should not exist. (Who would
create it?)

The control file will look as follows:

latest checkpoint : TLI=1/SEG=1/OFF=4
latest TLI        : 2

So the crash recovery sequence starts from SEG=1/OFF=4. expectedTLIs
will be (2, 1), so 1 will naturally be selected as the TLI for SEG=1
and 2 for SEG=2 in XLogFileReadAnyTLI().

In closer detail, startup constructs expectedTLIs by reading the
timeline history file corresponding to recoveryTargetTLI, then runs
the recovery sequence from the redo point of the latest checkpoint,
using the WAL with the largest TLI - distinguished by file name, not
by page header - within expectedTLIs in XLogPageRead(). The only
difference from archive recovery is that XLogFileReadAnyTLI() reads
only the WAL files already sitting in the pg_xlog directory and does
not reach for the archive. The pages with the new TLI are naturally
picked up in this sequence, as mentioned above, and replay stops at
the last readable record.

The latestTLI field in the control file is updated just after the TLI
is incremented and the new WAL files with the new TLI are created. So
the crash recovery sequence won't stop before reaching the WAL with
the new TLI designated in the control file.
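
To make the per-segment timeline selection concrete, here is a
simplified sketch modeled on XLogFileReadAnyTLI() - not the actual
PostgreSQL code; try_open_segment() is a hypothetical stand-in for
opening a WAL segment file in pg_xlog by name:

/*
 * Simplified sketch modeled on XLogFileReadAnyTLI(); hypothetical code.
 * tliList stands for expectedTLIs, ordered newest timeline first, and
 * try_open_segment() stands for opening a WAL segment file by name.
 */
static int
open_segment_any_tli(uint32 log, uint32 seg, List *tliList)
{
    ListCell   *cell;

    foreach(cell, tliList)
    {
        TimeLineID  tli = (TimeLineID) lfirst_int(cell);
        int         fd = try_open_segment(tli, log, seg);

        /*
         * The first (newest) timeline that has this segment on disk wins:
         * in the example above, SEG=1 is found only with TLI 1, while
         * SEG=2 exists for both TLIs and is read from TLI 2.
         */
        if (fd >= 0)
            return fd;
    }
    return -1;                  /* segment not found on any timeline */
}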


regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center

== My e-mail address has been changed since Apr. 1, 2012.


Re: Skip checkpoint on promoting from streaming replication

From: Fujii Masao
Date:
On Tue, Jun 19, 2012 at 5:30 PM, Kyotaro HORIGUCHI
<horiguchi.kyotaro@lab.ntt.co.jp> wrote:
> Thank you.
>
>> What happens if the server skips an end-of-recovery checkpoint,
>> is promoted to the master, runs some write transactions,
>> crashes, and restarts automatically before it completes a
>> checkpoint? In this case, the server needs to do crash recovery
>> from the last checkpoint record with the old timeline ID to the
>> latest WAL record with the new timeline ID. How does crash
>> recovery recover across the timeline change?
>
> Basically it is the same as archive recovery, as far as I saw. It is
> already implemented to work that way.
>
> After this patch is applied, StartupXLOG() gets its recoveryTargetTLI
> from the new field latestTLI in the control file instead of from the
> latest checkpoint. The latest checkpoint record still carries its TLI
> and WAL location as before, but its TLI no longer has a significant
> meaning in the recovery sequence.
>
> Consider the following case:
>
>       |seg 1     | seg 2    |
> TLI 1 |...c......|....000000|
>          C           P  X
> TLI 2            |........00|
>
> * C - checkpoint, P - promotion, X - crash just after here
>
> This shows a situation where the latest checkpoint (restartpoint) was
> taken at TLI=1/SEG=1/OFF=4, promotion happened at TLI=1/SEG=2/OFF=5,
> and the server crashed just after TLI=2/SEG=2/OFF=8. Promotion itself
> inserts no WAL records but creates a copy of the current segment for
> the new TLI; the file for TLI=2/SEG=1 should not exist. (Who would
> create it?)
>
> The control file will look as follows:
>
> latest checkpoint : TLI=1/SEG=1/OFF=4
> latest TLI        : 2
>
> So the crash recovery sequence starts from SEG=1/OFF=4. expectedTLIs
> will be (2, 1), so 1 will naturally be selected as the TLI for SEG=1
> and 2 for SEG=2 in XLogFileReadAnyTLI().
>
> In closer detail, startup constructs expectedTLIs by reading the
> timeline history file corresponding to recoveryTargetTLI, then runs
> the recovery sequence from the redo point of the latest checkpoint,
> using the WAL with the largest TLI - distinguished by file name, not
> by page header - within expectedTLIs in XLogPageRead(). The only
> difference from archive recovery is that XLogFileReadAnyTLI() reads
> only the WAL files already sitting in the pg_xlog directory and does
> not reach for the archive. The pages with the new TLI are naturally
> picked up in this sequence, as mentioned above, and replay stops at
> the last readable record.
>
> The latestTLI field in the control file is updated just after the TLI
> is incremented and the new WAL files with the new TLI are created. So
> the crash recovery sequence won't stop before reaching the WAL with
> the new TLI designated in the control file.

Is it guaranteed that all the files (e.g., the latest timeline history file)
required for such crash recovery exist in pg_xlog? If yes, your
approach might work well.

Regards,

--
Fujii Masao


Re: Skip checkpoint on promoting from streaming replication

From: Kyotaro HORIGUCHI
Date:
Hello,

> Is it guaranteed that all the files (e.g., the latest timeline history file)
> required for such crash recovery exist in pg_xlog? If yes, your
> approach might work well.

Particularly regarding promotion, the files required are the history
file of the latest timeline, the WAL file containing the redo location
of the latest restartpoint, and all subsequent WAL files, each from
the appropriate timeline.

In the current (9.2/9.3dev) implementation, as far as I know, archive
recovery and streaming replication create the regular WAL files
required during the recovery sequence in the slave's pg_xlog
directory, and only a restartpoint removes the files older than the
one on which the restartpoint takes place. If so, all the required
files mentioned above should be in the pg_xlog directory. Is there
something I've forgotten?

However, it would be more robust if we could check that all required
files are available on promotion. I can imagine two approaches that
might accomplish that.

1. Record the ID of any WAL segment that is read without being present
   in pg_xlog as a regular WAL file (see the sketch after this list).

   For example, if we modify archive recovery to keep working WAL files
   out of pg_xlog, or to give them a special name that cannot be
   referred to when fetching WAL in crash recovery afterward, we record
   the ID of such a segment. The shutdown checkpoint on promotion or at
   the end of recovery cannot be skipped if this recorded segment ID is
   equal to or larger than the redo point of the latest checkpoint.
   This approach of course reduces the chance to skip the shutdown
   checkpoint compared to forcing all required files to be copied into
   pg_xlog, but it still seems effective for the most common case:
   promoting long enough after WAL streaming started that a
   restartpoint has been taken on a WAL file in pg_xlog.

   I hope this is promising.

   Temporary WAL files for streaming? That seems to me to make the
   shutdown checkpoint mandatory, since no WAL files from before the
   promotion would be accessible at that moment. On the other hand,
   somehow preserving the WALs after the latest restartpoint seems to
   make no significant difference from the current behavior in terms
   of disk consumption.

2. Check for all required WAL files on promotion or at the end of
   recovery.

   We could check the existence of all required files on promotion,
   scanning in a manner similar to recovery. But this requires adding
   code similar to what already exists, or the labor of weaving a new
   function into the existing code. Furthermore, it seems likely to
   take a certain amount of time at promotion (or at the end of
   recovery).

   The discussion about temporary WAL files would be the same as for 1.
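
A minimal sketch of the test in approach 1 might look like this; the
maxFetchedLog/maxFetchedSeg counters are hypothetical and would record
the newest WAL segment that was read without being kept in pg_xlog as a
regular file:

/*
 * Hypothetical sketch of approach 1 -- not from any posted patch.
 * maxFetchedLog/maxFetchedSeg would be updated wherever WAL is read
 * without being materialized in pg_xlog as a regular file.
 */
static uint32 maxFetchedLog = 0;
static uint32 maxFetchedSeg = 0;

static bool
can_skip_end_of_recovery_checkpoint(void)
{
    uint32      redoLog;
    uint32      redoSeg;

    /* Segment containing the redo pointer of the latest checkpoint. */
    XLByteToSeg(ControlFile->checkPointCopy.redo, redoLog, redoSeg);

    /*
     * If any segment at or after the redo point never existed in pg_xlog
     * as a regular file, crash recovery could not re-read it, so the
     * shutdown checkpoint must not be skipped.
     */
    if (maxFetchedLog > redoLog ||
        (maxFetchedLog == redoLog && maxFetchedSeg >= redoSeg))
        return false;

    return true;
}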


regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center

== My e-mail address has been changed since Apr. 1, 2012.


Re: Skip checkpoint on promoting from streaming replication

From: Simon Riggs
Date:
On 22 June 2012 05:03, Kyotaro HORIGUCHI
<horiguchi.kyotaro@lab.ntt.co.jp> wrote:

>    I hope this is promising.

I've reviewed this and thought about it over some time.

At first I was unhappy that you'd removed the restriction that
timelines only change on a shutdown checkpoint. But the reality is
that timelines can change at any point in the WAL stream - the only
way to tell the difference between the end of WAL and a timeline
change is to look for later timelines.
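
That probing is essentially what findNewestTimeLine() already does -
roughly the following, where existsTimeLineHistory() checks whether a
timeline history file is present:

/* Roughly how findNewestTimeLine() probes for later timelines. */
static TimeLineID
findNewestTimeLine(TimeLineID startTLI)
{
    TimeLineID  newestTLI = startTLI;
    TimeLineID  probeTLI;

    /*
     * Keep probing for a history file for the next timeline ID; the
     * last one that exists is the newest timeline we know of.
     */
    for (probeTLI = startTLI + 1;; probeTLI++)
    {
        if (existsTimeLineHistory(probeTLI))
            newestTLI = probeTLI;
        else
            break;
    }
    return newestTLI;
}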

The rest of the logic seems OK, but it's a big thing we're doing here,
so it will take longer yet. Putting all the theory into comments in
the code would certainly help here.

I don't have much else to say on this right now. I'm not committing
anything on this now since I'm about to go on holiday, but I'll be
looking at this when I get back.

For now, I'm going to mark this as Returned With Feedback, but please
don't be discouraged by that.

--
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: Skip checkpoint on promoting from streaming replication

From: Kyotaro HORIGUCHI
Date:
Hello, sorry for the long absence.

> At first I was unhappy that you'd removed the restriction that
> timelines only change on a shutdown checkpoint. But the reality is
> that timelines can change at any point in the WAL stream - the only
> way to tell the difference between the end of WAL and a timeline
> change is to look for later timelines.

Yes, I felt uncomfortable with that point. The overall picture of
timeline evolution in the WAL stream seems obscure, and it should be
made clear before doing this. I couldn't present such a clear picture
for this CF.

> The rest of the logic seems OK, but it's a big thing we're doing here,
> so it will take longer yet. Putting all the theory into comments in
> the code would certainly help here.

OK, agreed.

> I don't have much else to say on this right now. I'm not committing
> anything on this now since I'm about to go on holiday, but I'll be
> looking at this when I get back.

Have a nice holiday.

> For now, I'm going to mark this as Returned With Feedback, but please
> don't be discouraged by that.

I think we still have enough time to think about it, and I believe
this will be worth doing.

Thank you.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center

== My e-mail address has been changed since Apr. 1, 2012.



Re: Skip checkpoint on promoting from streaming replication

From: Alvaro Herrera
Date:
This patch seems to have been neglected by both its submitter and the
reviewer.  Also, Simon said he was going to set it
returned-with-feedback on his last reply, but I see it as needs-review
still in the CF app.  Is this something that is going to be reconsidered
and resubmitted for the next commitfest?  If so, please close it up in
the current one.

Thanks.

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services



Re: Skip checkpoint on promoting from streaming replication

From: Simon Riggs
Date:
On 18 October 2012 21:22, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:

> This patch seems to have been neglected by both its submitter and the
> reviewer.  Also, Simon said he was going to set it
> returned-with-feedback on his last reply, but I see it as needs-review
> still in the CF app.  Is this something that is going to be reconsidered
> and resubmitted for the next commitfest?  If so, please close it up in
> the current one.

I burned time on the unlogged table problems, so haven't got round to
this yet. I'm happier than I was with this.

I'm also conscious this is very important and there are no later patch
dependencies, so there's no rush to commit it and every reason to make
sure it happens without any mistakes. It will be there for 9.3.

--
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: Skip checkpoint on promoting from streaming replication

From: Simon Riggs
Date:
On 9 August 2012 10:45, Simon Riggs <simon@2ndquadrant.com> wrote:
> On 22 June 2012 05:03, Kyotaro HORIGUCHI
> <horiguchi.kyotaro@lab.ntt.co.jp> wrote:
>
>>    I hope this is promising.
>
> I've reviewed this and thought about it over some time.

I've been torn between the need to remove the checkpoint for speed and
being worried about the implications of doing so.

We promote in multiple use cases. When we end a PITR, or are
performing a switchover, it doesn't really matter how long the
shutdown checkpoint takes, so I'm inclined to leave it there in those
cases. For failover, we need fast promotion.

So my thinking is to make

  pg_ctl promote -m fast

be the way to initiate a fast failover that skips the shutdown checkpoint.

That way all existing applications work the same as before, while new
users that explicitly choose to do so will gain from the new option.
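
From the operator's side, the two modes would then look like this - a
sketch of the proposed interface; only "-m fast" is new:

# Sketch of the proposed interface; only "-m fast" is new.
pg_ctl -D $PGDATA promote            # switchover/PITR: checkpoint kept
pg_ctl -D $PGDATA promote -m fast    # failover: skip shutdown checkpoint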

--
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: Skip checkpoint on promoting from streaming replication

From: Simon Riggs
Date:
On 6 January 2013 21:58, Simon Riggs <simon@2ndquadrant.com> wrote:
> On 9 August 2012 10:45, Simon Riggs <simon@2ndquadrant.com> wrote:
>> On 22 June 2012 05:03, Kyotaro HORIGUCHI
>> <horiguchi.kyotaro@lab.ntt.co.jp> wrote:
>>
>>>    I hope this is promising.
>>
>> I've reviewed this and thought about it over some time.
>
> I've been torn between the need to remove the checkpoint for speed and
> being worried about the implications of doing so.
>
> We promote in multiple use cases. When we end a PITR, or are
> performing a switchover, it doesn't really matter how long the
> shutdown checkpoint takes, so I'm inclined to leave it there in those
> cases. For failover, we need fast promotion.
>
> So my thinking is to make   pg_ctl promote -m fast
> be the way to initiate a fast failover that skips the shutdown checkpoint.
>
> That way all existing applications work the same as before, while new
> users that explicitly choose to do so will gain from the new option.


Here's a patch to skip the checkpoint when we do

  pg_ctl promote -m fast

We keep the end of recovery checkpoint in all other cases.

The only thing left from Kyotaro's patch is a single line of code -
the call to ReadCheckpointRecord() that checks whether the WAL
records for the last two restartpoints are on disk, which was an
important line of code.

The patch implements a new record type, XLOG_END_OF_RECOVERY, that
behaves on replay like a shutdown checkpoint record. I put this back
in from my patch because I believe it's important that we have a clear
place where the WAL history changes timelineId. A WAL format version
bump is required.
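
For illustration, the payload of such a record can be small; a sketch
of what it might carry (illustrative only, not necessarily the patch's
exact definition):

/*
 * Illustrative sketch of an end-of-recovery record payload; not
 * necessarily the patch's exact definition.  On replay it marks the
 * point where the timeline changes, like a shutdown checkpoint does.
 */
typedef struct xl_end_of_recovery
{
    TimestampTz end_time;        /* time recovery ended */
    TimeLineID  ThisTimeLineID;  /* new TLI that starts at this record */
    TimeLineID  PrevTimeLineID;  /* TLI of the WAL before this record */
} xl_end_of_recovery;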

So far this is only barely tested, but since time is moving on, I
thought people might want to comment on it.

--
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

Attachments

Re: Skip checkpoint on promoting from streaming replication

From: Heikki Linnakangas
Date:
On 24.01.2013 18:24, Simon Riggs wrote:
> On 6 January 2013 21:58, Simon Riggs <simon@2ndquadrant.com> wrote:
>> I've been torn between the need to remove the checkpoint for speed and
>> being worried about the implications of doing so.
>>
>> We promote in multiple use cases. When we end a PITR, or are
>> performing a switchover, it doesn't really matter how long the
>> shutdown checkpoint takes, so I'm inclined to leave it there in those
>> cases. For failover, we need fast promotion.
>>
>> So my thinking is to make   pg_ctl promote -m fast
>> be the way to initiate a fast failover that skips the shutdown checkpoint.
>>
>> That way all existing applications work the same as before, while new
>> users that explicitly choose to do so will gain from the new option.
>
> Here's a patch to skip checkpoint when we do
>
>    pg_ctl promote -m fast
>
> We keep the end of recovery checkpoint in all other cases.

Hmm, there seems to be no way to do a "fast" promotion with a trigger file.

I'm a bit confused why there needs to be a special mode for this. Can't we
just always do the "fast" promotion? I agree that there's no urgency
when you're doing PITR, but it shouldn't do any harm either. Or perhaps
always do "fast" promotion when starting up from standby mode, and
"slow" otherwise.

Are we comfortable enough with this to skip the checkpoint after crash 
recovery?

I may be missing something, but it looks like after a "fast" promotion, 
you don't request a new checkpoint. So it can take quite a while for the 
next checkpoint to be triggered by checkpoint_timeout/segments. That 
shouldn't be a problem, but I feel that it'd be prudent to request a new 
checkpoint immediately (not necessarily an "immediate" checkpoint, though).
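
Concretely, that would be something like the following right after
recovery ends - a sketch; CHECKPOINT_FORCE without CHECKPOINT_IMMEDIATE
requests a checkpoint that is paced by checkpoint_completion_target:

/*
 * Sketch of the suggested behavior: force a checkpoint (there may be
 * no WAL activity yet to trigger one), but without CHECKPOINT_IMMEDIATE,
 * so it is spread out according to checkpoint_completion_target.
 */
RequestCheckpoint(CHECKPOINT_FORCE);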

> The only thing left from Kyotaro's patch is a single line of code -
> the call to ReadCheckpointRecord() that checks whether the WAL
> records for the last two restartpoints are on disk, which was an
> important line of code.

Why's that important, just for paranoia? If the last two restartpoints 
have disappeared, something's seriously wrong, and you will be in 
trouble e.g. if you crash at that point. Do we need to be extra paranoid 
when doing a "fast" promotion?

> The patch implements a new record type, XLOG_END_OF_RECOVERY, that
> behaves on replay like a shutdown checkpoint record. I put this back
> in from my patch because I believe it's important that we have a clear
> place where the WAL history changes timelineId. A WAL format version
> bump is required.

Agreed, such a WAL record is essential.

At replay, an end-of-recovery record should be a signal to the hot 
standby mechanism that there are no transactions running in the master 
at that point, same as a shutdown checkpoint.

- Heikki



Re: Skip checkpoint on promoting from streaming replication

From: Simon Riggs
Date:
On 24 January 2013 16:52, Heikki Linnakangas <hlinnakangas@vmware.com> wrote:
> On 24.01.2013 18:24, Simon Riggs wrote:
>>
>> On 6 January 2013 21:58, Simon Riggs <simon@2ndquadrant.com> wrote:
>>>
>>> I've been torn between the need to remove the checkpoint for speed and
>>> being worried about the implications of doing so.
>>>
>>> We promote in multiple use cases. When we end a PITR, or are
>>> performing a switchover, it doesn't really matter how long the
>>> shutdown checkpoint takes, so I'm inclined to leave it there in those
>>> cases. For failover, we need fast promotion.
>>>
>>> So my thinking is to make   pg_ctl promote -m fast
>>> be the way to initiate a fast failover that skips the shutdown
>>> checkpoint.
>>>
>>> That way all existing applications work the same as before, while new
>>> users that explicitly choose to do so will gain from the new option.
>>
>>
>> Here's a patch to skip checkpoint when we do
>>
>>    pg_ctl promote -m fast
>>
>> We keep the end of recovery checkpoint in all other cases.
>
>
> Hmm, there seems to be no way to do a "fast" promotion with a trigger file.

True. I thought we were moving away from trigger files to using "promote".

> I'm a bit confused why there needs to be a special mode for this. Can't we
> just always do the "fast" promotion? I agree that there's no urgency when
> you're doing PITR, but it shouldn't do any harm either. Or perhaps always do
> "fast" promotion when starting up from standby mode, and "slow" otherwise.
>
> Are we comfortable enough with this to skip the checkpoint after crash
> recovery?

I'm not. Maybe if we get no bugs we can make it do this always, in the
next release.

It's fast when it needs to be, and safe otherwise.


> I may be missing something, but it looks like after a "fast" promotion, you
> don't request a new checkpoint. So it can take quite a while for the next
> checkpoint to be triggered by checkpoint_timeout/segments. That shouldn't be
> a problem, but I feel that it'd be prudent to request a new checkpoint
> immediately (not necessarily an "immediate" checkpoint, though).

I thought of that and there is a long comment to explain why I didn't.

Two problems:

1) an immediate checkpoint can cause a disk/resource usage spike,
which is definitely not what you need just when a spike of connections
and new SQL hits the system.

2) If we did that, we would have an EndOfRecovery record, some other
records, and then a shutdown checkpoint.
As I write this, I realise (2) is wrong, because we shouldn't do a
shutdown checkpoint anyway.

But I still think (1) is a valid concern.

>> The only thing left from Kyotaro's patch is a single line of code -
>> the call to ReadCheckpointRecord() that checks whether the WAL
>> records for the last two restartpoints are on disk, which was an
>> important line of code.
>
>
> Why's that important, just for paranoia? If the last two restartpoints have
> disappeared, something's seriously wrong, and you will be in trouble e.g. if
> you crash at that point. Do we need to be extra paranoid when doing a "fast"
> promotion?

The check is cheap, so what do we gain by skipping it?

>> The patch implements a new record type, XLOG_END_OF_RECOVERY, that
>> behaves on replay like a shutdown checkpoint record. I put this back
>> in from my patch because I believe it's important that we have a clear
>> place where the WAL history changes timelineId. A WAL format version
>> bump is required.
>
>
> Agreed, such a WAL record is essential.
>
> At replay, an end-of-recovery record should be a signal to the hot standby
> mechanism that there are no transactions running in the master at that
> point, same as a shutdown checkpoint.

I had a reason why I didn't do that, but it seems to have slipped my mind.

If I can't remember, I'll add it.

--
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: Skip checkpoint on promoting from streaming replication

From: Simon Riggs
Date:
On 24 January 2013 17:44, Simon Riggs <simon@2ndquadrant.com> wrote:

>> At replay, an end-of-recovery record should be a signal to the hot standby
>> mechanism that there are no transactions running in the master at that
>> point, same as a shutdown checkpoint.
>
> I had a reason why I didn't do that, but it seems to have slipped my mind.
>
> If I can't remember, I'll add it.

I think it was simply to keep things simple and avoid bugs in this release.

--
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: Skip checkpoint on promoting from streaming replication

From: Heikki Linnakangas
Date:
On 24.01.2013 19:44, Simon Riggs wrote:
> On 24 January 2013 16:52, Heikki Linnakangas <hlinnakangas@vmware.com> wrote:
>> I may be missing something, but it looks like after a "fast" promotion, you
>> don't request a new checkpoint. So it can take quite a while for the next
>> checkpoint to be triggered by checkpoint_timeout/segments. That shouldn't be
>> a problem, but I feel that it'd be prudent to request a new checkpoint
>> immediately (not necessarily an "immediate" checkpoint, though).
>
> I thought of that and there is a long comment to explain why I didn't.
>
> Two problems:
>
> 1) an immediate checkpoint can cause a disk/resource usage spike,
> which is definitely not what you need just when a spike of connections
> and new SQL hits the system.

It doesn't need to be an "immediate" checkpoint, i.e. you don't need to
rush through it with checkpoint_completion_target=0. I think you should
initiate a regular, slow checkpoint right after writing the
end-of-recovery record. It can take some time to finish, which is OK.

There's no hard correctness reason here for any particular behavior, I 
just feel that that would make most sense. It seems prudent to initiate 
a checkpoint right after timeline switch, so that you get a new 
checkpoint on the new timeline fairly soon - it could take up to 
checkpoint_timeout otherwise, but there's no terrible rush to finish it 
ASAP.

- Heikki



Re: Skip checkpoint on promoting from streaming replication

From: Tom Lane
Date:
Heikki Linnakangas <hlinnakangas@vmware.com> writes:
> There's no hard correctness reason here for any particular behavior, I 
> just feel that that would make most sense. It seems prudent to initiate 
> a checkpoint right after timeline switch, so that you get a new 
> checkpoint on the new timeline fairly soon - it could take up to 
> checkpoint_timeout otherwise, but there's no terrible rush to finish it 
> ASAP.

+1.  The way I would think about it is that we're switching from a
checkpointing regime appropriate to a slave to one appropriate to a
master.  If the last restartpoint was far back, compared to the
configured checkpoint timing for master operation, we're at risk that a
crash could take longer than desired to recover.  So we ought to embark
right away on a fresh checkpoint, but do it in the same way it would be
done in normal master operation (thus, not immediate).  Once it's done
we'll be in the expected checkpointing state for a master.
        regards, tom lane



Re: Skip checkpoint on promoting from streaming replication

From: Simon Riggs
Date:
On 25 January 2013 12:15, Heikki Linnakangas <hlinnakangas@vmware.com> wrote:

>> 1) an immediate checkpoint can cause a disk/resource usage spike,
>> which is definitely not what you need just when a spike of connections
>> and new SQL hits the system.
>
>
> It doesn't need to be an "immediate" checkpoint, i.e. you don't need to rush
> through it with checkpoint_completion_target=0. I think you should initiate
> a regular, slow checkpoint right after writing the end-of-recovery record.
> It can take some time to finish, which is OK.

OK, will add.

--
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services