Thread: Add LSN <-> time conversion functionality

Add LSN <-> time conversion functionality

From: Melanie Plageman
Date:
Hi,

Elsewhere [1] I required a way to estimate the time corresponding to a
particular LSN in the past. I devised the attached LSNTimeline, a data
structure mapping LSNs <-> timestamps with decreasing precision for
older time, LSN pairs. This can be used to locate and translate a
particular time to LSN or vice versa using linear interpolation.
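
Roughly, the shape of the structure is something like this (an
illustrative sketch only -- the attached patch has the real definitions;
TimestampTz and XLogRecPtr come from datatype/timestamp.h and
access/xlogdefs.h):

typedef struct LSNTime
{
    TimestampTz time;           /* timestamp captured at insertion */
    XLogRecPtr  lsn;            /* WAL position captured at the same moment */
} LSNTime;

typedef struct LSNTimeline
{
    /*
     * Entries ordered from newest to oldest.  Older entries summarize
     * progressively more (time, LSN) samples, so precision decreases
     * with age.
     */
    LSNTime data[64];
} LSNTimeline;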

I've added an instance of the LSNTimeline to PgStat_WalStats and insert
new values to it in background writer's main loop. This patch set also
introduces some new pageinspect functions exposing LSN <-> time
translations.

Outside of being useful to users wondering about the last modification
time of a particular block in a relation, the LSNTimeline can be put to
use in other Postgres sub-systems to govern behavior based on resource
consumption -- using the LSN consumption rate as a proxy.

As mentioned in [1], the LSNTimeline is a prerequisite for my
implementation of a new freeze heuristic which seeks to freeze only
pages which will remain unmodified for a certain amount of wall clock
time. But one can imagine other uses for such translation capabilities.

The pageinspect additions need a bit more work. I didn't bump the
pageinspect version (didn't add the new functions to a new pageinspect
version file). I also didn't exercise the new pageinspect functions in a
test. I was unsure how to write a test which would be guaranteed not to
flake. Because the background writer updates the timeline, there seemed to
be a remote possibility that the time or LSN returned by the functions would
be 0, and as such, I'm not sure even a test that SELECTs time/lsn > 0
would always pass.

I also noticed the pageinspect functions don't have XML id attributes
for link discoverability. I planned to add that in a separate commit.

- Melanie

[1] https://www.postgresql.org/message-id/CAAKRu_b3tpbdRPUPh1Q5h35gXhY%3DspH2ssNsEsJ9sDfw6%3DPEAg%40mail.gmail.com

Attachments

Re: Add LSN <-> time conversion functionality

From: Melanie Plageman
Date:
On Wed, Dec 27, 2023 at 5:16 PM Melanie Plageman
<melanieplageman@gmail.com> wrote:
>
> Elsewhere [1] I required a way to estimate the time corresponding to a
> particular LSN in the past. I devised the attached LSNTimeline, a data
> structure mapping LSNs <-> timestamps with decreasing precision for
> older time, LSN pairs. This can be used to locate and translate a
> particular time to LSN or vice versa using linear interpolation.

Attached is a new version which fixes one overflow danger I noticed in
the original patch set.

I have also been doing some thinking about the LSNTimeline data
structure. Its array elements are combined before all elements have
been used. This sacrifices precision earlier than required. I tried
some alternative structures that would use the whole array. There are
a lot of options, though. Currently each element fits twice as many
members as the preceding element. To use the whole array, we'd have to
change the behavior from filling each element to its max capacity to
something that filled elements only partially. I'm not sure what the
best distribution would be.

> I've added an instance of the LSNTimeline to PgStat_WalStats and insert
> new values to it in background writer's main loop. This patch set also
> introduces some new pageinspect functions exposing LSN <-> time
> translations.

I was thinking that maybe it is silly to have the functions allowing
for translation between LSN and time in the pageinspect extension --
since they are not specifically related to pages (pages are just an
object that has an accessible LSN). I was thinking perhaps we add them
as system information functions. However, the closest related
functions I can think of are those to get the current LSN (like
pg_current_wal_lsn()). And those are listed as system administration
functions under backup control [1]. I don't think the LSN <-> time
functionality fits under backup control.

If I did put them in one of the system information function sections
[2], which one would work best?

- Melanie

[1] https://www.postgresql.org/docs/devel/functions-admin.html#FUNCTIONS-ADMIN-BACKUP
[2] https://www.postgresql.org/docs/devel/functions-info.html#FUNCTIONS-INFO

Attachments

Re: Add LSN <-> time conversion functionality

From: Tomas Vondra
Date:
Hi,

I took a look at this today, to try to understand the purpose and how it
works. Let me share some initial thoughts and questions I have. Some of
this may be wrong/missing the point, so apologies for that.

The goal seems worthwhile in general - the way I understand it, the
patch aims to provide tracking of WAL "velocity", i.e. how much WAL was
generated over time. Which we now don't have, as we only maintain simple
cumulative stats/counters. And then uses it to estimate timestamp for a
given LSN, and vice versa, because that's what the pruning patch needs.

When I first read this, I immediately started wondering if this might
use the commit timestamp stuff we already have. Because for each commit
we already store the LSN and commit timestamp, right? But I'm not sure
that would be a good match - the commit_ts serves a very special purpose
of mapping XID => (LSN, timestamp), I don't see how to make it work for
(LSN=>timestamp) and (timestamp=>LSN) very easily.


As for the inner workings of the patch, my understanding is this:

- "LSNTimeline" consists of "LSNTime" entries representing (LSN,ts)
points, but those points are really "buckets" that grow larger and
larger for older periods of time.

- The entries are being added from bgwriter, i.e. on each loop we add
the current (LSN, timestamp) into the timeline.

- We then estimate LSN/timestamp using the data stored in LSNTimeline
(either LSN => timestamp, or the opposite direction).


Some comments in arbitrary order:

- AFAIK each entry represents an interval of time, and the next (older)
interval is twice as long, right? So the first interval is 1 second,
then 2 seconds, 4 seconds, 8 seconds, ...

- But I don't understand how the LSNTimeline entries are "aging" and get
less accurate, while the "current" bucket is short. lsntime_insert()
seems to simply move to the next entry, but doesn't that mean we insert
the entries into larger and larger buckets?

- The comments never really spell out what amount of time the entries cover
/ how granular it is. My understanding is it's simply measured in number
of entries added, which is assumed to be constant and driven by
bgwriter_delay, right? Which is 200ms by default. Which seems fine, but
isn't the hibernation (HIBERNATE_FACTOR) going to mess with it?

Is there some case where bgwriter would just loop without sleeping,
filling the timeline much faster? (I can't think of any, but ...)

- The LSNTimeline comment claims an array of size 64 is large enough to
not need to care about filling it, but maybe it should briefly explain
why we can never fill it (I guess 2^64 is just too many).

- I don't quite understand why 0005 adds the functions to pageinspect.
This has nothing to do with pages, right?

- Not sure why we need 0001. Just so that the "estimate" functions in
0002 have a convenient "start" point? Surely we could look at the
current LSNTimeline data and use the oldest value, or (if there's no
data) use the current timestamp/LSN?

- I wonder what happens if we lose the data - we know that if people
reset statistics for whatever reason (or just lose them because of a
crash, or because they're on a replica), bad things happen to
autovacuum. What's the (expected) impact on pruning?

- What about a SRF function that outputs the whole LSNTimeline? Would be
useful for debugging / development, I think. (Just a suggestion).


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: Add LSN <-> time conversion functionality

From: Melanie Plageman
Date:
Thanks so much for reviewing!

On Fri, Feb 16, 2024 at 3:41 PM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
>
> When I first read this, I immediately started wondering if this might
> use the commit timestamp stuff we already have. Because for each commit
> we already store the LSN and commit timestamp, right? But I'm not sure
> that would be a good match - the commit_ts serves a very special purpose
> of mapping XID => (LSN, timestamp), I don't see how to make it work for
> (LSN=>timestamp) and (timestamp=>LSN) very easily.

I took a look at the code in commit_ts.c, and I really don't see a way
of reusing any of this commit<->timestamp infrastructure for
timestamp<->LSN mappings.

> As for the inner workings of the patch, my understanding is this:
>
> - "LSNTimeline" consists of "LSNTime" entries representing (LSN,ts)
> points, but those points are really "buckets" that grow larger and
> larger for older periods of time.

Yes, they are buckets in the sense that they represent multiple values
but each contains a single LSNTime value which is the minimum of all
the LSNTimes we "merged" into that single array element. In order to
represent a range of time, you need to use two array elements. The
linear interpolation from time <-> LSN is all done with two elements.
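
For example, the LSN -> time direction is essentially just linear
interpolation between two such elements, something like this simplified
sketch (the real code also has to worry about values outside the
timeline's range, overflow, etc.):

static TimestampTz
interpolate_time_for_lsn(LSNTime older, LSNTime newer, XLogRecPtr target_lsn)
{
    double      fraction;

    if (newer.lsn == older.lsn)
        return newer.time;      /* avoid dividing by zero */

    fraction = (double) (target_lsn - older.lsn) /
        (double) (newer.lsn - older.lsn);

    return older.time + (TimestampTz) ((newer.time - older.time) * fraction);
}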

> - AFAIK each entry represents an interval of time, and the next (older)
> interval is twice as long, right? So the first interval is 1 second,
> then 2 seconds, 4 seconds, 8 seconds, ...
>
> - But I don't understand how the LSNTimeline entries are "aging" and get
> less accurate, while the "current" bucket is short. lsntime_insert()
> seems to simply move to the next entry, but doesn't that mean we insert
> the entries into larger and larger buckets?

Because the earlier array elements can represent fewer logical members
than later ones and because elements are merged into the next element
when space runs out, later array elements will contain older data and
more of it, so those "ranges" will be larger. But, after thinking
about it and also reading your feedback, I realized my algorithm was
silly because it starts merging logical members before it has even
used the whole array.

The attached v3 has a new algorithm. Now, LSNTimes are added from the
end of the array forward until all array elements have at least one
logical member (array length == volume). Once array length == volume,
new LSNTimes will result in merging logical members in existing
elements. We want to merge older members because those can be less
precise. So, the number of logical members per array element will
always monotonically increase starting from the beginning of the array
(which contains the most recent data) and going to the end. We want to
use all the available space in the array. That means that each LSNTime
insertion will always result in a single merge. We want the timeline
to be inclusive of the oldest data, so merging means taking the
smaller value of two LSNTime values. I had to pick a rule for choosing
which elements to merge. So, I choose the merge target as the oldest
element whose logical membership is < 2x its predecessor. I merge the
merge target's predecessor into the merge target. Then I move all of
the intervening elements down 1. Then I insert the new LSNTime at
index 0.
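
To illustrate the insertion path once the array is full (a sketch of the
idea only, not the patch itself -- index 0 is the newest element and
members[] tracks the logical membership counts described above):

static void
lsntimeline_insert_sketch(LSNTimeline *timeline, uint64 *members,
                          LSNTime new_value)
{
    int         target = -1;

    /* Find the oldest element whose membership is < 2x its predecessor's. */
    for (int i = 63; i > 0; i--)
    {
        if (members[i] < 2 * members[i - 1])
        {
            target = i;
            break;
        }
    }

    /*
     * Merge the target's predecessor into the target.  The target already
     * holds the older (smaller) LSNTime of the pair, so only the membership
     * count changes; this keeps the timeline inclusive of the oldest data.
     * A target always exists until the structure reaches its maximum
     * capacity (see the question below).
     */
    members[target] += members[target - 1];

    /* Shift the intervening elements down by one to free index 0. */
    for (int i = target - 1; i > 0; i--)
    {
        timeline->data[i] = timeline->data[i - 1];
        members[i] = members[i - 1];
    }

    /* Insert the new LSNTime at index 0 with a single logical member. */
    timeline->data[0] = new_value;
    members[0] = 1;
}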

> - The comments never really spell out what amount of time the entries cover
> / how granular it is. My understanding is it's simply measured in number
> of entries added, which is assumed to be constant and driven by
> bgwriter_delay, right? Which is 200ms by default. Which seems fine, but
> isn't the hibernation (HIBERNATE_FACTOR) going to mess with it?
>
> Is there some case where bgwriter would just loop without sleeping,
> filling the timeline much faster? (I can't think of any, but ...)

bgwriter will wake up when there are buffers to flush, which is likely
correlated with there being new LSNs. So, actually it seems like it
might work well to rely on only filling the timeline when there are
things for bgwriter to do.

> - The LSNTimeline comment claims an array of size 64 is large enough to
> not need to care about filling it, but maybe it should briefly explain
> why we can never fill it (I guess 2^64 is just too many).

The new structure fits a different number of members. I have yet to
calculate that number, but it should be explained in the comments once
I do.

For example, if we made an LSNTimeline with volume 4, once every
element had one LSNTime and we needed to start merging, the following
is how many logical members each element would have after each of four
merges:
1111
1112
1122
1114
1124
So, if we store the number of members as an unsigned 64-bit int and we
have an LSNTimeline with volume 64, what is the maximum number of
members we can store if we hold all of the invariants described in my
algorithm above (we only merge when required, every element holds < 2x
the number of logical members as its predecessor, we do exactly one
merge every insertion [when required], membership must monotonically
increase [choose the oldest element meeting the criteria when deciding
what to merge])?

> - I don't quite understand why 0005 adds the functions to pageinspect.
> This has nothing to do with pages, right?

You're right. I just couldn't think of a good place to put the
functions. In version 3, I just put the SQL functions in pgstat_wal.c
and made them generally available (i.e. not in a contrib module). I
haven't added docs back yet. But perhaps a section near the docs
describing pg_xact_commit_timestamp() [1]? I wasn't sure if I should
put the SQL function source code in pgstatfuncs.c -- I kind of prefer
it in pgstat_wal.c but there are no other SQL functions there.

> - Not sure why we need 0001. Just so that the "estimate" functions in
> 0002 have a convenient "start" point? Surely we could look at the
> current LSNTimeline data and use the oldest value, or (if there's no
> data) use the current timestamp/LSN?

When there are 0 or 1 entries in the timeline you'll get an answer
that could be very off if you just return the current timestamp or
LSN. I guess that is okay?

> - I wonder what happens if we lose the data - we know that if people
> reset statistics for whatever reason (or just lose them because of a
> crash, or because they're on a replica), bad things happen to
> autovacuum. What's the (expected) impact on pruning?

This is an important question. Because stats aren't crashsafe, we
could return very inaccurate numbers for some time/LSN values if we
crash. I don't actually know what we could do about that. When I use
the LSNTimeline for the freeze heuristic it is less of an issue
because the freeze heuristic has a fallback strategy when there aren't
enough stats to do its calculations. But other users of this
LSNTimeline will simply get highly inaccurate results (I think?). Is
there anything we could do about this? It seems bad.

Andres had brought up something at some point about, what if the
database is simply turned off for awhile and then turned back on. Even
if you cleanly shut down, will there be "gaps" in the timeline? I
think that could be okay, but it is something to think about.

> - What about a SRF function that outputs the whole LSNTimeline? Would be
> useful for debugging / development, I think. (Just a suggestion).

Good idea! I've added this. Though, maybe there was a simpler way to
implement than I did.

Just a note, all of my comments could use a lot of work, but I want to
get consensus on the algorithm before I make sure and write about it
in a perfect way.

- Melanie

[1] https://www.postgresql.org/docs/devel/functions-info.html#FUNCTIONS-INFO-COMMIT-TIMESTAMP

Attachments

Re: Add LSN <-> time conversion functionality

From: Daniel Gustafsson
Date:
> On 22 Feb 2024, at 03:45, Melanie Plageman <melanieplageman@gmail.com> wrote:
> On Fri, Feb 16, 2024 at 3:41 PM Tomas Vondra
> <tomas.vondra@enterprisedb.com> wrote:

>> - Not sure why we need 0001. Just so that the "estimate" functions in
>> 0002 have a convenient "start" point? Surely we could look at the
>> current LSNTimeline data and use the oldest value, or (if there's no
>> data) use the current timestamp/LSN?
>
> When there are 0 or 1 entries in the timeline you'll get an answer
> that could be very off if you just return the current timestamp or
> LSN. I guess that is okay?

I don't think that's a huge problem at such a young "lsn-age", but I might be
missing something.

>> - I wonder what happens if we lose the data - we know that if people
>> reset statistics for whatever reason (or just lose them because of a
>> crash, or because they're on a replica), bad things happen to
>> autovacuum. What's the (expected) impact on pruning?
>
> This is an important question. Because stats aren't crashsafe, we
> could return very inaccurate numbers for some time/LSN values if we
> crash. I don't actually know what we could do about that. When I use
> the LSNTimeline for the freeze heuristic it is less of an issue
> because the freeze heuristic has a fallback strategy when there aren't
> enough stats to do its calculations. But other users of this
> LSNTimeline will simply get highly inaccurate results (I think?). Is
> there anything we could do about this? It seems bad.

A complication with this over stats is that we can't recompute this in case of
a crash/corruption issue.  The simple solution is to consider this unlogged
data and start fresh at every unclean shutdown, but I have a feeling that won't
be good enough for basing heuristics on?

> Andres had brought up something at some point about, what if the
> database is simply turned off for awhile and then turned back on. Even
> if you cleanly shut down, will there be "gaps" in the timeline? I
> think that could be okay, but it is something to think about.

The gaps would represent reality, so there is nothing wrong per se with gaps,
but if they inflate the interval of a bucket, which in turn impacts the
precision of the results then that can be a problem.

> Just a note, all of my comments could use a lot of work, but I want to
> get consensus on the algorithm before I make sure and write about it
> in a perfect way.

I'm not sure "a lot of work" is accurate, they seem pretty much there to me,
but I think that an illustration of running through the algorithm in an
ascii-art array would be helpful.


Reading through this I think such a function has merits, not only for your
use case but also for other heuristic-based work and quite possibly systems debugging.

While the bucketing algorithm is a clever algorithm for degrading precision for
older entries without discarding them, I do fear that we'll risk ending up with
answers like "somewhere between in the past and even further in the past".
I've been playing around with various compression algorithms for packing the
buckets such that we can retain precision for longer.  Since you were aiming to
work on other patches leading up to the freeze, let's pick this up again once
things calm down.

When compiling I got this warning for lsntime_merge_target:

pgstat_wal.c:242:1: warning: non-void function does not return a value in all control paths [-Wreturn-type]
}
^
1 warning generated.

The issue seems to be that the can't-really-happen path is protected with an
Assertion, which is a no-op for production builds.  I think we should handle
the error rather than rely on testing catching it (since if it does happen even
though it can't, it's going to be when it's for sure not tested).  Returning an
invalid array subscript like -1 and testing for it in lsntime_insert, with an
elog(WARNING..), seems enough.
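
Roughly what I had in mind (a sketch; the function names are taken from
your patch but the signatures and bodies here are paraphrased):

static int
lsntime_merge_target(LSNTimeline *timeline)
{
    /* ... search for a merge target as before ... */

    /* Not found: shouldn't happen, but don't rely on the Assert alone. */
    return -1;
}

static void
lsntime_insert(LSNTimeline *timeline, LSNTime new_value)
{
    int         target = lsntime_merge_target(timeline);

    if (target < 0)
    {
        elog(WARNING, "could not find merge target in LSN timeline");
        return;
    }

    /* ... merge, shift, and insert as before ... */
}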


-    last_snapshot_lsn <= GetLastImportantRecPtr())
+    last_snapshot_lsn <= current_lsn)
I think we should delay extracting the LSN with GetLastImportantRecPtr until we
know that we need it, to avoid taking locks in this codepath unless needed.
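
i.e. roughly this shape (paraphrasing the surrounding bgwriter code):

if (now >= timeout)
{
    /* Only fetch the LSN once we know we may actually need it. */
    XLogRecPtr  current_lsn = GetLastImportantRecPtr();

    if (last_snapshot_lsn <= current_lsn)
    {
        /* ... log the standby snapshot, update the timeline, etc. ... */
    }
}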

I've attached a diff with the above suggestions which applies on top of your
patchset.

--
Daniel Gustafsson



Attachments

Re: Add LSN <-> time conversion functionality

From: Tomas Vondra
Date:

On 2/22/24 03:45, Melanie Plageman wrote:
> Thanks so much for reviewing!
> 
> On Fri, Feb 16, 2024 at 3:41 PM Tomas Vondra
> <tomas.vondra@enterprisedb.com> wrote:
>>
>> When I first read this, I immediately started wondering if this might
>> use the commit timestamp stuff we already have. Because for each commit
>> we already store the LSN and commit timestamp, right? But I'm not sure
>> that would be a good match - the commit_ts serves a very special purpose
>> of mapping XID => (LSN, timestamp), I don't see how to make it work for
>> (LSN=>timestamp) and (timestamp=>LSN) very easily.
> 
> I took a look at the code in commit_ts.c, and I really don't see a way
> of reusing any of this commit<->timestamp infrastructure for
> timestamp<->LSN mappings.
> 
>> As for the inner workings of the patch, my understanding is this:
>>
>> - "LSNTimeline" consists of "LSNTime" entries representing (LSN,ts)
>> points, but those points are really "buckets" that grow larger and
>> larger for older periods of time.
> 
> Yes, they are buckets in the sense that they represent multiple values
> but each contains a single LSNTime value which is the minimum of all
> the LSNTimes we "merged" into that single array element. In order to
> represent a range of time, you need to use two array elements. The
> linear interpolation from time <-> LSN is all done with two elements.
> 
>> - AFAIK each entry represents an interval of time, and the next (older)
>> interval is twice as long, right? So the first interval is 1 second,
>> then 2 seconds, 4 seconds, 8 seconds, ...
>>
>> - But I don't understand how the LSNTimeline entries are "aging" and get
>> less accurate, while the "current" bucket is short. lsntime_insert()
>> seems to simply move to the next entry, but doesn't that mean we insert
>> the entries into larger and larger buckets?
> 
> Because the earlier array elements can represent fewer logical members
> than later ones and because elements are merged into the next element
> when space runs out, later array elements will contain older data and
> more of it, so those "ranges" will be larger. But, after thinking
> about it and also reading your feedback, I realized my algorithm was
> silly because it starts merging logical members before it has even
> used the whole array.
> 
> The attached v3 has a new algorithm. Now, LSNTimes are added from the
> end of the array forward until all array elements have at least one
> logical member (array length == volume). Once array length == volume,
> new LSNTimes will result in merging logical members in existing
> elements. We want to merge older members because those can be less
> precise. So, the number of logical members per array element will
> always monotonically increase starting from the beginning of the array
> (which contains the most recent data) and going to the end. We want to
> use all the available space in the array. That means that each LSNTime
> insertion will always result in a single merge. We want the timeline
> to be inclusive of the oldest data, so merging means taking the
> smaller value of two LSNTime values. I had to pick a rule for choosing
> which elements to merge. So, I choose the merge target as the oldest
> element whose logical membership is < 2x its predecessor. I merge the
> merge target's predecessor into the merge target. Then I move all of
> the intervening elements down 1. Then I insert the new LSNTime at
> index 0.
> 

I can't help but think about t-digest [1], which also merges data into
variable-sized buckets (called centroids, each of which is a pair of values,
just like LSNTime). But the merging is driven by something called a "scale
function", which I find a pretty nice approach to this, and it
yields some guarantees regarding accuracy. I wonder if we could do
something similar here ...

The t-digest is a way to approximate quantiles, and the default scale
function is optimized for best accuracy on the extremes (close to 0.0
and 1.0), but it's possible to use scale functions that optimize only
for accuracy close to 1.0.

This may be misguided, but I see similarity between quantiles and what
LSNTimeline does - timestamps are ordered, and quantiles close to 0.0
are "old timestamps" while quantiles close to 1.0 are "now".

And t-digest also defines a pretty efficient algorithm to merge data in
a way that gradually combines older "buckets" into larger ones.
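
To sketch what I mean (purely hypothetical, not a concrete proposal):
instead of the "< 2x its predecessor" rule, the allowed weight of each
element could come from a scale-like function of its position, so recent
elements stay small and precise while old ones are allowed to grow:

#include <math.h>

/*
 * Hypothetical sketch, loosely inspired by t-digest.  index 0 is the
 * newest element; the allowed number of logical members grows quickly
 * toward the old end of the array.  The cubic exponent is arbitrary.
 */
static uint64
allowed_members(int index, int volume, uint64 total_members)
{
    double      q = (double) (index + 1) / volume;  /* relative age, (0, 1] */

    return (uint64) ceil((double) total_members * pow(q, 3));
}

/*
 * A merge of element i-1 into element i would then be permitted only
 * while members[i] + members[i - 1] <= allowed_members(i, volume, total).
 */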

>> - The comments never really spell out what amount of time the entries cover
>> / how granular it is. My understanding is it's simply measured in number
>> of entries added, which is assumed to be constant and driven by
>> bgwriter_delay, right? Which is 200ms by default. Which seems fine, but
>> isn't the hibernation (HIBERNATE_FACTOR) going to mess with it?
>>
>> Is there some case where bgwriter would just loop without sleeping,
>> filling the timeline much faster? (I can't think of any, but ...)
> 
> bgwriter will wake up when there are buffers to flush, which is likely
> correlated with there being new LSNs. So, actually it seems like it
> might work well to rely on only filling the timeline when there are
> things for bgwriter to do.
> 
>> - The LSNTimeline comment claims an array of size 64 is large enough to
>> not need to care about filling it, but maybe it should briefly explain
>> why we can never fill it (I guess 2^64 is just too many).
> 
> The new structure fits a different number of members. I have yet to
> calculate that number, but it should be explained in the comments once
> I do.
> 
> For example, if we made an LSNTimeline with volume 4, once every
> element had one LSNTime and we needed to start merging, the following
> is how many logical members each element would have after each of four
> merges:
> 1111
> 1112
> 1122
> 1114
> 1124
> So, if we store the number of members as an unsigned 64-bit int and we
> have an LSNTimeline with volume 64, what is the maximum number of
> members we can store if we hold all of the invariants described in my
> algorithm above (we only merge when required, every element holds < 2x
> the number of logical members as its predecessor, we do exactly one
> merge every insertion [when required], membership must monotonically
> increase [choose the oldest element meeting the criteria when deciding
> what to merge])?
> 

I guess that should be enough for (2^64-1) logical members, because it's
a sequence 1, 2, 4, 8, ..., 2^63. Seems enough.

But now that I think about it, does it make sense to do the merging
based on the number of logical members? Shouldn't this really be driven
by the "length" of the time interval the member covers?

>> - I don't quite understand why 0005 adds the functions to pageinspect.
>> This has nothing to do with pages, right?
> 
> You're right. I just couldn't think of a good place to put the
> functions. In version 3, I just put the SQL functions in pgstat_wal.c
> and made them generally available (i.e. not in a contrib module). I
> haven't added docs back yet. But perhaps a section near the docs
> describing pg_xact_commit_timestamp() [1]? I wasn't sure if I should
> put the SQL function source code in pgstatfuncs.c -- I kind of prefer
> it in pgstat_wal.c but there are no other SQL functions there.
> 

OK, pgstat_wal seems like a good place.

>> - Not sure why we need 0001. Just so that the "estimate" functions in
>> 0002 have a convenient "start" point? Surely we could look at the
>> current LSNTimeline data and use the oldest value, or (if there's no
>> data) use the current timestamp/LSN?
> 
> When there are 0 or 1 entries in the timeline you'll get an answer
> that could be very off if you just return the current timestamp or
> LSN. I guess that is okay?
> 
>> - I wonder what happens if we lose the data - we know that if people
>> reset statistics for whatever reason (or just lose them because of a
>> crash, or because they're on a replica), bad things happen to
>> autovacuum. What's the (expected) impact on pruning?
> 
> This is an important question. Because stats aren't crashsafe, we
> could return very inaccurate numbers for some time/LSN values if we
> crash. I don't actually know what we could do about that. When I use
> the LSNTimeline for the freeze heuristic it is less of an issue
> because the freeze heuristic has a fallback strategy when there aren't
> enough stats to do its calculations. But other users of this
> LSNTimeline will simply get highly inaccurate results (I think?). Is
> there anything we could do about this? It seems bad.
> 
> Andres had brought up something at some point about, what if the
> database is simply turned off for awhile and then turned back on. Even
> if you cleanly shut down, will there be "gaps" in the timeline? I
> think that could be okay, but it is something to think about.
> 
>> - What about a SRF function that outputs the whole LSNTimeline? Would be
>> useful for debugging / development, I think. (Just a suggestion).
> 
> Good idea! I've added this. Though, maybe there was a simpler way to
> implement than I did.
> 

Thanks. I'll take a look.

> Just a note, all of my comments could use a lot of work, but I want to
> get consensus on the algorithm before I make sure and write about it
> in a perfect way.
> 

Makes sense, as long as the comments are sufficiently clear. It's hard
to reach consensus on something not explained clearly enough.


regards


[1]
https://github.com/tdunning/t-digest/blob/main/docs/t-digest-paper/histo.pdf

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: Add LSN <-> time conversion functionality

From: Tomas Vondra
Date:
On 3/18/24 15:02, Daniel Gustafsson wrote:
>> On 22 Feb 2024, at 03:45, Melanie Plageman <melanieplageman@gmail.com> wrote:
>> On Fri, Feb 16, 2024 at 3:41 PM Tomas Vondra
>> <tomas.vondra@enterprisedb.com> wrote:
> 
>>> - Not sure why we need 0001. Just so that the "estimate" functions in
>>> 0002 have a convenient "start" point? Surely we could look at the
>>> current LSNTimeline data and use the oldest value, or (if there's no
>>> data) use the current timestamp/LSN?
>>
>> When there are 0 or 1 entries in the timeline you'll get an answer
>> that could be very off if you just return the current timestamp or
>> LSN. I guess that is okay?
> 
> I don't think that's a huge problem at such a young "lsn-age", but I might be
> missing something.
> 
>>> - I wonder what happens if we lose the data - we know that if people
>>> reset statistics for whatever reason (or just lose them because of a
>>> crash, or because they're on a replica), bad things happen to
>>> autovacuum. What's the (expected) impact on pruning?
>>
>> This is an important question. Because stats aren't crashsafe, we
>> could return very inaccurate numbers for some time/LSN values if we
>> crash. I don't actually know what we could do about that. When I use
>> the LSNTimeline for the freeze heuristic it is less of an issue
>> because the freeze heuristic has a fallback strategy when there aren't
>> enough stats to do its calculations. But other users of this
>> LSNTimeline will simply get highly inaccurate results (I think?). Is
>> there anything we could do about this? It seems bad.
> 

Do we have something to calculate a sufficiently good "average" to use
as a default, if we don't have a better value? For example, we know the
timestamp of the last checkpoint, and we know the LSN, right? Maybe if
we're sufficiently far from the checkpoint, we could use that.

Or maybe checkpoint_timeout / max_wal_size would be enough to calculate
some default value?

I wonder how long it takes until LSNTimeline gives us sufficiently good
data for all LSNs we need. That is, if we lose this, how long does it take
until we get enough data to make good decisions?

Why don't we simply WAL-log this in some trivial way? It's pretty small,
so if we WAL-log this once in a while (after a merge happens), that
should not be a problem.

Or a different idea - if we lost the data, but commit_ts is enabled,
can't we "simply" walk commit_ts and feed LSN/timestamp into the
timeline? I guess we don't want to walk 2B transactions, but even just
sampling some recent transactions might be enough, no?

> A complication with this over stats is that we can't recompute this in case of
> a crash/corruption issue.  The simple solution is to consider this unlogged
> data and start fresh at every unclean shutdown, but I have a feeling that won't
> be good enough for basing heuristics on?
> 
>> Andres had brought up something at some point about, what if the
>> database is simply turned off for awhile and then turned back on. Even
>> if you cleanly shut down, will there be "gaps" in the timeline? I
>> think that could be okay, but it is something to think about.
> 
> The gaps would represent reality, so there is nothing wrong per se with gaps,
> but if they inflate the interval of a bucket, which in turn impacts the
> precision of the results then that can be a problem.
> 

Well, I think the gaps are a problem in the sense that they disappear
once we start merging the buckets. But maybe that's fine, if we're only
interested in approximate data.

>> Just a note, all of my comments could use a lot of work, but I want to
>> get consensus on the algorithm before I make sure and write about it
>> in a perfect way.
> 
> I'm not sure "a lot of work" is accurate, they seem pretty much there to me,
> but I think that an illustration of running through the algorithm in an
> ascii-art array would be helpful.
> 

+1

> 
> Reading through this I think such a function has merits, not only for your
> use case but also for other heuristic-based work and quite possibly systems debugging.
> 
> While the bucketing algorithm is a clever algorithm for degrading precision for
> older entries without discarding them, I do fear that we'll risk ending up with
> answers like "somewhere between in the past and even further in the past".
> I've been playing around with various compression algorithms for packing the
> buckets such that we can retain precision for longer.  Since you were aiming to
> work on other patches leading up to the freeze, let's pick this up again once
> things calm down.
> 

I guess this ambiguity is pretty inherent to a structure that does not
keep all the data, and gradually reduces the resolution for old stuff.
But my understanding was that's sufficient for the freezing patch.


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: Add LSN <-> time conversion functionality

From: "Andrey M. Borodin"
Date:
Hi everyone!

Me, Bharath, and Ilya are at a patch review session at PGConf.dev :) Maybe we got everything wrong, so please consider
that we are just doing training on reviewing patches.


=== Purpose of the patch ===
Currently, we have checkpoint_timeout and max_wal_size to know when we need a checkpoint. This patch brings a
capability to freeze pages not only based on the internal state of the system, but also on wall clock time.
To do so we need infrastructure which will tell us when a page was modified.

The patch in this thread is doing exactly this: in-memory information mapping LSNs to wall clock time. The mapping is
maintained by the background writer.

=== Questions ===
1. The patch does not handle server restart. Will all pages need freezing after any crash?
2. Some benchmarks to prove that the patch does not have a noticeable CPU footprint.

=== Nits ===
The term "timeline" is already taken.
The patch needs a rebase due to some header changes.
Tests fail on Windows.
The patch lacks tests.
Some docs would be nice, but the feature is for developers.
The mapping is protected for multithreaded access by the walstats LWLock, and tuplestore_putvalues() might be called under that lock.
That might be a little dangerous if the tuplestore ends up on disk for some reason (which should not happen).


Overall, the patch is a base for a good feature which would help to freeze pages at the right time. Thanks!


Best regards, Bharath, Andrey, Ilya.