Thread: CompactCheckpointerRequestQueue versus pad bytes

CompactCheckpointerRequestQueue versus pad bytes

From
Tom Lane
Date:
CompactCheckpointerRequestQueue supposes that it can use an entry of the
checkpointer request queue directly as a hash table key.  This will work
reliably only if there are no pad bytes in the CheckpointerRequest
struct, which means in turn that neither RelFileNodeBackend nor
RelFileNode can contain any pad bytes.

It might have accidentally failed to fail if tested on a compiler that
gives a full 32 bits to enum ForkNumber, but there absolutely would be
padding there if ForkNumber is allocated as short or char.

As best I can tell, a failure from uninitialized padding would not cause
visible misbehavior but only make it not notice that two requests are
identical, so that the queue compaction would not accomplish as much as
it should.  Nonetheless, this seems pretty broken.

We could fairly cheaply dodge the problem with padding after ForkNumber
if we were to zero out the whole request array at shmem initialization,
so that any such pad bytes are guaranteed zero.  However, padding in
RelFileNodeBackend would be more annoying to deal with, and at least
in the current instantiation of those structs it's probably impossible
anyway.  Should we document those structs as required to not contain
any padding, or do what's needful in checkpointer.c to not depend on
there not being padding?
        regards, tom lane


Re: CompactCheckpointerRequestQueue versus pad bytes

From
Amit Kapila
Date:
From: pgsql-hackers-owner@postgresql.org
[mailto:pgsql-hackers-owner@postgresql.org] On Behalf Of Tom Lane
Sent: Monday, July 16, 2012 3:06 AM

> It might have accidentally failed to fail if tested on a compiler that
> gives a full 32 bits to enum ForkNumber, but there absolutely would be
> padding there if ForkNumber is allocated as short or char.

> As best I can tell, a failure from uninitialized padding would not cause
> visible misbehavior but only make it not notice that two requests are
> identical, so that the queue compaction would not accomplish as much as
> it should.  Nonetheless, this seems pretty broken.

> We could fairly cheaply dodge the problem with padding after ForkNumber
> if we were to zero out the whole request array at shmem initialization,
> so that any such pad bytes are guaranteed zero.  However, padding in
> RelFileNodeBackend would be more annoying to deal with, and at least
> in the current instantiation of those structs it's probably impossible
> anyway.  Should we document those structs as required to not contain
> any padding, or do what's needful in checkpointer.c to not depend on
> there not being padding?

If we just document those structs, then how do we handle the case where
ForkNumber is allocated as short or char?




Re: CompactCheckpointerRequestQueue versus pad bytes

From
Robert Haas
Date:
On Sun, Jul 15, 2012 at 5:36 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> We could fairly cheaply dodge the problem with padding after ForkNumber
> if we were to zero out the whole request array at shmem initialization,
> so that any such pad bytes are guaranteed zero.  However, padding in
> RelFileNodeBackend would be more annoying to deal with, and at least
> in the current instantiation of those structs it's probably impossible
> anyway.  Should we document those structs as required to not contain
> any padding, or do what's needful in checkpointer.c to not depend on
> there not being padding?

I would expect that every method we could devise for allocating a
shared memory segment would produce all-zero bytes.  There was a time
long ago when the OS would simply hand over previously-freed pages
with their existing contents, but I believe that was recognized as a
security problem more than 20 years ago - maybe 30 - and I can't
believe there is any OS we care about supporting that fails to zero
pages before handing them out.  Of course you can't count on malloc()
to return zero'd memory, but that's because the process might get a
page (all zeros) from the OS, allocate it, free it, and then
reallocate it for an unrelated purpose.  But we have no method that I
know of for freeing shared memory, and even if we did, the memory used
by the fsync queue is allocated during startup and therefore
presumably prior to any hypothetical ShmemFree operations that might
occur subsequently.

So I'm having a hard time understanding under what imaginable set of
circumstances this might break.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: CompactCheckpointerRequestQueue versus pad bytes

From
Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> On Sun, Jul 15, 2012 at 5:36 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> We could fairly cheaply dodge the problem with padding after ForkNumber
>> if we were to zero out the whole request array at shmem initialization,
>> so that any such pad bytes are guaranteed zero.  However, padding in
>> RelFileNodeBackend would be more annoying to deal with, and at least
>> in the current instantiation of those structs it's probably impossible
>> anyway.  Should we document those structs as required to not contain
>> any padding, or do what's needful in checkpointer.c to not depend on
>> there not being padding?

> I would expect that every method we could devise for allocating a
> shared memory segment would produce all-zero bytes.

Well, it'd likely produce all-something bytes, but I don't believe
shmget is documented to produce zeroes.  In any case we are not in
the habit of relying on that and I don't see why we'd do so here rather
than explicitly zeroing the relatively small amount of memory involved.

> So I'm having a hard time understanding under what imaginable set of
> circumstances this might break.

Padding inside RelFileNodeBackend would break it, because
ForwardFsyncRequest copies the rnode as a struct.  So that's why I'm
asking whether we want to establish an explicit requirement that that
struct not contain any padding.

It would not be that hard to avoid the problem: we could make
CompactCheckpointerRequestQueue pre-zero a tag variable and then copy
the live fields into it.  Unless there is some other place in the system
that depends on RelFileNodeBackend being non-padded, and will break in a
more visible fashion with padding, I'm now inclined to do it like that.
        regards, tom lane


Re: CompactCheckpointerRequestQueue versus pad bytes

From
Heikki Linnakangas
Date:
On 16.07.2012 18:24, Robert Haas wrote:
> On Sun, Jul 15, 2012 at 5:36 PM, Tom Lane<tgl@sss.pgh.pa.us>  wrote:
>> We could fairly cheaply dodge the problem with padding after ForkNumber
>> if we were to zero out the whole request array at shmem initialization,
>> so that any such pad bytes are guaranteed zero.  However, padding in
>> RelFileNodeBackend would be more annoying to deal with, and at least
>> in the current instantiation of those structs it's probably impossible
>> anyway.  Should we document those structs as required to not contain
>> any padding, or do what's needful in checkpointer.c to not depend on
>> there not being padding?
>
> I would expect that every method we could devise for allocating a
> shared memory segment would produce all-zero bytes.  There was a time
> long ago when the OS would simply hand over previously-freed pages
> with their existing contents, but I believe that was recognized as a
> security problem more than 20 years ago - maybe 30 - and I can't
> believe there is any OS we care about supporting that fails to zero
> pages before handing them out.

I wouldn't rely on that, though. I wouldn't be surprised if there was 
some debugging flag or similar that initialized all pages to random 
values or 0xdeadbeef or something, before handing them out to the 
application. We could easily zero all shared memory on allocation 
ourselves, though.

-- 
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com


Re: CompactCheckpointerRequestQueue versus pad bytes

From
Tom Lane
Date:
I wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> So I'm having a hard time understanding under what imaginable set of
>> circumstances this might break.

> Padding inside RelFileNodeBackend would break it, because
> ForwardFsyncRequest copies the rnode as a struct.  So that's why I'm
> asking whether we want to establish an explicit requirement that that
> struct not contain any padding.

BTW, I'd be a lot happier about assuming that bare RelFileNode contains
no padding, because that's at least got all the fields the same type.
So that brings us back to the question of why this code is supporting
fsync requests for local relations in the first place.  Couldn't we have
it ignore those, and then only ship RelFileNode to the checkpointer?
        regards, tom lane


Re: CompactCheckpointerRequestQueue versus pad bytes

From
Robert Haas
Date:
On Mon, Jul 16, 2012 at 11:57 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> I wrote:
>> Robert Haas <robertmhaas@gmail.com> writes:
>>> So I'm having a hard time understanding under what imaginable set of
>>> circumstances this might break.
>
>> Padding inside RelFileNodeBackend would break it, because
>> ForwardFsyncRequest copies the rnode as a struct.  So that's why I'm
>> asking whether we want to establish an explicit requirement that that
>> struct not contain any padding.
>
> BTW, I'd be a lot happier about assuming that bare RelFileNode contains
> no padding, because that's at least got all the fields the same type.
> So that brings us back to the question of why this code is supporting
> fsync requests for local relations in the first place.  Couldn't we have
> it ignore those, and then only ship RelFileNode to the checkpointer?

That's an awfully good point.  I think that was just sloppy coding on
my part (cf commit debcec7dc31a992703911a9953e299c8d730c778).  +1 for
changing it as you suggest.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: CompactCheckpointerRequestQueue versus pad bytes

From
Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> On Mon, Jul 16, 2012 at 11:57 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> BTW, I'd be a lot happier about assuming that bare RelFileNode contains
>> no padding, because that's at least got all the fields the same type.
>> So that brings us back to the question of why this code is supporting
>> fsync requests for local relations in the first place.  Couldn't we have
>> it ignore those, and then only ship RelFileNode to the checkpointer?

> That's an awfully good point.  I think that was just sloppy coding on
> my part (cf commit debcec7dc31a992703911a9953e299c8d730c778).  +1 for
> changing it as you suggest.

OK, so I think the current proposal is:

1. Document that RelFileNode must not contain padding.

2. Change the fsync forwarding code to ignore backend-local relations,
and include only RelFileNode not RelFileNodeBackend in requests.

3. Zero the checkpointer requests[] array at shmem init time, so as
to ensure consistency of any pad bytes elsewhere in the request structs.

I will see about making this happen.  Since the fsync queue compaction
code got back-patched awhile ago, we need to back-patch the relevant
parts of this too.
        regards, tom lane


Re: CompactCheckpointerRequestQueue versus pad bytes

From
Robert Haas
Date:
On Mon, Jul 16, 2012 at 11:44 AM, Heikki Linnakangas
<heikki.linnakangas@enterprisedb.com> wrote:
> I wouldn't rely on that, though. I wouldn't be surprised if there was some
> debugging flag or similar that initialized all pages to random values or
> 0xdeadbeef or something, before handing them out to the application. We
> could easily zero all shared memory on allocation ourselves, though.

Well, the documentation for mmap (which we're currently using) on Linux says:
       MAP_ANONYMOUS
              The mapping is not backed by any file; its contents are
              initialized to zero.  The fd and offset arguments are
              ignored; however, some implementations require fd to be -1
              if MAP_ANONYMOUS (or MAP_ANON) is specified, and portable
              applications should ensure this.  The use of MAP_ANONYMOUS
              in conjunction with MAP_SHARED is only supported on Linux
              since kernel 2.4.

shmget says:
       When a new shared memory segment is created, its contents are
       initialized to zero values, and its associated data structure,
       shmid_ds (see shmctl(2)), is initialized as follows:

And shm_open says:
       A new shared memory object initially has zero length — the size
       of the object can be set using ftruncate(2).  The newly allocated
       bytes of a shared memory object are automatically initialized to 0.

The documentation on MacOS X isn't quite as explicit, but I'd still be
astonished if we found any other behavior.  TBH, I'd be kind of
surprised if this is the only place in our code base that relies on
the initial contents of shared memory being all-zeros.  If we really
care about that we probably ought to make --enable-cassert write
0xdeadbeef all over the whole shared-memory segment on startup, or
something like that, because otherwise it's only a matter of time
before someone will break it.  Personally I'd like to see some
evidence that the problem is more than strictly hypothetical before we
spend time on it, though.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: CompactCheckpointerRequestQueue versus pad bytes

From
Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> The documentation on MacOS X isn't quite as explicit, but I'd still be
> astonished if we found any other behavior.  TBH, I'd be kind of
> surprised if this is the only place in our code base that relies on
> the initial contents of shared memory being all-zeros.

Maybe so, but if we find any others, I'll be wanting to change them too.
It's bad practice and worse documentation for modules to be silently
assuming that anything has a value they didn't explicitly give it.

A related practice that probably costs us a lot more, in both code space
and time, is that most (all?) places that create Node objects explicitly
initialize every field of the Node struct, even though makeNode() has
a palloc0 underneath it and so setting fields to zero is redundant.
I believe that this is a good practice anyway, for documentation and
code greppability reasons.
        regards, tom lane


Re: CompactCheckpointerRequestQueue versus pad bytes

From
Robert Haas
Date:
On Mon, Jul 16, 2012 at 12:27 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> The documentation on MacOS X isn't quite as explicit, but I'd still be
>> astonished if we found any other behavior.  TBH, I'd be kind of
>> surprised if this is the only place in our code base that relies on
>> the initial contents of shared memory being all-zeros.
>
> Maybe so, but if we find any others, I'll be wanting to change them too.
> It's bad practice and worse documentation for modules to be silently
> assuming that anything has a value they didn't explicitly give it.
>
> A related practice that probably costs us a lot more, in both code space
> and time, is that most (all?) places that create Node objects explicitly
> initialize every field of the Node struct, even though makeNode() has
> a palloc0 underneath it and so setting fields to zero is redundant.
> I believe that this is a good practice anyway, for documentation and
> code greppability reasons.

I don't really agree; I find it nice and clean to assume that
functions that promise to return zero'd memory really do.  In my book,
the main reason for keeping things as they are is that it's probably
not costing enough to matter very much.  Which is true here, too, so
I'm not going to make a huge stink, but I still think it's a waste of
effort.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: CompactCheckpointerRequestQueue versus pad bytes

From
Tom Lane
Date:
I wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> On Mon, Jul 16, 2012 at 11:57 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>> So that brings us back to the question of why this code is supporting
>>> fsync requests for local relations in the first place.  Couldn't we have
>>> it ignore those, and then only ship RelFileNode to the checkpointer?

>> That's an awfully good point.  I think that was just sloppy coding on
>> my part (cf commit debcec7dc31a992703911a9953e299c8d730c778).  +1 for
>> changing it as you suggest.

> 2. Change the fsync forwarding code to ignore backend-local relations,
> and include only RelFileNode not RelFileNodeBackend in requests.

So I started to do this, and immediately hit a roadblock: although it's
clear that we can discard any attempt to fsync a backend-local relation,
it's not so clear that we don't need to queue UNLINK_RELATION_REQUEST
operations for local relations.

I think that this is in fact okay.  The reason for delaying unlink until
after the next checkpoint is that if we did not, and the relfilenode got
re-used for an unrelated relation, and then we crashed and had to replay
WAL, we would replay any WAL referencing the old relation into the
unrelated relation's storage, with potential bad consequences if you're
unlucky.  However, no WAL should ever be generated for a backend-local
relation, so there is nothing to guard against and hence no harm in
immediately unlinking backend-local rels when they are deleted.  So we
can adjust mdunlink to include SmgrIsTemp() among the reasons to unlink
immediately rather than doing the truncate-and-register_unlink dance.

If anybody sees a hole in this reasoning, speak now ...
        regards, tom lane


Re: CompactCheckpointerRequestQueue versus pad bytes

From
Robert Haas
Date:
On Mon, Jul 16, 2012 at 2:47 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> I wrote:
>> Robert Haas <robertmhaas@gmail.com> writes:
>>> On Mon, Jul 16, 2012 at 11:57 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>>> So that brings us back to the question of why this code is supporting
>>>> fsync requests for local relations in the first place.  Couldn't we have
>>>> it ignore those, and then only ship RelFileNode to the checkpointer?
>
>>> That's an awfully good point.  I think that was just sloppy coding on
>>> my part (cf commit debcec7dc31a992703911a9953e299c8d730c778).  +1 for
>>> changing it as you suggest.
>
>> 2. Change the fsync forwarding code to ignore backend-local relations,
>> and include only RelFileNode not RelFileNodeBackend in requests.
>
> So I started to do this, and immediately hit a roadblock: although it's
> clear that we can discard any attempt to fsync a backend-local relation,
> it's not so clear that we don't need to queue UNLINK_RELATION_REQUEST
> operations for local relations.
>
> I think that this is in fact okay.  The reason for delaying unlink until
> after the next checkpoint is that if we did not, and the relfilenode got
> re-used for an unrelated relation, and then we crashed and had to replay
> WAL, we would replay any WAL referencing the old relation into the
> unrelated relation's storage, with potential bad consequences if you're
> unlucky.  However, no WAL should ever be generated for a backend-local
> relation, so there is nothing to guard against and hence no harm in
> immediately unlinking backend-local rels when they are deleted.  So we
> can adjust mdunlink to include SmgrIsTemp() among the reasons to unlink
> immediately rather than doing the truncate-and-register_unlink dance.
>
> If anybody sees a hole in this reasoning, speak now ...

Hmm, yeah, I have a feeling this might be why I didn't do this when I
created RelFileNodeBackend.  But I think your reasoning is correct.
Sticking the above text in a comment might not be out of order,
however.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: CompactCheckpointerRequestQueue versus pad bytes

From
Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> On Mon, Jul 16, 2012 at 2:47 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> So I started to do this, and immediately hit a roadblock: although it's
>> clear that we can discard any attempt to fsync a backend-local relation,
>> it's not so clear that we don't need to queue UNLINK_RELATION_REQUEST
>> operations for local relations.
>>
>> I think that this is in fact okay.  The reason for delaying unlink until
>> after the next checkpoint is that if we did not, and the relfilenode got
>> re-used for an unrelated relation, and then we crashed and had to replay
>> WAL, we would replay any WAL referencing the old relation into the
>> unrelated relation's storage, with potential bad consequences if you're
>> unlucky.  However, no WAL should ever be generated for a backend-local
>> relation, so there is nothing to guard against and hence no harm in
>> immediately unlinking backend-local rels when they are deleted.  So we
>> can adjust mdunlink to include SmgrIsTemp() among the reasons to unlink
>> immediately rather than doing the truncate-and-register_unlink dance.
>>
>> If anybody sees a hole in this reasoning, speak now ...

> Hmm, yeah, I have a feeling this might be why I didn't do this when I
> created RelFileNodeBackend.  But I think your reasoning is correct.
> Sticking the above text in a comment might not be out of order,
> however.

Most of this info was already in the comment for mdunlink, so I just
added a bit there.

The attached patch covers everything discussed in this thread, except
for the buggy handling of stats, which I think should be fixed in a
separate patch since it's only relevant to 9.2+.

I had thought that we might get a performance boost here by saving fsync
queue traffic, but I see that md.c was already not calling
register_dirty_segment for temp rels, so there's no joy there.  We might
win a bit by not queuing deletion of temp rels, though.  There's also
some distributed savings by eliminating one field from the request queue
and hash table, but probably not enough to notice.  I think the main
reason to do this is just to tighten up the question of whether pad
bytes matter.

Haven't started on back-patching yet.  Some but not all of this will
need to go back as far as we back-patched the queue compaction logic.

            regards, tom lane

diff --git a/src/backend/postmaster/checkpointer.c b/src/backend/postmaster/checkpointer.c
index 417e3bb0d1b0c90632aafa5fbec351ab21f7e5a0..92fd4276cd1b3be81d1ac741f9f6ea09d241ea52 100644
*** a/src/backend/postmaster/checkpointer.c
--- b/src/backend/postmaster/checkpointer.c
***************
*** 105,111 ****
   */
  typedef struct
  {
!     RelFileNodeBackend rnode;
      ForkNumber    forknum;
      BlockNumber segno;            /* see md.c for special values */
      /* might add a real request-type field later; not needed yet */
--- 105,111 ----
   */
  typedef struct
  {
!     RelFileNode    rnode;
      ForkNumber    forknum;
      BlockNumber segno;            /* see md.c for special values */
      /* might add a real request-type field later; not needed yet */
*************** CheckpointerShmemSize(void)
*** 924,940 ****
  void
  CheckpointerShmemInit(void)
  {
      bool        found;

      CheckpointerShmem = (CheckpointerShmemStruct *)
          ShmemInitStruct("Checkpointer Data",
!                         CheckpointerShmemSize(),
                          &found);

      if (!found)
      {
!         /* First time through, so initialize */
!         MemSet(CheckpointerShmem, 0, sizeof(CheckpointerShmemStruct));
          SpinLockInit(&CheckpointerShmem->ckpt_lck);
          CheckpointerShmem->max_requests = NBuffers;
      }
--- 924,945 ----
  void
  CheckpointerShmemInit(void)
  {
+     Size        size = CheckpointerShmemSize();
      bool        found;

      CheckpointerShmem = (CheckpointerShmemStruct *)
          ShmemInitStruct("Checkpointer Data",
!                         size,
                          &found);

      if (!found)
      {
!         /*
!          * First time through, so initialize.  Note that we zero the whole
!          * requests array; this is so that CompactCheckpointerRequestQueue
!          * can assume that any pad bytes in the request structs are zeroes.
!          */
!         MemSet(CheckpointerShmem, 0, size);
          SpinLockInit(&CheckpointerShmem->ckpt_lck);
          CheckpointerShmem->max_requests = NBuffers;
      }
*************** RequestCheckpoint(int flags)
*** 1091,1101 ****
   *        Forward a file-fsync request from a backend to the checkpointer
   *
   * Whenever a backend is compelled to write directly to a relation
!  * (which should be seldom, if the checkpointer is getting its job done),
   * the backend calls this routine to pass over knowledge that the relation
   * is dirty and must be fsync'd before next checkpoint.  We also use this
   * opportunity to count such writes for statistical purposes.
   *
   * segno specifies which segment (not block!) of the relation needs to be
   * fsync'd.  (Since the valid range is much less than BlockNumber, we can
   * use high values for special flags; that's all internal to md.c, which
--- 1096,1110 ----
   *        Forward a file-fsync request from a backend to the checkpointer
   *
   * Whenever a backend is compelled to write directly to a relation
!  * (which should be seldom, if the background writer is getting its job done),
   * the backend calls this routine to pass over knowledge that the relation
   * is dirty and must be fsync'd before next checkpoint.  We also use this
   * opportunity to count such writes for statistical purposes.
   *
+  * This functionality is only supported for regular (not backend-local)
+  * relations, so the rnode argument is intentionally RelFileNode not
+  * RelFileNodeBackend.
+  *
   * segno specifies which segment (not block!) of the relation needs to be
   * fsync'd.  (Since the valid range is much less than BlockNumber, we can
   * use high values for special flags; that's all internal to md.c, which
*************** RequestCheckpoint(int flags)
*** 1112,1119 ****
   * let the backend know by returning false.
   */
  bool
! ForwardFsyncRequest(RelFileNodeBackend rnode, ForkNumber forknum,
!                     BlockNumber segno)
  {
      CheckpointerRequest *request;
      bool        too_full;
--- 1121,1127 ----
   * let the backend know by returning false.
   */
  bool
! ForwardFsyncRequest(RelFileNode rnode, ForkNumber forknum, BlockNumber segno)
  {
      CheckpointerRequest *request;
      bool        too_full;
*************** ForwardFsyncRequest(RelFileNodeBackend r
*** 1169,1174 ****
--- 1177,1183 ----
  /*
   * CompactCheckpointerRequestQueue
   *        Remove duplicates from the request queue to avoid backend fsyncs.
+  *        Returns "true" if any entries were removed.
   *
   * Although a full fsync request queue is not common, it can lead to severe
   * performance problems when it does happen.  So far, this situation has
*************** ForwardFsyncRequest(RelFileNodeBackend r
*** 1178,1184 ****
   * gets very expensive and can slow down the whole system.
   *
   * Trying to do this every time the queue is full could lose if there
!  * aren't any removable entries.  But should be vanishingly rare in
   * practice: there's one queue entry per shared buffer.
   */
  static bool
--- 1187,1193 ----
   * gets very expensive and can slow down the whole system.
   *
   * Trying to do this every time the queue is full could lose if there
!  * aren't any removable entries.  But that should be vanishingly rare in
   * practice: there's one queue entry per shared buffer.
   */
  static bool
*************** CompactCheckpointerRequestQueue(void)
*** 1200,1217 ****
      /* must hold CheckpointerCommLock in exclusive mode */
      Assert(LWLockHeldByMe(CheckpointerCommLock));

      /* Initialize temporary hash table */
      MemSet(&ctl, 0, sizeof(ctl));
      ctl.keysize = sizeof(CheckpointerRequest);
      ctl.entrysize = sizeof(struct CheckpointerSlotMapping);
      ctl.hash = tag_hash;
      htab = hash_create("CompactCheckpointerRequestQueue",
                         CheckpointerShmem->num_requests,
                         &ctl,
!                        HASH_ELEM | HASH_FUNCTION);
!
!     /* Initialize skip_slot array */
!     skip_slot = palloc0(sizeof(bool) * CheckpointerShmem->num_requests);

      /*
       * The basic idea here is that a request can be skipped if it's followed
--- 1209,1228 ----
      /* must hold CheckpointerCommLock in exclusive mode */
      Assert(LWLockHeldByMe(CheckpointerCommLock));

+     /* Initialize skip_slot array */
+     skip_slot = palloc0(sizeof(bool) * CheckpointerShmem->num_requests);
+
      /* Initialize temporary hash table */
      MemSet(&ctl, 0, sizeof(ctl));
      ctl.keysize = sizeof(CheckpointerRequest);
      ctl.entrysize = sizeof(struct CheckpointerSlotMapping);
      ctl.hash = tag_hash;
+     ctl.hcxt = CurrentMemoryContext;
+
      htab = hash_create("CompactCheckpointerRequestQueue",
                         CheckpointerShmem->num_requests,
                         &ctl,
!                        HASH_ELEM | HASH_FUNCTION | HASH_CONTEXT);

      /*
       * The basic idea here is that a request can be skipped if it's followed
*************** CompactCheckpointerRequestQueue(void)
*** 1226,1244 ****
       * anyhow), but it's not clear that the extra complexity would buy us
       * anything.
       */
!     for (n = 0; n < CheckpointerShmem->num_requests; ++n)
      {
          CheckpointerRequest *request;
          struct CheckpointerSlotMapping *slotmap;
          bool        found;

          request = &CheckpointerShmem->requests[n];
          slotmap = hash_search(htab, request, HASH_ENTER, &found);
          if (found)
          {
              skip_slot[slotmap->slot] = true;
!             ++num_skipped;
          }
          slotmap->slot = n;
      }

--- 1237,1264 ----
       * anyhow), but it's not clear that the extra complexity would buy us
       * anything.
       */
!     for (n = 0; n < CheckpointerShmem->num_requests; n++)
      {
          CheckpointerRequest *request;
          struct CheckpointerSlotMapping *slotmap;
          bool        found;

+         /*
+          * We use the request struct directly as a hashtable key.  This
+          * assumes that any padding bytes in the structs are consistently the
+          * same, which should be okay because we zeroed them in
+          * CheckpointerShmemInit.  Note also that RelFileNode had better
+          * contain no pad bytes.
+          */
          request = &CheckpointerShmem->requests[n];
          slotmap = hash_search(htab, request, HASH_ENTER, &found);
          if (found)
          {
+             /* Duplicate, so mark the previous occurrence as skippable */
              skip_slot[slotmap->slot] = true;
!             num_skipped++;
          }
+         /* Remember slot containing latest occurrence of this request value */
          slotmap->slot = n;
      }

*************** CompactCheckpointerRequestQueue(void)
*** 1253,1259 ****
      }

      /* We found some duplicates; remove them. */
!     for (n = 0, preserve_count = 0; n < CheckpointerShmem->num_requests; ++n)
      {
          if (skip_slot[n])
              continue;
--- 1273,1280 ----
      }

      /* We found some duplicates; remove them. */
!     preserve_count = 0;
!     for (n = 0; n < CheckpointerShmem->num_requests; n++)
      {
          if (skip_slot[n])
              continue;
diff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c
index 78145472e169decd91557181367b4cc68af84578..a3bf9a4d44e9d880e42ccf53e0fcc02d2e7a04fa 100644
*** a/src/backend/storage/buffer/bufmgr.c
--- b/src/backend/storage/buffer/bufmgr.c
*************** DropRelFileNodeBuffers(RelFileNodeBacken
*** 2049,2055 ****
      int            i;

      /* If it's a local relation, it's localbuf.c's problem. */
!     if (rnode.backend != InvalidBackendId)
      {
          if (rnode.backend == MyBackendId)
              DropRelFileNodeLocalBuffers(rnode.node, forkNum, firstDelBlock);
--- 2049,2055 ----
      int            i;

      /* If it's a local relation, it's localbuf.c's problem. */
!     if (RelFileNodeBackendIsTemp(rnode))
      {
          if (rnode.backend == MyBackendId)
              DropRelFileNodeLocalBuffers(rnode.node, forkNum, firstDelBlock);
*************** DropRelFileNodeAllBuffers(RelFileNodeBac
*** 2103,2109 ****
      int            i;

      /* If it's a local relation, it's localbuf.c's problem. */
!     if (rnode.backend != InvalidBackendId)
      {
          if (rnode.backend == MyBackendId)
              DropRelFileNodeAllLocalBuffers(rnode.node);
--- 2103,2109 ----
      int            i;

      /* If it's a local relation, it's localbuf.c's problem. */
!     if (RelFileNodeBackendIsTemp(rnode))
      {
          if (rnode.backend == MyBackendId)
              DropRelFileNodeAllLocalBuffers(rnode.node);
diff --git a/src/backend/storage/smgr/md.c b/src/backend/storage/smgr/md.c
index e5dec9d2a329b1a36e82c4ed62f2ba6be48217c6..10afe949786dac3908fce9c0c5e61c8cb4ba9fbd 100644
*** a/src/backend/storage/smgr/md.c
--- b/src/backend/storage/smgr/md.c
***************
*** 38,44 ****
  /*
   * Special values for the segno arg to RememberFsyncRequest.
   *
!  * Note that CompactcheckpointerRequestQueue assumes that it's OK to remove an
   * fsync request from the queue if an identical, subsequent request is found.
   * See comments there before making changes here.
   */
--- 38,44 ----
  /*
   * Special values for the segno arg to RememberFsyncRequest.
   *
!  * Note that CompactCheckpointerRequestQueue assumes that it's OK to remove an
   * fsync request from the queue if an identical, subsequent request is found.
   * See comments there before making changes here.
   */
*************** static MemoryContext MdCxt;        /* context
*** 122,134 ****
   * be deleted after the next checkpoint, but we use a linked list instead of
   * a hash table, because we don't expect there to be any duplicate requests.
   *
   * (Regular backends do not track pending operations locally, but forward
   * them to the checkpointer.)
   */
  typedef struct
  {
!     RelFileNodeBackend rnode;    /* the targeted relation */
!     ForkNumber    forknum;
      BlockNumber segno;            /* which segment */
  } PendingOperationTag;

--- 122,138 ----
   * be deleted after the next checkpoint, but we use a linked list instead of
   * a hash table, because we don't expect there to be any duplicate requests.
   *
+  * These mechanisms are only used for non-temp relations; we never fsync
+  * temp rels, nor do we need to postpone their deletion (see comments in
+  * mdunlink).
+  *
   * (Regular backends do not track pending operations locally, but forward
   * them to the checkpointer.)
   */
  typedef struct
  {
!     RelFileNode    rnode;            /* the targeted relation */
!     ForkNumber    forknum;        /* which fork */
      BlockNumber segno;            /* which segment */
  } PendingOperationTag;

*************** typedef struct
*** 143,149 ****

  typedef struct
  {
!     RelFileNodeBackend rnode;    /* the dead relation to delete */
      CycleCtr    cycle_ctr;        /* mdckpt_cycle_ctr when request was made */
  } PendingUnlinkEntry;

--- 147,153 ----

  typedef struct
  {
!     RelFileNode    rnode;            /* the dead relation to delete */
      CycleCtr    cycle_ctr;        /* mdckpt_cycle_ctr when request was made */
  } PendingUnlinkEntry;

*************** mdcreate(SMgrRelation reln, ForkNumber f
*** 302,312 ****
  /*
   *    mdunlink() -- Unlink a relation.
   *
!  * Note that we're passed a RelFileNode --- by the time this is called,
   * there won't be an SMgrRelation hashtable entry anymore.
   *
!  * Actually, we don't unlink the first segment file of the relation, but
!  * just truncate it to zero length, and record a request to unlink it after
   * the next checkpoint.  Additional segments can be unlinked immediately,
   * however.  Leaving the empty file in place prevents that relfilenode
   * number from being reused.  The scenario this protects us from is:
--- 306,316 ----
  /*
   *    mdunlink() -- Unlink a relation.
   *
!  * Note that we're passed a RelFileNodeBackend --- by the time this is called,
   * there won't be an SMgrRelation hashtable entry anymore.
   *
!  * For regular relations, we don't unlink the first segment file of the rel,
!  * but just truncate it to zero length, and record a request to unlink it after
   * the next checkpoint.  Additional segments can be unlinked immediately,
   * however.  Leaving the empty file in place prevents that relfilenode
   * number from being reused.  The scenario this protects us from is:
*************** mdcreate(SMgrRelation reln, ForkNumber f
*** 323,328 ****
--- 327,336 ----
   * number until it's safe, because relfilenode assignment skips over any
   * existing file.
   *
+  * We do not need to go through this dance for temp relations, though, because
+  * we never make WAL entries for temp rels, and so a temp rel poses no threat
+  * to the health of a regular rel that has taken over its relfilenode number.
+  *
   * All the above applies only to the relation's main fork; other forks can
   * just be removed immediately, since they are not needed to prevent the
   * relfilenode number from being recycled.    Also, we do not carefully
*************** mdunlink(RelFileNodeBackend rnode, ForkN
*** 345,360 ****

      /*
       * We have to clean out any pending fsync requests for the doomed
!      * relation, else the next mdsync() will fail.
       */
!     ForgetRelationFsyncRequests(rnode, forkNum);

      path = relpath(rnode, forkNum);

      /*
       * Delete or truncate the first segment.
       */
!     if (isRedo || forkNum != MAIN_FORKNUM)
      {
          ret = unlink(path);
          if (ret < 0 && errno != ENOENT)
--- 353,370 ----

      /*
       * We have to clean out any pending fsync requests for the doomed
!      * relation, else the next mdsync() will fail.  There can't be any such
!      * requests for a temp relation, though.
       */
!     if (!RelFileNodeBackendIsTemp(rnode))
!         ForgetRelationFsyncRequests(rnode.node, forkNum);

      path = relpath(rnode, forkNum);

      /*
       * Delete or truncate the first segment.
       */
!     if (isRedo || forkNum != MAIN_FORKNUM || RelFileNodeBackendIsTemp(rnode))
      {
          ret = unlink(path);
          if (ret < 0 && errno != ENOENT)
*************** mdsync(void)
*** 1081,1088 ****
                   * dirtied through this same smgr relation, and so we can save
                   * a file open/close cycle.
                   */
!                 reln = smgropen(entry->tag.rnode.node,
!                                 entry->tag.rnode.backend);

                  /*
                   * It is possible that the relation has been dropped or
--- 1091,1097 ----
                   * dirtied through this same smgr relation, and so we can save
                   * a file open/close cycle.
                   */
!                 reln = smgropen(entry->tag.rnode, InvalidBackendId);

                  /*
                   * It is possible that the relation has been dropped or
*************** mdpostckpt(void)
*** 1228,1234 ****
          Assert((CycleCtr) (entry->cycle_ctr + 1) == mdckpt_cycle_ctr);

          /* Unlink the file */
!         path = relpath(entry->rnode, MAIN_FORKNUM);
          if (unlink(path) < 0)
          {
              /*
--- 1237,1243 ----
          Assert((CycleCtr) (entry->cycle_ctr + 1) == mdckpt_cycle_ctr);

          /* Unlink the file */
!         path = relpathperm(entry->rnode, MAIN_FORKNUM);
          if (unlink(path) < 0)
          {
              /*
*************** mdpostckpt(void)
*** 1255,1275 ****
   *
   * If there is a local pending-ops table, just make an entry in it for
   * mdsync to process later.  Otherwise, try to pass off the fsync request
!  * to the background writer process.  If that fails, just do the fsync
!  * locally before returning (we expect this will not happen often enough
   * to be a performance problem).
   */
  static void
  register_dirty_segment(SMgrRelation reln, ForkNumber forknum, MdfdVec *seg)
  {
      if (pendingOpsTable)
      {
          /* push it into local pending-ops table */
!         RememberFsyncRequest(reln->smgr_rnode, forknum, seg->mdfd_segno);
      }
      else
      {
!         if (ForwardFsyncRequest(reln->smgr_rnode, forknum, seg->mdfd_segno))
              return;                /* passed it off successfully */

          ereport(DEBUG1,
--- 1264,1287 ----
   *
   * If there is a local pending-ops table, just make an entry in it for
   * mdsync to process later.  Otherwise, try to pass off the fsync request
!  * to the checkpointer process.  If that fails, just do the fsync
!  * locally before returning (we hope this will not happen often enough
   * to be a performance problem).
   */
  static void
  register_dirty_segment(SMgrRelation reln, ForkNumber forknum, MdfdVec *seg)
  {
+     /* Temp relations should never be fsync'd */
+     Assert(!SmgrIsTemp(reln));
+
      if (pendingOpsTable)
      {
          /* push it into local pending-ops table */
!         RememberFsyncRequest(reln->smgr_rnode.node, forknum, seg->mdfd_segno);
      }
      else
      {
!         if (ForwardFsyncRequest(reln->smgr_rnode.node, forknum, seg->mdfd_segno))
              return;                /* passed it off successfully */

          ereport(DEBUG1,
*************** register_dirty_segment(SMgrRelation reln
*** 1286,1301 ****
  /*
   * register_unlink() -- Schedule a file to be deleted after next checkpoint
   *
   * As with register_dirty_segment, this could involve either a local or
   * a remote pending-ops table.
   */
  static void
  register_unlink(RelFileNodeBackend rnode)
  {
      if (pendingOpsTable)
      {
          /* push it into local pending-ops table */
!         RememberFsyncRequest(rnode, MAIN_FORKNUM, UNLINK_RELATION_REQUEST);
      }
      else
      {
--- 1298,1320 ----
  /*
   * register_unlink() -- Schedule a file to be deleted after next checkpoint
   *
+  * We don't bother passing in the fork number, because this is only used
+  * with main forks.
+  *
   * As with register_dirty_segment, this could involve either a local or
   * a remote pending-ops table.
   */
  static void
  register_unlink(RelFileNodeBackend rnode)
  {
+     /* Should never be used with temp relations */
+     Assert(!RelFileNodeBackendIsTemp(rnode));
+
      if (pendingOpsTable)
      {
          /* push it into local pending-ops table */
!         RememberFsyncRequest(rnode.node, MAIN_FORKNUM,
!                              UNLINK_RELATION_REQUEST);
      }
      else
      {
*************** register_unlink(RelFileNodeBackend rnode
*** 1307,1313 ****
           * XXX should we just leave the file orphaned instead?
           */
          Assert(IsUnderPostmaster);
!         while (!ForwardFsyncRequest(rnode, MAIN_FORKNUM,
                                      UNLINK_RELATION_REQUEST))
              pg_usleep(10000L);    /* 10 msec seems a good number */
      }
--- 1326,1332 ----
           * XXX should we just leave the file orphaned instead?
           */
          Assert(IsUnderPostmaster);
!         while (!ForwardFsyncRequest(rnode.node, MAIN_FORKNUM,
                                      UNLINK_RELATION_REQUEST))
              pg_usleep(10000L);    /* 10 msec seems a good number */
      }
*************** register_unlink(RelFileNodeBackend rnode
*** 1333,1340 ****
   * structure for them.)
   */
  void
! RememberFsyncRequest(RelFileNodeBackend rnode, ForkNumber forknum,
!                      BlockNumber segno)
  {
      Assert(pendingOpsTable);

--- 1352,1358 ----
   * structure for them.)
   */
  void
! RememberFsyncRequest(RelFileNode rnode, ForkNumber forknum, BlockNumber segno)
  {
      Assert(pendingOpsTable);

*************** RememberFsyncRequest(RelFileNodeBackend
*** 1347,1353 ****
          hash_seq_init(&hstat, pendingOpsTable);
          while ((entry = (PendingOperationEntry *) hash_seq_search(&hstat)) != NULL)
          {
!             if (RelFileNodeBackendEquals(entry->tag.rnode, rnode) &&
                  entry->tag.forknum == forknum)
              {
                  /* Okay, cancel this entry */
--- 1365,1371 ----
          hash_seq_init(&hstat, pendingOpsTable);
          while ((entry = (PendingOperationEntry *) hash_seq_search(&hstat)) != NULL)
          {
!             if (RelFileNodeEquals(entry->tag.rnode, rnode) &&
                  entry->tag.forknum == forknum)
              {
                  /* Okay, cancel this entry */
*************** RememberFsyncRequest(RelFileNodeBackend
*** 1368,1374 ****
          hash_seq_init(&hstat, pendingOpsTable);
          while ((entry = (PendingOperationEntry *) hash_seq_search(&hstat)) != NULL)
          {
!             if (entry->tag.rnode.node.dbNode == rnode.node.dbNode)
              {
                  /* Okay, cancel this entry */
                  entry->canceled = true;
--- 1386,1392 ----
          hash_seq_init(&hstat, pendingOpsTable);
          while ((entry = (PendingOperationEntry *) hash_seq_search(&hstat)) != NULL)
          {
!             if (entry->tag.rnode.dbNode == rnode.dbNode)
              {
                  /* Okay, cancel this entry */
                  entry->canceled = true;
*************** RememberFsyncRequest(RelFileNodeBackend
*** 1382,1388 ****
              PendingUnlinkEntry *entry = (PendingUnlinkEntry *) lfirst(cell);

              next = lnext(cell);
!             if (entry->rnode.node.dbNode == rnode.node.dbNode)
              {
                  pendingUnlinks = list_delete_cell(pendingUnlinks, cell, prev);
                  pfree(entry);
--- 1400,1406 ----
              PendingUnlinkEntry *entry = (PendingUnlinkEntry *) lfirst(cell);

              next = lnext(cell);
!             if (entry->rnode.dbNode == rnode.dbNode)
              {
                  pendingUnlinks = list_delete_cell(pendingUnlinks, cell, prev);
                  pfree(entry);
*************** RememberFsyncRequest(RelFileNodeBackend
*** 1446,1455 ****
  }

  /*
!  * ForgetRelationFsyncRequests -- forget any fsyncs for a rel
   */
  void
! ForgetRelationFsyncRequests(RelFileNodeBackend rnode, ForkNumber forknum)
  {
      if (pendingOpsTable)
      {
--- 1464,1473 ----
  }

  /*
!  * ForgetRelationFsyncRequests -- forget any fsyncs for a relation fork
   */
  void
! ForgetRelationFsyncRequests(RelFileNode rnode, ForkNumber forknum)
  {
      if (pendingOpsTable)
      {
*************** ForgetRelationFsyncRequests(RelFileNodeB
*** 1484,1495 ****
  void
  ForgetDatabaseFsyncRequests(Oid dbid)
  {
!     RelFileNodeBackend rnode;

!     rnode.node.dbNode = dbid;
!     rnode.node.spcNode = 0;
!     rnode.node.relNode = 0;
!     rnode.backend = InvalidBackendId;

      if (pendingOpsTable)
      {
--- 1502,1512 ----
  void
  ForgetDatabaseFsyncRequests(Oid dbid)
  {
!     RelFileNode rnode;

!     rnode.dbNode = dbid;
!     rnode.spcNode = 0;
!     rnode.relNode = 0;

      if (pendingOpsTable)
      {
diff --git a/src/include/postmaster/bgwriter.h b/src/include/postmaster/bgwriter.h
index 996065c2edff17f6fb1c63ef8f2d3394f5c72ab2..2e97e6aea5551f338939e680cb58e903fa409dc4 100644
*** a/src/include/postmaster/bgwriter.h
--- b/src/include/postmaster/bgwriter.h
*************** extern void CheckpointerMain(void) __att
*** 31,37 ****
  extern void RequestCheckpoint(int flags);
  extern void CheckpointWriteDelay(int flags, double progress);

! extern bool ForwardFsyncRequest(RelFileNodeBackend rnode, ForkNumber forknum,
                      BlockNumber segno);
  extern void AbsorbFsyncRequests(void);

--- 31,37 ----
  extern void RequestCheckpoint(int flags);
  extern void CheckpointWriteDelay(int flags, double progress);

! extern bool ForwardFsyncRequest(RelFileNode rnode, ForkNumber forknum,
                      BlockNumber segno);
  extern void AbsorbFsyncRequests(void);

diff --git a/src/include/storage/relfilenode.h b/src/include/storage/relfilenode.h
index 60c38295375d596609cc3526b7af717a5347114d..5ec1d8f71771204cb354e9942e18ccf44bde5281 100644
*** a/src/include/storage/relfilenode.h
--- b/src/include/storage/relfilenode.h
*************** typedef enum ForkNumber
*** 69,74 ****
--- 69,78 ----
   * Note: in pg_class, relfilenode can be zero to denote that the relation
   * is a "mapped" relation, whose current true filenode number is available
   * from relmapper.c.  Again, this case is NOT allowed in RelFileNodes.
+  *
+  * Note: various places use RelFileNode in hashtable keys.  Therefore,
+  * there *must not* be any unused padding bytes in this struct.  That
+  * should be safe as long as all the fields are of type Oid.
   */
  typedef struct RelFileNode
  {
*************** typedef struct RelFileNode
*** 79,85 ****

  /*
   * Augmenting a relfilenode with the backend ID provides all the information
!  * we need to locate the physical storage.
   */
  typedef struct RelFileNodeBackend
  {
--- 83,93 ----

  /*
   * Augmenting a relfilenode with the backend ID provides all the information
!  * we need to locate the physical storage.  The backend ID is InvalidBackendId
!  * for regular relations (those accessible to more than one backend), or the
!  * owning backend's ID for backend-local relations.  Backend-local relations
!  * are always transient and removed in case of a database crash; they are
!  * never WAL-logged or fsync'd.
   */
  typedef struct RelFileNodeBackend
  {
*************** typedef struct RelFileNodeBackend
*** 87,97 ****
      BackendId    backend;
  } RelFileNodeBackend;

  /*
   * Note: RelFileNodeEquals and RelFileNodeBackendEquals compare relNode first
   * since that is most likely to be different in two unequal RelFileNodes.  It
   * is probably redundant to compare spcNode if the other fields are found equal,
!  * but do it anyway to be sure.
   */
  #define RelFileNodeEquals(node1, node2) \
      ((node1).relNode == (node2).relNode && \
--- 95,109 ----
      BackendId    backend;
  } RelFileNodeBackend;

+ #define RelFileNodeBackendIsTemp(rnode) \
+     ((rnode).backend != InvalidBackendId)
+
  /*
   * Note: RelFileNodeEquals and RelFileNodeBackendEquals compare relNode first
   * since that is most likely to be different in two unequal RelFileNodes.  It
   * is probably redundant to compare spcNode if the other fields are found equal,
!  * but do it anyway to be sure.  Likewise for checking the backend ID in
!  * RelFileNodeBackendEquals.
   */
  #define RelFileNodeEquals(node1, node2) \
      ((node1).relNode == (node2).relNode && \
diff --git a/src/include/storage/smgr.h b/src/include/storage/smgr.h
index f8fc2b2d6e82857ed2483dc0a3444e470ffcd43a..3560d539076da83a3f2897f3109fcabc4b72d5d0 100644
*** a/src/include/storage/smgr.h
--- b/src/include/storage/smgr.h
*************** typedef struct SMgrRelationData
*** 69,75 ****
  typedef SMgrRelationData *SMgrRelation;

  #define SmgrIsTemp(smgr) \
!     ((smgr)->smgr_rnode.backend != InvalidBackendId)

  extern void smgrinit(void);
  extern SMgrRelation smgropen(RelFileNode rnode, BackendId backend);
--- 69,75 ----
  typedef SMgrRelationData *SMgrRelation;

  #define SmgrIsTemp(smgr) \
!     RelFileNodeBackendIsTemp((smgr)->smgr_rnode)

  extern void smgrinit(void);
  extern SMgrRelation smgropen(RelFileNode rnode, BackendId backend);
*************** extern void mdsync(void);
*** 124,133 ****
  extern void mdpostckpt(void);

  extern void SetForwardFsyncRequests(void);
! extern void RememberFsyncRequest(RelFileNodeBackend rnode, ForkNumber forknum,
                       BlockNumber segno);
! extern void ForgetRelationFsyncRequests(RelFileNodeBackend rnode,
!                             ForkNumber forknum);
  extern void ForgetDatabaseFsyncRequests(Oid dbid);

  /* smgrtype.c */
--- 124,132 ----
  extern void mdpostckpt(void);

  extern void SetForwardFsyncRequests(void);
! extern void RememberFsyncRequest(RelFileNode rnode, ForkNumber forknum,
                       BlockNumber segno);
! extern void ForgetRelationFsyncRequests(RelFileNode rnode, ForkNumber forknum);
  extern void ForgetDatabaseFsyncRequests(Oid dbid);

  /* smgrtype.c */

Re: CompactCheckpointerRequestQueue versus pad bytes

From
Robert Haas
Date
On Mon, Jul 16, 2012 at 7:17 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> The attached patch covers everything discussed in this thread, except
> for the buggy handling of stats, which I think should be fixed in a
> separate patch since it's only relevant to 9.2+.

With respect to this chunk:

+  * We do not need to go through this dance for temp relations, though, because
+  * we never make WAL entries for temp rels, and so a temp rel poses no threat
+  * to the health of a regular rel that has taken over its relfilenode number.

...I would say that a clearer way to put this is that temporary
relations use a different file naming convention than permanent
relations and therefore there can never be any confusion between the
two.

Other than that, looks fine to me.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: CompactCheckpointerRequestQueue versus pad bytes

From
Tom Lane
Date
Robert Haas <robertmhaas@gmail.com> writes:
> With respect to this chunk:

> +  * We do not need to go through this dance for temp relations, though, because
> +  * we never make WAL entries for temp rels, and so a temp rel poses no threat
> +  * to the health of a regular rel that has taken over its relfilenode number.

> ...I would say that a clearer way to put this is that temporary
> relations use a different file naming convention than permanent
> relations and therefore there can never be any confusion between the
> two.

Yeah, that's an entirely independent reason why there's probably no
issue in recent releases.  The rationale as stated is back-patchable
to earlier releases, though.

BTW, I wonder whether the code that checks for relfilenode conflict
when selecting a pg_class or relfilenode OID tries both file naming
conventions?  If not, should we make it do so?
        regards, tom lane


Re: CompactCheckpointerRequestQueue versus pad bytes

From
Robert Haas
Date
On Mon, Jul 16, 2012 at 9:58 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> BTW, I wonder whether the code that checks for relfilenode conflict
> when selecting a pg_class or relfilenode OID tries both file naming
> conventions?  If not, should we make it do so?

I don't believe it does, nor do I see what we would gain by making it do so.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: CompactCheckpointerRequestQueue versus pad bytes

From
Tom Lane
Date
Robert Haas <robertmhaas@gmail.com> writes:
> On Mon, Jul 16, 2012 at 9:58 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> BTW, I wonder whether the code that checks for relfilenode conflict
>> when selecting a pg_class or relfilenode OID tries both file naming
>> conventions?  If not, should we make it do so?

> I don't believe it does, nor do I see what we would gain by making it do so.

What we would gain is ensuring that we aren't using the same relfilenode
for both a regular table and a temp table.  Do you really want to assume
that such a conflict is 100% safe?  It sounds pretty scary to me, and
even if we were sure the backend would never get confused, what about
client-side code that's looking at relfilenode?
        regards, tom lane


Re: CompactCheckpointerRequestQueue versus pad bytes

From
Tom Lane
Date
I wrote:
> I had thought that we might get a performance boost here by saving fsync
> queue traffic, but I see that md.c was already not calling
> register_dirty_segment for temp rels, so there's no joy there.

Actually, wait a second.  We were smart enough to not send fsync
requests in the first place for temp rels.  But we were not smart enough
to not call ForgetRelationFsyncRequests when deleting a temp rel,
which made for an entirely useless scan through the pending-fsyncs
table.  So there could be a win there, on top of not forwarding the actual
unlink operation.
        regards, tom lane


Re: CompactCheckpointerRequestQueue versus pad bytes

From
Robert Haas
Date
On Tue, Jul 17, 2012 at 1:26 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> On Mon, Jul 16, 2012 at 9:58 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>> BTW, I wonder whether the code that checks for relfilenode conflict
>>> when selecting a pg_class or relfilenode OID tries both file naming
>>> conventions?  If not, should we make it do so?
>
>> I don't believe it does, nor do I see what we would gain by making it do so.
>
> What we would gain is ensuring that we aren't using the same relfilenode
> for both a regular table and a temp table.  Do you really want to assume
> that such a conflict is 100% safe?  It sounds pretty scary to me, and
> even if we were sure the backend would never get confused, what about
> client-side code that's looking at relfilenode?

Well, when I was working on that patch, I spent a lot of time trying
to ensure that it was in fact safe.  So I hope it is.  Also, note that
that could perfectly well happen anyway in any prior release if the
relations happened to live in different tablespaces.  Anyone assuming
that <dboid,relfilenode> is unique is kidding themselves, because it
is not guaranteed to be and has never been guaranteed to be.  Yes,
there could be client code out there that assumes
<dboid,tsoid,relfilenode> is unique and such code will need adjustment
for 9.1+.  But I bet there isn't very much.  The thing that broke a
lot of client code in that commit was the replacement of relistemp
with relpersistence; we already decided we didn't care about that (and
it's too late to change it now anyway) so I can't really get excited
about this.  I think that code assuming that anything other than a
RelFileNodeBackend is sufficient to uniquely identify a relation is
just buggy, and if there is any, we should fix it.  All remaining uses
of RelFileNode rather than RelFileNodeBackend should be cases where we
know that the backend ID has got to be InvalidBackendId.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company