Thread: Re: [COMMITTERS] pgsql: Move each SLRU's lwlocks to a separate tranche.

Re: [COMMITTERS] pgsql: Move each SLRU's lwlocks to a separate tranche.

From: Andres Freund
Date:
On 2015-11-12 19:59:54 +0000, Robert Haas wrote:
> Move each SLRU's lwlocks to a separate tranche.
>
> This makes it significantly easier to identify these lwlocks in
> LWLOCK_STATS or Trace_lwlocks output.  It's also arguably better
> from a modularity standpoint, since lwlock.c no longer needs to
> know anything about the LWLock needs of the higher-level SLRU
> facility.
>
> Ildus Kurbangaliev, reviewed by Álvaro Herrera and by me.

Before this commit the lwlocks were cacheline aligned, but that's not
the case anymore afterwards; afaics. I think that should be fixed? I
guess it'd be good to avoid duplicating the code for aligning, so maybe
we ought to add a ShmemAllocAligned or something?
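
For illustration only, such a helper might look roughly like this (a
hypothetical sketch; ShmemAllocAligned does not exist in the tree, it is just
the name suggested above, and over-allocate-and-round-up is one plausible way
to implement it):

/*
 * Hypothetical sketch of the suggested helper -- not actual PostgreSQL code.
 * Over-allocate by (alignment - 1) bytes and round the returned pointer up,
 * so callers don't each have to duplicate the alignment arithmetic.
 */
static void *
ShmemAllocAligned(Size size, Size alignment)
{
	char	   *ptr = ShmemAlloc(size + alignment - 1);

	if (ptr == NULL)
		return NULL;
	return (void *) (((uintptr_t) ptr + alignment - 1) &
					 ~((uintptr_t) (alignment - 1)));
}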

Greetings,

Andres Freund


Re: [COMMITTERS] pgsql: Move each SLRU's lwlocks to a separate tranche.

From: Robert Haas
Date:
On Fri, Mar 25, 2016 at 3:05 AM, Andres Freund <andres@anarazel.de> wrote:
> On 2015-11-12 19:59:54 +0000, Robert Haas wrote:
>> Move each SLRU's lwlocks to a separate tranche.
>>
>> This makes it significantly easier to identify these lwlocks in
>> LWLOCK_STATS or Trace_lwlocks output.  It's also arguably better
>> from a modularity standpoint, since lwlock.c no longer needs to
>> know anything about the LWLock needs of the higher-level SLRU
>> facility.
>>
>> Ildus Kurbangaliev, reviewed by Álvaro Herrera and by me.
>
> Before this commit the lwlocks were cacheline aligned, but that's not
> the case anymore afterwards; afaics. I think that should be fixed? I
> guess it'd be good to avoid duplicating the code for aligning, so maybe
> we ought to add a ShmemAllocAligned or something?

Does it actually matter?  I wouldn't have thought the I/O locks had
enough traffic for it to make any difference.

But in any case I think the right solution is probably this:

--- a/src/backend/storage/ipc/shmem.c
+++ b/src/backend/storage/ipc/shmem.c
@@ -173,7 +173,7 @@ ShmemAlloc(Size size)
        /*
         * ensure all space is adequately aligned.
         */
-       size = MAXALIGN(size);
+       size = CACHELINEALIGN(size);

        Assert(ShmemSegHdr != NULL);

It's stupid that we keep spending time and energy figuring out which
shared memory data structures require alignment and which ones don't.
Let's just align them *all* and be done with it.  The memory cost
shouldn't be more than a few kB.
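
For reference, both macros in that one-liner just round a length up to a
power-of-two boundary, roughly as below (simplified from src/include/c.h;
MAXIMUM_ALIGNOF is typically 8, and PG_CACHE_LINE_SIZE is hard-wired to 128
in pg_config_manual.h):

/* Simplified rounding macros; the real definitions live in c.h. */
#define TYPEALIGN(ALIGNVAL, LEN) \
	(((uintptr_t) (LEN) + ((ALIGNVAL) - 1)) & ~((uintptr_t) ((ALIGNVAL) - 1)))

#define MAXALIGN(LEN)			TYPEALIGN(MAXIMUM_ALIGNOF, (LEN))	/* usually 8 */
#define CACHELINEALIGN(LEN)		TYPEALIGN(PG_CACHE_LINE_SIZE, (LEN))	/* 128 */

/* e.g. MAXALIGN(48) == 48, but CACHELINEALIGN(48) == 128 */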

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: [COMMITTERS] pgsql: Move each SLRU's lwlocks to a separate tranche.

From: Andres Freund
Date:

On March 25, 2016 1:04:13 PM GMT+01:00, Robert Haas <robertmhaas@gmail.com> wrote:
>On Fri, Mar 25, 2016 at 3:05 AM, Andres Freund <andres@anarazel.de>
>wrote:
>> On 2015-11-12 19:59:54 +0000, Robert Haas wrote:
>>> Move each SLRU's lwlocks to a separate tranche.
>>>
>>> This makes it significantly easier to identify these lwlocks in
>>> LWLOCK_STATS or Trace_lwlocks output.  It's also arguably better
>>> from a modularity standpoint, since lwlock.c no longer needs to
>>> know anything about the LWLock needs of the higher-level SLRU
>>> facility.
>>>
>>> Ildus Kurbangaliev, reviewed by Álvaro Herrera and by me.
>>
>> Before this commit the lwlocks were cacheline aligned, but that's not
>> the case anymore afterwards; afaics. I think that should be fixed? I
>> guess it'd be good to avoid duplicating the code for aligning, so
>maybe
>> we ought to add a ShmemAllocAligned or something?
>
>Does it actually matter?  I wouldn't have thought the I/O locks had
>enough traffic for it to make any difference.
>
>But in any case I think the right solution is probably this:
>
>--- a/src/backend/storage/ipc/shmem.c
>+++ b/src/backend/storage/ipc/shmem.c
>@@ -173,7 +173,7 @@ ShmemAlloc(Size size)
>        /*
>         * ensure all space is adequately aligned.
>         */
>-       size = MAXALIGN(size);
>+       size = CACHELINEALIGN(size);
>
>        Assert(ShmemSegHdr != NULL);
>
>It's stupid that we keep spending time and energy figuring out which
>shared memory data structures require alignment and which ones don't.
>Let's just align them *all* and be done with it.  The memory cost
>shouldn't be more than a few kB.

Last time I proposed that it got shut down. I agree it'd be a good idea, it's really hard to find alignment issues.

--
Sent from my Android device with K-9 Mail. Please excuse my brevity.


Re: [COMMITTERS] pgsql: Move each SLRU's lwlocks to a separate tranche.

From: Robert Haas
Date:
On Fri, Mar 25, 2016 at 9:11 AM, Andres Freund <andres@anarazel.de> wrote:
> On March 25, 2016 1:04:13 PM GMT+01:00, Robert Haas <robertmhaas@gmail.com> wrote:
>>On Fri, Mar 25, 2016 at 3:05 AM, Andres Freund <andres@anarazel.de>
>>wrote:
>>> On 2015-11-12 19:59:54 +0000, Robert Haas wrote:
>>>> Move each SLRU's lwlocks to a separate tranche.
>>>>
>>>> This makes it significantly easier to identify these lwlocks in
>>>> LWLOCK_STATS or Trace_lwlocks output.  It's also arguably better
>>>> from a modularity standpoint, since lwlock.c no longer needs to
>>>> know anything about the LWLock needs of the higher-level SLRU
>>>> facility.
>>>>
>>>> Ildus Kurbangaliev, reviewed by Álvaro Herrera and by me.
>>>
>>> Before this commit the lwlocks were cacheline aligned, but that's not
>>> the case anymore afterwards; afaics. I think that should be fixed? I
>>> guess it'd be good to avoid duplicating the code for aligning, so
>>maybe
>>> we ought to add a ShmemAllocAligned or something?
>>
>>Does it actually matter?  I wouldn't have thought the I/O locks had
>>enough traffic for it to make any difference.
>>
>>But in any case I think the right solution is probably this:
>>
>>--- a/src/backend/storage/ipc/shmem.c
>>+++ b/src/backend/storage/ipc/shmem.c
>>@@ -173,7 +173,7 @@ ShmemAlloc(Size size)
>>        /*
>>         * ensure all space is adequately aligned.
>>         */
>>-       size = MAXALIGN(size);
>>+       size = CACHELINEALIGN(size);
>>
>>        Assert(ShmemSegHdr != NULL);
>>
>>It's stupid that we keep spending time and energy figuring out which
>>shared memory data structures require alignment and which ones don't.
>>Let's just align them *all* and be done with it.  The memory cost
>>shouldn't be more than a few kB.
>
> Last time I proposed that it got shut down. I agree it'd be a good idea, it's really hard to find alignment issues.

Gosh, I thought *I* had last proposed that and *you* had shot it down.

Why ever would we not want to do that?

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [COMMITTERS] pgsql: Move each SLRU's lwlocks to a separate tranche.

From: Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> It's stupid that we keep spending time and energy figuring out which
> shared memory data structures require alignment and which ones don't.
> Let's just align them *all* and be done with it.  The memory cost
> shouldn't be more than a few kB.

I think such a proposal should come with a measurement, not just
speculation about what it costs.
        regards, tom lane



Re: [COMMITTERS] pgsql: Move each SLRU's lwlocks to a separate tranche.

From: Andres Freund
Date:

On March 25, 2016 2:48:00 PM GMT+01:00, Robert Haas <robertmhaas@gmail.com> wrote:
>On Fri, Mar 25, 2016 at 9:11 AM, Andres Freund <andres@anarazel.de>
>wrote:
>> On March 25, 2016 1:04:13 PM GMT+01:00, Robert Haas
><robertmhaas@gmail.com> wrote:
>>>On Fri, Mar 25, 2016 at 3:05 AM, Andres Freund <andres@anarazel.de>
>>>wrote:
>>>> On 2015-11-12 19:59:54 +0000, Robert Haas wrote:
>>>>> Move each SLRU's lwlocks to a separate tranche.
>>>>>
>>>>> This makes it significantly easier to identify these lwlocks in
>>>>> LWLOCK_STATS or Trace_lwlocks output.  It's also arguably better
>>>>> from a modularity standpoint, since lwlock.c no longer needs to
>>>>> know anything about the LWLock needs of the higher-level SLRU
>>>>> facility.
>>>>>
>>>>> Ildus Kurbangaliev, reviewed by Álvaro Herrera and by me.
>>>>
>>>> Before this commit the lwlocks were cacheline aligned, but that's
>not
>>>> the case anymore afterwards; afaics. I think that should be fixed?
>I
>>>> guess it'd be good to avoid duplicating the code for aligning, so
>>>maybe
>>>> we ought to add a ShmemAllocAligned or something?
>>>
>>>Does it actually matter?  I wouldn't have thought the I/O locks had
>>>enough traffic for it to make any difference.
>>>
>>>But in any case I think the right solution is probably this:
>>>
>>>--- a/src/backend/storage/ipc/shmem.c
>>>+++ b/src/backend/storage/ipc/shmem.c
>>>@@ -173,7 +173,7 @@ ShmemAlloc(Size size)
>>>        /*
>>>         * ensure all space is adequately aligned.
>>>         */
>>>-       size = MAXALIGN(size);
>>>+       size = CACHELINEALIGN(size);
>>>
>>>        Assert(ShmemSegHdr != NULL);
>>>
>>>It's stupid that we keep spending time and energy figuring out which
>>>shared memory data structures require alignment and which ones don't.
>>>Let's just align them *all* and be done with it.  The memory cost
>>>shouldn't be more than a few kB.
>>
>> Last time I proposed that it got shut down. I agree it'd be a good
>idea, it's really hard to find alignment issues.
>
>Gosh, I thought *I* had last proposed that and *you* had shot it down.
>
>Why ever would we not want to do that?

See Tom's response.  The reason I didn't do it last time is that we previously had that argument.  I think that's just
a waste of our time. It's ridiculously hard figuring out alignment issues; I spent days on the buffer desc thing.
 
-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.



Re: [COMMITTERS] pgsql: Move each SLRU's lwlocks to a separate tranche.

From: Robert Haas
Date:
On Fri, Mar 25, 2016 at 9:48 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> It's stupid that we keep spending time and energy figuring out which
>> shared memory data structures require alignment and which ones don't.
>> Let's just align them *all* and be done with it.  The memory cost
>> shouldn't be more than a few kB.
>
> I think such a proposal should come with a measurement, not just
> speculation about what it costs.

About 6kB with default settings.  See below.

LOG:  shared memory size: 148471808
LOG:  size 48 -> maxalign 48, cachelinealign 128, delta 80
LOG:  size 25988 -> maxalign 25992, cachelinealign 26112, delta 120
LOG:  size 2904 -> maxalign 2904, cachelinealign 2944, delta 40
LOG:  size 264 -> maxalign 264, cachelinealign 384, delta 120
LOG:  size 4209296 -> maxalign 4209296, cachelinealign 4209408, delta 112
LOG:  size 529216 -> maxalign 529216, cachelinealign 529280, delta 64
LOG:  size 133600 -> maxalign 133600, cachelinealign 133632, delta 32
LOG:  size 32 -> maxalign 32, cachelinealign 128, delta 96
LOG:  size 267072 -> maxalign 267072, cachelinealign 267136, delta 64
LOG:  size 66880 -> maxalign 66880, cachelinealign 66944, delta 64
LOG:  size 133600 -> maxalign 133600, cachelinealign 133632, delta 32
LOG:  size 948 -> maxalign 952, cachelinealign 1024, delta 72
LOG:  size 2904 -> maxalign 2904, cachelinealign 2944, delta 40
LOG:  size 20640 -> maxalign 20640, cachelinealign 20736, delta 96
LOG:  size 20640 -> maxalign 20640, cachelinealign 20736, delta 96
LOG:  size 20640 -> maxalign 20640, cachelinealign 20736, delta 96
LOG:  size 20640 -> maxalign 20640, cachelinealign 20736, delta 96
LOG:  size 20640 -> maxalign 20640, cachelinealign 20736, delta 96
LOG:  size 20640 -> maxalign 20640, cachelinealign 20736, delta 96
LOG:  size 20640 -> maxalign 20640, cachelinealign 20736, delta 96
LOG:  size 20640 -> maxalign 20640, cachelinealign 20736, delta 96
LOG:  size 20640 -> maxalign 20640, cachelinealign 20736, delta 96
LOG:  size 20640 -> maxalign 20640, cachelinealign 20736, delta 96
LOG:  size 20640 -> maxalign 20640, cachelinealign 20736, delta 96
LOG:  size 20640 -> maxalign 20640, cachelinealign 20736, delta 96
LOG:  size 20640 -> maxalign 20640, cachelinealign 20736, delta 96
LOG:  size 20640 -> maxalign 20640, cachelinealign 20736, delta 96
LOG:  size 20640 -> maxalign 20640, cachelinealign 20736, delta 96
LOG:  size 20640 -> maxalign 20640, cachelinealign 20736, delta 96
LOG:  size 20640 -> maxalign 20640, cachelinealign 20736, delta 96
LOG:  size 20640 -> maxalign 20640, cachelinealign 20736, delta 96
LOG:  size 20640 -> maxalign 20640, cachelinealign 20736, delta 96
LOG:  size 20640 -> maxalign 20640, cachelinealign 20736, delta 96
LOG:  size 20640 -> maxalign 20640, cachelinealign 20736, delta 96
LOG:  size 20640 -> maxalign 20640, cachelinealign 20736, delta 96
LOG:  size 20640 -> maxalign 20640, cachelinealign 20736, delta 96
LOG:  size 20640 -> maxalign 20640, cachelinealign 20736, delta 96
LOG:  size 20640 -> maxalign 20640, cachelinealign 20736, delta 96
LOG:  size 20640 -> maxalign 20640, cachelinealign 20736, delta 96
LOG:  size 20640 -> maxalign 20640, cachelinealign 20736, delta 96
LOG:  size 20640 -> maxalign 20640, cachelinealign 20736, delta 96
LOG:  size 20640 -> maxalign 20640, cachelinealign 20736, delta 96
LOG:  size 20640 -> maxalign 20640, cachelinealign 20736, delta 96
LOG:  size 20640 -> maxalign 20640, cachelinealign 20736, delta 96
LOG:  size 20640 -> maxalign 20640, cachelinealign 20736, delta 96
LOG:  size 28 -> maxalign 32, cachelinealign 128, delta 96
LOG:  size 2904 -> maxalign 2904, cachelinealign 2944, delta 40
LOG:  size 2904 -> maxalign 2904, cachelinealign 2944, delta 40
LOG:  size 4100 -> maxalign 4104, cachelinealign 4224, delta 120
LOG:  size 2904 -> maxalign 2904, cachelinealign 2944, delta 40
LOG:  size 2904 -> maxalign 2904, cachelinealign 2944, delta 40
LOG:  size 88 -> maxalign 88, cachelinealign 128, delta 40
LOG:  size 2904 -> maxalign 2904, cachelinealign 2944, delta 40
LOG:  size 24 -> maxalign 24, cachelinealign 128, delta 104
LOG:  size 16 -> maxalign 16, cachelinealign 128, delta 112
LOG:  size 133600 -> maxalign 133600, cachelinealign 133632, delta 32
LOG:  size 16 -> maxalign 16, cachelinealign 128, delta 112
LOG:  size 96 -> maxalign 96, cachelinealign 128, delta 32
LOG:  size 95584 -> maxalign 95584, cachelinealign 95616, delta 32
LOG:  size 1392 -> maxalign 1392, cachelinealign 1408, delta 16
LOG:  size 1 -> maxalign 8, cachelinealign 128, delta 120
LOG:  size 488 -> maxalign 488, cachelinealign 512, delta 24
LOG:  size 16 -> maxalign 16, cachelinealign 128, delta 112
LOG:  size 3016 -> maxalign 3016, cachelinealign 3072, delta 56
LOG:  size 69144 -> maxalign 69144, cachelinealign 69248, delta 104
LOG:  size 940 -> maxalign 944, cachelinealign 1024, delta 80
LOG:  size 4760 -> maxalign 4760, cachelinealign 4864, delta 104
LOG:  size 327720 -> maxalign 327720, cachelinealign 327808, delta 88
LOG:  size 224 -> maxalign 224, cachelinealign 256, delta 32
LOG:  size 56 -> maxalign 56, cachelinealign 128, delta 72
LOG:  size 1192 -> maxalign 1192, cachelinealign 1280, delta 88
LOG:  size 1356 -> maxalign 1360, cachelinealign 1408, delta 48
LOG:  size 656 -> maxalign 656, cachelinealign 768, delta 112
LOG:  size 1832 -> maxalign 1832, cachelinealign 1920, delta 88
LOG:  size 66880 -> maxalign 66880, cachelinealign 66944, delta 64
LOG:  total additional space consumed due to cache line alignment = 6096
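
The per-allocation lines above could be produced by instrumentation roughly
like the following (a hypothetical sketch, not the patch that was actually
attached): compare both roundings for each shared memory request and
accumulate the difference.

/* Hypothetical instrumentation sketch -- not the actual attached patch. */
static Size align_waste_total = 0;

static void
report_alignment_cost(Size size)
{
	Size		delta = CACHELINEALIGN(size) - MAXALIGN(size);

	align_waste_total += delta;
	elog(LOG, "size %lu -> maxalign %lu, cachelinealign %lu, delta %lu",
		 (unsigned long) size, (unsigned long) MAXALIGN(size),
		 (unsigned long) CACHELINEALIGN(size), (unsigned long) delta);
}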

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachments

Re: [COMMITTERS] pgsql: Move each SLRU's lwlocks to a separate tranche.

From: Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> On Fri, Mar 25, 2016 at 9:48 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Robert Haas <robertmhaas@gmail.com> writes:
>>> It's stupid that we keep spending time and energy figuring out which
>>> shared memory data structures require alignment and which ones don't.
>>> Let's just align them *all* and be done with it.  The memory cost
>>> shouldn't be more than a few kB.

>> I think such a proposal should come with a measurement, not just
>> speculation about what it costs.

> About 6kB with default settings.  See below.

Sold, then.
        regards, tom lane



Re: [COMMITTERS] pgsql: Move each SLRU's lwlocks to a separate tranche.

From: Robert Haas
Date:
On Fri, Mar 25, 2016 at 12:47 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> On Fri, Mar 25, 2016 at 9:48 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>> Robert Haas <robertmhaas@gmail.com> writes:
>>>> It's stupid that we keep spending time and energy figuring out which
>>>> shared memory data structures require alignment and which ones don't.
>>>> Let's just align them *all* and be done with it.  The memory cost
>>>> shouldn't be more than a few kB.
>
>>> I think such a proposal should come with a measurement, not just
>>> speculation about what it costs.
>
>> About 6kB with default settings.  See below.
>
> Sold, then.

Excellent.  Done.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [COMMITTERS] pgsql: Move each SLRU's lwlocks to a separate tranche.

From: Andres Freund
Date:
On 2016-04-05 15:48:22 -0400, Robert Haas wrote:
> On Fri, Mar 25, 2016 at 12:47 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> > Robert Haas <robertmhaas@gmail.com> writes:
> >> On Fri, Mar 25, 2016 at 9:48 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> >>> Robert Haas <robertmhaas@gmail.com> writes:
> >>>> It's stupid that we keep spending time and energy figuring out which
> >>>> shared memory data structures require alignment and which ones don't.
> >>>> Let's just align them *all* and be done with it.  The memory cost
> >>>> shouldn't be more than a few kB.
> >
> >>> I think such a proposal should come with a measurement, not just
> >>> speculation about what it costs.
> >
> >> About 6kB with default settings.  See below.
> >
> > Sold, then.
> 
> Excellent.  Done.

InitBufferPool() manually fixes up alignment; that should probably be
removed now.
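
The fixup in question is roughly this pattern in buf_init.c (paraphrased, not
an exact quote): the descriptor array is over-allocated by PG_CACHE_LINE_SIZE
and the pointer rounded up by hand, which becomes redundant once ShmemAlloc()
itself returns cache-line-aligned memory.

	/* Roughly the existing manual fixup in InitBufferPool() (paraphrased) */
	BufferDescriptors = (BufferDescPadded *)
		CACHELINEALIGN(ShmemInitStruct("Buffer Descriptors",
									   NBuffers * sizeof(BufferDescPadded)
									   + PG_CACHE_LINE_SIZE,
									   &foundDescs));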

I also wonder if it doesn't make sense to fix PG_CACHE_LINE_SIZE to
64 bytes on x86. I personally think a manual ifdef in pg_config_manual.h
is ok for that.
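
Such an ifdef might look roughly like this (illustration only;
pg_config_manual.h currently just hard-wires 128, and anything beyond the x86
case is an assumption):

/* Illustration only -- pg_config_manual.h currently hard-wires 128. */
#if defined(__x86_64__) || defined(__i386__)
#define PG_CACHE_LINE_SIZE		64
#else
#define PG_CACHE_LINE_SIZE		128
#endif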

Greetings,

Andres Freund



Re: [COMMITTERS] pgsql: Move each SLRU's lwlocks to a separate tranche.

From: Andres Freund
Date:
On 2016-04-10 16:08:56 -0700, Andres Freund wrote:
> On 2016-04-05 15:48:22 -0400, Robert Haas wrote:
> > On Fri, Mar 25, 2016 at 12:47 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> > > Robert Haas <robertmhaas@gmail.com> writes:
> > >> On Fri, Mar 25, 2016 at 9:48 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> > >>> Robert Haas <robertmhaas@gmail.com> writes:
> > >>>> It's stupid that we keep spending time and energy figuring out which
> > >>>> shared memory data structures require alignment and which ones don't.
> > >>>> Let's just align them *all* and be done with it.  The memory cost
> > >>>> shouldn't be more than a few kB.
> > >
> > >>> I think such a proposal should come with a measurement, not just
> > >>> speculation about what it costs.
> > >
> > >> About 6kB with default settings.  See below.
> > >
> > > Sold, then.
> > 
> > Excellent.  Done.
> 
> InitBufferPool() manually fixes up alignment; that should probably be
> removed now.
> 
> I also wonder if it doesn't make sense to fix PG_CACHE_LINE_SIZE to
> 64 bytes on x86. I personally think a manual ifdef in pg_config_manual.h
> is ok for that.

Also, this doesn't seem to be complete. This now aligns sizes to
cacheline boundaries, but it doesn't actually align the returned values
afaics? That might kind of work sometimes, if freeoffset is initially
aligned to PG_CACHE_LINE_SIZE, but that's not guaranteed, it's just

	shmhdr->freeoffset += MAXALIGN(sizeof(slock_t));
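
To spell the concern out with made-up numbers (not measured values): if the
initial offset sits, say, 40 bytes past a cache line boundary, rounding each
request's size up to 128 keeps every subsequent allocation 40 bytes off a
boundary.

/* Made-up illustration: rounding request *sizes* up to a cache line never
 * moves the *start* offsets onto a boundary if the first one is misaligned. */
#include <stdint.h>
#include <stdio.h>

#define CACHELINEALIGN(LEN)	(((uintptr_t) (LEN) + 127) & ~((uintptr_t) 127))

int
main(void)
{
	uintptr_t	freeoffset = 40;	/* hypothetical: header + MAXALIGN(sizeof(slock_t)) */
	uintptr_t	sizes[] = {48, 25988, 2904};

	for (int i = 0; i < 3; i++)
	{
		printf("allocation %d starts at offset %lu (misaligned by %lu)\n",
			   i, (unsigned long) freeoffset, (unsigned long) (freeoffset % 128));
		freeoffset += CACHELINEALIGN(sizes[i]);		/* only the size is rounded */
	}
	return 0;
}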

Additionally, doesn't this obsolete

/*
 * Preferred alignment for disk I/O buffers.  On some CPUs, copies between
 * user space and kernel space are significantly faster if the user buffer
 * is aligned on a larger-than-MAXALIGN boundary.  Ideally this should be
 * a platform-dependent value, but for now we just hard-wire it.
 */

#define ALIGNOF_BUFFER	32

and
	/* extra alignment for large requests, since they are probably buffers */
	if (size >= BLCKSZ)
		newStart = BUFFERALIGN(newStart);

- Andres



Re: [COMMITTERS] pgsql: Move each SLRU's lwlocks to a separate tranche.

From: Robert Haas
Date:
On Mon, Apr 11, 2016 at 12:01 AM, Andres Freund <andres@anarazel.de> wrote:
>> InitBufferPool() manually fixes up alignment; that should probably be
>> removed now.

Attached patch does that.

>> I also wonder if it doesn't make sense to fix PG_CACHE_LINE_SIZE to
>> 64 bytes on x86. I personally think a manual ifdef in pg_config_manual.h
>> is ok for that.
>
> Also, this doesn't seem to be complete. This now aligns sizes to
> cacheline boundaries, but it doesn't actually align the returned values
> afaics? That might kind of work sometimes, if freeoffset is initially
> aligned to PG_CACHE_LINE_SIZE, but that's not guaranteed, it's just
>         shmhdr->freeoffset += MAXALIGN(sizeof(slock_t));

And tries to fix that.

> Additionally, doesn't this obsolete
>
> /*
>  * Preferred alignment for disk I/O buffers.  On some CPUs, copies between
>  * user space and kernel space are significantly faster if the user buffer
>  * is aligned on a larger-than-MAXALIGN boundary.  Ideally this should be
>  * a platform-dependent value, but for now we just hard-wire it.
>  */
> #define ALIGNOF_BUFFER  32

I didn't go as far as trying to remove this; a few other random things
are using it.

> and
>
>         /* extra alignment for large requests, since they are probably buffers */
>         if (size >= BLCKSZ)
>                 newStart = BUFFERALIGN(newStart);

But I got this one.

I fixed the InitBufferPool issue you mentioned in the other email, too.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachments

Re: [COMMITTERS] pgsql: Move each SLRU's lwlocks to a separate tranche.

From: Andres Freund
Date:
Hi,

On 2016-04-11 07:09:18 -0400, Robert Haas wrote:
> Attached patch does that.

Thanks.


> > Additionally, doesn't this obsolete
> >
> > /*
> >  * Preferred alignment for disk I/O buffers.  On some CPUs, copies between
> >  * user space and kernel space are significantly faster if the user buffer
> >  * is aligned on a larger-than-MAXALIGN boundary.  Ideally this should be
> >  * a platform-dependent value, but for now we just hard-wire it.
> >  */
> > #define ALIGNOF_BUFFER  32
> 
> I didn't go as far as trying to remove this; a few other random things
> are using it.

All of the places using it look like they actually rather should use
CACHELINEALIGN...


> diff --git a/src/backend/storage/ipc/shmem.c b/src/backend/storage/ipc/shmem.c
> index 1ad68cd..8e6cbdb 100644
> --- a/src/backend/storage/ipc/shmem.c
> +++ b/src/backend/storage/ipc/shmem.c
> @@ -112,6 +112,7 @@ void
>  InitShmemAllocation(void)
>  {
>      PGShmemHeader *shmhdr = ShmemSegHdr;
> +    char       *aligned;
>  
>      Assert(shmhdr != NULL);
>  
> @@ -139,6 +140,11 @@ InitShmemAllocation(void)
>      shmhdr->freeoffset += MAXALIGN(sizeof(slock_t));
>      Assert(shmhdr->freeoffset <= shmhdr->totalsize);
>  
> +    /* Make sure the first allocation begins on a cache line boundary. */
> +    aligned = (char *)
> +        (CACHELINEALIGN((((char *) shmhdr) + shmhdr->freeoffset)));
> +    shmhdr->freeoffset = aligned - (char *) shmhdr;
> +
>      SpinLockInit(ShmemLock);

In the patch attached to http://www.postgresql.org/message-id/20160411051004.yvniqb2pkc7re5k7@alap3.anarazel.de
I did this instead by fixing up the actual shared memory allocation,
which seems a tiny bit cleaner.  But the asserts added there seem like a
worthwhile addition to me.

Greetings,

Andres Freund



Re: [COMMITTERS] pgsql: Move each SLRU's lwlocks to a separate tranche.

From: Robert Haas
Date:
On Mon, Apr 11, 2016 at 7:31 AM, Andres Freund <andres@anarazel.de> wrote:
> In the patch attached to http://www.postgresql.org/message-id/20160411051004.yvniqb2pkc7re5k7@alap3.anarazel.de
> I did this instead by fixing up the actual shared memory allocation,
> which seems a tiny bit cleaner.

I don't really find that cleaner, though I think it's just a matter of taste.

> But the asserts added there seem like a
> worthwhile addition to me.

I adopted a couple of those and committed this.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company