Discussion: Different memory allocation strategy in Postgres 11?

Different memory allocation strategy in Postgres 11?

From:
Thomas Kellerer
Date:
I have a Postgres instance running on my Windows laptop for testing purposes. 

I typically configure "shared_buffers = 4096MB" on my 16GB system, as it sometimes pays off to have a bigger cache when testing.

With Postgres 10 and earlier, the Postgres process(es) would only allocate that memory from the operating system when needed.
So right after startup, it would only consume several hundred MB, not the entire 4GB.

However, with Postgres 11 I noticed that it immediately grabs the complete memory configured for shared_buffers during startup.

It's not really a big deal, but I wonder if that is an intentional change or the result of something else?

Regards
Thomas



Re: Different memory allocation strategy in Postgres 11?

From:
Jeff Janes
Date:
On Fri, Oct 26, 2018 at 9:12 AM Thomas Kellerer <spam_eater@gmx.net> wrote:
> I have a Postgres instance running on my Windows laptop for testing purposes.
>
> I typically configure "shared_buffers = 4096MB" on my 16GB system as sometimes when testing, it pays off to have a bigger cache.
>
> With Postgres 10 and earlier, the Postgres process(es) would only allocate that memory from the operating system when needed.
> So right after startup, it would only consume several hundred MB, not the entire 4GB
>
> However with Postgres 11 I noticed that it immediately grabs the complete memory configured for shared_buffers during startup.
>
> It's not really a big deal, but I wonder if that is an intentional change or a result from something else?

Do you have pg_prewarm in shared_preload_libraries?
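
If you're not sure what is getting preloaded, something like this from psql should show it:

    -- Libraries loaded at server start; pg_prewarm here (with
    -- pg_prewarm.autoprewarm enabled) would repopulate shared_buffers
    -- right after startup.
    SHOW shared_preload_libraries;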

Cheers,

Jeff 

Re: Different memory allocation strategy in Postgres 11?

From:
Thomas Kellerer
Date:
Jeff Janes wrote on 26.10.2018 at 17:42:
>     I typically configure "shared_buffers = 4096MB" on my 16GB system as sometimes when testing, it pays off to have a bigger cache.
> 
>     With Postgres 10 and earlier, the Postgres process(es) would only allocate that memory from the operating system when needed.
>     So right after startup, it would only consume several hundred MB, not the entire 4GB
> 
>     However with Postgres 11 I noticed that it immediately grabs the complete memory configured for shared_buffers during startup.
> 
>     It's not really a big deal, but I wonder if that is an intentional change or a result from something else?
> 
> 
> Do you have pg_prewarm in shared_preload_libraries?

No. The only shared libraries are those for pg_stat_statements.




Re: Different memory allocation strategy in Postgres 11?

From:
Thomas Munro
Date:
On Sat, Oct 27, 2018 at 6:10 AM Thomas Kellerer <spam_eater@gmx.net> wrote:
> Jeff Janes wrote on 26.10.2018 at 17:42:
> >     I typically configure "shared_buffers = 4096MB" on my 16GB system as sometimes when testing, it pays off to have a bigger cache.
> >
> >     With Postgres 10 and earlier, the Postgres process(es) would only allocate that memory from the operating system when needed.
> >     So right after startup, it would only consume several hundred MB, not the entire 4GB
> >
> >     However with Postgres 11 I noticed that it immediately grabs the complete memory configured for shared_buffers during startup.
> >
> >     It's not really a big deal, but I wonder if that is an intentional change or a result from something else?
> >
> >
> > Do you have pg_prewarm in shared_preload_libraries?
>
> No. The only shared libraries are those for pg_stat_statements.

Does your user have "Lock Pages in Memory" privilege?  One thing that
is new in 11 is huge AKA large page support, and the default is
huge_pages=try.  Not a Windows person myself but I believe that should
succeed if you have that privilege and enough contiguous chunks of
physical memory are available.  If you set huge_pages=off does it
revert to the old behaviour?
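
For reference, the configured value can be checked from psql (this shows the setting, not whether the allocation actually used large pages):

    -- 'try' is the default; the server then falls back to regular pages
    -- if the privilege or enough contiguous memory isn't available.
    SHOW huge_pages;

    -- To test the old behaviour, put this in postgresql.conf and restart:
    -- huge_pages = off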

-- 
Thomas Munro
http://www.enterprisedb.com


Re: Different memory allocation strategy in Postgres 11?

From:
Thomas Kellerer
Date:
Thomas Munro wrote on 26.10.2018 at 22:13:
>>>     I typically configure "shared_buffers = 4096MB" on my 16GB system as sometimes when testing, it pays off to have a bigger cache.
>>>
>>>     With Postgres 10 and earlier, the Postgres process(es) would only allocate that memory from the operating system when needed.
>>>     So right after startup, it would only consume several hundred MB, not the entire 4GB
>>>
>>>     However with Postgres 11 I noticed that it immediately grabs the complete memory configured for shared_buffers during startup.
>>>
>>>     It's not really a big deal, but I wonder if that is an intentional change or a result from something else?
>>>
>>>
>>> Do you have pg_prewarm in shared_preload_libraries?
>>
>> No. The only shared libraries are those for pg_stat_statements.
> 
> Does your user have "Lock Pages in Memory" privilege?  One thing that
> is new in 11 is huge AKA large page support, and the default is
> huge_pages=try.  Not a Windows person myself but I believe that should
> succeed if you have that privilege and enough contiguous chunks of
> physical memory are available.  If you set huge_pages=off does it
> revert to the old behaviour?

Turns out this was an "optimization" in Windows 10, and completely unrelated to Postgres.

Windows 10 has a feature called "Fast Boot" (or something along those lines).

When that is activated (which it is by default), a proper shutdown of the system does not seem to really shut it down.
This is especially noticeable with services: they don't get a shutdown event, which e.g. means that even a service marked as "manual start" will still be running after a reboot if it was running before.

In the case of Postgres this is visible e.g. in the logfile, because there will be no shutdown or startup messages.

So when I booted my laptop, Postgres simply continued where it was before the reboot - the memory usage was caused by me generating test data using generate_series(), but I had expected a "clean" state after the reboot.

When the service is restarted manually, everything works as expected.
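
For anyone running into the same thing: a quick way to tell whether the service really came up fresh after a reboot is to check the postmaster start time from psql:

    -- If "Fast Boot" kept the old postmaster alive across the reboot,
    -- this timestamp will predate the reboot.
    SELECT pg_postmaster_start_time();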

Sorry for the noise.