Thread: Memory usage per session


Memory usage per session

From
AMatveev@bitec.ru
Date:
Hi.
We have tested PostgreSQL, Oracle, and MS SQL Server.
The test executes about 11K lines of code.
Memory usage per session:
Oracle: about 5M
MS SQL Server: about 4M
PostgreSQL: about 160M
The PostgreSQL result is very sad (our typical business logic is about 30K lines of code).
How can I reduce memory consumption per session?
Note, we can't move the business logic to an application server, as that would lead to performance problems.

I can send the test script on request.

Test description:
PostgreSQL 9.5.3, compiled by Visual C++ build 1800, 64-bit
At the same time we run 50 sessions that perform the following functions:
CREATE OR REPLACE FUNCTION perfa.func9
...
BEGIN
  svSql:='';
  PERFORM perfb."func91"();
  ...
END;

CREATE OR REPLACE FUNCTION perfb.func91
...
BEGIN
  PERFORM perfc."func911"();
  ...
END;

CREATE OR REPLACE FUNCTION perfc.func911 (
)
RETURNS void AS
$body$
DECLARE
  svSql BIGINT;
BEGIN
  svSql:=0;
  ...
  svSql:=svSql+10;
END;
$body$
LANGUAGE plpgsql;



Re: Memory usage per session

From
Achilleas Mantzios
Date:
On 08/07/2016 14:11, AMatveev@bitec.ru wrote:
> Hi.
> We have tested postgreSql,Oracle,MSSqlServer.
> The test performs about 11K lines of code
> Memory usage per session:
> Oracle: about 5M
> MSSqlServer: about 4M
> postgreSql: about 160M
> The result of postgreSql is very sad(Our typical business logic has about 30K lines of code).
> How can I reduce memory consumption per session?
> Note, we cant move the business logic to an application server as it will lead us to performance problem.
>
> I can send the test script on request.
>
> Test description:
> PostgreSQL 9.5.3, compiled by Visual C build 1800, 64-bit

Visual C???
You will have to run PostgreSQL on a proper Unix system to test for performance.

> At the same time we run 50 sessions that perform the following functions:
> CREATE OR REPLACE FUNCTION perfa.func9
> ...
> BEGIN
>    svSql:='';
>    PERFORM perfb."func91"();
>    ...
> END;
>
> CREATE OR REPLACE FUNCTION perfb.func91
> ...
> BEGIN
>    PERFORM perfc."func911"();
>    ...
> END;
>
> CREATE OR REPLACE FUNCTION perfc.func911 (
> )
> RETURNS void AS
> $body$
> DECLARE
>    svSql BIGINT;
> BEGIN
>    svSql:=0;
>    ...
>    svSql:=svSql+10;
> END;
> $body$
>
>
>


--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: Memory usage per session

From
AMatveev@bitec.ru
Date:
Hi

> On 08/07/2016 14:11, AMatveev@bitec.ru wrote:
>> Hi.
>> The test performs about 11K lines of code
>> Memory usage per session:
>> Oracle: about 5M
>> MSSqlServer: about 4M
>> postgreSql: about 160M


> Visual C???
> You will have to run PostgreSQL on a proper Unix system to test for performance.
Of course we understand that Unix is faster (damn fork :)))
Our current problem is memory:
 (160M vs 5M) * 100 sessions = 16G vs 0.5G
We can just get "out of memory".
:(((
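The arithmetic behind that line, spelled out as a quick sketch (using the per-session figures reported earlier in the thread; 15.6G is rounded up to 16G in the message):

```python
# Project total memory for 100 concurrent sessions from the measured
# per-session footprints (in MB), as quoted in the message above.
SESSIONS = 100
pg_total_gb = 160 * SESSIONS / 1024     # PostgreSQL: 160 MB per session
oracle_total_gb = 5 * SESSIONS / 1024   # Oracle: 5 MB per session

# ~15.6G vs ~0.5G, i.e. the "16G vs 0.5G" above after rounding
print(f"PostgreSQL: {pg_total_gb:.1f}G, Oracle: {oracle_total_gb:.1f}G")
```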



Re: Memory usage per session

From
John McKown
Date:
On Fri, Jul 8, 2016 at 8:16 AM, <AMatveev@bitec.ru> wrote:
Hi

> On 08/07/2016 14:11, AMatveev@bitec.ru wrote:
>> Hi.
>> The test performs about 11K lines of code
>> Memory usage per session:
>> Oracle: about 5M
>> MSSqlServer: about 4M
>> postgreSql: about 160M


> Visual C???
> You will have to run PostgreSQL on a proper Unix system to test for performance.
Of course we understand that Unix is faster (damn fork :)))
Our current problem is memory:
 (160M vs 5M) * 100 sessions = 16G vs 0.5G
We can just get "out of memory".
:(((

I'm admittedly ignorant of this type of testing. But if the memory usage for PostgreSQL is in the server, perhaps due to caching (how to test?), then it likely would _not_ scale up linearly as the number of clients increased, because every client would share the same cache data within the server. Or are you measuring the memory usage where the client is on one machine and the PostgreSQL server is a different machine, with the client machine getting the memory hit?

--
"Pessimism is a admirable quality in an engineer. Pessimistic people check their work three times, because they're sure that something won't be right. Optimistic people check once, trust in Solis-de to keep the ship safe, then blow everyone up."
"I think you're mistaking the word optimistic for inept."
"They've got a similar ring to my ear."

From "Star Nomad" by Lindsay Buroker:

Maranatha! <><
John McKown

Re: Memory usage per session

From
Pavel Stehule
Date:
Hi

2016-07-08 15:16 GMT+02:00 <AMatveev@bitec.ru>:
Hi

> On 08/07/2016 14:11, AMatveev@bitec.ru wrote:
>> Hi.
>> The test performs about 11K lines of code
>> Memory usage per session:
>> Oracle: about 5M
>> MSSqlServer: about 4M
>> postgreSql: about 160M


> Visual C???
> You will have to run PostgreSQL on a proper Unix system to test for performance.
Of course we understand that Unix is faster (damn fork :)))
Our current problem is memory:
 (160M vs 5M) * 100 sessions = 16G vs 0.5G
We can just get "out of memory".
:(((

Most of that session memory is used for catalog caches, so you must have a big catalog and long-lived sessions.
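(Editor's note: on 9.5 there was no built-in way to see this breakdown, but for readers on PostgreSQL 14 or later, a sketch of how to check where a backend's session memory actually goes, catalog caches included:)

```sql
-- PostgreSQL 14+ only: largest memory contexts of the current backend.
-- CacheMemoryContext is where the catalog caches mentioned above live.
SELECT name, parent, total_bytes
FROM pg_backend_memory_contexts
ORDER BY total_bytes DESC
LIMIT 10;
```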

What do you do exactly?

Regards

Pavel



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: Memory usage per session

From
Stephen Frost
Date:
* AMatveev@bitec.ru (AMatveev@bitec.ru) wrote:
> > On 08/07/2016 14:11, AMatveev@bitec.ru wrote:
> >> The test performs about 11K lines of code
> >> Memory usage per session:
> >> Oracle: about 5M
> >> MSSqlServer: about 4M
> >> postgreSql: about 160M
>
> > Visual C???
> > You will have to run PostgreSQL on a proper Unix system to test for performance.
> Of course we understand that Unix is faster (damn fork :)))
> Our current problem is memory:
>  (160M vs 5M) * 100 sessions = 16G vs 0.5G
> We can just get "out of memory".
> :(((

Do you have 100 CPUs on this system which apparently doesn't have 16G
of RAM available for PG to use?

If not, you should probably consider connection pooling to reduce the
number of PG sessions to something approaching the number of CPUs/cores
you have in the system.

Thanks!

Stephen
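A minimal sketch of the pooling Stephen suggests, as a hypothetical pgbouncer.ini (the database name, paths, and limits are illustrative, not tuned values):

```ini
[databases]
; route clients connecting to "appdb" to the local PostgreSQL server
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction    ; a server backend is reused per transaction
default_pool_size = 16     ; roughly the number of cores, per the advice above
max_client_conn = 3000     ; thick clients keep their long-lived connections
```

Note that transaction pooling forbids session-level state such as prepared statements and temporary tables, so it is not a drop-in change for every workload.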

Attachments

Re: Memory usage per session

From
AMatveev@bitec.ru
Date:
Hi

>> >> The test performs about 11K lines of code
>> >> Oracle: about 5M
>> >> postgreSql: about 160M


> Do you have 100 CPUs on this system which apparently doesn't have 16G
> of RAM available for PG to use?
We can state the facts:
We currently work on Oracle.
Our code base is about 4,000K lines of code.
In our last project we have:
3000 concurrent connections
200 active sessions
So 16G is very optimistic.
Of course we are thinking about buying hardware or software; that's another question.
With this memory consumption it can really be cheaper to buy Oracle.
> If not, you should probably consider connection pooling to reduce the
> number of PG sessions to something approaching the number of CPUs/cores
> you have in the system.
That's possible only with an application server, and on a local network a thick client has reasonable advantages.
We just can't implement all the functions on a thin client today.



Re: Memory usage per session

From
AMatveev@bitec.ru
Date:
Hello.

>> Oracle: about 5M
>> postgreSql: about 160M


> I'm admittedly ignorant of this type of testing. But if the memory usage for PostgreSQL is in the server, perhaps due to caching (how to test?), then it likely would _not_ scale up linearly as the number of clients increased, because every client would share the same cache data within the server. Or are you measuring the memory usage where the client is on one machine and the PostgreSQL server is a different machine, with the client machine getting the memory hit?

I can send a zip file with the test code on request.
To tell the truth, it's described in the documentation:
https://www.postgresql.org/docs/9.5/static/plpgsql-implementation.html#PLPGSQL-PLAN-CACHING
"The PL/pgSQL interpreter parses the function's source text and produces an internal binary instruction tree the first time the function is called (within each session)."

It's very sad :(

The test is very simple:
Generate the code:

DECLARE
  svSql "varchar";
BEGIN
  for nvi in 1..10
  loop
    svSql = 'CREATE OR REPLACE FUNCTION perfa."func'||nvi||'" (
)
RETURNS void AS
$body$
DECLARE
  svSql "varchar";
BEGIN
  svSql:='''';
  PERFORM perfb."func'||(nvi-1)*10+1||'"();
  PERFORM perfb."func'||(nvi-1)*10+2||'"();
  PERFORM perfb."func'||(nvi-1)*10+3||'"();
  PERFORM perfb."func'||(nvi-1)*10+4||'"();
  PERFORM perfb."func'||(nvi-1)*10+5||'"();
  PERFORM perfb."func'||(nvi-1)*10+6||'"();
  PERFORM perfb."func'||(nvi-1)*10+7||'"();
  PERFORM perfb."func'||(nvi-1)*10+8||'"();
  PERFORM perfb."func'||(nvi-1)*10+9||'"();
  PERFORM perfb."func'||(nvi-1)*10+10||'"();
END;
$body$
LANGUAGE ''plpgsql''
VOLATILE
CALLED ON NULL INPUT
SECURITY INVOKER';
    EXECUTE svSql;
  end loop;
END;

Download JMeter.
Run this function in 50 threads.

About 4G of memory will be consumed in one second.
It's hard to miss :))
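Since the quoted-printable damage made the generator above hard to read, here is a rough Python transliteration (a sketch: it only builds the same CREATE FUNCTION statements that the DO block EXECUTEs; it does not talk to a database):

```python
# Build the ten perfa."funcN" definitions produced by the generator above.
# Each one PERFORMs ten perfb functions: func((N-1)*10+1) .. func(N*10).
def make_func(nvi: int) -> str:
    calls = "\n".join(
        f'  PERFORM perfb."func{(nvi - 1) * 10 + k}"();' for k in range(1, 11)
    )
    return (
        f'CREATE OR REPLACE FUNCTION perfa."func{nvi}" ()\n'
        "RETURNS void AS\n"
        "$body$\n"
        "DECLARE\n"
        '  svSql "varchar";\n'
        "BEGIN\n"
        "  svSql:='';\n"
        f"{calls}\n"
        "END;\n"
        "$body$\n"
        "LANGUAGE plpgsql VOLATILE CALLED ON NULL INPUT SECURITY INVOKER;"
    )

statements = [make_func(nvi) for nvi in range(1, 11)]
```

Feeding each statement to EXECUTE (or psql) reproduces the perfa layer of the test.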

Re: Memory usage per session

From
Melvin Davidson
Date:


On Fri, Jul 8, 2016 at 10:07 AM, <AMatveev@bitec.ru> wrote:
Hi

>> >> The test performs about 11K lines of code
>> >> Oracle: about 5M
>> >> postgreSql: about 160M


> Do you have 100 CPUs on this system which apparently doesn't have 16G
> of RAM available for PG to use?
We can state the facts:
We currently work on Oracle.
Our code base is about 4,000K lines of code.
In our last project we have:
3000 concurrent connections
200 active sessions
So 16G is very optimistic.
Of course we are thinking about buying hardware or software; that's another question.
With this memory consumption it can really be cheaper to buy Oracle.
> If not, you should probably consider connection pooling to reduce the
> number of PG sessions to something approaching the number of CPUs/cores
> you have in the system.
That's possible only with an application server, and on a local network a thick client has reasonable advantages.
We just can't implement all the functions on a thin client today.




What is the actual O/S that PostgreSQL is installed on?
How much total memory is on the server?
I would be very curious about the values you have specified in postgresql.conf.
Also, what is the exact version of PostgreSQL you are using?
What is the total time to complete the test for all 3 DBs?
The best I can tell is that, with all the unknowns, you are comparing apples to oranges.
Melvin Davidson
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.

Re: Memory usage per session

From
Achilleas Mantzios
Date:
On 08/07/2016 17:07, AMatveev@bitec.ru wrote:
> Hi
>
>>>>> The test performs about 11K lines of code
>>>>> Oracle: about 5M
>>>>> postgreSql: about 160M
>
>> Do you have 100 CPUs on this system which apparently doesn't have 16G
>> of RAM available for PG to use?
> We can state the facts:
> We currently work on Oracle.
> Our code base is about 4,000K lines of code.
> In our last project we have:
> 3000 concurrent connections
> 200 active sessions
> So 16G is very optimistic.
> Of course we are thinking about buying hardware or software; that's another question.
> With this memory consumption it can really be cheaper to buy Oracle.
>> If not, you should probably consider connection pooling to reduce the
>> number of PG sessions to something approaching the number of CPUs/cores
>> you have in the system.
> That's possible only with an application server,

No, you can deploy PgPool or PgBouncer.
Apart from that, I just checked on my system: user sessions have a size of 16M, not 160M.

> and on a local network a thick client has reasonable advantages.
>   We just can't implement all the functions on a thin client today.
>
>
>


--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: Memory usage per session

From
AMatveev@bitec.ru
Date:
<span style=" font-family:'courier new'; font-size: 9pt;">Hi<br /><br /> >> >> The test performs about 11K
linesof code<br /> >> >> Oracle: about 5M<br /> >> >> postgreSql: about 160М<br /><br /><br />
>Whatis the actual O/S that PostgreSQL is installed on? <br /> >How much total memory is on the server? <br />
>Iwould be very curious about the values you have  specified in postgresql.conf?<br /> > Also, what is the exact
versionof PostgreSQL you are using?<br /> >What is the total time to complete the test for all 3 DB's?<br /> >The
bestI can tell is that with all the unknowns, you are comparing apples to oranges.<br /><br /> There is real problem
forus.<br /><span style=" font-family:'verdana';">The PL/pgSQL interpreter parses the function's source text and
producesan internal binary instruction tree the first time the function is called (within each session)<span style="
font-family:'timesnew roman'; font-size: 14pt;"> <br /><br /><span style=" font-family:'Courier New'; font-size:
9pt;">Thesize of this instruction tree depends only of postgreSql build.<br /><br /> And size is big, and size is not
sharebetween session.<br /><br /> I understand that nobody says: Damn One second I will implement it soon :)<br /><br
/>Thank all for constructive answers.</span></span></span></span> 

Re: Memory usage per session

From
AMatveev@bitec.ru
Date:
<span style=" font-family:'courier new'; font-size: 9pt;">Hi<br /><br /><br /> >> Oracle: about 5M<br /> >>
postgreSql:about 160М<br /><br /><br /><br /> >The almost session memory is used for catalog caches. So you should
tohave big catalog and long living sessions. <br /><br /> >What do you do exactly?<br /><br /> I've generate test
codethat  emulates instruction tree size for our production code.<br /> This test shows:<br /> -What is the size of
instructiontree for our typical  BP  <br />   it's greater than 300M for each session <br /> -How often do PostgreSql
parsethe text <br />   When postgres clean cache, so much often <br /><br /> So Oracle is much better in this case. <br
/>It's very difficult really estimate in such case, to buy Oracle or to by hardware.<br /><br /></span> 

Re: Memory usage per session

From
Melvin Davidson
Date:


On Fri, Jul 8, 2016 at 11:49 AM, <AMatveev@bitec.ru> wrote:
Hi


>> Oracle: about 5M
>> postgreSql: about 160M



>Most of that session memory is used for catalog caches, so you must have a big catalog and long-lived sessions.

>What do you do exactly?

I've generated test code that emulates the instruction tree size for our production code.
This test shows:
- what the size of the instruction tree is for our typical BP: it's greater than 300M for each session;
- how often PostgreSQL parses the text: the more often Postgres cleans the cache, the more often it reparses.

So Oracle is much better in this case.
In such a case it's very difficult to really estimate whether to buy Oracle or to buy hardware.


My questions:
>What is the actual O/S that PostgreSQL is installed on?
>How much total memory is on the server?
>I would be very curious about the values you have specified in postgresql.conf.
>Also, what is the exact version of PostgreSQL you are using?
>What is the total time to complete the test for all 3 DBs?
>The best I can tell is that, with all the unknowns, you are comparing apples to oranges.

Your answers:
>There is a real problem for us.
>The PL/pgSQL interpreter parses the function's source text and produces an internal binary instruction tree the first time the function is called (within each session)


Your answer is gibberish and has nothing to do with my questions.
Have you even tuned the postgresql.conf?
You cannot fairly compare PostgreSQL with any other database unless you first tune its postgresql.conf.

Melvin Davidson
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.

Re: Memory usage per session

From
John McKown
Date:
On Fri, Jul 8, 2016 at 11:26 AM, Melvin Davidson <melvin6925@gmail.com> wrote:


On Fri, Jul 8, 2016 at 11:49 AM, <AMatveev@bitec.ru> wrote:
Hi


>> Oracle: about 5M
>> postgreSql: about 160M



>Most of that session memory is used for catalog caches, so you must have a big catalog and long-lived sessions.

>What do you do exactly?

I've generated test code that emulates the instruction tree size for our production code.
This test shows:
- what the size of the instruction tree is for our typical BP: it's greater than 300M for each session;
- how often PostgreSQL parses the text: the more often Postgres cleans the cache, the more often it reparses.

So Oracle is much better in this case.
In such a case it's very difficult to really estimate whether to buy Oracle or to buy hardware.


My questions:
>What is the actual O/S that PostgreSQL is installed on?
>How much total memory is on the server?
>I would be very curious about the values you have specified in postgresql.conf.
>Also, what is the exact version of PostgreSQL you are using?
>What is the total time to complete the test for all 3 DBs?
>The best I can tell is that, with all the unknowns, you are comparing apples to oranges.

Your answers:
>There is a real problem for us.
>The PL/pgSQL interpreter parses the function's source text and produces an internal binary instruction tree the first time the function is called (within each session)


Your answer is gibberish and has nothing to do with my questions.
Have you even tuned the postgresql.conf?
You cannot fairly compare PostgreSQL with any other database unless you first tune its postgresql.conf.

Melvin Davidson

I think the "problem" that he is having is fixable only by changing how PostgreSQL itself works. His problem is a PL/pgSQL function which is 11K lines in length. When invoked, this function is "compiled" into a large tokenized parse tree. This parse tree is only usable in the session which invoked the the function. Apparently this parse tree takes a lot of memory. And "n" concurrent users of this, highly used, function will therefore require "n" times as much memory because the parse tree is _not_ shareable.  This is explained in:
​​
https://www.postgresql.org/docs/9.5/static/plpgsql-implementation.html#PLPGSQL-PLAN-CACHING

​In previous posts, he implied that he is running on some version of Windows by referencing the VC compiler. I am _guessing_ that the other DBs mentioned: MSSQL and Oracle implement their server side programming differently so that it takes less memory. Perhaps by allowing the "compiled program" to be shared between session.




--
"Pessimism is a admirable quality in an engineer. Pessimistic people check their work three times, because they're sure that something won't be right. Optimistic people check once, trust in Solis-de to keep the ship safe, then blow everyone up."
"I think you're mistaking the word optimistic for inept."
"They've got a similar ring to my ear."

From "Star Nomad" by Lindsay Buroker:

Maranatha! <><
John McKown

Re: Memory usage per session

From
Pavel Stehule
Date:


2016-07-08 17:49 GMT+02:00 <AMatveev@bitec.ru>:
Hi


>> Oracle: about 5M
>> postgreSql: about 160M



>Most of that session memory is used for catalog caches, so you must have a big catalog and long-lived sessions.

>What do you do exactly?

I've generated test code that emulates the instruction tree size for our production code.
This test shows:
- what the size of the instruction tree is for our typical BP: it's greater than 300M for each session;
- how often PostgreSQL parses the text: the more often Postgres cleans the cache, the more often it reparses.

PostgreSQL parses the source code of functions once per session.


So Oracle is much better in this case.
In such a case it's very difficult to really estimate whether to buy Oracle or to buy hardware.

Maybe so. The implementations of PL/pgSQL and PL/SQL are strongly different; when your use case is unusual, migrating to Postgres sometimes requires a redesign.

Regards

Pavel

Re: Memory usage per session

From
Karl Czajkowski
Date:
On Jul 08, John McKown modulated:
...
> I think the "problem" that he is having is fixable only by changing how
> PostgreSQL itself works. His problem is a PL/pgSQL function which is
> 11K lines in length. When invoked, this function is "compiled" into a
> large tokenized parse tree. This parse tree is only usable in the
> session which invoked the the function. Apparently this parse tree
> takes a lot of memory.

Right.  I'm not entirely sure the original poster wants to hear
practical solutions, but I think there are three ways that someone
familiar with Postgres would avoid this type of problem:

1. Connection pooling. Set the max connection limit and other tuning
   parameters appropriately for your workload and available
   resources. Don't get into a state as was described (thousands of
   open connections and only hundreds "active"). Make your client
   applications more disciplined.

2. Buy more RAM.  You can easily grow to 512GB in just basic
   dual-socket servers these days.  This hardware cost may well
   be worth it to avoid human labor costs.

3. Rewrite or refactor such complex stored procedures in a different
   programming language such as C or Python, so your PL/pgsql stored
   procedures remain small glue around libraries of code.  Postgres
   makes it very trivial to extend the system with such procedural
   libraries.
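A sketch of what option 3 can look like (assuming the plpython3u extension is installed; the module and function names here are hypothetical, not part of any real library):

```sql
-- once per database
CREATE EXTENSION IF NOT EXISTS plpython3u;

-- the bulky logic lives in an ordinary Python module on the server's
-- PYTHONPATH; the stored procedure shrinks to a thin wrapper
CREATE OR REPLACE FUNCTION perfc.func911()
RETURNS void AS $$
    import businesslogic         -- hypothetical library module
    businesslogic.func911(plpy)  -- plpy lets the module run SQL if needed
$$ LANGUAGE plpython3u;
```

The same shape works with a C shared library (LANGUAGE C), which has the extra advantage that the compiled code is mapped read-only and shared between backends by the OS, unlike a per-session PL/pgSQL instruction tree.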


Karl


Re: Memory usage per session

From
John R Pierce
Date:
On 7/8/2016 12:00 PM, Karl Czajkowski wrote:
> 3. Rewrite or refactor such complex stored procedures in a different
>     programming language such as C or Python, so your PL/pgsql stored
>     procedures remain small glue around libraries of code.  Postgres
>     makes it very trivial to extend the system with such procedural
>     libraries.

4. Move said business logic to app servers, to unload the database
   server from doing so much compute and get better all-around
   performance. Use plpgsql functions for inner functions only where
   there are significant performance gains. It's easy to cluster app
   servers; it's not so easy to parallelize database servers.



--
john r pierce, recycling bits in santa cruz



Re: Memory usage per session

From
AMatveev@bitec.ru
Date:
Hi

> On Jul 08, John McKown modulated:
> ...
>> I think the "problem" that he is having is fixable only by changing how
>> PostgreSQL itself works. His problem is a PL/pgSQL function which is
>> 11K lines in length. When invoked, this function is "compiled" into a
>> large tokenized parse tree. This parse tree is only usable in the
>> session which invoked the the function. Apparently this parse tree
>> takes a lot of memory.

It's pleasant to see smart men, and I got some help here.
The most valuable is the advice to deploy PgPool or PgBouncer.
Thanks, guys. Usually I don't reply on this kind of topic:
http://www.joelonsoftware.com/articles/fog0000000018.html
But maybe it helps somebody.

> Right.  I'm not entirely sure the original poster wants to hear
> practical solutions,
Of course this answer requires "moral justification" :)

> but I think there are three ways that someone
> familar with Postgres would avoid this type of problem:

> 1. Connection pooling. Set the max connection limit and other tuning
>    parameters appropriately for your workload and available
>    resources. Don't get into a state as was described (thousands of
>    open connections and only hundreds "active"). Make your client
>    applications more disciplined.
You are joking :)
They pay us money, and they want an easy and cheap solution :))
I can tell a terrible thing: they can pay money to others :)))
> 2. Buy more RAM.  You can easily grow to 512GB in just basic
>    dual-socket servers these days.  This hardware cost may well
>    be worth it to avoid human labor costs.
Thanks cap :))

http://www.reduxsquad.com/wp-content/uploads/2016/02/hotels-com-spring-break-sale-captain-obvious-workout-bathroom-large-8.jpg
> 3. Rewrite or refactor such complex stored procedures in a different
>    programming language such as C or Python, so your PL/pgsql stored
>    procedures remain small glue around libraries of code.  Postgres
>    makes it very trivial to extend the system with such procedural
>    libraries.
>
Let's read first :)



Re: Memory usage per session

From
AMatveev@bitec.ru
Date:
Hi

>I think the "problem" that he is having is fixable only by changing how PostgreSQL itself works.
>His problem is a PL/pgSQL function which is 11K lines in length.
>When invoked, this function is "compiled" into a large tokenized parse tree.
>This parse tree is only usable in the session which invoked the function.
>Apparently this parse tree takes a lot of memory. And "n" concurrent users of this,
>highly used, function will therefore require "n" times as much memory because the parse tree is _not_ shareable.
>This is explained in:
>https://www.postgresql.org/docs/9.5/static/plpgsql-implementation.html#PLPGSQL-PLAN-CACHING


There is an interesting continuation of the discussion at pgsql-hackers@postgresql.org ([HACKERS] One process per session lack of sharing):

https://www.postgresql.org/message-id/flat/301417900.20160712173911%40bitec.ru#301417900.20160712173911@bitec.ru

>I agree that there's not really a plan to implement this, but I don't
>agree that connection pooling solves the whole problem.  Most people
>can't get by with statement pooling, so in practice you are looking at
>transaction pooling or session pooling.  And that means that you can't
>really keep the pool size as small as you'd like because backends can
>be idle in transaction for long enough to force the pool size to be
>pretty large.  Also, pooling causes the same backends to get reused
>for different sessions which touch different relations and different
>functions so that, for example, the relcache and the PL/pgsql function
>caches grow until every one of those sessions has everything cached
>that any client needs.  That can cause big problems.
>
>So, I actually think it would be a good idea to think about this.  The
>problem, of course, is that as long as we allow arbitrary parts of the
>code - including extension code - to declare global variables and
>store arbitrary stuff in them without any coordination, it's
>impossible to imagine hibernating and resuming a session without a
>risk of things going severely awry.  This was a major issue for
>parallel query, but we've solved it, mostly, by designating the things
>that rely on global variables as parallel-restricted, and there
>actually aren't a ton of those.  So I think it's imaginable that we
>can get to a point where we can, at least in some circumstances, let a
>backend exit and reconstitute its state at a later time.  It's not an
>easy project, but I think it is one we will eventually need to do.
>Insisting that the current model is working is just sticking our head
>in the sand.  It's mostly working, but there are workloads where it
>fails badly - and competing database products survive a number of
>scenarios where we just fall on our face.