Discussion: Postgre Performance


Postgre Performance

From:
"Deshpande, Yogesh Sadashiv (STSD-Openview)"
Date:

Hello,

We have a setup in which around 100 processes run in parallel every 5 minutes, and each one opens a connection to the database. We are observing that for each connection, PostgreSQL also creates one subprocess. We have set max_connections to 100, so the number of subprocesses in the system is close to 200 every 5 minutes, and because of this we are seeing very high CPU usage. We need the following information:

1. Is there any configuration we can set that would pool connection requests rather than failing with a connection-limit-exceeded error?

2. Is there any configuration we can set that would limit the subprocesses to some value, say 50, with any further connection requests queued?

Basically, we want to limit the number of processes so that the client code doesn't have to retry when a connection or subprocess is unavailable; can PostgreSQL take care of the queuing?

Thanks
Yogesh

Re: Postgre Performance

From:
Raghavendra
Date:
Dear Yogesh,

To get the best answers from community members, you need to provide complete information such as the PG version and server/hardware info, so that members can assist you in the right way.


---
Regards,
Raghavendra
EnterpriseDB Corporation





Re: Postgre Performance

From:
Bill Moran
Date:
In response to "Deshpande, Yogesh Sadashiv (STSD-Openview)" <yogesh-sadashiv.deshpande@hp.com>:

> Hello,
>
> We have a setup in which around 100 processes run in parallel every 5 minutes, and each one opens
> a connection to the database. We are observing that for each connection, PostgreSQL also creates
> one subprocess. We have set max_connections to 100, so the number of subprocesses in the system is
> close to 200 every 5 minutes, and because of this we are seeing very high CPU usage.

This does not follow logically, in my experience.  We have many servers that
have over 300 simultaneous connections, and the connections themselves do
not automatically create high CPU usage.

Unless, of course, there is an issue with the particular OS you're using,
which you didn't mention.

> We need the following information:
>
> 1. Is there any configuration we can set that would pool connection requests rather than failing
> with a connection-limit-exceeded error?

Use pgpool or pgbouncer.
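
For the pooling side, a minimal pgbouncer.ini sketch along these lines would
cover it (the database name, addresses, and auth file path are placeholders,
not something from this thread):

    [databases]
    ; clients connect to "appdb" on pgbouncer, which forwards to the real server
    appdb = host=127.0.0.1 port=5432 dbname=appdb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    ; release the server connection back to the pool after each transaction
    pool_mode = transaction
    ; accept up to 100 client connections, but open at most 50 real backends;
    ; excess requests wait in pgbouncer's queue instead of being rejected
    max_client_conn = 100
    default_pool_size = 50

That maps directly onto your numbers: 100 clients, at most 50 server
processes, and the rest queued.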

> 2. Is there any configuration we can set that would limit the subprocesses to some value, say 50,
> with any further connection requests queued?

Set max_connections and handle the connection retry in your application.
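
A minimal sketch of that retry loop in Python (psycopg2 and all connection
parameters are illustrative, not taken from this thread):

    import time
    import psycopg2

    def connect_with_retry(dsn, attempts=5, delay=1.0):
        """Connect, backing off while the server is at max_connections."""
        for attempt in range(attempts):
            try:
                return psycopg2.connect(dsn)
            except psycopg2.OperationalError as exc:
                # the server says "sorry, too many clients already" when
                # max_connections is exhausted; wait and try again
                if "too many clients" in str(exc) and attempt < attempts - 1:
                    time.sleep(delay)
                    delay *= 2  # exponential backoff between attempts
                else:
                    raise

    conn = connect_with_retry("host=127.0.0.1 dbname=appdb user=app")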

> Basically, we want to limit the number of processes so that the client code doesn't have to retry
> when a connection or subprocess is unavailable; can PostgreSQL take care of the queuing?

pgpool and pgbouncer handle some of that, but I don't know if they do
exactly everything that you want.  Probably a good place to start, though.

--
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/

Re: Postgre Performance

From:
Raghavendra
Date:
>> We need the following information:
>>
>> 1. Is there any configuration we can set that would pool connection requests rather than failing with a connection-limit-exceeded error?
>
> Use pgpool or pgbouncer.


Use pgbouncer, which is a lightweight connection pooler, if you are not opting for load balancing.


>> Basically, we want to limit the number of processes so that the client code doesn't have to retry when a connection or subprocess is unavailable; can PostgreSQL take care of the queuing?


Queuing for unavailable connections may be possible at the application level, but it is not possible at the database level. However, once connections reach the maximum limit, the database will report that max_connections has been reached.

--Raghav

Re: Postgre Performance

From:
"Joshua D. Drake"
Date:
On 10/18/2011 06:57 AM, Deshpande, Yogesh Sadashiv (STSD-Openview) wrote:
> Hello,
>
> We have a setup in which around 100 processes run in parallel every 5
> minutes, and each one opens a connection to the database. We are
> observing that for each connection, PostgreSQL also creates one
> subprocess. We have set max_connections to 100, so the number of
> subprocesses in the system is close to 200 every 5 minutes, and
> because of this we are seeing very high CPU usage. We need the
> following information:
>
> 1. Is there any configuration we can set that would pool connection
> requests rather than failing with a connection-limit-exceeded error?

Yes, you need a pooler. Initiating connections like that is expensive.
PgBouncer is your friend here.

http://wiki.postgresql.org/wiki/PgBouncer
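
Assuming PgBouncer listens on its default port 6432, the only application
change is to point the client at the pooler instead of the server; a sketch
with placeholder credentials:

    import psycopg2

    # connect to pgbouncer (6432) rather than PostgreSQL itself (5432);
    # pgbouncer hands out an already-open server connection from its pool,
    # so the expensive connection setup happens far less often
    conn = psycopg2.connect(host="127.0.0.1", port=6432,
                            dbname="appdb", user="app", password="secret")
    with conn.cursor() as cur:
        cur.execute("SELECT 1")
        print(cur.fetchone())
    conn.close()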

Sincerely,

Joshua D. Drake

--
Command Prompt, Inc. - http://www.commandprompt.com/
PostgreSQL Support, Training, Professional Services and Development
The PostgreSQL Conference - http://www.postgresqlconference.org/
@cmdpromptinc - @postgresconf - 509-416-6579

Re: Postgre Performance

From:
"Deshpande, Yogesh Sadashiv (STSD-Openview)"
Date:

I am not able to find a binary distribution of pgbouncer for Windows. Can you point me to the location?

 


Re: Postgre Performance

From:
"Deshpande, Yogesh Sadashiv (STSD-Openview)"
Date:

Hello Raghavendra,

Following are the details:

PostgreSQL 9.0; we are running our application on a 4-CPU, 8 GB RAM system.

Thanks
Yogesh

 


Re: Postgre Performance

From:
Raghavendra
Date:
Here you go:

http://winpg.jp/~saito/pgbouncer/pgbouncer-1.4-win32.zip

---
Regards,
Raghavendra
EnterpriseDB Corporation





Re: Postgre Performance

From:
John R Pierce
Date:
On 10/18/11 9:51 AM, Bill Moran wrote:
>> Basically, we want to limit the number of processes so that the client code doesn't have to retry
>> when a connection or subprocess is unavailable; can PostgreSQL take care of the queuing?
> pgpool and pgbouncer handle some of that, but I don't know if they do
> exactly everything that you want.  Probably a good place to start, though.

Pools work great when you have a lot of clients that only sporadically
make queries, like web users. Each client (like the webserver) grabs a
connection from the pool, runs its transactions, then releases the
connection back to the pool. A pool won't help much if all 100 of
your clients want to make a query at the same time.

Your 4-CPU, 8 GB machine will likely be optimal doing no more than about 8
queries at once (give or take a few, depending on how many disk drives are
in your RAIDs and how much I/O concurrency the server can support).
Oh, you mentioned MS Windows in there; OK, 8 is optimistic, and the
optimal value may be more like 4.

If you have 100 clients that simultaneously want to make queries every 5
minutes, you should consider using some sort of message queueing system,
where your clients send a message to an application service, and the app
server runs as many queue workers as you find optimal; each worker reads
a message from the queue, processes the database requests needed to
satisfy it, returns the results to the client, then grabs the next queue
entry and repeats.
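
A rough sketch of that pattern in Python (the worker count, names, and the
use of psycopg2 are illustrative; any client library and queue would do):

    import queue
    import threading
    import psycopg2

    NUM_WORKERS = 8        # roughly what a 4-CPU box can usefully run at once
    tasks = queue.Queue()  # clients enqueue work here instead of connecting

    def worker():
        # each worker holds ONE long-lived connection, so the server never
        # sees more than NUM_WORKERS backends regardless of client count
        conn = psycopg2.connect("host=127.0.0.1 dbname=appdb user=app")
        conn.autocommit = True
        while True:
            sql, params, reply = tasks.get()
            try:
                with conn.cursor() as cur:
                    cur.execute(sql, params)
                    reply.put(cur.fetchall())
            finally:
                tasks.task_done()

    for _ in range(NUM_WORKERS):
        threading.Thread(target=worker, daemon=True).start()

    # a "client" request: enqueue the query and block until the result arrives
    reply = queue.Queue()
    tasks.put(("SELECT now()", None, reply))
    print(reply.get())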


--
john r pierce                            N 37, W 122
santa cruz ca                         mid-left coast


Re: Postgre Performance

From:
Scott Marlowe
Date:
On Tue, Oct 18, 2011 at 12:43 PM, John R Pierce <pierce@hogranch.com> wrote:
> Your 4-CPU, 8 GB machine will likely be optimal doing no more than about 8
> queries at once (give or take a few, depending on how many disk drives are
> in your RAIDs and how much I/O concurrency the server can support). Oh,
> you mentioned MS Windows in there; OK, 8 is optimistic, and the optimal
> value may be more like 4.
>
> If you have 100 clients that simultaneously want to make queries every 5
> minutes, you should consider using some sort of message queueing system,
> where your clients send a message to an application service, and the app
> server runs as many queue workers as you find optimal; each worker reads
> a message from the queue, processes the database requests needed to
> satisfy it, returns the results to the client, then grabs the next queue
> entry and repeats.

Or he could spend $35k or so on an HP DL580 with 4x8 core Xeons in it.