Discussion: RAM of Postgres Server


RAM of Postgres Server

From:
Sachin Srivastava
Date:
Dear Team,

Please suggest how much RAM and how many cores should be defined for a new Postgres database server if we use Postgres 9.3 or above.

Suppose my Postgres database size will be around 300 to 500 GB in the future.

Is there any document regarding this server configuration you could suggest?

Regards,
SS

Re: RAM of Postgres Server

From:
Gavin Flower
Date:
On 07/01/16 18:39, Sachin Srivastava wrote:
> Dear Team,
>
> Please suggest how much RAM and how many cores should be defined for a
> new Postgres database server if we use Postgres 9.3 or above.
>
> Suppose my Postgres database size will be around 300 to 500 GB in the
> future.
>
> Is there any document regarding this server configuration you could suggest?
>
> Regards,
> SS

What O/S, Linux or something else?

What type of queries?

Expected number of queries per second?

Size of commonly accessed tables, indexes, ... ?


The experts will probably need AT LEAST the above!


Cheers,
Gavin


Re: RAM of Postgres Server

From:
"David G. Johnston"
Date:
On Wed, Jan 6, 2016 at 10:39 PM, Sachin Srivastava <ssr.teleatlas@gmail.com> wrote:
Dear Team,

Please suggest how much RAM and how many cores should be defined for a new Postgres database server if we use Postgres 9.3 or above.

Suppose my Postgres database size will be around 300 to 500 GB in the future.

Is there any document regarding this server configuration you could suggest?



Total size is meaningless: consider, for instance, that of that 500 GB, 499 GB of it might be archive data that is rarely if ever accessed.

RAM holds data that is recently accessed - how much of that will you have?
Cores help service concurrent requests - how many of those will you have?  How fast will they complete?
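As a rough way to turn those two questions into starting numbers, a commonly cited rule of thumb for dedicated Postgres servers (the 25%/75% figures below are conventional assumptions, not measurements from any particular workload) sizes shared_buffers at about a quarter of RAM and effective_cache_size at about three quarters:

```python
def suggest_memory_settings(ram_gb: float) -> dict:
    """Rule-of-thumb starting points for a dedicated PostgreSQL server;
    benchmark against the real workload before settling on them."""
    return {
        # shared_buffers: ~25% of total RAM is a common first guess
        "shared_buffers_gb": round(ram_gb * 0.25, 1),
        # effective_cache_size: a planner hint, often set to ~75% of RAM
        "effective_cache_size_gb": round(ram_gb * 0.75, 1),
    }

print(suggest_memory_settings(128))
```

These are only starting points; the real answer comes from measuring the size of the data actually being accessed under load.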

David J.

Re: RAM of Postgres Server

From:
Sachin Srivastava
Date:

Dear David,

Q: RAM holds data that is recently accessed - how much of that will you have?

Ans: Regarding your point that "RAM holds data that is recently accessed": how do we figure out how much such data we will have? Does it depend on the total WAL files (I have set "checkpoint_segments" to 32)? Am I correct, or am I thinking about this wrongly? Please clarify.

Right now we have 10 GB of RAM for the first database server and 3 GB of RAM for the other database server.

Q: Cores help service concurrent requests - how many of those will you have?  How fast will they complete?

Ans: This means that with more cores we can do our work faster. For example, from 9.3 onwards pg_dump can use multiple cores (parallel jobs via the -j option), since the load can be shared among separate workers.

So if possible, more cores should be available on the database server for better performance; please clarify the benefit of more cores for me.

Right now we have 1 core for the first database server and 2 cores for the other database server.

Regards,

Sachin


On Thu, Jan 7, 2016 at 11:25 AM, David G. Johnston <david.g.johnston@gmail.com> wrote:
On Wed, Jan 6, 2016 at 10:39 PM, Sachin Srivastava <ssr.teleatlas@gmail.com> wrote:
Dear Team,

Please suggest how much RAM and how many cores should be defined for a new Postgres database server if we use Postgres 9.3 or above.

Suppose my Postgres database size will be around 300 to 500 GB in the future.

Is there any document regarding this server configuration you could suggest?



Total size is meaningless: consider, for instance, that of that 500 GB, 499 GB of it might be archive data that is rarely if ever accessed.

RAM holds data that is recently accessed - how much of that will you have?
Cores help service concurrent requests - how many of those will you have?  How fast will they complete?

David J.


Re: RAM of Postgres Server

From:
"David G. Johnston"
Date:
On Thu, Jan 7, 2016 at 12:32 AM, Sachin Srivastava <ssr.teleatlas@gmail.com> wrote:

Dear David,

Q: RAM holds data that is recently accessed - how much of that will you have?

Ans: Regarding your point that "RAM holds data that is recently accessed": how do we figure out how much such data we will have? Does it depend on the total WAL files (I have set "checkpoint_segments" to 32)? Am I correct, or am I thinking about this wrongly? Please clarify.

Right now we have 10 GB of RAM for the first database server and 3 GB of RAM for the other database server.


Using WAL to measure your active dataset is not going to work.  WAL activity occurs when you WRITE data, while in many cases the data in RAM was written to the WAL a long time ago.
 

 

 

Q: Cores help service concurrent requests - how many of those will you have?  How fast will they complete?

Ans: This means that with more cores we can do our work faster. For example, from 9.3 onwards pg_dump can use multiple cores (parallel jobs via the -j option), since the load can be shared among separate workers.

So if possible, more cores should be available on the database server for better performance; please clarify the benefit of more cores for me.

Right now we have 1 core for the first database server and 2 cores for the other database server.

 


PostgreSQL is process-oriented and presently uses only a single process to service a single connection.  Application software can open up multiple connections - which is what pg_dump does.  More typically you'd have something like a web server where all of the incoming requests are funneled through a connection pool, which then opens a number of connections to the database and shares them among those requests.

If you want advice you are going to have to give considerably more detail of your application and database usage patterns than you have.

David J.

Re: RAM of Postgres Server

From:
Sachin Srivastava
Date:
Dear David G. and Gavin,

Please find the details as below:


Q. What type of queries will be run on the Postgres database server?

All types. Databases will be general OLTP.

Q. Size of commonly accessed tables, indexes, ... ?

Various. Anywhere from a few MB to over 100GB.

 

Q. Database size for current and for future for all databases:

            Various. Anywhere from 100-1000 GB and always growing.

 

Q. Total daily connections to the databases from users and applications:

            Probably 4000+

 

Q.  CPU Core:

            Prod databases will have 32 cores of CPU and 128 GB of RAM.


If you need any other information, please let me know.


Regards,

SS

 



On Thu, Jan 7, 2016 at 8:51 PM, David G. Johnston <david.g.johnston@gmail.com> wrote:
On Thu, Jan 7, 2016 at 12:32 AM, Sachin Srivastava <ssr.teleatlas@gmail.com> wrote:

Dear David,

Q: RAM holds data that is recently accessed - how much of that will you have?

Ans: Regarding your point that "RAM holds data that is recently accessed": how do we figure out how much such data we will have? Does it depend on the total WAL files (I have set "checkpoint_segments" to 32)? Am I correct, or am I thinking about this wrongly? Please clarify.

Right now we have 10 GB of RAM for the first database server and 3 GB of RAM for the other database server.


Using WAL to measure your active dataset is not going to work.  WAL activity occurs when you WRITE data, while in many cases the data in RAM was written to the WAL a long time ago.
 

 

 

Q: Cores help service concurrent requests - how many of those will you have?  How fast will they complete?

Ans: This means that with more cores we can do our work faster. For example, from 9.3 onwards pg_dump can use multiple cores (parallel jobs via the -j option), since the load can be shared among separate workers.

So if possible, more cores should be available on the database server for better performance; please clarify the benefit of more cores for me.

Right now we have 1 core for the first database server and 2 cores for the other database server.

 


PostgreSQL is process-oriented and presently uses only a single process to service a single connection.  Application software can open up multiple connections - which is what pg_dump does.  More typically you'd have something like a web server where all of the incoming requests are funneled through a connection pool, which then opens a number of connections to the database and shares them among those requests.

If you want advice you are going to have to give considerably more detail of your application and database usage patterns than you have.

David J.

Re: RAM of Postgres Server

From:
John R Pierce
Date:
On 1/7/2016 8:21 PM, Sachin Srivastava wrote:

Q. Total daily connections to the databases from users and applications:

            Probably 4000+


I hope you mean 4000 sessions/day, and not 4000 concurrent connections.
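The distinction matters because sessions per day usually imply a much smaller average concurrency. A back-of-the-envelope sketch using Little's law (the 5-second average session length here is an illustrative assumption, not a figure from this thread):

```python
def avg_concurrency(sessions_per_day: float, avg_session_seconds: float) -> float:
    """Little's law: average concurrency = arrival rate * average duration."""
    per_second = sessions_per_day / 86_400  # 86,400 seconds in a day
    return per_second * avg_session_seconds

# 4000 sessions/day at an assumed 5 s each averages well under one
# concurrent connection; peak concurrency is what actually matters.
print(avg_concurrency(4000, 5))
```

Average concurrency is only a sanity check; bursts and long-running queries drive the real sizing.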



-- 
john r pierce, recycling bits in santa cruz

Re: RAM of Postgres Server

From:
"David G. Johnston"
Date:
On Thu, Jan 7, 2016 at 9:21 PM, Sachin Srivastava <ssr.teleatlas@gmail.com> wrote:
Dear David G. and Gavin,

Please find the details as below:


Q. What type of queries will be run on the Postgres database server?

All types. Databases will be general OLTP.

Q. Size of commonly accessed tables, indexes, ... ?

Various. Anywhere from a few MB to over 100GB.

 

Q. Database size for current and for future for all databases:

            Various. Anywhere from 100-1000 GB and always growing.

 

Q. Total daily connections to the databases from users and applications:

            Probably 4000+

 

Q.  CPU Core:

            Prod databases will have 32 cores of CPU and 128 GB of RAM.



So, what's your question...?  Given the broad brush generalities and explicit references to multiple databases I have no clue what you expect from this list - not that I was going to be of much help on the topic anyway...

David J.

Re: RAM of Postgres Server

From:
Sachin Srivastava
Date:

Dear John,

We are looking at more like 500-600 simultaneous connections, and I would say we get 10000 to 12000 connections a day per db.

Regards
SS

On Fri, Jan 8, 2016 at 10:49 AM, John R Pierce <pierce@hogranch.com> wrote:
On 1/7/2016 8:21 PM, Sachin Srivastava wrote:

Q. Total daily connections to the databases from users and applications:

            Probably 4000+


I hope you mean 4000 sessions/day, and not 4000 concurrent connections.



-- 
john r pierce, recycling bits in santa cruz

Re: RAM of Postgres Server

From:
Ben Chobot
Date:
On Jan 7, 2016, at 10:32 PM, Sachin Srivastava <ssr.teleatlas@gmail.com> wrote:
>
>
> Dear John,
>
> We are looking at more like 500-600 simultaneous connections, and I would say we get 10000 to 12000 connections a day per db.


Unless you have 300 cores to service those 500-600 simultaneous connections, you are really going to want to put your database behind a connection pooler such as pgBouncer. A connection pooler is a proxy that decreases the concurrency on the database, letting the database see only a few of the connections that clients want to make, and thereby increasing overall query throughput.

If it works for your application, pgBouncer has a wonderful mode called transaction pooling, which automatically rotates clients into an open database slot on transaction boundaries.
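As an illustration only, a minimal pgbouncer.ini for transaction pooling might look like the following; the database name, addresses, and pool sizes are placeholder assumptions to be tuned for the actual workload:

```ini
[databases]
; placeholder database name and host
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; rotate clients into server slots at transaction boundaries
pool_mode = transaction
; illustrative sizes: many clients, few server connections
max_client_conn = 600
default_pool_size = 60
```

With transaction pooling, session-level features (prepared statements, advisory locks, SET commands) need care, so verify the application is compatible first.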

Re: RAM of Postgres Server

From:
John R Pierce
Date:
On 1/7/2016 10:32 PM, Sachin Srivastava wrote:
> We are looking at more like 500-600 simultaneous connections, and I
> would say we get 10000 to 12000 connections a day per db.

These applications - are they hammering queries, or mostly idle and just
issuing intermittent queries?

500 queries at the same time will cause serious resource thrashing with
32 cores.  If they are mostly idle, you should look at the pgbouncer-style
connection pooling model, as Ben suggests.
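A frequently quoted first guess for sizing such a pool, from the PostgreSQL wiki, is (cores x 2) + effective spindle count; treat it as a starting estimate to benchmark, not a rule:

```python
def pool_size_estimate(cores: int, effective_spindles: int) -> int:
    """First-guess pool size: (cores * 2) + effective spindle count.
    SSDs blur the spindle term; benchmark before committing to a number."""
    return cores * 2 + effective_spindles

# 32 cores and one effective spindle suggest a pool of roughly 65
# server connections, far below 500-600 raw client connections.
print(pool_size_estimate(32, 1))
```

The pooler then multiplexes the hundreds of client connections onto that much smaller set of server connections.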

--
john r pierce, recycling bits in santa cruz