Discussion: pgtune
I just found out about pgtune and am trying it out on my server.

I have 2.5 questions:

1) Are these settings the maximum that the server will handle, if it is
strictly dedicated to postgresql? Meaning if I am running other stuff on
the server as well, this would be a bad idea.

1a) If I have some intense plpython stored procedures, do they run in
the postgresql memory space (i.e. using the memory settings from
postgresql.conf), or do they run under their own memory space, in which
case I would have to take that into account when allocating postgresql
memory?

2) If it sets my max_connections to 80 and I would like to set it at 300,
what would be the appropriate setting to lower at its expense?

Sim
2010/8/9 Sim Zacks <sim@compulab.co.il>
Look at the options available in pgtune:
-M TOTALMEMORY, --memory=TOTALMEMORY
Total system memory, will attempt to detect if
unspecified
-T DBTYPE, --type=DBTYPE
Database type, defaults to Mixed, valid options are
DW, OLTP, Web, Mixed, Desktop
-c CONNECTIONS, --connections=CONNECTIONS
Maximum number of expected connections, default
depends on database type
For question 1, you can set the type of server you want with -T. For question 2, you can pass the -c parameter and it will adjust the other parameters accordingly. Not sure about 1a, though.
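Tying those options together, a hypothetical invocation might look like the following. The -M, -T, and -c flags are the ones listed above; the -i/-o flags for the input and output configuration files follow pgtune's usual command-line usage, and the paths here are made up for illustration:

```shell
# Suggest settings for a dedicated 8 GB web-application server
# expecting up to 300 connections; review the output file before
# applying it to the live server.
pgtune -i /etc/postgresql/postgresql.conf \
       -o postgresql.conf.pgtune \
       -M 8589934592 -T Web -c 300
```

Note that -M takes bytes, so 8589934592 is 8 GB.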
Amitabh Kant
What does DW stand for in --type=DW?
Sorry for my English.
Tuan Hoang Anh
On 09-Aug-2010 6:40 PM, tuanhoanganh wrote:
> What is the name of DW in --type=DW
> Sorry for my English.
> Tuan Hoang Anh

DW = data warehouse. I don't think you have to apologize for your English.
On Aug 9, 2010, at 11:40 AM, tuanhoanganh wrote:
> What is the name of DW in --type=DW
> Sorry for my English.

DW: Data Warehouse

--Regards, Vibhor
Sim Zacks wrote:
> 1) Are these settings the maximum that the server will handle, if it is
> strictly dedicated to postgresql? Meaning if I am running other stuff on
> the server as well, this would be a bad idea.

The idea is that they will be in the right general range for a system running nothing but PostgreSQL. The current code goes a little far in that regard; I've gotten some reports that its setting for work_mem in particular is a bit too high for some people. If the amount of memory you'd like to set aside for PostgreSQL use is smaller than the "all of the RAM in the server" it defaults to, just pass it "-M <bytes>" with a smaller setting. For example, if the server has 8GB of RAM but you only want half to be considered usable by the server, you could add "-M 4294967296". Note that the server doesn't enforce a hard limit here, and pgtune will give you high settings for everything, by design.

> 1a) If I have some intense plpython stored procedures, do they run in
> the postgresql memory space (ie using the memory settings from the
> postgresql.conf, or do they run under their own memory space and then I
> would have to take that into account when allocating postgresql memory?

Most of the memory settings in postgresql.conf affect shared memory used only by the server. The main client-side setting is work_mem, which isn't directly involved in allocating memory; it's only a rough guide to how much memory clients should use for operations like sorting and hashing. If your stored procedure uses a bunch of memory, that's being allocated by the client running it, with no reference to any server-side setting.

> 2) If it sets my max_connections to 80 and would like to set it at 300,
> what would be the appropriate setting to lower at its expense?

Set "-c 300" when you run pgtune and it will do the right thing here.

-- Greg Smith 2ndQuadrant US Baltimore, MD PostgreSQL Training, Services and Support greg@2ndQuadrant.com www.2ndQuadrant.us
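The connections/memory tradeoff in question 2 can be sketched numerically. This is an illustrative toy calculation, not pgtune's actual formulas: the function name, the divide-by-four work_mem rule, and the exact ratios are all made up for the example, but the shape matches the advice above (shared settings stay fixed while the per-connection budget shrinks as max_connections grows):

```python
def suggest_settings(total_mem_mb, max_connections):
    """Toy pgtune-style suggestions for a dedicated server (illustrative only)."""
    return {
        # shared_buffers: a common rule of thumb is ~1/4 of RAM on a dedicated box
        "shared_buffers_mb": total_mem_mb // 4,
        # effective_cache_size: commonly ~3/4 of RAM (a planner hint, not an allocation)
        "effective_cache_size_mb": total_mem_mb * 3 // 4,
        # work_mem is a per-sort/per-hash budget that each backend may use,
        # so it has to shrink as the number of concurrent connections grows
        "work_mem_mb": max(1, total_mem_mb // (4 * max_connections)),
    }

at_80 = suggest_settings(8192, 80)    # 8 GB server, pgtune's suggested 80 connections
at_300 = suggest_settings(8192, 300)  # same server forced to 300 connections
print(at_80["work_mem_mb"], at_300["work_mem_mb"])
```

The shared-memory settings are identical in both cases; only work_mem drops, which is exactly the setting Greg says the tool lowers "at the expense" of more connections.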
Greg Smith wrote:
> Set "-c 300" when you run pgtune and it will do the right thing here.

Regarding tuning a database: we had a rather neat tool for Oracle some years ago that would connect to a live database and monitor the QEP (query execution plan) cache for badly indexed queries etc. It would use this information (with the schema metadata) to suggest creation and deletion of indices, or norm or denorm of tables. At the time it was around 3K per seat, but because of the size of the schema and project it saved a lot more than it cost - customer paid, BTW :-) IIRC it was used over a dialup (dialback) PPP tunnel into a customer's systems, so net/db overheads were pretty light. In the month or two it was running it made very few "wrong" suggestions, but it was not a tool to be used by a non-DBA.

My question is - is there anything out there "similar" for Pg? I was looking at using pgpool as something to build a query profile from, then rolling some perl code to build QEPs for this "cache", then semi-manual analysis of the output - which could end up as gigs of execution plans :-(

Jacqui
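As a possible starting point for collecting plans without rolling custom code: PostgreSQL ships a contrib module, auto_explain, that logs the execution plans of slow queries to the server log. It only captures plans rather than suggesting indexes, so it covers just the first half of what the Oracle tool did, but a postgresql.conf fragment like the following (the threshold values here are arbitrary examples) would build up the raw plan "cache" for the semi-manual analysis step:

```
shared_preload_libraries = 'auto_explain'
auto_explain.log_min_duration = '250ms'   # log plans only for queries slower than this
auto_explain.log_analyze = off            # on = actual row counts too, but adds overhead
```

Expect the log volume to grow quickly on a busy server, which matches the "gigs of execution plans" concern above.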
Jacqui Caren-home wrote:
> we had a rather neat tool for oracle some years ago that would connect
> to a live database and monitor the QEP (query execution plan) cache for
> badly indexed queries etc. It would use this information (with the
> schema meta data) to suggest creation and deletion of indices or norm
> or denorm of tables.
> ...
> My question is - is there anything out there "similar" for Pg?

We have a service like that: http://www.tuningcloud.com/

And some of EnterpriseDB's PostgreSQL products have a tool named Dynatune that tweaks the configuration of a running server: http://www.enterprisedb.com/docs/en/8.3/suppguide/EnterpriseDB_Supplement_EN_8.3-15.htm

I'm not sure how much they actually look at queries to make those decisions, though.

-- Greg Smith 2ndQuadrant US Baltimore, MD PostgreSQL Training, Services and Support greg@2ndQuadrant.com www.2ndQuadrant.us