Thread: Tuning Postgres for single user manipulating large amounts of data


Tuning Postgres for single user manipulating large amounts of data

From
Paul Taylor
Date:
Hi, I'm using Postgres 8.3 on a MacBook Pro laptop.
I'm using the database with just one db connection to build a Lucene
search index from some of the data, and I'm trying to improve
performance. The key thing is that I'm only a single user but
manipulating large amounts of data, i.e. processing tables with up to 10
million rows in them, so I think I want to configure Postgres so that it
can create large temporary tables in memory.

I've tried changing various parameters such as shared_buffers, work_mem
and checkpoint_segments, but I don't really understand what those values
mean, and the documentation seems to be aimed at configuring for
multiple users, and my changes make things worse. For example, my machine
has 2GB of memory and I read that if using it as a dedicated server you
should set shared memory to 40% of total memory, but when I increase it
to more than 30MB Postgres will not start, complaining about my SHMMAX limit.

Paul

Re: Tuning Postgres for single user manipulating large amounts of data

From
Andy Colson
Date:
On 12/9/2010 6:25 AM, Paul Taylor wrote:
> Hi, I'm using Postgres 8.3 on a MacBook Pro laptop.
> I'm using the database with just one db connection to build a Lucene
> search index from some of the data, and I'm trying to improve
> performance. The key thing is that I'm only a single user but
> manipulating large amounts of data, i.e. processing tables with up to 10
> million rows in them, so I think I want to configure Postgres so that it
> can create large temporary tables in memory.
>
> I've tried changing various parameters such as shared_buffers, work_mem
> and checkpoint_segments, but I don't really understand what those values
> mean, and the documentation seems to be aimed at configuring for
> multiple users, and my changes make things worse. For example, my machine
> has 2GB of memory and I read that if using it as a dedicated server you
> should set shared memory to 40% of total memory, but when I increase it
> to more than 30MB Postgres will not start, complaining about my SHMMAX limit.
>
> Paul
>

You need to bump up your SHMMAX is your OS.  I'm sure google knows how
to do it.  (In Linux use sysctl, so it may be similar in macOS.)
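
Something like this is the usual Linux incantation (the numbers here are only an
example -- it just has to be bigger than the shared memory Postgres asks for):

  sysctl -w kernel.shmmax=1073741824   # max size of a single segment, in bytes (1GB here)
  sysctl -w kernel.shmall=2097152      # total shared memory allowed, counted in 4kB pages

and put the same lines in /etc/sysctl.conf so they survive a reboot. The macOS
keys have the same meaning but live under kern.sysv.*.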

checkpoint_segments: I've bumped it up to 10, but only when inserting
a huge amount of data; not sure how much it'll help otherwise.

shared_buffers: this is the big one.  Set it big, 1GB maybe.

work_mem: this is for temp work a query might need to do, like sorting,
merging, etc.  A big value (100MB or so) would be OK.  It's per user,
but since there is only one of you, splurge.

There is also effective_cache_size (or something like that): it's the
amount of memory PG can assume is being used for disk cache.  It's not
something that'll be allocated.  So you have 2GB: 1GB for PG, 300MB for
the OS and other stuff, so 700MB for effective_cache_size?

In Linux I use "free" to see how much is being used for disk cache, and
set it to that.
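
Putting rough numbers on the above for a 2GB machine, a postgresql.conf sketch
(a starting point to experiment with, not gospel) might look like:

  shared_buffers = 1GB            # the big one
  work_mem = 100MB                # memory for sorts/merges; fine when you're the only user
  checkpoint_segments = 10        # mostly helps big bulk inserts
  effective_cache_size = 700MB    # planner hint: what's left over for the OS disk cache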


-Andy

Re: Tuning Postgres for single user manipulating large amounts of data

From
Andy Colson
Date:
On 12/9/2010 8:50 AM, Andy Colson wrote:
> On 12/9/2010 6:25 AM, Paul Taylor wrote:
>> Hi, I'm using Postgres 8.3 on a MacBook Pro laptop.
>> I'm using the database with just one db connection to build a Lucene
>> search index from some of the data, and I'm trying to improve
>> performance. The key thing is that I'm only a single user but
>> manipulating large amounts of data, i.e. processing tables with up to 10
>> million rows in them, so I think I want to configure Postgres so that it
>> can create large temporary tables in memory.
>>
>> I've tried changing various parameters such as shared_buffers, work_mem
>> and checkpoint_segments, but I don't really understand what those values
>> mean, and the documentation seems to be aimed at configuring for
>> multiple users, and my changes make things worse. For example, my machine
>> has 2GB of memory and I read that if using it as a dedicated server you
>> should set shared memory to 40% of total memory, but when I increase it
>> to more than 30MB Postgres will not start, complaining about my SHMMAX limit.
>>
>> Paul
>>
>
> You need to bump up your SHMMAX is your OS.

sorry: SHMMAX _in_ your OS.

it's an OS setting, not a PG one.

-Andy


Re: Tuning Postgres for single user manipulating large amounts of data

From
Reid Thompson
Date:
On 12/09/2010 09:59 AM, Andy Colson wrote:
> On 12/9/2010 8:50 AM, Andy Colson wrote:
>> On 12/9/2010 6:25 AM, Paul Taylor wrote:

>> You need to bump up your SHMMAX is your OS.
>
> sorry: SHMMAX _in_ your OS.
>
> it's an OS setting, not a PG one.
>
> -Andy
>
>
scroll down to the section on OSX
http://developer.postgresql.org/pgdocs/postgres/kernel-resources.html
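
For OS X that boils down to putting something like this in /etc/sysctl.conf and
rebooting (numbers are only an example -- size shmmax/shmall to what you need):

  kern.sysv.shmmax=1073741824   # bytes; room for ~1GB of shared_buffers plus overhead
  kern.sysv.shmmin=1
  kern.sysv.shmmni=32
  kern.sysv.shmseg=8
  kern.sysv.shmall=262144       # counted in 4kB pages; 262144 * 4kB = 1GB, matching shmmax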

Re: Tuning Postgres for single user manipulating large amounts of data

From
Paul Taylor
Date:
On 09/12/2010 15:12, Reid Thompson wrote:
> On 12/09/2010 09:59 AM, Andy Colson wrote:
>> On 12/9/2010 8:50 AM, Andy Colson wrote:
>>> On 12/9/2010 6:25 AM, Paul Taylor wrote:
>>> You need to bump up your SHMMAX is your OS.
>> sorry: SHMMAX _in_ your OS.
>>
>> it's an OS setting, not a PG one.
>>
>> -Andy
>>
>>
> scroll down to the section on OSX
> http://developer.postgresql.org/pgdocs/postgres/kernel-resources.html
>
Thanks guys, but one thing I don't get is why setting shared_buffers
to 40MB breaks the kernel limit. I mean, 40MB doesn't sound like very much
at all.

Paul

Re: Tuning Postgres for single user manipulating large amounts of data

From
Reid Thompson
Date:
On 12/09/2010 12:36 PM, Paul Taylor wrote:
> On 09/12/2010 15:12, Reid Thompson wrote:
>> On 12/09/2010 09:59 AM, Andy Colson wrote:
>>> On 12/9/2010 8:50 AM, Andy Colson wrote:
>>>> On 12/9/2010 6:25 AM, Paul Taylor wrote:
>>>> You need to bump up your SHMMAX is your OS.
>>> sorry: SHMMAX _in_ your OS.
>>>
>>> it's an OS setting, not a PG one.
>>>
>>> -Andy
>>>
>>>
>> scroll down to the section on OSX
>> http://developer.postgresql.org/pgdocs/postgres/kernel-resources.html
>>
> Thanks guys, but one thing I don't get is why setting shared_buffers to 40MB breaks the kernel limit. I mean, 40MB
> doesn't sound like very much at all.
>
> Paul

It's not -- from the same page (near the top):
17.4.1. Shared Memory and Semaphores

Shared memory and semaphores are collectively referred to as "System V IPC" (together with message queues, which are
not relevant for PostgreSQL). Almost all modern operating systems provide these features, but many of them don't have
them turned on or sufficiently sized by default, especially as available RAM and the demands of database applications
grow.

But most of these system defaults originated when available system RAM was much smaller.
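
You can check what you're actually starting from with:

  sysctl kern.sysv.shmmax kern.sysv.shmall

If memory serves, the stock OS X values are in the 4MB ballpark (shmmax=4194304
bytes, shmall=1024 four-kB pages), which is exactly why a 40MB shared_buffers
won't fit until you raise them.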

Re: Tuning Postgres for single user manipulating large amounts of data

From
Scott Marlowe
Date:
On Thu, Dec 9, 2010 at 5:25 AM, Paul Taylor <ijabz@fastmail.fm> wrote:
> Hi, I'm using Postgres 8.3 on a MacBook Pro laptop.
> I'm using the database with just one db connection to build a Lucene search
> index from some of the data, and I'm trying to improve performance. The key
> thing is that I'm only a single user but manipulating large amounts of
> data, i.e. processing tables with up to 10 million rows in them, so I think
> I want to configure Postgres so that it can create large temporary tables
> in memory.
>
> I've tried changing various parameters such as shared_buffers, work_mem and
> checkpoint_segments, but I don't really understand what those values mean,
> and the documentation seems to be aimed at configuring for multiple users,
> and my changes make things worse. For example, my machine has 2GB of memory
> and I read that if using it as a dedicated server you should set shared
> memory to 40% of total memory, but when I increase it to more than 30MB
> Postgres will not start, complaining about my SHMMAX limit.

So you're pretty much batch processing.  Not PostgreSQL's strongest
point.  But we'll see what we can do.  Large shared buffers aren't
gonna help a lot here, since your OS will be caching files as well,
and you've only got one process running.  You do want a large enough
shared_buffers to hold everything you're working on at one point in
time, so getting it into the hundred or so megabyte range will likely
help.  After that you'll be stealing memory that could be used for OS
caching or work_mem, so don't go crazy, especially on a machine with
only 2 Gigs of RAM.  Note I just picked up 8 gigs of DDR3 RAM for $99 on
Newegg, so if your MBP can handle more memory, now's the time to
splurge.
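
On your 2GB machine that might be something like (example figure only):

  shared_buffers = 150MB   # enough to hold the working set without starving the OS cache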

Crank up work_mem to something pretty big, in the 60 to 200 meg range.
Note that work_mem is PER sort, not total or per connection.  So if
your single user runs a query with three sorts, it could use 3x
work_mem.  Once it allocates too much memory your machine will start
swapping and slow to a crawl.  So don't overshoot.
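
If you'd rather not bake a big value into postgresql.conf, you can also set it
just for the session doing the batch work, e.g. (value is only an example):

  SET work_mem = '150MB';   -- affects only this connection, reverts when it closes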

Assuming you can recreate your db should things go horribly wrong, you
can turn off fsync.  Also crank up WAL segments (checkpoint_segments) to 32 or 64 or so.

Make sure access-time updates are turned off for the file system
(the noatime mount option).
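
So for the bulk-load runs, the risky-but-fast bits of postgresql.conf would look
something like this (only acceptable because you can rebuild the whole database
if it gets corrupted):

  fsync = off                # no crash safety at all -- rebuildable data only
  checkpoint_segments = 32   # fewer, bigger checkpoints during the heavy writes

plus noatime on whatever filesystem holds the data directory.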

--
To understand recursion, one must first understand recursion.