Re: 15,000 tables - next step

From: Michael Riess
Subject: Re: 15,000 tables - next step
Date:
Msg-id: dmud5m$245l$1@news.hub.org
In response to: Re: 15,000 tables - next step  (William Yu <wyu@talisys.com>)
Responses: Re: 15,000 tables - next step  (Jan Wieck <JanWieck@Yahoo.com>)
List: pgsql-performance
William Yu wrote:
> Michael Riess wrote:
>>> Well, I'd think that's where your problem is. Not only do you have a
>>> (relatively speaking) small server -- you also share it with other
>>> very memory-hungry services! That's not a situation I'd like to be in.
>>> Try putting Apache and Tomcat elsewhere, and leave the bulk of the 1GB
>>> to Postgres.
>>
>> No can do. I can try to switch to a 2GB machine, but I will not use
>> several machines. Not for a 5GB database. ;-)
>>
>>> With 1500 shared buffers you are not really going
>>> anywhere -- you should have ten times that at the very least.
>>
>> Like I said - I tried to double the buffers and the performance did
>> not improve in the least. And I also tried this on a 2GB machine, and
>> swapping was not a problem. If I used 10x more buffers, I would in
>> essence remove the OS buffers.
>
> Increasing buffers does improve performance -- if you have enough
> memory. You just don't have enough memory to play with. My servers run
> w/ 10K buffers (128MB on 64-bit FC4) and they definitely run better
> at 10K versus 1500.
>
> With that many tables, your system catalogs are probably huge.


content2=# select sum(relpages) from pg_class where relname like 'pg_%';
   sum
-------
  64088
(1 row)
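
At the default 8 kB block size, that's roughly 500 MB of system catalogs.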

:-)


> While my situation was fixable by scheduling a nightly
> vacuum/analyze on the system catalogs to get rid of the bazillion dead
> table/index info, you have no choice but to get more memory so you can
> stuff your entire system catalog into buffers/os cache. Personally, w/
> 1GB of ECC RAM at ~$85, it's a no-brainer. Get as much memory as your
> server can support.
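
For reference, such a nightly catalog pass is only a few lines of SQL. A rough
sketch (the exact list of catalogs worth hitting is an assumption; run it from
cron via psql, since VACUUM can't run inside a transaction block):

-- hypothetical nightly maintenance of the most bloat-prone system catalogs
VACUUM ANALYZE pg_catalog.pg_class;
VACUUM ANALYZE pg_catalog.pg_attribute;
VACUUM ANALYZE pg_catalog.pg_index;
VACUUM ANALYZE pg_catalog.pg_depend;
VACUUM ANALYZE pg_catalog.pg_type;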

The problem is that we use pre-built hardware which isn't configurable.
We can only switch to a bigger server with 2GB, but that's tops.

I will do the following:

- switch to 10k buffers on a 1GB machine, 20k buffers on a 2GB machine
- try to optimize my connection pools to remember which apps (groups of
30 tables) were accessed, so that there is a better chance of hitting the caches
- "swap out" tables which are rarely used: export the content, drop the
table, and re-create it on the fly upon access (rough sketch below).
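
For the "swap out" idea, something along these lines (table name, column list
and archive path are placeholders, and server-side COPY needs superuser rights):

-- export the content and drop the table to shrink the catalogs
COPY app_0042_data TO '/archive/app_0042_data.copy';
DROP TABLE app_0042_data;

-- later, on first access, re-create the table and load the data back
CREATE TABLE app_0042_data (id integer PRIMARY KEY, payload text);
COPY app_0042_data FROM '/archive/app_0042_data.copy';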

Thanks for your comments!
