Re: performance problem - 10.000 databases

From: Tom Lane
Subject: Re: performance problem - 10.000 databases
Date:
Msg-id: 9643.1067610219@sss.pgh.pa.us
In reply to: performance problem - 10.000 databases  (Marek Florianczyk <franki@tpi.pl>)
Responses: Re: performance problem - 10.000 databases
List: pgsql-admin
Marek Florianczyk <franki@tpi.pl> writes:
> We are building hosting with apache + php ( our own mod_virtual module )
> with about 10.000 virtual domains + PostgreSQL.
> PostgreSQL is on a different machine ( 2 x intel xeon 2.4GHz 1GB RAM
> scsi raid 1+0 )
> I've made some tests - 3000 databases and 400 clients connected at the
> same time.

You are going to need much more serious iron than that if you want to
support 10000 active databases.  The required working set per database
is a couple hundred K just for system catalogs (I don't have an exact
figure in my head, but it's surely of that order of magnitude).  So the
system catalogs alone would require 2 gig of RAM to keep 'em swapped in;
never mind caching any user data.
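
If you'd rather measure that catalog footprint than take the round
number on faith, a rough sketch like this (run in any one database;
relpages is only as fresh as the last VACUUM/ANALYZE) adds up the
system catalogs' on-disk pages:

    -- Approximate size of the system catalogs in the current database:
    -- sum their pages and multiply by the default 8 KB page size.
    SELECT sum(relpages) * 8 AS catalog_kb
      FROM pg_class c
      JOIN pg_namespace n ON n.oid = c.relnamespace
     WHERE n.nspname = 'pg_catalog';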

The recommended way to handle this is to use *one* database and create
10000 users each with his own schema.  That should scale a lot better.
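
A minimal sketch of that layout (the names hostdb and customer_0042
are placeholders, not anything from the original posts):

    -- One shared database instead of 10000 separate ones.
    CREATE DATABASE hostdb;

    -- Repeated once per hosted customer:
    CREATE USER customer_0042 PASSWORD 'secret';
    CREATE SCHEMA customer_0042 AUTHORIZATION customer_0042;

With the default search_path of "$user", public, each customer's
unqualified table names resolve to his own schema, and the system
catalogs are shared by everyone instead of duplicated 10000 times.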

Also, with a large max_connections setting, you have to beware that your
kernel settings are adequate --- particularly the open-files table.
It's pretty easy for Postgres to eat all your open files slots.  PG
itself will usually survive this condition just fine, but everything
else you run on the machine will start falling over :-(.  For safety
you should make sure that max_connections * max_files_per_process is
comfortably less than the size of the kernel's open-files table.
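
As a quick sanity check of that inequality, something along these
lines (a sketch; current_setting reads the two GUCs from SQL) computes
the worst case to compare against the kernel limit:

    -- Worst-case number of file descriptors Postgres might hold open:
    -- every backend using its full per-process allowance.
    SELECT current_setting('max_connections')::int
         * current_setting('max_files_per_process')::int
           AS worst_case_open_files;
    -- Compare against the kernel's table, e.g. on Linux:
    --   cat /proc/sys/fs/file-max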

            regards, tom lane
