max_connections limit violation not showing in pg_stat_activity

From: Charles Clavadetscher
Subject: max_connections limit violation not showing in pg_stat_activity
Date:
Msg-id: c272641c-f91c-42c4-346a-0afcd0d7d8cc@swisspug.org
Replies: Re: max_connections limit violation not showing in pg_stat_activity
List: pgsql-general
Hello

We are using PostgreSQL 9.3.10 on RedHat (probably 6.x).

The database is hosted by an internal service provider. We have
superuser access to it through a PG client such as psql, but no access
to the OS. For that reason we can only reach the log files indirectly,
using some of the built-in system functions like pg_ls_dir, etc.
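
For example, this is roughly how we get at the tail of the current log
from SQL; the log directory and file name below are placeholders (the
real ones depend on the log_directory and log_filename settings):

  -- list the files in the log directory (assuming the common
  -- relative setting 'pg_log'; check with SHOW log_directory)
  SELECT pg_ls_dir('pg_log');

  -- read the last 10 kB of one log file (file name is a placeholder)
  SELECT pg_read_file('pg_log/postgresql-2016-06-27.log',
         greatest(0, (pg_stat_file('pg_log/postgresql-2016-06-27.log')).size - 10240),
         10240);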

Among other things, the database is the backend for a web application
that expects a load of a few hundred users at a time (these are
participants in online surveys that we use for computing economic
indicators; they access the system every month). The total number of
expected users is above 5000, but we don't expect very high concurrent
access to the database: as mentioned, a few hundred at the beginning of
the surveys.

To make sure that we won't have problems at peak times, we created a
load test using Gatling that ramps up to 1000 users in 5 minutes, in
bunches of 10. At the beginning we had problems with the web server
response, which we were able to correct. Now we face a problem with the
max_connections limit of PostgreSQL. Currently it is set to the default
of 100. We are going to look into it and either increase that limit or
consider connection pooling (a sketch of what I mean by pooling is below).
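
By connection pooling I mean something like PgBouncer in front of the
database. A minimal configuration sketch; the database name, auth file
path and pool sizes are illustrative placeholders, not our actual setup:

  ; pgbouncer.ini (all values are placeholders)
  [databases]
  surveydb = host=127.0.0.1 port=5432 dbname=surveydb

  [pgbouncer]
  listen_addr = 127.0.0.1
  listen_port = 6432
  auth_type = md5
  auth_file = /etc/pgbouncer/userlist.txt
  pool_mode = transaction    ; server connection released at end of transaction
  max_client_conn = 1000     ; clients the pooler will accept
  default_pool_size = 20     ; actual PostgreSQL connections per database/user

The application would then connect to port 6432 instead of 5432, so a
thousand web clients could share a few dozen server connections.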

What bothers me, however, is that running a query on pg_stat_activity
with a 1-second watch never shows more than 37 concurrent connections:

SELECT count(*) FROM pg_stat_activity;
\watch 1
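
If it is of any use, a slightly richer probe along the same lines; the
column aliases are my own choice and the syntax should work on 9.3:

  SELECT count(*) AS connections_in_use,
         sum(CASE WHEN state = 'active' THEN 1 ELSE 0 END) AS active,
         current_setting('max_connections')::int AS max_connections
  FROM pg_stat_activity;
  \watch 1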

Due to that, it took us quite some time to figure out that the
bottleneck had become the database. We only discovered it after looking
into the log files (as mentioned above, this is not very
straightforward, in particular because the logs tend to become quite huge).

I assume that the peaks of requests violating the limit happen between
two runs of the query. Is there a better way to keep track of this kind
of problem? It felt a bit weird not to be able to discover the issue sooner.

And what would be a reasonable strategy to deal with the problem at
hand? Increasing max_connections has repercussions on the configuration
of work_mem (if I remember correctly), or conversely on the amount of
physical memory that must be available on the system.
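
To make that trade-off concrete, here is the back-of-the-envelope
reasoning as I understand it: each backend may allocate up to work_mem
per sort or hash operation, and a single query can contain several such
operations, so the product below is a rough worst-case bound, not an
exact figure:

  -- the settings involved in the trade-off
  SELECT name, setting, unit
  FROM pg_settings
  WHERE name IN ('max_connections', 'work_mem', 'shared_buffers');

  -- rough worst case: e.g. with max_connections = 100 and
  -- work_mem = 4MB, that is 100 * 4 MB = 400 MB of private memory
  -- on top of shared_buffers; at 1000 connections the same work_mem
  -- would allow roughly 4 GB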

On Thursday we are going to have a meeting with our DB hosting provider
to discuss which improvements need to be made to meet the requirements
of our applications (the web application mentioned is not the only one
using the database, but it is the only one where we expect such peaks).

So I'd be very grateful for advice on this subject.

Thank you.
Regards
Charles

--
Swiss PostgreSQL Users Group
c/o Charles Clavadetscher
Treasurer
Motorenstrasse 18
CH-8005 Zürich

http://www.swisspug.org

+-----------------------+
|   ____  ______  ___   |
|  /    )/      \/   \  |
| (     / __    _\    ) |
|  \    (/ o)  ( o)   ) |
|   \_  (_  )   \ ) _/  |
|     \  /\_/    \)/    |
|      \/ <//|  |\\>    |
|           _|  |       |
|           \|_/        |
|                       |
| PostgreSQL 1996-2016  |
|  20 Years of Success  |
|                       |
+-----------------------+

