Thread: postgresql performance degrading after a while


postgresql performance degrading after a while

From:
"Ron Marom"
Date:

Hi All,

 

I seem to have a problem with postgresql 7.4.2 running on Red Hat Enterprise Linux ES3. I am running an application server over a database whose tables contain at most a few thousand records. The problematic table, however, contains only 67 records, and an application daemon updates all of those records every 30 seconds.

 

Everything looks fine when the application starts, but after 1-1.5 days of continuous operation the postgresql connections start consuming 100% of the machine's CPU in kernel mode. At that point, restarting postgresql, the application, or even the entire machine does not help; CPU consumption hits 100% again immediately after startup. Nevertheless, I have ruled out hardware issues and application bugs: when I dump all application data with the pg_dump utility, recreate the database with dropdb and createdb, and then restore from the dump, everything works fine again and I get another 1-1.5 days of operation.

 

Does anyone have an idea what I should do to avoid recreating the DB every day?

 

Thanks in Advance.

Re: postgresql performance degrading after a while

From:
Michael Fuhr
Date:
On Mon, Jan 30, 2006 at 09:02:24AM +0200, Ron Marom wrote:
> I seem to have a problem with postgresql 7.4.2 running on Red Hat
> Enterprise Linux ES3.

If you can't upgrade to 8.0 or 8.1 then at least consider using the
latest version in the 7.4 branch (7.4.11).  You're missing almost
two years of bug fixes.

http://www.postgresql.org/docs/7.4/static/release.html

> I am running an application server over a database in which tables
> contains at most a few thousands of records. The problematic table,
> however, contains 67 records, with an application daemon updating
> all those records every 30 seconds.

Are you vacuuming that table frequently?  If not then it's accumulating
a lot of dead tuples, which would cause performance to degrade.

Are you vacuuming at all?  See "Routine Database Maintenance Tasks"
in the documentation for an explanation of what it is and why it's
necessary.

http://www.postgresql.org/docs/7.4/static/maintenance.html
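For a table this small and this hot, vacuuming just that one table is cheap; a minimal sketch, assuming a hypothetical table name:

```sql
-- Plain VACUUM reclaims dead-tuple space without blocking readers or writers;
-- VERBOSE reports how many dead tuples were found and removed.
VACUUM VERBOSE my_status_table;

-- Refresh the planner statistics for the table as well:
ANALYZE my_status_table;
```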

--
Michael Fuhr

Re: postgresql performance degrading after a while

From:
"Ron Marom"
Date:
First of all, thanks for your quick and efficient response.

Indeed I forgot to mention that I AM vacuuming the database using a
daemon every few hours; however, this does not seem to be the issue this
time, as when the CPU consumption went up I tried to vacuum manually
and it had no effect.

As for the version, I am aware of the most recent releases, and the next
version of the application will be certified with postgresql 8.1.x.
However, the behavior I mentioned occurred on a live site where making
changes is not simple. I will upgrade if I have no choice, but I would be
happy if there is any workaround to keep the server going until the
scheduled upgrade.

Thanks again.


Re: postgresql performance degrading after a while

From:
Martijn van Oosterhout
Date:
On Mon, Jan 30, 2006 at 09:28:14AM +0200, Ron Marom wrote:
> Indeed I forgot to mention that I AM vacuuming the database using a
> daemon every few hours; however this seems not to be the issue this
> time, as when the CPU consumption went up I tried to vacuum manually
> and it seemed to have no effect.

You may be suffering from index bloat. Newer versions mostly deal with
this but old ones don't. Check the data directory for files that keep
growing. Cross-reference with relfilenode in pg_class. Also, vacuuming
once every few hours for such a frequently updated table is probably not
enough; you may need to do it more often. What about your FSM
settings?
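The cross-reference Martijn describes could be sketched like this (the table name is hypothetical; the file under base/<db-oid>/ in the data directory is named after the relfilenode value):

```sql
-- Map on-disk file names back to the table and its indexes.
SELECT c.relname, c.relkind, c.relfilenode, c.relpages
FROM pg_class c
WHERE c.relname = 'my_status_table'            -- the table itself
   OR c.oid IN (SELECT indexrelid FROM pg_index
                WHERE indrelid = 'my_status_table'::regclass);  -- its indexes
```

If relpages keeps climbing for a 67-row table, the heap or an index is bloated.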

If it's index bloat, simply running REINDEX over the table should fix
it. In general, VACUUM FULL + REINDEX should have pretty much the same
effect as a dump/restore.
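Concretely, that cleanup would be something like the following (hypothetical table name; both commands take exclusive locks on the table, so pick a quiet moment):

```sql
VACUUM FULL my_status_table;   -- compacts the heap, returning free space to the OS
REINDEX TABLE my_status_table; -- rebuilds every index on the table from scratch
```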

Hope this helps,
--
Martijn van Oosterhout   <kleptog@svana.org>   http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.


Re: postgresql performance degrading after a while

From:
Tom Lane
Date:
"Ron Marom" <ron@bitband.com> writes:
> Indeed I forgot to mention that I AM vacuuming the database using a
> daemon every few hours; however this seems not to be the issue this
> time, as when the CPU consumption went up I tried to vacuum manually
> and it seemed to have no effect.

This is not nearly often enough.  You stated that the problem table
contains 67 records and all of them are updated every 30 seconds.
Therefore, after one hour the table contains 67 live records and
8040 dead ones, and a slowdown on the order of 100x is not exactly
surprising.

I would recommend vacuuming this specific table every five minutes or
less via a cron job.  Or even fix that application daemon to issue
a vacuum after each time it updates the table --- if there are never
more than 67 dead rows then vacuuming won't take long.
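As a sketch of that cron job (database and table names are hypothetical), using the vacuumdb wrapper shipped with PostgreSQL:

```
# crontab for the postgres user: vacuum only the hot table every 5 minutes
*/5 * * * * /usr/bin/vacuumdb --quiet --table=my_status_table mydb
```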

After you update to 8.1 you might want to consider using autovacuum
instead of a hand-tuned cron script.

            regards, tom lane

Re: postgresql performance degrading after a while

From:
Rick Gigger
Date:
On Jan 30, 2006, at 12:28 AM, Ron Marom wrote:

> First of all, thanks for your quick and efficient response.
>
> Indeed I forgot to mention that I AM vacuuming the database using a
> daemon every few hours; however this seems not to be the issue this
> time, as when the CPU consumption went up I tried to vacuum manually
> and it seemed to have no effect.

This does not necessarily mean that your vacuuming is all being taken
care of properly.  I would look at the size of the files for those
tables and their associated indexes and toast tables.  The table could
be growing much larger than it needs to be and becoming much slower than
it needs to be.  If your fsm settings are too low, then even after
vacuuming there will still be tons of dead tuples causing your tables
to get bloated.

Even if your settings are high enough, you probably need to vacuum more
often.  If you are vacuuming every two hours, the records are being
updated every 30 seconds, and there are 67 of them, then by the time
vacuum gets around to cleaning up you already have space allocated for
16,080 tuples in a table that only has 67.  Anything you do on that
table is going to take way longer than it should.

You could try doing a VACUUM FULL on it sometime (this will block
access to the table until it is done, so be careful) to see if that
speeds it up.  If that works, then the problem is that your table is
simply becoming too bloated.  Once you get it shrunk down, through
either a reload of the table or a VACUUM FULL, a vacuum on that
specific table should be near instantaneous if you've only got 67
records.  I would try setting up a vacuum on that one table about
every 10 minutes after you've got it shrunk down and see if that
helps.  Also, once again, make sure your fsm settings are high enough,
or all the vacuuming in the world won't do any good.
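For reference, the fsm knobs Rick mentions live in postgresql.conf; the values below are just the 7.4-era defaults, not recommendations, and raising them requires a server restart:

```
# postgresql.conf -- free space map sizing
max_fsm_pages = 20000        # max number of disk pages whose free space is tracked
max_fsm_relations = 1000     # max number of tables and indexes tracked
```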

It's possible that your indexes are also getting bloated.  Try
reindexing the table and see if that helps.

I am no expert here, so please correct me if I am wrong, but I have
been dealing with very similar issues lately and that is how I fixed
them.

Rick