Thread: building 8.3beta2 w/ 'make check' consumes A LOT of disk space


building 8.3beta2 w/ 'make check' consumes A LOT of disk space

From:
Jörg Beyer
Date:
Hello.

I compiled and installed PostgreSQL 8.3 beta 2 on OS X 10.4.10 a few days
ago, and noticed a minor issue that may occasionally lead to problems.

I generally run make check, and that consumes some extra disk space, of
course. While versions 8.1 and 8.2 were quite happy with a total of
approximately 280-300 MB of free disk space for the build, 8.3 beta 2
consumes *up to 1.6 gigabytes*. This may not be a problem for professional
environments, but I can imagine cases where some single user machines are
brought to their knees ;-)
(Fortunately not in my case. I'm working with a disk image to build
PostgreSQL, but it nonetheless took three attempts before I made the image
big enough...)

Snip from the current INSTALL file:

    ... If you are going to run the regression tests you will temporarily
    need up to an extra 90 MB. ...

May I suggest correcting this sentence?

Thanks for your interest.

Joerg Beyer


============================================================

Jörg Beyer
PHILIPPS-University Marburg
Dept. of Psychology
Germany




Re: building 8.3beta2 w/ 'make check' consumes A LOT of disk space

From:
Tom Lane
Date:
Jörg Beyer <Beyerj@students.uni-marburg.de> writes:
> I generally run make check, and that consumes some extra disk space, of
> course. While versions 8.1 and 8.2 were quite happy with a total of
> approximately 280-300 MB of free disk space for the build, 8.3 beta 2
> consumes *up to 1.6 gigabytes*.

There is something very wrong with that --- the regression DB has grown
but not by that much.  Where do you see the space going, exactly?
        regards, tom lane


Re: building 8.3beta2 w/ 'make check' consumes A LOT of disk space

From:
Tom Lane
Date:
Jörg Beyer <Beyerj@students.uni-marburg.de> writes:
> Attached is the log with the "du" output, one block for the build directory,
> one for the installation directory, and one for the 8.3-cluster. Some
> comments are added.

> I suspect gprof to be the culprit, everything gprof related is incredibly
> huge. 

Oh, you didn't mention you were using gprof.  Yeah, that's definitely
where the problem is:

...
364     ./src/test/regress/tmp_check/data/global
8040    ./src/test/regress/tmp_check/data/gprof/17805
8036    ./src/test/regress/tmp_check/data/gprof/17807
... 150 lines snipped ...
8120    ./src/test/regress/tmp_check/data/gprof/18286
8120    ./src/test/regress/tmp_check/data/gprof/18287
8056    ./src/test/regress/tmp_check/data/gprof/18288
8124    ./src/test/regress/tmp_check/data/gprof/18289
8080    ./src/test/regress/tmp_check/data/gprof/18290
8140    ./src/test/regress/tmp_check/data/gprof/18328
8056    ./src/test/regress/tmp_check/data/gprof/18332
8124    ./src/test/regress/tmp_check/data/gprof/18333
1296368 ./src/test/regress/tmp_check/data/gprof
8       ./src/test/regress/tmp_check/data/pg_clog
...

That is, each one of the 150+ backend processes launched during the
regression test run dropped a separate 8MB gprof file.  Presto, 1.2GB
eaten up.

The reason you didn't see this before is that we used to drop gmon.out
files directly in $PGDATA, so there was only one, with different backends
overwriting it as they exited.  A patch got put into 8.3 to drop
gmon.out in subdirectories (as you see above) so that the files wouldn't
get overwritten right away.  When autovacuum is enabled this is just
about essential if you want to learn anything useful from profiling.
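
To make the mechanism concrete, here is a minimal standalone C sketch of the
chdir-before-exit trick (illustrative only, not the actual PostgreSQL source;
the file name and busy loop are made up).  Build it with -pg and each run
leaves its gmon.out under gprof/<pid>/ instead of overwriting a single
./gmon.out:

/* chdir_profile_demo.c -- illustrative sketch, not PostgreSQL source.
 * Build with:  cc -pg -o demo chdir_profile_demo.c
 * A binary built with -pg writes gmon.out into the current working
 * directory when it exits, so changing directory just before exit
 * gives every process its own profile file.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/types.h>

int
main(void)
{
    char        dirname[32];
    volatile long i;

    /* some "work" worth profiling */
    for (i = 0; i < 10000000; i++)
        ;

    snprintf(dirname, sizeof(dirname), "gprof/%d", (int) getpid());

    mkdir("gprof", 0777);       /* may already exist; ignore errors */
    mkdir(dirname, 0777);
    if (chdir(dirname) != 0)
        perror("chdir");

    return 0;                   /* gmon.out lands in gprof/<pid>/ on exit */
}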

However, accumulation of zillions of gmon.out files is definitely a
downside of the approach; one that I've noticed myself.  I've also
noticed that it takes a heck of a long time to rm -rf $PGDATA once
you've built up a few tens of thousands of gprof subdirectories.  What's
worse, this accumulation will occur pretty quick even if you're not
doing anything with the DB, because of autovacuum process launches.

I wonder if we need to rein in gmon.out accumulation somehow, and if
so how?  This isn't an issue for ordinary users but I can see it
becoming a PITA for developers.

> And I suspect that some kind of permission problem could play a role, too.

The gprof subdirectories are mode 0700 IIRC ... maybe that's causing you
a problem?
        regards, tom lane


Re: building 8.3beta2 w/ 'make check' consumes A LOT of disk space

From:
Jörg Beyer
Date:
You see, I'm not a trained developer, I'm just a dumb psychologist,
sometimes poking around w/ things I don't fully understand -- learning by
doing, or die trying  ;-)  Give me at least three years...

Seriously now, I didn't even think about possible downsides of
--enable-profiling; I just thought profiling could be a good thing to have,
and switched it on. Now I know better and will stay away from it.  I spent
the last hour building and installing a clean beta2 and initializing a fresh
cluster, and everything is O.K. now (at least so far).

My apologies for the noise, and for wasting your time -- you definitely have
more urgent items on your list.

Regards 

Joerg Beyer






Profiling vs autovacuum

From:
Tom Lane
Date:
I wrote:
> However, accumulation of zillions of gmon.out files is definitely a
> downside of the approach; one that I've noticed myself.  I've also
> noticed that it takes a heck of a long time to rm -rf $PGDATA once
> you've built up a few tens of thousands of gprof subdirectories.  What's
> worse, this accumulation will occur pretty quick even if you're not
> doing anything with the DB, because of autovacuum process launches.

> I wonder if we need to rein in gmon.out accumulation somehow, and if
> so how?  This isn't an issue for ordinary users but I can see it
> becoming a PITA for developers.

On reflection, it seems like the worst part of this is the steady
accumulation of gprof files from autovacuum workers, which are unlikely
to be of interest at all (if you want to profile vacuuming, you'd
probably issue manual vacuum commands anyway).  So I propose something
like the attached, untested patch to force all AV workers to dump to the
same gmon.out file.  Comments?

            regards, tom lane

*** src/backend/storage/ipc/ipc.c.orig    Wed Jul 25 15:58:56 2007
--- src/backend/storage/ipc/ipc.c    Sat Nov  3 23:00:41 2007
***************
*** 126,131 ****
--- 126,136 ----
           *    $PGDATA/gprof/8845/gmon.out
           *        ...
           *
+          * To avoid undesirable disk space bloat, autovacuum workers are
+          * discriminated against: all their gmon.out files go into the same
+          * subdirectory.  Without this, an installation that is "just sitting
+          * there" nonetheless eats megabytes of disk space every few seconds.
+          *
           * Note that we do this here instead of in an on_proc_exit()
           * callback because we want to ensure that this code executes
           * last - we don't want to interfere with any other on_proc_exit()
***************
*** 133,139 ****
           */
          char gprofDirName[32];

!         snprintf(gprofDirName, 32, "gprof/%d", (int) getpid());

          mkdir("gprof", 0777);
          mkdir(gprofDirName, 0777);
--- 138,147 ----
           */
          char gprofDirName[32];

!         if (IsAutoVacuumWorkerProcess())
!             snprintf(gprofDirName, 32, "gprof/avworker");
!         else
!             snprintf(gprofDirName, 32, "gprof/%d", (int) getpid());

          mkdir("gprof", 0777);
          mkdir(gprofDirName, 0777);
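
For readers following along, this is roughly how the whole block reads with
the patch applied (a paraphrased sketch, not the exact source; the trailing
chdir() call is assumed from the surrounding context, which the diff hunk
above does not show):

    /* Paraphrased sketch of the proc_exit() profiling cleanup in
     * src/backend/storage/ipc/ipc.c with the proposed patch applied.
     * The chdir() at the end is assumed from surrounding context.
     */
    char        gprofDirName[32];

    if (IsAutoVacuumWorkerProcess())
        snprintf(gprofDirName, 32, "gprof/avworker");   /* all AV workers share one dir */
    else
        snprintf(gprofDirName, 32, "gprof/%d", (int) getpid());    /* one dir per backend PID */

    mkdir("gprof", 0777);       /* errors ignored; the directory may already exist */
    mkdir(gprofDirName, 0777);
    chdir(gprofDirName);        /* gmon.out is then written here as the process exits */

The net effect is that autovacuum workers keep overwriting a single
gprof/avworker/gmon.out instead of adding a fresh ~8MB directory every few
seconds, while regular backends still get per-PID profiles.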

Re: Profiling vs autovacuum

From:
Gregory Stark
Date:
"Tom Lane" <tgl@sss.pgh.pa.us> writes:

>> However, accumulation of zillions of gmon.out files is definitely a
>> downside of the approach; one that I've noticed myself.  

> Comments?

All I can add is that I've run into this problem myself too.

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Ask me about EnterpriseDB's 24x7 Postgres support!