Discussion: leaking FD's ?

leaking FD's ?

From
Michael Simms
Date:
Hi

I am running a process that does a fair number of selects and updates but
nothing too complex.

I have the postmaster starting like so:

/usr/bin/postmaster -o "-F -S 10240" -d 1 -N 128 -B 256 -D/var/lib/pgsql/data -o -F > /tmp/postmasterout 2> /tmp/postmastererr

Now, looking at that, I have 256 shared memory segments, and as such,
I would expect the number of file descriptors used by my backends to
be fairly similar.

Now, looking at /proc, I have backend processes using up to 460 fds

I have just had to recompile my kernel because it kept going up to 10240
FDs and killing everything, so now I have 40960 FDs available. I am
still concerned though. Every time a series of requests goes through,
the number of FDs goes up. Is this leakage, do you think, or just the
way it always acts? Can I expect to see the FDs peak, or is the count
just going to go up and up?

This is the latest stable release.
[PostgreSQL 6.5.2 on i686-pc-linux-gnu, compiled by gcc egcs-2.91.66]
Linux oscounter.org 2.2.12 #2 SMP Fri Oct 1 21:50:14 BST 1999 i686 unknown

Thanx

                        M Simms

Re: [GENERAL] leaking FD's ?

From
Bruce Momjian
Date:
> Hi
>
> I am running a process that does a fair number of selects and updates but
> nothing too complex.
>
> I have the postmaster starting like so:
>
> /usr/bin/postmaster -o "-F -S 10240" -d 1 -N 128 -B 256 -D/var/lib/pgsql/data -o -F > /tmp/postmasterout 2> /tmp/postmastererr
>
> Now, looking at that, I have 256 shared memory segments, and as such,
> I would expect the number of file descriptors used by my backends to
> be fairly similar.

Each backend keeps up to 64(?) file descriptors open, expecting it may
need to access those files again in the future, so it uses them as a cache.
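
A rough way to watch that from the shell, assuming a Linux /proc and a
backend PID taken from ps (the 1234 below is made up):

    # find the backend PIDs (they may show up as postmaster or postgres)
    ps ax | grep '[p]ost'

    # count the descriptors one backend currently holds open
    ls /proc/1234/fd | wc -l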


--
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026

where can i find ???

From
Chris Ian Capon Fiel
Date:

Where can I find information about Postgres design limits, specifically:

                    1) Maximum number of clients connected to one server
                    2) Maximum database size
                    3) Maximum number of files per database
                    4) Maximum number of databases open in one transaction
                    5) Maximum number of tables per database
                    6) Maximum row size

thanks in advance


ian
Re: [GENERAL] where can i find ???

From
tolik@aaanet.ru (Anatoly K. Lasareff)
Date:
>>>>> "CICF" == Chris Ian Capon Fiel <ian@xavier.cc.xu.edu.ph> writes:

 CICF> where can i find information about postgres design limits
 CICF> specifically
. . .
 CICF> 4) Maximum number of database open in one transaction

1

. . .
 CICF> 6) Maximum row size

8 KB (a row must fit in a single disk block)

--
Anatoly K. Lasareff              Email:       tolik@icomm.ru

Re: [GENERAL] leaking FD's ?

From
Michael Simms
Date:
> > Hi
> >
> > I am running a process that does a fair number of selects and updates but
> > nothing too complex.
> >
> > I have the postmaster starting like so:
> >
> > /usr/bin/postmaster -o "-F -S 10240" -d 1 -N 128 -B 256 -D/var/lib/pgsql/data -o -F > /tmp/postmasterout 2> /tmp/postmastererr
> >
> > Now, looking at that, I have 256 shared memory segments, and as such,
> > I would expect the number of file descriptors used by my backends to
> > be fairly similar.
>
> Each backend keeps up to 64(?) file descriptors open, expecting it may
> need to access those files again in the future, so it uses them as a cache.

That's fine, except that, as I stated, I was up to 480 at the time of
writing. As time progressed, the number of open FDs maxed out at 1022,
which, considering I have a max of 1024 per process, says to me that it
was leaking. Especially as it became increasingly slower after hitting
1022, which indicates to me that, as you say, it holds FDs open for
caching, but when it reached its FD limit and still leaked, it had fewer
and fewer free FDs to play with.

Sound like a leak to anyone?

                        ~Michael

Re: [GENERAL] leaking FD's ?

From
Jim Cromie
Date:
Michael Simms wrote:

> > > Hi
> > >
> > > I am running a process that does a fair number of selects and updates but
> > > nothing too complex.
> > >
> > > I have the postmaster starting like so:
> > >
> > > /usr/bin/postmaster -o "-F -S 10240" -d 1 -N 128 -B 256 -D/var/lib/pgsql/data -o -F > /tmp/postmasterout 2> /tmp/postmastererr
> > >

OK, I looked up the man pages...
I don't have a -N 128 in my man page (v6.5.1).

>
> > > Now, looking at that, I have 256 shared memory segments, and as such,
> > > I would expect the number of file descriptors used by my backends to
> > > be fairly similar.
> >

Why this expectation? To my knowledge, shared memory shows up with `ipcs`,
not in the process's file descriptors.

Q: What Unix/Linux utilities would one use to determine descriptor usage?
Are any of those lost handles Unix or inet sockets? Did you try netstat?
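
For instance, something along these lines, if lsof is installed (the PID
is invented):

    # list every descriptor the process holds, with its type
    # (REG for plain files, unix/inet entries for sockets)
    lsof -p 1234

    # or, if your netstat supports -p, just the sockets it owns
    netstat -anp | grep '1234/'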

>
> > Each backend keeps up to 64(?) file descriptors open, expecting it may
> > need to access those files again in the future, so it uses them as a cache.
>
> That's fine, except that, as I stated, I was up to 480 at the time of
> writing. As time progressed, the number of open FDs maxed out at 1022,
> which, considering I have a max of 1024 per process, says to me that it
> was leaking. Especially as it became increasingly slower after hitting
> 1022, which indicates to me that, as you say, it holds FDs open for
> caching, but when it reached its FD limit and still leaked, it had fewer
> and fewer free FDs to play with.
>
> Sound like a leak to anyone?
>

Actually, it sounds rather like operator error.

In other words: supporting evidence? More clues available?

A priori, I'd think that leaking handles would have been seen long ago by
hundreds of people, who are almost all using the standard fd-open-max.
Pushing the limits up just to keep running sounds like a rather desperate
solution.

I don't have enough information to conclude otherwise. For instance, you
haven't said what operating system you are on; I'll assume Linux since you
rebuilt the kernel with more. What did you start out with? Please be
specific here, I hope to learn something from your experience.

I note on re-reading both your postings that the 2nd one has apparently
corrected 2 numbers; you've dropped a zero and got more reasonable numbers.
The 1st numbers were extreme; I'm not knowledgeable enough to say that it
couldn't be done, but I wouldn't do it without good reason.

So, is yours a commercial installation? Have you done benchmarks on your
system to establish what performance gains you've gotten by enlarging
shared memory segments etc.? Such a case study would be very helpful to
the postgres community in establishing a Postgres-Tuning-Guide.
I'll admit I did not do an archive search.

Are you using any of your own extensions, or is it an out-of-the-box
postgres?

What's a fair number? If your transaction count were one of the world-wide
maxes for a postgres installation, your chance at exposing a bug would be
better.

How about your 'nothing too complex'? Plain vanilla operations are less
likely to expose new bugs than all the new features. Do you use triggers,
refint, etc.?

Granted, something sounds leaky, but you've gotten an answer from someone
who regularly answers questions in this forum (Bruce, not me). He clearly
doesn't know of such a leak, so it's up to you to find it.

I know I can't help you, I'm just a basic user.
Good luck


Re: [GENERAL] leaking FD's ?

From
Michael Simms
Date:
>
> OK, I looked up the man pages...
> I don't have a -N 128 in my man page (v6.5.1).
>

<From man postmaster>

 -N n_backends
      n_backends is the maximum number  of  backend  server
      processes  that  this postmaster is allowed to start.
      In the stock configuration, this  value  defaults  to
      32,  and  can  be  set as high as 1024 if your system
      will support that many processes.  Both  the  default
      and  upper  limit values can be altered when building
      Postgres (see src/include/config.h).

> Why this expectation? To my knowledge, shared memory shows up with `ipcs`,
> not in the process's file descriptors.

Fair enough, I know nothing about shared memory.

> Q: What Unix/Linux utilities would one use to determine descriptor usage?
> Are any of those lost handles Unix or inet sockets? Did you try netstat?

I was looking at /proc/<pid>/fd and counting how many open FDs are in there.
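
Roughly like this, with 1234 standing in for a backend PID:

    # print the backend's open-descriptor count every few seconds
    while true; do ls /proc/1234/fd | wc -l; sleep 5; done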

> Actually, it sounds rather like operator error.

Nope, the FDs are in the *PostgreSQL backend process*, so I have no direct
control over what it does. I initially suspected my own code, but my
process has 7 open FDs, and that stays pretty much constant.

> In other words; supporting evidence ? more clues available ?
>

[snip]

> I don't have enough information to conclude otherwise. For instance, you
> haven't said what operating system you are on; I'll assume Linux since
> you rebuilt the kernel with more. What did you start out with? Please be
> specific here, I hope to learn something from your experience.

I stated the output of uname -a in my first email:

Linux oscounter.org 2.2.12 #2 SMP Fri Oct 1 21:50:14 BST 1999 i686 unknown

> I note on re-reading both your postings that the 2nd one has apparently
> corrected 2 numbers; you've dropped a zero and got more reasonable
> numbers. The 1st numbers were extreme; I'm not knowledgeable enough to
> say that it couldn't be done, but I wouldn't do it without good reason.

No, the numbers are all correct. In the first mail I raised my *system limit*
on file descriptors from 10240 to 40960; the per-process limit has stayed
the same at 1024.
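
The two limits show up in different places (Linux /proc path; the values
are mine):

    cat /proc/sys/fs/file-max    # system-wide limit: was 10240, now 40960
    ulimit -n                    # per-process limit: still 1024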

> So, is yours a commercial installation? Have you done benchmarks on
> your system to establish what performance gains you've gotten
> by enlarging shared memory segments etc.? Such a case study would be
> very helpful to the postgres community in establishing a
> Postgres-Tuning-Guide.
> I'll admit I did not do an archive search.

No, I've not done any testing on tuning. I didn't have time; I was
working against a deadline to get the system running. I took Bruce's
earlier advice that you cannot have too many shared memory blocks, so
I increased it. Ideally I would love to test these things, but I never
have enough free time {:-( One day maybe I will have a weekend to sit
down and do just that, because I am very interested in what the
results would be.

> Are you using any of your own extensions, or is it an out-of-the-box
> postgres ?

It is a clean compile of 6.5.2, no additions or modifications.

> What's a fair number? If your transaction count were one of the
> world-wide maxes for a postgres installation, your chance at
> exposing a bug would be better.

The process carries out roughly 500 operations before reaching its FD
limit. Each operation is on a table containing up to 100,000 rows.

> How about your 'nothing too complex'? Plain vanilla operations are
> less likely to expose new bugs than all the new features. Do you use
> triggers, refint, etc.?

Simple selects, updates, inserts, and a few selects into temporary
tables. No triggers, no nothing. I agree; I was very surprised to find
such a serious leak in such a simple use of the system.

> Granted, something sounds leaky, but you've gotten an answer from
> someone who regularly answers questions in this forum (Bruce, not
> me). He clearly doesn't know of such a leak, so it's up to you to find it.
>
> I know I can't help you, I'm just a basic user.
> Good luck

My problem is, the database I am using is in a live environment now. I
can't just take it down or halt it for some gdb work {:-(.

However:

Hiroshi has pointed out an August 31st posting from the hackers list that
may be this problem, already discovered (for some reason I didn't find it
when I did an archive search). Tom Lane has apparently fixed it in the
'current' version. So, case closed. As per usual, Tom has already fixed a
bug that I found {:-)

                        ~Michael

Import data from a file

From
datab@main.kenken.ro
Date:
Hi!

How can I import data from a file into an existing table?

Thanks,
radu s.


Re: [GENERAL] Import data from a file

From
greg@proterians.net
Date:
>
> How can I import data from a file into an existing table?
>
> Thanks,
> radu s.
>
  COPY table FROM 'file';
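
  Note that COPY runs inside the backend, so the file path is on the
  server machine (and you need to be the Postgres superuser). The file
  is tab-delimited by default. A sketch, with made-up names:

    psql -c "COPY mytable FROM '/tmp/mytable.dat'" mydb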

  --Greg--


Re: [GENERAL] Import data from a file

From
Bob Kline
Date:
On Mon, 25 Oct 1999 datab@main.kenken.ro wrote:

> Hi!
>
> How can I import data from a file into an existing table?
>
> Thanks,
> radu s.

Have you read the manual?

     http://www.postgresql.org/docs/user/sql-copy.htm

--
Bob Kline
mailto:bkline@rksystems.com
http://www.rksystems.com


Re: [GENERAL] Import data from a file

From
Peter Eisentraut
Date:
On Mon, 25 Oct 1999 datab@main.kenken.ro wrote:

> How can I import data from a file into an existing table?

Contrary to what some other people have said (and will say), you perhaps
cannot use the COPY command, since it can only read files on the database
server's file system. There is the psql \copy command, which is a wrapper
around the backend's COPY but acts on the client's file system.

In general, your existing file has to be very carefully crafted to work
right with either COPY incarnation. If it isn't, I'd personally go for a
Perl script, but any other higher-level language would do.
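
For instance (all names invented), a tab-separated file

    1       red
    2       blue

loads from the client side with:

    $ psql mydb
    mydb=> \copy colors from /tmp/colors.dat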

    -Peter

--
Peter Eisentraut                  Sernanders vaeg 10:115
peter_e@gmx.net                   75262 Uppsala
http://yi.org/peter-e/            Sweden