> Hi
>
> I asked these questions a couple of weeks ago and got no response whatsoever
> so I am going to try again.
>
> I have just installed 6.5, and there are some things I cannot find in
> the documentation.
>
> 1 ) When I use temp tables, is there a way to instruct postgresql to
> keep these in memory rather than on disc, for faster access, or
> does it do this anyway with temp tables?
No, not really. There is a shared buffer cache that keeps recently
used blocks in memory, but there is no way to tell it which tables to
keep there.
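For example, a temp table created like this still goes through the
same shared buffer cache as every other table (the table and column
names are just illustrative):

    CREATE TEMP TABLE scratch (id int4, val text);
    -- blocks of 'scratch' compete for shared buffers like any
    -- other table's blocks; nothing is pinned in memory

The most you can do is raise the number of shared buffers so more
blocks stay cached; see my answer to question 4.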
> 2 ) Is there an optimal amount of updates and inserts to perform
> before vacuuming a database, some kind of formula based on inserts
> and updates that indicates when a vacuum would be most
> beneficial. I realise there cannot be an absolute rule for this,
> but a guideline would help, as I don't know if I will need to
> vacuum more than once a day on a busy database.
Not really. VACUUM reclaims the space of expired rows, so how often
you need it depends on your update/delete volume rather than on any
fixed formula.
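The usual approach is just to run it from cron in a quiet period, and
more often if the churn is heavy. A sketch (the psql path and the
database name are whatever yours are):

    # crontab entry: vacuum and refresh optimizer statistics at 3am
    0 3 * * * /usr/local/pgsql/bin/psql -c "VACUUM ANALYZE;" mydb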
> 3 ) Is there a way to instruct postgresql to perform a query at a
> lower priority, such as daily maintenance operations, so that
> these jobs do not impact on the interactive actions. I realise
> I can renice a process that is making calls to the database, but
> that doesn't have any effect on the backend spawned by the
> postmaster when I connect to it.
> If there is no such functionality, would people be interested in it
> if I were to code it and release it back to the main source tree?
Sure. You could use some 'set' command. I would recommend something
that puts the query at the end of the lock queue, though with 6.5 and
MVCC there isn't much lock queue contention anymore. Not sure what
lower priority would mean beyond that.
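In the meantime, a crude stopgap is to renice the backend process
itself from the shell; you have to identify the right backend
yourself, and the pid below is made up:

    # find the backend serving your connection
    ps ax | grep postgres
    # lower its CPU priority (12345 is an example pid)
    renice +10 -p 12345

That only lowers CPU priority, of course; it does nothing about lock
or I/O contention.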
> 4 ) Is there an optimal ratio between the number of backends and the
> number of shared memory buffers. I realise there is a minimum of
> 1:2 but do more shared memory buffers increase performance in some
> areas, or would the extra overhead of managing the buffers make the
> increase pointless?
Yes, they certainly do, and they are shared, so even one backend running
will use all the shared buffers it can get.
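Both numbers are set when the postmaster starts. Something like this
(the numbers are only an example; 6.5 enforces the 1:2 minimum you
mention, i.e. at least twice as many buffers as backends):

    # allow 32 backends and 64 shared buffers of 8k each
    postmaster -N 32 -B 64 -D /usr/local/pgsql/data

Going well beyond the minimum usually helps, because all the backends
share the same cache.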
> 5 ) The final question (I promise) is that if I have a large number of
> inserts that I generate dynamically, is it quicker for me to
> perform these inserts one by one (maybe 10,000 of them at a time)
> or would it be faster and less CPU intensive to generate a text
> file instead and then read this in via a single copy command.
> This file at times may be over 100,000 entries, so would I be
> better to split it to a maximum number of transactions if I
> take the route of the copy command?
Much faster to use COPY from a text file. COPY has no limit on the
size of its input, so there is no need to split the file.
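A sketch, assuming a table called mytable and tab-separated data (the
table name and path are made up):

    -- run as the postgres superuser; the backend must be able
    -- to read the file, one row per line, tab-delimited by default
    COPY mytable FROM '/tmp/mytable.dat';

From psql you can use \copy instead, which reads the file on the
client side and does not need superuser access.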
>
> Thanks in advance, and I hope that this time someone will be able
> to answer some or all of these questions.
I did.
--
Bruce Momjian | http://www.op.net/~candle
maillist@candle.pha.pa.us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026