On Mon, Jun 6, 2011 at 12:30 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Alvaro Herrera <alvherre@commandprompt.com> writes:
>> Excerpts from Tom Lane's message of lun jun 06 12:10:24 -0400 2011:
>>> Yeah, I wasn't that thrilled with the suggestion either. But we can't
>>> just have backends constantly closing every open FD they hold, or
>>> performance will suffer. I don't see any very good place to do this...
>
>> How about doing something on an sinval message for pg_database?
>> That doesn't solve the WAL problem Kevin found, of course ...
>
> Hmm ... that would help for the specific scenario of dropped databases,
> but we've also heard complaints about narrower cases such as a single
> dropped table.
>
> A bigger issue is that I don't believe it's very practical to scan the
> FD array looking for files associated with a particular database (or
> table). They aren't labeled that way, and parsing the file path to
> find out the info seems pretty grotty.
>
> On reflection I think this behavior is probably limited to the case
> where we've done what we used to call a "blind write" of a block that
> is unrelated to our database or tables. For normal SQL-driven accesses,
> there's a relcache entry, and flushing of that entry will lead to
> closure of associated files. I wonder whether we should go back to
> forcibly closing the FD after a blind write. This would suck if a
> backend had to do many dirty-buffer flushes for the same relation,
> but hopefully the bgwriter is doing most of those. We'd want to make
> sure such forced closure *doesn't* occur in the bgwriter. (If memory
> serves, it has a checkpoint-driven closure mechanism instead.)

Instead of closing them immediately, how about flagging the FD and
closing all the flagged FDs at the end of each query, or something
like that?
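To make the idea concrete, here is a minimal standalone sketch of that flag-and-sweep pattern. This is not PostgreSQL's actual fd.c virtual file descriptor machinery; the `TrackedFd` table, `flag_blind_write()`, and `close_flagged_fds()` names are hypothetical, invented for illustration. The point is that a blind write only marks its descriptor, and the close happens in one batch at end of query, so repeated dirty-buffer flushes to the same relation within a query don't pay a reopen each time:

```c
#include <fcntl.h>
#include <stdbool.h>
#include <unistd.h>

#define MAX_FDS 16

/* Hypothetical per-backend descriptor table; PostgreSQL's real fd.c
 * keeps a much richer VFD structure. fd == -1 marks an unused slot. */
typedef struct
{
    int  fd;           /* OS-level file descriptor, -1 if slot unused */
    bool blind_write;  /* set when the write was unrelated to our
                        * own database/tables (a "blind write") */
} TrackedFd;

static TrackedFd fd_table[MAX_FDS];

/* Called right after a blind write: don't close yet, just flag. */
static void
flag_blind_write(int slot)
{
    fd_table[slot].blind_write = true;
}

/* Called once at end of query: close only the flagged FDs, leaving
 * descriptors for our own relations open for reuse. Returns the
 * number of descriptors closed. */
static int
close_flagged_fds(void)
{
    int closed = 0;

    for (int i = 0; i < MAX_FDS; i++)
    {
        if (fd_table[i].fd >= 0 && fd_table[i].blind_write)
        {
            close(fd_table[i].fd);
            fd_table[i].fd = -1;
            fd_table[i].blind_write = false;
            closed++;
        }
    }
    return closed;
}
```

The sweep is O(number of tracked FDs) once per query rather than a close per write, which is the trade-off being proposed: descriptors for dropped databases or tables get released promptly without throwing away the whole FD cache.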
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company