Increasing catcache size

From: Tom Lane
Subject: Increasing catcache size
Date:
Msg-id: 5141.1150327541@sss.pgh.pa.us
Replies: Re: Increasing catcache size  (Bruce Momjian <pgman@candle.pha.pa.us>)
List: pgsql-hackers
A while back, there was a discussion about psql \d display being really
slow in a database with 4000 tables:
http://archives.postgresql.org/pgsql-hackers/2005-09/msg01085.php

I looked into this some, and it seems that part of the problem is that
the catalog caches are limited to hold no more than 5000 tuples in total
(MAXCCTUPLES in catcache.c).  When you're looking at thousands of
tables, the tuples you want age out of cache immediately and so every
SearchSysCache call has to actually go to disk (or at least shared
buffers).
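
To make the failure mode concrete, here's a standalone toy of the aging
behavior (names invented, not the actual catcache.c code; the real code
also moves an entry to the list head on each lookup, which doesn't matter
when the keys are all distinct):

    #include <stdio.h>
    #include <stdlib.h>

    #define MAXCCTUPLES 5000        /* system-wide cap, as in catcache.c */

    typedef struct CacheEntry
    {
        struct CacheEntry *newer;   /* toward the MRU end of the list */
        struct CacheEntry *older;   /* toward the LRU end of the list */
        int         key;            /* stand-in for a catalog tuple */
    } CacheEntry;

    static CacheEntry *lru_head;    /* most recently used */
    static CacheEntry *lru_tail;    /* least recently used: evicted first */
    static int  ntuples;

    static void
    cache_insert(int key)
    {
        CacheEntry *e;

        if (ntuples >= MAXCCTUPLES)
        {
            /* over the global cap: throw away the oldest entry */
            CacheEntry *victim = lru_tail;

            lru_tail = victim->newer;
            if (lru_tail)
                lru_tail->older = NULL;
            free(victim);
            ntuples--;
        }

        e = malloc(sizeof(CacheEntry));
        e->key = key;
        e->newer = NULL;
        e->older = lru_head;
        if (lru_head)
            lru_head->newer = e;
        lru_head = e;
        if (lru_tail == NULL)
            lru_tail = e;
        ntuples++;
    }

    int
    main(void)
    {
        int         key;

        /*
         * Touch 20000 distinct keys, as a \d scan over thousands of
         * tables might: with a 5000-entry cap, every key has aged out
         * of the cache by the time it is wanted again.
         */
        for (key = 0; key < 20000; key++)
            cache_insert(key);
        printf("tuples resident: %d (cap %d)\n", ntuples, MAXCCTUPLES);
        return 0;
    }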

The 5000 setting is basically Berkeley-era (when we got the code from
Berkeley, the limit was 300 tuples per cache, which I changed to a
system-wide limit some years ago).  A reasonable guess is that we're
talking about entries of 100 to 200 bytes each, making the maximum
catcache space usage a megabyte or less (per backend, that is).  Seems
like this is mighty small on modern machines.
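
(Spelling that out: 5000 tuples at 200 bytes apiece is 1,000,000 bytes,
i.e. 1MB at the high end of the estimate and half that at the low end.)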

We could make a GUC variable to control this, or just arbitrarily bump
up the constant to 50000 or so.  Is it worth another GUC?  Not sure.
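
If we did go the GUC route, the hookup itself is cheap; the entry in
guc.c's ConfigureNamesInt table would look something like this (the name
"max_catcache_tuples" is invented, and the context and limits shown are
just guesses):

    {
        {"max_catcache_tuples", PGC_USERSET, RESOURCES_MEM,
            gettext_noop("Sets the maximum number of tuples kept in "
                         "the catalog caches of each backend."),
            NULL
        },
        &max_catcache_tuples,   /* would replace MAXCCTUPLES in catcache.c */
        50000, 100, INT_MAX, NULL, NULL
    },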

There's another hardwired setting right next to this one, which is the
number of hash buckets per catcache.  Since the caches are likely to
have significantly different sizes, that is looking a bit obsolete too.
We could put per-cache numbers into the setup lists in syscache.c
easily.  Making the cache sizes dynamic a la dynahash.c is another
possibility, but I'm unsure if it's worth the trouble.
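
For concreteness, the per-cache-numbers idea would look about like this
in syscache.c (abbreviated, illustrative field layout):

    /* add a per-cache bucket count to the cache descriptors */
    struct cachedesc
    {
        Oid         reloid;     /* OID of the relation being cached */
        Oid         indoid;     /* OID of index used for lookups */
        int         nkeys;      /* # of keys needed for cache lookup */
        int         key[4];     /* attribute numbers of key attrs */
        int         nbuckets;   /* # of hash buckets for this cache */
    };

    static const struct cachedesc cacheinfo[] = {
        {AggregateRelationId,   /* AGGFNOID */
            AggregateFnoidIndexId,
            1,
            {Anum_pg_aggregate_aggfnoid, 0, 0, 0},
            32                  /* small catalog, few buckets needed */
        },
        /* ... one entry per cache, each sized to its catalog ... */
    };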

Thoughts?
        regards, tom lane

