Hi Heikki,
Thanks for the response!
I understand that relying on cache management would be the easiest solution. However, I ran into a similar issue with another RDBMS (MSSQL, to be specific) in the past and saw a lot of disk activity until the table was pinned in memory (fortunately MSSQL has 'dbcc pintable' for that).
Basically, this is all about a high-traffic website where virtually _all_ data in the DB gets accessed frequently - so it's not obvious which DB pages are going to win the eviction war. However, the overall cost of access differs from table to table - for the table in question it may very well take ~20 disk seeks per webpage view, so a very high cache hit rate (ideally 100%) has to be ensured.
So - will the 'mlock' hack work? Or any other ideas for "pinning" a table in memory?
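Just to make it concrete, below is roughly what I had in mind - an untested sketch. The path is made up (the real heap file lives under $PGDATA/base/&lt;database oid&gt;/&lt;relfilenode&gt;, and a table over 1GB is split into 1GB segment files that would each need the same treatment):

#include &lt;fcntl.h&gt;
#include &lt;stdio.h&gt;
#include &lt;sys/mman.h&gt;
#include &lt;sys/stat.h&gt;
#include &lt;unistd.h&gt;

int main(int argc, char *argv[])
{
    /* Path to the table's heap file - illustrative only. */
    const char *path = argc &gt; 1 ? argv[1] : "/var/lib/pgsql/data/base/16384/16385";

    int fd = open(path, O_RDONLY);
    if (fd &lt; 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &amp;st) &lt; 0) { perror("fstat"); return 1; }

    /* Map the whole file read-only. */
    void *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* Lock the pages in RAM; needs root or a large enough RLIMIT_MEMLOCK. */
    if (mlock(p, st.st_size) &lt; 0) { perror("mlock"); return 1; }

    pause();  /* keep the mapping (and the lock) alive */
    return 0;
}

The hope being that the kernel would then keep those pages resident, so at worst a read of that table hits the page cache rather than going to disk.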
- Alex
Heikki Linnakangas <heikki@enterprisedb.com> wrote:
adrobj wrote:
> I have a pretty small table (~20MB) that is accessed very frequently and
> randomly, so I want to make sure it's 100% in memory all the time. There is
> a lot of other stuff that also gets accessed frequently, so I don't want
> to just hope that the Linux file cache would do the right thing for me.
>
> Is there any way to do that?
>
> One idea I have in my head is to start a process that does mmap() and
> mlock() with the table file. Will it work? If so, are there any potential
> problems?
Just leave it to the cache management algorithms in Postgres and Linux.
If it really is frequently accessed, it should stay in Postgres shared
buffers.
You can use the pg_buffercache contrib module to see what's in cache.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com