browsing table with 2 million records

I am running PostgreSQL 7.4 on FreeBSD. The main table has 2 million records (we would like to get to at least 10 million or more). It is mainly a FIFO structure, with maybe 200,000 new records coming in each day that displace the older records.

We have a GUI that lets the user browse through the records page by page, about 25 records at a time. (Don't ask me why, but we have to have this GUI.) This translates to something like

  select count(*) from table   <-- to give feedback about the DB size
  select * from table order by date limit 25 offset 0
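
and for later pages the same query runs again with a larger offset, e.g.

  select * from table order by date limit 25 offset 25
  select * from table order by date limit 25 offset 50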

The tables seem properly indexed, and vacuum and analyze are run regularly. Still, these very basic SQL statements take up to a minute to run.

I read in some recent messages that select count(*) needs a full table scan in PostgreSQL. That's disappointing, but I can accept an approximation if there is some way to get one. And how can I optimize select * from table order by date limit x offset y? A one-minute response time is not acceptable.
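
For example, would reading the row estimate that analyze stores in pg_class be an acceptable approximation, and could the paging query avoid the offset entirely by remembering the last date already shown (keyset-style paging)? Just a sketch of what I have in mind, assuming the statistics are reasonably fresh and that losing rows with duplicate dates at page boundaries is tolerable:

  -- approximate row count, from the statistics updated by analyze
  select reltuples::bigint as approx_count from pg_class where relname = 'table';

  -- first page
  select * from table order by date limit 25;

  -- next page: filter on the last date shown instead of using offset
  select * from table where date > '<last date from previous page>' order by date limit 25;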

Any help would be appreciated.

Wy

