Re: Alternatives to very large tables with many performance-killing indicies?

From: Merlin Moncure
Subject: Re: Alternatives to very large tables with many performance-killing indicies?
Date:
Msg-id: CAHyXU0xN1obQtpWgnXkfMHaAjt+6UvvGhMxEPBz58NboTXi0Rg@mail.gmail.com
In response to: Alternatives to very large tables with many performance-killing indicies?  (Wells Oliver <wellsoliver@gmail.com>)
List: pgsql-general
On Thu, Aug 16, 2012 at 3:54 PM, Wells Oliver <wellsoliver@gmail.com> wrote:
> Hey folks, a question. We have a table that's getting large (6 million rows
> right now, but hey, no end in sight). It's wide-ish, too, 98 columns.
>
> The problem is that each of these columns needs to be searchable quickly at
> an application level, and I'm far too responsible an individual to put 98
> indexes on a table. Wondering what you folks have come across in terms of
> creative solutions that might be native to postgres. I can build something
> that indexes the data and caches it and runs separately from PG, but I
> wanted to exhaust all native options first.

Well, you could explore normalizing your table, particularly if many
of your 98 columns are NULL most of the time.  Another option would be
to implement hstore for the attributes and index it with GIN/GiST --
especially if you need to filter on multiple columns.  Organizing big
data for fast searching is a complicated topic and requires
significant thought about how to optimize it.
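A minimal sketch of the hstore approach described above (table and key names are purely illustrative, not from the original post):

```sql
-- Load the hstore extension (ships with PostgreSQL contrib).
CREATE EXTENSION IF NOT EXISTS hstore;

-- Collapse the many sparse columns into key/value pairs.
CREATE TABLE stats (
    id         bigserial PRIMARY KEY,
    attributes hstore
);

-- One GIN index covers containment queries on any combination of keys,
-- instead of one btree index per column.
CREATE INDEX stats_attributes_idx ON stats USING gin (attributes);

-- Filter on multiple "columns" at once with the containment operator.
-- Note: hstore values are text, so comparisons are string equality.
SELECT id
FROM stats
WHERE attributes @> 'team=>SF, position=>C'::hstore;
```

The trade-off is losing per-column types and constraints; the gain is that a single index serves ad-hoc filters across any subset of attributes.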

merlin


In the pgsql-general list, by date sent:

Previous
From: Wells Oliver
Date:
Message: Alternatives to very large tables with many performance-killing indicies?
Next
From: Tomas Hlavaty
Date:
Message: Re: success with postgresql on beaglebone