Re: service allowing arbitrary relations was Re: hstore, but with fast range comparisons?

From: Andy Colson
Subject: Re: service allowing arbitrary relations was Re: hstore, but with fast range comparisons?
Date:
Msg-id: 5489B0CB.5060506@squeakycode.net
In reply to: Re: service allowing arbitrary relations was Re: hstore, but with fast range comparisons?  (Guyren Howe <guyren@gmail.com>)
List: pgsql-general
On 12/10/2014 7:20 PM, Guyren Howe wrote:
>
>
>>> I want to do something that is perfectly satisfied by an hstore column. *Except* that I want to be able to do fast (i.e. indexed) <, > etc. comparisons, not just equality.
>>>
>>> From what I can tell, there isn't really any way to get hstore to do this, so I'll have to go to a key-value table. But I thought I'd ask just in case I'm missing something.
>>>
>>
>> I think you're missing something.
>>
>> Is it one field in the hstore?  Did you try an expression index?
>>
>> create index cars_mph on cars ( (data->'mph') );
>>
>> that's a btree index, which should support < and >.  (Although I've never tried it)
>
> Sorry I wasn’t clear. I need a fast comparison on whatever keys someone chooses to put in the hstore.
>
> I'm creating a web service where you can post whatever keys and values you wish. I am leaning toward having folks declare the relations they want to store and the field types, so I could just create a table for each one, but I want this to be able to scale to a lot of data for a lot of people.
>
> Perhaps I could give everyone their own schema and just create tables for their relations. How heavyweight are
schemas?
>
> But if I don't want to do that, I'm leaning toward a table with a relation name and an hstore, and then separate k-v tables for different types. I was thinking of keeping the hstore because they will often be searching on fewer fields than they're retrieving, so this would avoid having to do a join for every field they need.
>
> Regards,
>
> Guyren G Howe
> Relevant Logic LLC
>

How many rows are we talking?  Have you benchmarked it with normal
table scans?  They're pretty fast, especially for repeated queries,
because the data is cached.  (Normal columns would be faster; a table
scan plus hstore adds a little overhead.)
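The expression-index idea quoted above can be sketched like this (table and key names are hypothetical; note that hstore values are text, so cast them if you want numeric range comparisons):

```sql
CREATE EXTENSION IF NOT EXISTS hstore;

-- Hypothetical table: one hstore column holding arbitrary key/value pairs.
CREATE TABLE cars (
    id   serial PRIMARY KEY,
    data hstore
);

-- A btree expression index on one hstore key, cast to integer so that
-- < and > compare numerically rather than as text.
CREATE INDEX cars_mph ON cars ( ((data->'mph')::int) );

-- A range query written against the same expression can use the index:
SELECT * FROM cars WHERE (data->'mph')::int > 100;
```

Without the cast, the index is still usable, but comparisons are textual ('9' > '100'), which is rarely what you want for numeric keys.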

Indexing every possible field is possible, but slow for inserts and
updates, and it would chew up a lot of disk.

Is there any subset of fields you could index that would reduce the
number of results at least somewhat?

select * from bigtable where generic = 'a' and specific = 'b';

Then only index generic columns.  Your searches will still be fast, as
will insert/update.
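One way to apply that suggestion, sketched with hypothetical names: promote the commonly searched keys to real, indexed columns and keep the long tail of ad-hoc keys in the hstore.

```sql
CREATE TABLE bigtable (
    id      serial PRIMARY KEY,
    generic text,     -- commonly searched field, promoted to a real column
    extras  hstore    -- rarely searched fields stay schemaless
);

-- Only the generic column gets an index, so writes stay cheap.
CREATE INDEX bigtable_generic ON bigtable (generic);

-- The indexed column narrows the candidate rows; the hstore condition
-- is then checked on that (much smaller) set.
SELECT * FROM bigtable
WHERE generic = 'a'
  AND extras->'specific' = 'b';
```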

Schemas are pretty lightweight.  I run an 80 GB database split into
115 schemas without problems.
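A schema-per-user layout along those lines might look like this (names are hypothetical):

```sql
-- One schema per account, with that user's declared relations created
-- as ordinary typed tables inside it.
CREATE SCHEMA user_42;

CREATE TABLE user_42.cars (
    id  serial PRIMARY KEY,
    mph integer,
    vin text
);

-- Typed columns get plain btree indexes, so <, > comparisons are fast
-- without any hstore machinery.
CREATE INDEX cars_mph ON user_42.cars (mph);
```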

I'm not sure what'll work best for you.  You'll probably have to mock up
some different layouts and benchmark them.

-Andy

