[GSoC] Need for advice on improving hash index performance

From: Xiao Meng
Subject: [GSoC] Need for advice on improving hash index performance
Date:
Msg-id: ded849dd0803260755t398e023bp3bfbe156ed8e4dbd@mail.gmail.com
Responses: Re: [GSoC] Need for advice on improving hash index performance  (Kenneth Marshall <ktm@rice.edu>)
List: pgsql-hackers
Hello, Hackers:

I've posted a question about GSoC before:
[GSoC] (Is it OK to choose items without % mark in theToDoList) && (is
it an acceptable idea to build index on Flash Disk)
http://archives.postgresql.org/pgsql-hackers/2008-03/msg00859.php

Now I'm starting a new thread, since the topic has shifted to improving hash indexes.
Kenneth started a thread about this topic before:
http://archives.postgresql.org/pgsql-hackers/2007-09/msg00051.php

I've browsed the long discussion, following Kenneth's advice.
I now have some rough ideas, but a few things still confuse me. I'd be glad to hear your advice.

1. The Benchmark
Under what conditions "should" a hash index be used in general, i.e., when should it work better than a b-tree? I think we should focus on those conditions. IMHO, hash should be efficient when comparisons are expensive and most of the queries are equality queries.
In addition, maybe we can use the TPC benchmarks' data.
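
To make the comparison-cost point concrete, here is a tiny standalone C sketch (not PostgreSQL code; the sizes and the hash function are arbitrary choices of mine): an equality probe against a sorted array of long string keys costs roughly log2(N) expensive strcmp calls, while a hash table pays one hash computation and, usually, a single strcmp.

/*
 * Standalone sketch (not PostgreSQL code): equality lookup over long
 * string keys, comparing a sorted-array binary search (a stand-in for
 * a b-tree descent, several strcmp calls per probe) against a simple
 * chained hash table (one hash computation plus, usually, one strcmp).
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NKEYS   100000
#define KEYLEN  64
#define NBUCKET 131072                 /* power of two, > NKEYS */

typedef struct Entry { char *key; struct Entry *next; } Entry;
static Entry *buckets[NBUCKET];

static unsigned hash_str(const char *s)   /* simple FNV-1a string hash */
{
    unsigned h = 2166136261u;
    for (; *s; s++) { h ^= (unsigned char) *s; h *= 16777619u; }
    return h;
}

static int cmp(const void *a, const void *b)
{
    return strcmp(*(char * const *) a, *(char * const *) b);
}

int main(void)
{
    char **keys = malloc(NKEYS * sizeof(char *));
    for (int i = 0; i < NKEYS; i++)
    {
        keys[i] = malloc(KEYLEN);
        /* Long keys sharing a prefix make each comparison expensive. */
        snprintf(keys[i], KEYLEN, "%058dk", i);
    }

    /* Build both structures. */
    qsort(keys, NKEYS, sizeof(char *), cmp);
    for (int i = 0; i < NKEYS; i++)
    {
        unsigned b = hash_str(keys[i]) & (NBUCKET - 1);
        Entry *e = malloc(sizeof(Entry));
        e->key = keys[i];
        e->next = buckets[b];
        buckets[b] = e;
    }

    /* One equality probe: ~17 strcmp calls vs. one hash and ~1 strcmp. */
    const char *probe = keys[NKEYS / 2];
    char **hit = bsearch(&probe, keys, NKEYS, sizeof(char *), cmp);
    printf("binary search: %s\n", hit ? *hit : "(not found)");

    for (Entry *e = buckets[hash_str(probe) & (NBUCKET - 1)]; e; e = e->next)
        if (strcmp(e->key, probe) == 0)
        {
            printf("hash lookup:   %s\n", e->key);
            break;
        }
    return 0;
}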

2. The management of buckets
Some items in the TODO list are about this:

> o Pack hash index buckets onto disk pages more efficiently
>
>  Currently only one hash bucket can be stored on a page. Ideally
>  several hash buckets could be stored on a single page and greater
>  granularity used for the hash algorithm.
>
>  http://archives.postgresql.org/pgsql-hackers/2004-06/msg00168.php
>
AIUI, one approach is to divide one big hash bucket into sub-buckets and re-hash within the big bucket. And IMHO we should use closed hashing.
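
A minimal sketch of the sub-bucket idea, assuming a made-up page layout (the slot counts and field names are mine, not PostgreSQL's actual hash page format):

/*
 * Illustrative layout only -- not PostgreSQL's actual hash page format.
 * One disk page holds several sub-buckets; inside the page, entries are
 * placed by closed hashing (open addressing with linear probing).  A real
 * scheme would pick the sub-bucket from hash bits not already used to
 * choose the page; here we just take hashval % SUBBUCKETS for brevity.
 */
#include <stdint.h>

#define PAGE_SLOTS    256          /* index entries per page (made-up number) */
#define SUBBUCKETS    8            /* sub-buckets packed onto one page */
#define SLOTS_PER_SUB (PAGE_SLOTS / SUBBUCKETS)

typedef struct
{
    uint32_t hashval;              /* full hash code of the key */
    uint32_t heap_ptr;             /* stand-in for a heap tuple pointer */
    uint8_t  used;
} HashSlot;

typedef struct
{
    HashSlot slots[PAGE_SLOTS];    /* grouped by sub-bucket */
} HashPage;

/*
 * Insert into the page.  Returns 0 on success, -1 if the chosen
 * sub-bucket is full (a real implementation would then go to an
 * overflow page or split).
 */
static int
page_insert(HashPage *page, uint32_t hashval, uint32_t heap_ptr)
{
    uint32_t base = (hashval % SUBBUCKETS) * SLOTS_PER_SUB;

    for (int probe = 0; probe < SLOTS_PER_SUB; probe++)
    {
        HashSlot *slot = &page->slots[base + probe];

        if (!slot->used)
        {
            slot->hashval = hashval;
            slot->heap_ptr = heap_ptr;
            slot->used = 1;
            return 0;
        }
    }
    return -1;
}

int main(void)
{
    static HashPage page;          /* zero-initialized: all slots free */
    return page_insert(&page, 0xdeadbeefu, 42) == 0 ? 0 : 1;
}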

3. What about other hash implementations?
What about static hashing? We would need to re-organize the whole file when it grows big. But what about cases where we can estimate the size of the table and it doesn't change frequently? Despite the high cost of re-organization, it's an attractive technique because we only need one I/O to fetch an item.
Extendible hashing is also effective when memory is big enough; it also needs only one I/O to fetch an item. Of course, the growth of memory usage is a big problem.
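
For reference, a toy sketch of the extendible-hashing lookup path (the names and structures are invented for illustration): once the directory is resident in memory, the top global_depth bits of the hash select a bucket page directly, so a fetch costs a single page read.

/*
 * Toy extendible-hashing lookup (illustrative only).  The directory is
 * kept in memory; each directory cell points to a bucket page on disk,
 * so fetching an item costs one page read once the directory is
 * resident.  Bucket splitting and directory doubling are omitted.
 */
#include <stdint.h>
#include <stdio.h>

typedef struct
{
    uint32_t page_no;              /* disk page holding this bucket */
    uint8_t  local_depth;          /* bits actually distinguishing it */
} Bucket;

typedef struct
{
    uint8_t  global_depth;         /* directory has 2^global_depth cells */
    Bucket **dir;                  /* several cells may share one bucket */
} ExtendibleHash;

/* Map a hash code to the bucket page that must be read. */
static uint32_t
bucket_page_for(const ExtendibleHash *eh, uint32_t hashval)
{
    uint32_t idx = hashval >> (32 - eh->global_depth);   /* top bits */
    return eh->dir[idx]->page_no;
}

int main(void)
{
    /* Directory of 4 cells (global depth 2) over two buckets. */
    Bucket b0 = {10, 1}, b1 = {11, 1};
    Bucket *cells[4] = {&b0, &b0, &b1, &b1};
    ExtendibleHash eh = {2, cells};

    printf("hash 0xC0000000 -> page %u\n",
           bucket_page_for(&eh, 0xC0000000u));            /* prints 11 */
    return 0;
}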

4. The page layout of the bucket
> o In hash indexes, consider storing the hash value with or instead
> of the key itself
>
This means we need at least two I/Os to fetch a tuple.
A three-level b-tree (if we keep the upper two levels in memory) performs no worse than a hash index. IMHO, hash becomes effective when the index is big enough that the b-tree grows to four levels.
Neil Conway posted a patch doing this against an old version of PostgreSQL. I think it's a good starting point for me.
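
Schematically (my own sketch, not Neil Conway's actual patch), an index entry would then carry only the hash code and a heap pointer, and equality has to be rechecked against the heap tuple, which is where the extra I/O comes from:

/*
 * Schematic only.  If the index stores just the hash code, equal hash
 * codes are only a hint (different keys can collide), so each probe
 * must also fetch the heap tuple to recheck the key: one index page
 * plus at least one heap page.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef struct
{
    uint32_t hashval;              /* hash of the key; the key itself is gone */
    uint32_t heap_tid;             /* stand-in for a heap tuple id */
} HashIndexEntry;

/* Toy "heap": key strings indexed by tuple id. */
static const char *heap_keys[] = {"alice", "bob", "carol"};

static const char *
heap_fetch_key(uint32_t heap_tid)           /* pretend heap page read */
{
    return heap_keys[heap_tid];
}

static uint32_t
hash_str(const char *s)                     /* stand-in for the real hash */
{
    uint32_t h = 2166136261u;
    for (; *s; s++) { h ^= (uint8_t) *s; h *= 16777619u; }
    return h;
}

static bool
entry_matches(const HashIndexEntry *e, uint32_t probe_hash,
              const char *probe_key)
{
    if (e->hashval != probe_hash)
        return false;                       /* cheap rejection, no heap visit */
    return strcmp(heap_fetch_key(e->heap_tid), probe_key) == 0;
}

int main(void)
{
    HashIndexEntry e = { hash_str("bob"), 1 };
    printf("match: %d\n", entry_matches(&e, hash_str("bob"), "bob"));
    return 0;
}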


5. What about user-defined hash functions?
Sometimes the DBA may know the data distribution well, so the DBA could supply a better hash function in some cases.
As far as I know, Oracle supports user-defined hash functions via a "HASH IS expr" clause, e.g.:
e.g.
create cluster acctclust (acctid int) size 80 single table hashkeys 100000
    hash is mod(acctid, 100003);
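
To illustrate the point in C (purely an illustration, not a proposed PostgreSQL interface): if account ids are handed out in steps of 100, a power-of-two bucket count leaves three quarters of the buckets empty, while a prime modulus like Oracle's mod(acctid, 100003) spreads the keys over all of them.

/*
 * Illustrative only.  Step-100 ids reduced modulo a power of two hit
 * only 1 in 4 buckets (gcd(100, 2^k) = 4); a prime modulus chosen by a
 * DBA who knows the distribution reaches every bucket.
 */
#include <stdint.h>
#include <stdio.h>

#define NBUCKETS_POW2  131072u     /* typical power-of-two table size */
#define NBUCKETS_PRIME 100003u     /* prime chosen by the DBA */

static uint32_t
dba_hash_acctid(uint32_t acctid)
{
    return acctid % NBUCKETS_PRIME;    /* what "hash is mod(...)" expresses */
}

int main(void)
{
    static uint8_t hit_pow2[NBUCKETS_POW2], hit_prime[NBUCKETS_PRIME];
    uint32_t used_pow2 = 0, used_prime = 0;

    /* Insert one million step-100 account ids, count buckets touched. */
    for (uint32_t i = 0; i < 1000000; i++)
    {
        uint32_t id = i * 100;
        if (!hit_pow2[id % NBUCKETS_POW2]++)   used_pow2++;
        if (!hit_prime[dba_hash_acctid(id)]++) used_prime++;
    }
    printf("pow2 buckets used:  %u of %u\n", used_pow2, NBUCKETS_POW2);
    printf("prime buckets used: %u of %u\n", used_prime, NBUCKETS_PRIME);
    return 0;
}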


Any comments or advice? I hope to hear from you, thanks!
I think I should finish my proposal as soon as possible now, since the application deadline is coming up.

Have a good day;-)
--
Best Regards,
Xiao Meng

━━━━━━━━━━━━━━━━━━━━━━━━━
Data and Knowledge Engineering Research Center, CS&T
Harbin Institute of Technology, Harbin, Heilongjiang, China
Gtalk: mx.cogito@gmail.com
MSN: cnEnder@live.com
Blog: http://xiaomeng.yo2.cn
