Discussion: Using the power of the GPU

Using the power of the GPU

From:
"Billings, John"
Date:
Does anyone think that PostgreSQL could benefit from using the video card as a parallel computing device?  I'm working on a project using Nvidia's CUDA with an 8800 series video card to handle non-graphical algorithms.  I'm curious whether anyone thinks this technology could be used to speed up a database.  If so, which parts of the database, and what kinds of parallel algorithms would be used?
Thanks,
-- John Billings
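
To make the question concrete, here is a minimal, hypothetical sketch of
the kind of data-parallel kernel CUDA exposes, applied to something
database-like: evaluating a WHERE-style predicate over a column of
integers, one thread per row. The kernel name, data layout and predicate
are invented for illustration, and managed memory is used for brevity
(an 8800-era card would instead need explicit cudaMemcpy calls).

    // Hypothetical sketch: evaluate a simple predicate over a column of
    // int values, one thread per row, writing a 0/1 match flag.
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void filter_gt(const int *col, int n, int threshold, int *match)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            match[i] = (col[i] > threshold) ? 1 : 0;   // WHERE col > threshold
    }

    int main()
    {
        const int n = 1 << 20;
        int *col, *match;
        cudaMallocManaged(&col, n * sizeof(int));      // managed memory for brevity
        cudaMallocManaged(&match, n * sizeof(int));
        for (int i = 0; i < n; ++i) col[i] = i % 1000;

        filter_gt<<<(n + 255) / 256, 256>>>(col, n, 900, match);
        cudaDeviceSynchronize();

        long hits = 0;                                 // reduce on the host for simplicity
        for (int i = 0; i < n; ++i) hits += match[i];
        printf("rows matching predicate: %ld of %d\n", hits, n);

        cudaFree(col);
        cudaFree(match);
        return 0;
    }

Whether offloading such a scan ever pays off depends on getting the data
to the card and back, which is part of what the rest of this thread is
skeptical about.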
 

Re: Using the power of the GPU

From:
Guy Rouillier
Date:
Billings, John wrote:
> Does anyone think that PostgreSQL could benefit from using the video
> card as a parallel computing device?

Well, I'm not one of the developers, and one of them may have this
particular itch to scratch, but in my opinion just about any available
fish has to be bigger than this one.  Until someone comes out with a
standardized approach for utilizing whatever extra processing power
exists on a GPU in a generic fashion, I can't see much payback for
writing special code for the NVIDIA 8800.

--
Guy Rouillier

Re: Using the power of the GPU

From:
Vivek Khera
Date:
On Jun 8, 2007, at 3:33 PM, Guy Rouillier wrote:

> Well, I'm not one of the developers, and one of them may have this
> particular itch to scratch, but in my opinion just about any
> available fish has to be bigger than this one.  Until someone comes
> out with a standardized approach for utilizing whatever extra
> processing power exists on a GPU in a generic fashion, I can't see
> much payback for writing special code for the NVIDIA 8800.

And I can state unequivocally that none of my high-end DB servers will
ever have such a high-end graphics card in them... so what's the point?


Re: Using the power of the GPU

From:
Ilan Volow
Date:
If you're absolutely, positively dying for some excuse to do this (i.e. I don't currently have the budget to pay you anything to do it), I work in a manufacturing environment where we are using a PostgreSQL database to store bills of materials for parts. One of the things we also have to do is figure out which combination of parts cut out of a piece of sheet metal will waste the least amount of material -- your standard nesting problem. It would be useful to be able to use a single computer to store all the part dimensions in the database using the various Postgres geometry types (which we're not currently doing) and then, in brute-force fashion, fling zillions of shapes at the GPU in all different orientations to get the combination of bills of materials that would use the least amount of metal.
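
As a rough, hypothetical illustration of that brute-force idea (not code
anyone is actually running), each GPU thread could score one candidate
layout and the host could keep the lowest-waste one. Here a "layout" is
just a single rectangular part tried in two orientations on a fixed sheet;
all names, dimensions and the scoring rule are invented, and a real
nesting solver would evaluate far richer placements.

    // Each thread scores one candidate part size: how much sheet is wasted
    // if that part is grid-packed in the better of its two orientations.
    #include <cuda_runtime.h>
    #include <cstdio>

    struct Candidate { float part_w, part_h; };

    __global__ void score_waste(const Candidate *cand, int n,
                                float sheet_w, float sheet_h, float *waste)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        float best_used = 0.0f;
        for (int rot = 0; rot < 2; ++rot) {            // as-is, then rotated 90 degrees
            float w = rot ? cand[i].part_h : cand[i].part_w;
            float h = rot ? cand[i].part_w : cand[i].part_h;
            int nx = (int)(sheet_w / w);
            int ny = (int)(sheet_h / h);
            float used = nx * ny * w * h;
            if (used > best_used) best_used = used;
        }
        waste[i] = sheet_w * sheet_h - best_used;      // smaller is better
    }

    int main()
    {
        const int n = 4096;
        Candidate *cand; float *waste;
        cudaMallocManaged(&cand, n * sizeof(Candidate));
        cudaMallocManaged(&waste, n * sizeof(float));
        for (int i = 0; i < n; ++i) {                  // made-up part sizes
            cand[i].part_w = 1.0f + (i % 64) * 0.1f;
            cand[i].part_h = 1.0f + (i / 64) * 0.1f;
        }

        score_waste<<<(n + 255) / 256, 256>>>(cand, n, 48.0f, 96.0f, waste);
        cudaDeviceSynchronize();

        int best = 0;                                  // pick the winner on the host
        for (int i = 1; i < n; ++i)
            if (waste[i] < waste[best]) best = i;
        printf("best candidate %d: %.1f x %.1f, waste %.2f\n",
               best, cand[best].part_w, cand[best].part_h, waste[best]);
        return 0;
    }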

Aren't you sorry you asked? ;)


-- Ilan

On Jun 8, 2007, at 1:26 PM, Billings, John wrote:

> Does anyone think that PostgreSQL could benefit from using the video
> card as a parallel computing device? I'm working on a project using
> Nvidia's CUDA with an 8800 series video card to handle non-graphical
> algorithms. I'm curious whether anyone thinks this technology could be
> used to speed up a database. If so, which parts of the database, and
> what kinds of parallel algorithms would be used?
> Thanks,
> -- John Billings

Ilan Volow
"Implicit code is inherently evil, and here's the reason why:"



Re: Using the power of the GPU

From:
Lincoln Yeoh
Date:
At 01:26 AM 6/9/2007, Billings, John wrote:
>Does anyone think that PostgreSQL could benefit from using the video
>card as a parallel computing device? I'm working on a project using
>Nvidia's CUDA with an 8800 series video card to handle non-graphical
>algorithms. I'm curious whether anyone thinks this technology could be
>used to speed up a database. If so, which parts of the database, and
>what kinds of parallel algorithms would be used?
>Thanks,
>-- John Billings
>

I'm sure people can think of many ways to do it, BUT my concern is how
accurate and consistent the calculations would be.
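
For illustration only: even a correctly functioning GPU can produce results
that differ from a serial CPU computation, because floating-point addition
is not associative and a parallel accumulation applies the additions in an
unspecified order. A minimal sketch, assuming a CUDA device with hardware
float atomics (which the 8800 discussed here actually predates):

    // Parallel float accumulation via atomicAdd: the order of additions is
    // whatever the hardware schedules, so the result need not match a
    // serial sum bit-for-bit and can even vary between runs.
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void sum_atomic(const float *x, int n, float *out)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            atomicAdd(out, x[i]);
    }

    int main()
    {
        const int n = 1 << 20;
        float *x, *gpu_sum;
        cudaMallocManaged(&x, n * sizeof(float));
        cudaMallocManaged(&gpu_sum, sizeof(float));
        for (int i = 0; i < n; ++i) x[i] = 1.0f / (1.0f + i);  // varied magnitudes
        *gpu_sum = 0.0f;

        sum_atomic<<<(n + 255) / 256, 256>>>(x, n, gpu_sum);
        cudaDeviceSynchronize();

        double cpu_ref = 0.0;                          // serial reference in double
        for (int i = 0; i < n; ++i) cpu_ref += x[i];

        printf("gpu=%.7f cpu=%.7f diff=%.3e\n",
               *gpu_sum, (float)cpu_ref, *gpu_sum - (float)cpu_ref);
        return 0;
    }

None of that is a hardware *error*, but it is the kind of inconsistency a
database would have to reason about before trusting GPU arithmetic.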

So far, in the usual _display_only_ applications, if there's an error
in the GPU calculations people might not really notice it in the
output. A few small "artifacts" in one frame? No big deal to most
people's eyes.

There have been cases where if you rename the application executable,
you get different output and "performance".

Sure that's more a "driver issue", BUT if those vendors have that
sort of attitude and priorities, I wouldn't recommend using their
products for anything where calculation accuracy is important, no
matter what sort of buzzwords they throw at you (in fact the more
buzzwords they use, the less likely I'd want to use their stuff for
that purpose).

I'd wait for other people to get burnt first.

But go ahead, I'm sure it can speed up _your_ database ;).

Regards,
Link.



Re: Using the power of the GPU

From:
Gregory Stark
Date:
I did find this:

http://www.andrew.cmu.edu/user/ngm/15-823/project/Draft.pdf

But there are several reasons this seems to be a dead-end route for Postgres:

1) It's limited to in-memory sorts. Speeding up in-memory sorts by a linear
   factor seems uninteresting. Anything large enough for a small linear
   speedup to be interesting will be doing a disk sort anyways.

2) It's limited to one concurrent sort. There doesn't seem to be any facility
   for managing the shared resource of the GPU.

3) It's limited to sorting a single float. Postgres has an extensible type
   system. The use case for sorting a list of floats is pretty narrow. It's
   also limited to 32-bit floats, and it isn't clear whether that's an
   implementation detail or a hardware limitation of current GPUs. (A
   minimal sketch of such a float-only sort follows this list.)

4) It uses a hardware-specific driver for Nvidia GPUs. Ideally there would be
   some kind of kernel driver which took care of managing the shared resource
   (like the kernel manages things like disk, network, memory, etc) and either
   that or a library layer would provide an abstract interface so that the
   Postgres code would be hardware independent.
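
As a hypothetical illustration of point 3 (not the paper's code): a GPU
sort of bare 32-bit float keys can be as simple as a single-block,
shared-memory odd-even transposition sort, which is exactly the narrowness
being described, since Postgres sorting has to go through arbitrary
user-defined comparators and datum sizes. Assumes the array fits in one
thread block.

    // Single-block odd-even transposition sort of 32-bit floats in shared
    // memory. Works only for small n (here n <= 2 * blockDim.x) and only
    // for a bare float key with the built-in < ordering.
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void odd_even_sort(float *data, int n)
    {
        extern __shared__ float s[];
        int t = threadIdx.x;

        for (int i = t; i < n; i += blockDim.x) s[i] = data[i];   // stage into shared memory
        __syncthreads();

        for (int phase = 0; phase < n; ++phase) {
            int i = 2 * t + (phase & 1);                          // even pairs, then odd pairs
            if (i + 1 < n && s[i] > s[i + 1]) {
                float tmp = s[i]; s[i] = s[i + 1]; s[i + 1] = tmp;
            }
            __syncthreads();
        }

        for (int i = t; i < n; i += blockDim.x) data[i] = s[i];
    }

    int main()
    {
        const int n = 512;                             // must fit in one block
        float *data;
        cudaMallocManaged(&data, n * sizeof(float));
        for (int i = 0; i < n; ++i) data[i] = (float)((i * 37) % n);

        odd_even_sort<<<1, n / 2 + 1, n * sizeof(float)>>>(data, n);
        cudaDeviceSynchronize();

        bool sorted = true;
        for (int i = 1; i < n; ++i) if (data[i - 1] > data[i]) sorted = false;
        printf("sorted: %s\n", sorted ? "yes" : "no");
        return 0;
    }

Generalizing this to Postgres's comparator-based, arbitrarily-sized datums,
to disk-spilling sorts, and to many concurrent backends is where points 1-4
above bite.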

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com