Re: BLOB support

From: Tomas Vondra
Subject: Re: BLOB support
Date:
Msg-id: 4DE7F5E9.2000000@fuzzy.cz
In response to: Re: BLOB support  ("ktm@rice.edu" <ktm@rice.edu>)
List: pgsql-hackers
On 2.6.2011 15:18, ktm@rice.edu wrote:
> On Thu, Jun 02, 2011 at 02:58:52PM +0200, Pavel Stehule wrote:
>> 2011/6/2 Peter Eisentraut <peter_e@gmx.net>:
>>> Superficially, this looks like a reimplementation of TOAST.  What
>>> functionality exactly do you envision that the BLOB and CLOB types would
>>> need to have that would warrant treating them different from, say, bytea
>>> and text?
>>>
>>
>> Streaming for bytea could be nice. Very large bytea values are limited by
>> query size - processing a long query needs too much RAM.
>>
>> Pavel
>>
> 
> +1 for a streaming interface to bytea/text. I do agree that there is no need
> to reinvent the TOAST architecture with another name, just improve the existing
> implementation.
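
For reference, the existing large-object API already offers that kind of
chunked access through libpq; the value-must-fit-in-one-query problem is
specific to bytea/text. Roughly like this untested sketch, which reads a
large object in fixed-size pieces (error handling omitted):

    #include <libpq-fe.h>
    #include <libpq/libpq-fs.h>     /* INV_READ */

    /* read large object 'loid' in 8kB pieces, inside one transaction */
    static void
    read_lo_streamed(PGconn *conn, Oid loid)
    {
        char    buf[8192];
        int     fd, n;

        PQclear(PQexec(conn, "BEGIN"));
        fd = lo_open(conn, loid, INV_READ);

        while ((n = lo_read(conn, fd, buf, sizeof(buf))) > 0)
        {
            /* process n bytes from buf, e.g. write them out */
        }

        lo_close(conn, fd);
        PQclear(PQexec(conn, "COMMIT"));
    }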

Building a "parallel" architecture that mimics TOAST is obviously a bad
idea.

But I am curious about one thing - the current LO approach is based on
splitting the data into small chunks (2kB) and storing those chunks in a
bytea column of the pg_largeobject table.
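
(If I read the headers right, the chunk size comes from LOBLKSIZE in
src/include/storage/large_object.h - paraphrased:

    /* paraphrased from src/include/storage/large_object.h */
    #define LOBLKSIZE   (BLCKSZ / 4)    /* 2048 bytes with the default 8kB pages */

and each chunk becomes one (loid, pageno, data) row in pg_largeobject.)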

How much overhead does all this add? What if there were a special kind of
block for binary data, one that limits the chunking and TOAST overhead?
Actually, this probably would not need a special type of block - when
writing the data, each row would simply be filled with as much data as
possible (plus some metadata), i.e. almost 8kB of compressed data per row.
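
Just to make that concrete, a very rough sketch of what such a write path
might look like (all the names here are made up, and compression/metadata
handling is hand-waved):

    #include "postgres.h"

    /* hypothetical target: fill each row with as much data as fits,
     * instead of fixed 2kB chunks */
    #define BLOB_CHUNK_TARGET   (BLCKSZ - 256)  /* leave room for row headers */

    /* hypothetical helper, not part of any existing API */
    extern void blob_insert_chunk(Oid blobid, int64 offset,
                                  const char *data, int64 len);

    static void
    blob_write(Oid blobid, const char *buf, int64 len)
    {
        int64   offset = 0;

        while (offset < len)
        {
            int64   chunk = Min(len - offset, BLOB_CHUNK_TARGET);

            /* one row: (blobid, offset, possibly compressed chunk) */
            blob_insert_chunk(blobid, offset, buf + offset, chunk);
            offset += chunk;
        }
    }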

This would probably bring some restrictions (e.g. inability to update the
data, but I don't think that's possible with the current LO anyway).
Has anyone thought about this?

regards
Tomas

