On Fri, 12 Nov 1999, Tom Lane wrote:
> Tatsuo Ishii <t-ishii@sra.co.jp> writes:
> >> LO is a dead end. What we really want to do is eliminate tuple-size
> >> restrictions and then have large ordinary fields (probably of type
> >> bytea) in regular tuples. I'd suggest working on compression in that
> >> context, say as a new data type called "bytez" or something like that.
--- cut ---
>
> The only thing LO would do for you is divide the data into block-sized
> tuples, so there would be a bunch of little WAL entries instead of one
> big one. But that'd probably be easy to duplicate too. If we implement
> big tuples by chaining together disk-block-sized segments, which seems
> like the most likely approach, couldn't WAL log each segment as a
> separate log entry? If so, there's almost no difference between LO and
> inline field for logging purposes.
>
I'm not sure that LO is a dead end for every user. Large (blob) fields go
through the SQL engine, but why, if I never use that data as typical SQL
data? (I don't need indexing or searching inside, for example, GIF files.)
It would be a pity if LO development stopped. I still think that LO
compression is not a bad idea :-)
Other possible compression questions:
* Some applications can use a compressed stream over slow networks between client <-> server; what about PostgreSQL?
* MySQL's dump tool can produce a compressed dump file, which is nice; what about PostgreSQL?
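On the dump question: since pg_dump writes plain SQL to stdout, a compressed
dump needs no special server support, just a pipe through gzip (in real use:
`pg_dump mydb | gzip > mydb.sql.gz`). A minimal sketch; here a hypothetical
stand-in function plays the role of pg_dump so the pipeline can run anywhere:

```shell
# Stand-in for pg_dump (assumption: real pg_dump would emit SQL like this)
pg_dump() { printf 'CREATE TABLE t (a int);\nINSERT INTO t VALUES (1);\n'; }

# Compressed "dump": pipe the SQL stream through gzip
pg_dump | gzip > /tmp/mydb.sql.gz

# Restore side: decompress to stdout (in real use, pipe into psql)
gunzip -c /tmp/mydb.sql.gz
```

The same pipe idea covers the network case too: any stream compressor can sit
between client and server without the database knowing about it.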
Karel