On Fri, Jun 23, 2006 at 02:30:29PM -0400, Mark Woodward wrote:
> >
> > Bottom line: there's still lots of low-hanging fruit. Why are
> > people feeling that we need to abandon or massively complicate our
> > basic architecture to make progress?
> >
> > regards, tom lane
>
> I, for one, see a particularly nasty, unscalable behavior in the
> implementation of MVCC with regard to updates.
You're not answering the question Tom asked. Why not?
> For each update to a row, additional work must be done to access
> that row. Surely a better strategy could be devised, especially
> considering that the problem being solved is a brief one.
>
> The only reason you need previous versions of a row is for
> transactions that started before or during the transaction that
> seeks to modify the row. After that, the previous versions
> continue to hurt performance and take up space even though they
> are of no value. (Caveats apply for rollback, etc., but the point
> still stands.)
I wouldn't be so quick to dismiss those as parenthetical "caveats."
> This is a very pessimistic behavior and penalizes the more common
> and optimistic operations. Now, if a tool were created that
> could roll back an entire database to some arbitrary transaction ID
> between vacuums, then I could see the usefulness of the older
> versions.
There was one called time travel. Somebody might put it back in some
day :)
> I still think an in-place indirection to the current row could fix
> the problem and speed up the database. There are some sticky
> situations that need to be considered, but it shouldn't break much.
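The "in-place indirection" proposal is only gestured at above, but one
reading of it can be sketched as follows. This is a hypothetical
interpretation, not anything PostgreSQL does; the `Slot` class and its
methods are invented for illustration:

```python
# Hypothetical sketch of "in-place indirection to the current row"
# (NOT how PostgreSQL works): references point at a stable slot, and
# the slot points at whichever version is current, so an update never
# invalidates the reference and stale versions can be dropped in place.

class Slot:
    """A stable handle that always resolves to the current version."""
    def __init__(self, value):
        self.current = value      # the live version
        self.old_versions = []    # kept only while old snapshots need them

    def update(self, new_value):
        self.old_versions.append(self.current)
        self.current = new_value  # references to this slot stay valid

    def prune(self):
        """Once no old snapshot remains, drop stale versions in place."""
        self.old_versions.clear()

slot = Slot("v1")
reference = slot              # e.g. an index entry holding the slot
slot.update("v2")
slot.update("v3")

print(reference.current)      # "v3": the reference needed no maintenance
slot.prune()
print(len(slot.old_versions)) # 0: old versions reclaimed without a full scan
```

The sticky situations the poster alludes to (concurrent readers of the old
versions, crash recovery, rollback) are exactly what this sketch glosses over.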
We're eagerly awaiting your patch.
Cheers,
D
--
David Fetter <david@fetter.org> http://fetter.org/
phone: +1 415 235 3778 AIM: dfetter666 Skype: davidfetter
Remember to vote!