Discussion: Large objects error - expected a 'V' from the backend


Large objects error - expected a 'V' from the backend

From
Chris Bitmead
Date:
Hi all,

I'm using postgres 6.3.2 as built by RedHat 5.2.

Every time one of my programs tries to read the _2nd_ large object it
gets an error. Well actually, closing the descriptor on the 1st large
object fails as does retrieving the 2nd large object. The error is....

PQfn: expected a 'V' from the backend. Got 'N' instead

I have got a code extract below. It is simply a perl program using
Pg-0.91 that opens the database and tries to read two large objects
given on the command line.

What is the best bet for getting around this? Is upgrading to a later
version of postgres likely to help? Has anyone seen this before?

The large objects I'm using aren't very large. Only a few hundred bytes.

Are large objects well supported? Are they considered very stable to
use?

Here is the code....

package techmod;
require Exporter;

use DBI;
use Pg;

sub pgdbconnect
{
    $pgdbh ||= Pg::connectdb("dbname=httpd");
    die unless $pgdbh;
    $pgdbh->trace(STDOUT);
    return $pgdbh;
}

sub getlarge
{
    my ($name, $lobjId) = @_;
    my $buf;
    my $mode = PGRES_INV_READ;
    if (0 <= ($lobj_fd = $pgdbh->lo_open($lobjId, $mode))) {
        print "open\n";
        while (0 < ($nbytes = $pgdbh->lo_read($lobj_fd, $b, 100000))) {
            $buf = $buf . $b;
        }
        if ($nbytes < 0) { print "read fail\n", $pgdbh->errorMessage; }
        if ($pgdbh->lo_close($lobj_fd) < 0) { print "close fail\n", $pgdbh->errorMessage; }
    } else {
        print "notopen $lobjId\n", $pgdbh->errorMessage;
    }
    return $buf;
}

#!/usr/bin/perl
use techmod;
techmod->pgdbconnect();
$lobjId=$ARGV[0];
print techmod->getlarge($lobjId);
print techmod->getlarge($ARGV[1]);

Here is an extract from the trace.

To backend> F 
To backend (4#)> 954
To backend (4#)> 2
To backend (4#)> 4
To backend (4#)> 0
To backend (4#)> 4
To backend (4#)> 100000
From backend> V
From backend> G
From backend (#4)> 33
From backend (33)> This is some data stored in a large object.

From backend> 0
To backend> F 
To backend (4#)> 954
To backend (4#)> 2
To backend (4#)> 4
To backend (4#)> 0
To backend (4#)> 4
To backend (4#)> 100000
From backend> V
From backend> G
From backend (#4)> 0
From backend (0)> 
From backend> 0
To backend> F 
To backend (4#)> 953
To backend (4#)> 1
To backend (4#)> 4
To backend (4#)> 0
From backend> N
close fail
PQfn: expected a 'V' from the backend. Got 'N' insteadThis is some data
stored in a large object
To backend> F 
To backend (4#)> 952
To backend (4#)> 2
To backend (4#)> 4
To backend (4#)> 21008
To backend (4#)> 4
To backend (4#)> 262144
From backend> N
notopen 21008
PQfn: expected a 'V' from the backend. Got 'N' instead
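
One thing I haven't tried yet is doing the whole open/read/close inside
an explicit transaction, in case the descriptor is only valid within
one. An untested sketch of what I mean, using the same database and
Pg-0.91 calls as above:

#!/usr/bin/perl
# Untested sketch: wrap each large object access in an explicit
# BEGIN/END, in case the descriptor is only valid inside one
# transaction.
use Pg;

my $conn = Pg::connectdb("dbname=httpd");
die "connect failed\n" unless $conn->status == PGRES_CONNECTION_OK;

foreach my $lobjId (@ARGV) {
    $conn->exec("BEGIN");
    my $fd = $conn->lo_open($lobjId, PGRES_INV_READ);
    die "open failed: " . $conn->errorMessage if $fd < 0;
    my ($buf, $b) = ("", "");
    while ($conn->lo_read($fd, $b, 100000) > 0) {
        $buf .= $b;
    }
    $conn->lo_close($fd);
    $conn->exec("END");
    print $buf;
}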

-- 
Chris Bitmead
http://www.bigfoot.com/~chris.bitmead
mailto:chris.bitmead@bigfoot.com


Re: [HACKERS] Large objects error - expected a 'V' from the backend

From
Edmund Mergl
Date:
Chris Bitmead wrote:
> 
> Hi all,
> 
> I'm using postgres 6.3.2 as built by RedHat 5.2.
> 
> Every time one of my programs tries to read the _2nd_ large object it
> gets an error. Well actually, closing the descriptor on the 1st large
> object fails as does retrieving the 2nd large object. The error is....
> 
> PQfn: expected a 'V' from the backend. Got 'N' instead
> 
> I have got a code extract below. It is simply a perl program using
> Pg-0.91 that opens the database and tries to read two large objects
> given on the command line.


This will most probably not solve your problem, but for DBD-Pg-0.91 you
need postgresql-6.4.2.

Edmund


-- 
Edmund Mergl          mailto:E.Mergl@bawue.de
Im Haldenhau 9        http://www.bawue.de/~mergl
70565 Stuttgart       fon: +49 711 747503
Germany


Re: Are large objects well supported? Are they considered very stable to use?

From
Chris Bitmead
Date:
> I have tried to use the lo interface and it appears to
> work ok (although there is a fix required for solaris).
> There is also a memory leak in the back end so several
> thousand large objects will probably cause the backend
> to fail.

Ouch.

Well, perhaps I should tell you PG hackers what I want to do, and you
can tell me the best way to do it.

I want to have a comment database storing ASCII text comments. These
could be over 8000 bytes, and my understanding is that conventional PG
rows can't be bigger than 8000 bytes. On the other hand, most of them
will probably be much smaller than that. I will certainly have more
than "several thousand" of them.

Are large objects the right way to go here? What are the disk usage /
speed tradeoffs of using large objects compared to, say, straight
UNIX files? The main reasons I don't use the filesystem are that I
might run out of inodes, and that it's probably not that fast or
efficient.
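
For what it's worth, here is roughly how I imagined writing the
comments, as an untested sketch (the "comment" table and its "body"
column are made up for illustration):

# Untested sketch: store one comment as a large object and record
# its oid in an ordinary row. The "comment" table is made up.
use Pg;

my $conn = Pg::connectdb("dbname=httpd");
die "connect failed\n" unless $conn->status == PGRES_CONNECTION_OK;

my $text = "a comment, possibly longer than 8000 bytes...";

$conn->exec("BEGIN");
my $lobjId = $conn->lo_creat(PGRES_INV_READ | PGRES_INV_WRITE);
my $fd     = $conn->lo_open($lobjId, PGRES_INV_WRITE);
$conn->lo_write($fd, $text, length($text));
$conn->lo_close($fd);
$conn->exec("INSERT INTO comment (body) VALUES ($lobjId)");
$conn->exec("END");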


Regression failures

From
Chris Bitmead
Date:
FYI, on a standard RedHat 5.2 system the current PG snapshot fails the
following regressions...

int2 .. failed
int4 .. failed
geometry .. failed

If anyone wants more info, let me know.

diff  results/int4.out expected/int4.out 
10c10
< ERROR:  pg_atoi: error reading "1000000000000": Numerical result out of range
---
> ERROR:  pg_atoi: error reading "1000000000000": Math result not representable
diff  results/int2.out expected 
10c10
< ERROR:  pg_atoi: error reading "100000": Numerical result out of range
---
> ERROR:  pg_atoi: error reading "100000": Math result not representable


Re: [HACKERS] Re: Are large objects well supported? Are they considered very stable to use?

From
Tatsuo Ishii
Date:
> > I have tried to use the lo interface and it appears to
> > work ok (although there is a fix required for solaris).
> > There is also a memory leak in the back end so several
> > thousand large objects will probably cause the backend
> > to fail.

This was reported some time ago, but I haven't had time to fix it.

> Ouch.
> 
> Well, perhaps I should tell you PG hackers what I want to do, and you
> can tell me the best way to do it.
> 
> I want to have a comment database storing ASCII text comments. These
> could be over 8000 bytes, and my understanding is that conventional PG
> rows can't be bigger than 8000 bytes. On the other hand, most of them
> will probably be much smaller than that. I will certainly have more
> than "several thousand" of them.

My understanding of the problem stated above is that creating lots of
large objects in a single session can cause trouble. On the other
hand, if you read or write only a few in a session, you could avoid
the problem, I guess.
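
For example, something like this rough, untested sketch: reconnect
after every N large object operations so that no single backend
session accumulates too much (N here is an arbitrary number):

# Rough, untested sketch: reconnect after every N large object
# operations, so no one backend session leaks too much memory.
use Pg;

my $N     = 1000;    # arbitrary threshold, tune to taste
my $count = 0;
my $conn  = Pg::connectdb("dbname=httpd");

sub maybe_reconnect
{
    if (++$count >= $N) {
        # Replacing the object closes the old connection.
        $conn  = Pg::connectdb("dbname=httpd");
        $count = 0;
    }
}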

> Are large objects the right way to go here? What are the disk usage /
> speed tradeoffs of using large objects compared to, say, straight
> UNIX files? The main reasons I don't use the filesystem are that I
> might run out of inodes, and that it's probably not that fast or
> efficient.

If you are short of inodes, forget about large objects. Creating a
large object consumes 2 inodes (one for holding the data itself,
another for an index for faster access), and that is probably not
good news for you.

I think we could implement large objects in a different way, for
example by packing many of them into a single table, along the lines
of the sketch below. This is just a thought, though.
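
Just to make the thought concrete, a hypothetical sketch (nothing like
this exists today; the lo_chunk table is made up):

# Hypothetical sketch: keep all "large objects" as numbered chunks
# in one ordinary table instead of one relation (plus index) each.
use Pg;

my $conn = Pg::connectdb("dbname=httpd");
$conn->exec("CREATE TABLE lo_chunk (objid int4, seq int4, data text)");

sub store_chunks
{
    my ($objid, $text) = @_;
    my $seq = 0;
    while (length($text) > 0) {
        # Keep each piece comfortably under the 8k row limit.
        my $piece = substr($text, 0, 7000);
        substr($text, 0, 7000) = "";
        $piece =~ s/'/''/g;    # crude quoting, sketch only
        $conn->exec("INSERT INTO lo_chunk VALUES ($objid, $seq, '$piece')");
        $seq++;
    }
}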
---
Tatsuo Ishii


Re: Are large objects well supported? Are they considered very stable to use?

From
Chris Bitmead
Date:
Thanks for all the suggestions about large objects. To me they sound
like nearly a waste of time, partly because each one takes 2 unix
files, and partly because the minimum size is 16k.

For the moment I think I will use the text type in a regular class and
just put up with the restriction of less than 8k. Maybe I will use an
"oid more" link for chaining, along the lines of the sketch below.

I think the only real solution to this is to remove the arbitrary
limits in postgres, such as the 8k record limit and the 8k query
buffer limit.

Has anybody thought much about this yet?

-- 
Chris Bitmead
http://www.bigfoot.com/~chris.bitmead
mailto:chris.bitmead@bigfoot.com


Re: [HACKERS] Regression failures

From
Thomas Lockhart
Date:
> FYI, on a standard RedHat 5.2 system the current PG snapshot fails the
> following regressions...

OK. I have the "reference platform" for the regression tests, and it
has recently had a (forced) upgrade to RH5.2 after losing some disks.
I'd expect the regression tests to start matching your installation
very soon now; certainly before we release v6.5.

Thanks for the info.
                     - Tom