Re: Relation 'pg_largeobject' does not exist

From: Tom Lane
Subject: Re: Relation 'pg_largeobject' does not exist
Date:
Msg-id: 12139.1142302375@sss.pgh.pa.us
In response to: Re: Relation 'pg_largeobject' does not exist (Brandon Keepers <bkeepers@gmail.com>)
Responses: Re: Relation 'pg_largeobject' does not exist ("Brandon Keepers" <bkeepers@gmail.com>)
List: pgsql-general
Brandon Keepers <bkeepers@gmail.com> writes:
> Thanks for your quick response!  I had actually just been trying that
> (with 7.1) and came across another error:

> NOTICE:  ShmemAlloc: out of memory
> NOTICE:  LockAcquire: xid table corrupted
> dumpBlobs(): Could not open large object.  Explanation from backend:
> 'ERROR:  LockRelation: LockAcquire failed

Ugh :-(  How many blobs have you got, thousands?  7.0 stores each blob
as a separate table, and I'll bet it is running out of lock table space
to hold a lock on each one.  My recollection is that we converted blob
storage to a single pg_largeobject table precisely because of that
problem.

What you'll need to do to get around this is to export each blob in a
separate transaction (or at least no more than a thousand or so blobs
per transaction).  It looks like pg_dumplo might be easier to hack to do
things that way --- like pg_dump, it puts a BEGIN/COMMIT around the
whole run, but it's a smaller program and easier to move those commands
in.
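
In case it helps, here's a minimal libpq sketch of the one-transaction-per-blob
idea, outside of pg_dump entirely.  Everything named in it is made up for
illustration: it assumes you've already collected the blob OIDs into a file
blob_oids.txt (one per line), and the connection string and output file
naming are placeholders.  The only real API call is lo_export().

    #include <stdio.h>
    #include <stdlib.h>
    #include "libpq-fe.h"

    int
    main(void)
    {
        PGconn *conn = PQconnectdb("dbname=mydb"); /* placeholder connstring */
        FILE   *oids;
        char    line[64];

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            return 1;
        }

        oids = fopen("blob_oids.txt", "r"); /* hypothetical OID list */
        if (oids == NULL)
        {
            fprintf(stderr, "can't open blob_oids.txt\n");
            return 1;
        }

        while (fgets(line, sizeof(line), oids) != NULL)
        {
            Oid  lobj = (Oid) strtoul(line, NULL, 10);
            char fname[64];

            /* One transaction per blob: only one blob's lock is held
             * at a time, so the shared lock table never fills up. */
            PQclear(PQexec(conn, "BEGIN"));
            snprintf(fname, sizeof(fname), "blob_%u.dat", lobj);
            if (lo_export(conn, lobj, fname) == -1)
                fprintf(stderr, "lo_export(%u) failed: %s",
                        lobj, PQerrorMessage(conn));
            PQclear(PQexec(conn, "COMMIT"));
        }

        fclose(oids);
        PQfinish(conn);
        return 0;
    }

If that's too slow, committing every few hundred blobs instead of after
every one would still stay comfortably under the lock table limit.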

Another possibility is to increase the lock table size, but that would
probably require recompiling the 7.0 backend.  If you're lucky,
increasing max_connections to the largest value the backend will support
will be enough.  If you've got many thousands of blobs there's no hope
there, but if it's just a few thousand this is worth a try before you go
hacking code.

            regards, tom lane
