Discussion: shared memory/max_locks_per_transaction error


shared memory/max_locks_per_transaction error

From:
"Kynn Jones"
Date:
I've written a PL/pgSQL function that is supposed to create a whole bunch (~4000) of tables:

CREATE OR REPLACE FUNCTION create_tables () RETURNS void
AS $$
DECLARE
  _s RECORD;
  _t TEXT;
BEGIN
  -- Recreate the template table that all the others are cloned from.
  DROP TABLE IF EXISTS base CASCADE;
  CREATE TABLE base ( /* omit lengthy definition */ );

  -- Create one table per (a, b) pair, copying base's column definitions.
  FOR _s IN SELECT a, b FROM some_helper_table LOOP
    _t := _s.a || '__' || _s.b;
    EXECUTE 'DROP TABLE IF EXISTS "' || _t || '" CASCADE';
    EXECUTE 'CREATE TABLE "' || _t || '" ( LIKE base )';
  END LOOP;
END;
$$ LANGUAGE plpgsql VOLATILE;
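
I invoke it as a single statement (so the whole loop runs inside one
transaction), e.g.:

SELECT create_tables();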

But after executing the loop around 3500 times, it fails with an "out of shared memory" error and the hint "you may need to increase max_locks_per_transaction."

Initially I didn't know what our max_locks_per_transaction was (nor even a typical value for it), but in light of the procedure's failure after 3500 iterations, I figured that it was 3500 or so.  In fact ours is only 64 (the default), so I'm now thoroughly confused.

Is there a way to force the release of locks within the loop?

TIA!

Kynn

Re: shared memory/max_locks_per_transaction error

From:
Tom Lane
Date:
"Kynn Jones" <kynnjo@gmail.com> writes:
> Initially I didn't know what our max_locks_per_transaction was (nor even a
> typical value for it), but in light of the procedure's failure after 3500
> iterations, I figured that it was 3500 or so.  In fact ours is only 64 (the
> default), so I'm now thoroughly confused.

The number of lock slots available system-wide is
max_locks_per_transaction times max_connections, and your procedure was
chewing them all.  I suggest taking the hint's advice if you really need
to create 3500 tables in a single transaction.  Actually, you'd better
do it if you want to have 3500 tables at all, because pg_dump will
certainly try to acquire AccessShare lock on all of them.
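
For reference, the current values can be checked with SHOW; the setting
itself lives in postgresql.conf and takes effect only after a postmaster
restart.  The number below is purely illustrative, not a recommendation:

SHOW max_locks_per_transaction;   -- 64 by default
SHOW max_connections;

-- in postgresql.conf (requires a restart):
-- max_locks_per_transaction = 256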

> Is there a way to force the release of locks within the loop?

No.
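
Locks are released only at end of transaction.  If you don't want to
raise the limit, one sketch of an alternative (assuming you can run the
DDL through psql, whose default autocommit makes each statement its own
transaction) is to generate the script outside the function:

SELECT 'DROP TABLE IF EXISTS ' || quote_ident(a || '__' || b) || ' CASCADE;'
FROM some_helper_table;

SELECT 'CREATE TABLE ' || quote_ident(a || '__' || b) || ' (LIKE base);'
FROM some_helper_table;

-- save the output to a file and feed it back to psql; each statement
-- then commits and releases its locks before the next one runs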

            regards, tom lane

Re: shared memory/max_locks_per_transaction error

From:
"Kynn Jones"
Date:
On Fri, Mar 14, 2008 at 7:12 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
"Kynn Jones" <kynnjo@gmail.com> writes:
> Initially I didn't know what our max_locks_per_transaction was (nor even a
> typical value for it), but in light of the procedure's failure after 3500
> iterations, I figured that it was 3500 or so.  In fact ours is only 64 (the
> default), so I'm now thoroughly confused.

The number of lock slots available system-wide is
max_locks_per_transaction times max_connections, and your procedure was
chewing them all.  I suggest taking the hint's advice if you really need
to create 3500 tables in a single transaction.  Actually, you'd better
do it if you want to have 3500 tables at all, because pg_dump will
certainly try to acquire AccessShare lock on all of them.

OK, in light of this, I'll have to either change my strategy (and schema) significantly or greatly increase max_locks_per_transaction.

I'm leaning towards the re-design option, primarily because I don't really understand the consequences of cranking up max_locks_per_transaction.  E.g., why is its default value 2^6, instead of, say, 2^15?  In fact, why is there a ceiling on the number of locks at all?  I'm guessing that the relatively small default (a couple of orders of magnitude below the number of tables I have in mind) suggests that setting this value to a huge number would be a terrible idea.  Is that so?

Thanks!

Kynn

Re: shared memory/max_locks_per_transaction error

From:
Tom Lane
Date:
"Kynn Jones" <kynnjo@gmail.com> writes:
> I'm leaning towards the re-design option, primarily because I don't
> really understand the consequences of cranking up max_locks_per_transaction.
> E.g., why is its default value 2^6, instead of, say, 2^15?  In fact, why is
> there a ceiling on the number of locks at all?

Because the size of the lock table in shared memory has to be set at
postmaster start.

There are people running DBs with a couple hundred thousand tables,
but I don't know what sorts of performance problems they face when
they try to run pg_dump.  I think most SQL experts would suggest
a redesign: if you have lots of essentially identical tables the
standard advice is to fold them all into one table with one more
key column.
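
For concreteness, a minimal sketch of that folded design (column names
here are hypothetical):

CREATE TABLE all_data (
    a    text NOT NULL,    -- first half of the old table name
    b    text NOT NULL,    -- second half of the old table name
    -- ... the columns that each per-pair table had ...
    val  text
);

-- what used to be a query against table "x__y" becomes:
SELECT * FROM all_data WHERE a = 'x' AND b = 'y';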

            regards, tom lane

Re: shared memory/max_locks_per_transaction error

From:
Alvaro Herrera
Date:
Kynn Jones wrote:

> I'm leaning towards the re-design option, primarily because I don't
> really understand the consequences of cranking up max_locks_per_transaction.
> E.g., why is its default value 2^6, instead of, say, 2^15?

It's because it (partly) defines how much shared memory the server will
use.

--
Alvaro Herrera                                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

Re: shared memory/max_locks_per_transaction error

From:
Erik Jones
Date:
On Mar 17, 2008, at 9:55 AM, Tom Lane wrote:

> "Kynn Jones" <kynnjo@gmail.com> writes:
>> I'm leaning towards the re-design option, primarily because I
>> really don't
>> really understand the consequences of cranking up
>> max_locks_per_transaction.
>> E.g. Why is its default value 2^6, instead of, say, 2^15?  In fact,
>> why is
>> there a ceiling on the number of locks at all?
>
> Because the size of the lock table in shared memory has to be set at
> postmaster start.
>
> There are people running DBs with a couple hundred thousand tables,
> but I don't know what sorts of performance problems they face when
> they try to run pg_dump.  I think most SQL experts would suggest
> a redesign: if you have lots of essentially identical tables the
> standard advice is to fold them all into one table with one more
> key column.

That's me!  Our dumps currently take about 36 hours, but what's more
alarming is that a vanilla restore takes about 4 days.  And, yes, a
redesign is currently in the works :)  However, for Kynn's case, I
doubt he'll have much trouble with ~4000 tables as long as that number
stays fairly static and his design doesn't rely on that number
growing, which is what ours currently does.

Erik Jones

DBA | Emma®
erik@myemma.com
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com

Re: shared memory/max_locks_per_transaction error

From:
"Kynn Jones"
Date:
Tom, Alvaro:

Thank you both very much for the clarification.  It's "back to the drawing board" for me!

Kynn


On Mon, Mar 17, 2008 at 10:55 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
"Kynn Jones" <kynnjo@gmail.com> writes:
> I'm leaning towards the re-design option, primarily because I really don't
> really understand the consequences of cranking up max_locks_per_transaction.
>  E.g. Why is its default value 2^6, instead of, say, 2^15?  In fact, why is
> there a ceiling on the number of locks at all?

Because the size of the lock table in shared memory has to be set at
postmaster start.

There are people running DBs with a couple hundred thousand tables,
but I don't know what sorts of performance problems they face when
they try to run pg_dump.  I think most SQL experts would suggest
a redesign: if you have lots of essentially identical tables the
standard advice is to fold them all into one table with one more
key column.

                       regards, tom lane