Discussion: PostgreSQL Limits and lack of documentation about them.

PostgreSQL Limits and lack of documentation about them.

From: David Rowley
For a long time, we documented our table size, max columns, max column
width limits, etc. in https://www.postgresql.org/about/ , but that
information seems to have now been removed. The last version I can
find with the information present is back in April this year. Here's a
link to what we had:
https://web.archive.org/web/20180413232613/https://www.postgresql.org/about/

I think it's a bit strange that we don't have this information fairly
early on in the official documentation.  I only see a mention of the
1600 column limit in the create table docs. Nothing central and don't
see mention of 32 TB table size limit.

I don't have a patch, but I propose we include this information in the
docs, perhaps on a new page in the preface part of the documents.

Does anyone else have any thoughts about this?

-- 
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: PostgreSQL Limits and lack of documentation about them.

From: Haribabu Kommi
On Fri, Oct 26, 2018 at 9:30 AM David Rowley <david.rowley@2ndquadrant.com> wrote:
For a long time, we documented our table size, max columns, max column
width limits, etc. in https://www.postgresql.org/about/ , but that
information seems to have now been removed. The last version I can
find with the information present is back in April this year. Here's a
link to what we had:
https://web.archive.org/web/20180413232613/https://www.postgresql.org/about/

I think it's a bit strange that we don't have this information fairly
early on in the official documentation.  I only see a mention of the
1600 column limit in the create table docs. Nothing central and don't
see mention of 32 TB table size limit.

I don't have a patch, but I propose we include this information in the
docs, perhaps on a new page in the preface part of the documents.

I also tried to find such limits of PostgreSQL, but I couldn't find them.
+1 to add them to docs.

Regards,
Haribabu Kommi
Fujitsu Australia

RE: PostgreSQL Limits and lack of documentation about them.

From: "Tsunakawa, Takayuki"
From: David Rowley [mailto:david.rowley@2ndquadrant.com]
> I think it's a bit strange that we don't have this information fairly
> early on in the official documentation.  I only see a mention of the
> 1600 column limit in the create table docs. Nothing central and don't
> see mention of 32 TB table size limit.
> 
> I don't have a patch, but I propose we include this information in the
> docs, perhaps on a new page in the preface part of the documents.
> 
> Does anyone else have any thoughts about this?

+1
As a user, I feel I would look for such information in an appendix like "A Database limits" in Oracle's Database Reference
manual:


https://docs.oracle.com/en/database/oracle/oracle-database/18/refrn/database-limits.html#GUID-ED26F826-DB40-433F-9C2C-8C63A46A3BFE

As a somewhat related topic, PostgreSQL doesn't mention the maximum values for numeric parameters.  I was asked several
times questions like "what's the maximum value for max_connections?" and "how much memory can I use for work_mem?"
I don't feel a strong need to specify those values, but I wonder if we should do something.


Regards
Takayuki Tsunakawa



Re: PostgreSQL Limits and lack of documentation about them.

From: Alvaro Herrera
On 2018-Oct-26, David Rowley wrote:

> For a long time, we documented our table size, max columns, max column
> width limits, etc. in https://www.postgresql.org/about/ , but that
> information seems to have now been removed. The last version I can
> find with the information present is back in April this year. Here's a
> link to what we had:
> https://web.archive.org/web/20180413232613/https://www.postgresql.org/about/

This was removed in
https://git.postgresql.org/gitweb/?p=pgweb.git;a=commitdiff;h=66760d73bca6

Making the /about/ page leaner is a good objective IMO, considering the
target audience of that page (not us), but I wonder if the content
should have been moved elsewhere.  It's still in the wiki:
https://wiki.postgresql.org/wiki/FAQ#What_is_the_maximum_size_for_a_row.2C_a_table.2C_and_a_database.3F
but that doesn't seem great either.

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: PostgreSQL Limits and lack of documentation about them.

From: Narayanan V
+1 for inclusion in docs.

On Fri, Oct 26, 2018 at 4:00 AM David Rowley <david.rowley@2ndquadrant.com> wrote:
For a long time, we documented our table size, max columns, max column
width limits, etc. in https://www.postgresql.org/about/ , but that
information seems to have now been removed. The last version I can
find with the information present is back in April this year. Here's a
link to what we had:
https://web.archive.org/web/20180413232613/https://www.postgresql.org/about/

I think it's a bit strange that we don't have this information fairly
early on in the official documentation.  I only see a mention of the
1600 column limit in the create table docs. Nothing central and don't
see mention of 32 TB table size limit.

I don't have a patch, but I propose we include this information in the
docs, perhaps on a new page in the preface part of the documents.

Does anyone else have any thoughts about this?

--
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

Re: PostgreSQL Limits and lack of documentation about them.

From: David Rowley
On 26 October 2018 at 11:40, Haribabu Kommi <kommi.haribabu@gmail.com> wrote:
> On Fri, Oct 26, 2018 at 9:30 AM David Rowley <david.rowley@2ndquadrant.com>
> wrote:
>>
>> For a long time, we documented our table size, max columns, max column
>> width limits, etc. in https://www.postgresql.org/about/ , but that
>> information seems to have now been removed. The last version I can
>> find with the information present is back in April this year. Here's a
>> link to what we had:
>>
>> https://web.archive.org/web/20180413232613/https://www.postgresql.org/about/
>>
>> I think it's a bit strange that we don't have this information fairly
>> early on in the official documentation.  I only see a mention of the
>> 1600 column limit in the create table docs. Nothing central and don't
>> see mention of 32 TB table size limit.
>>
>> I don't have a patch, but I propose we include this information in the
>> docs, perhaps on a new page in the preface part of the documents.
>
>
> I also tried to find such limits of PostgreSQL, but I couldn't find them.
> +1 to add them to docs.

I've attached a very rough patch which adds a new appendix section
named "Database Limitations".  I've included what was mentioned in [1]
plus I've added a few other things that I thought should be mentioned.
I'm sure there will be many more ideas.

I'm not so sure about detailing limits of GUCs since the limits of
those are mentioned in pg_settings.
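
For reference, a minimal example of reading those per-setting limits straight out of pg_settings, using the two parameters asked about earlier in the thread:

  SELECT name, min_val, max_val, unit
    FROM pg_settings
   WHERE name IN ('max_connections', 'work_mem');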

[1] https://web.archive.org/web/20180413232613/https://www.postgresql.org/about/

-- 
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

Attachments

Re: PostgreSQL Limits and lack of documentation about them.

From: John Naylor
On 10/30/18, David Rowley <david.rowley@2ndquadrant.com> wrote:
> On 26 October 2018 at 11:40, Haribabu Kommi <kommi.haribabu@gmail.com>
> wrote:
>> On Fri, Oct 26, 2018 at 9:30 AM David Rowley
>> <david.rowley@2ndquadrant.com>
>> wrote:
>>>
>>> For a long time, we documented our table size, max columns, max column
>>> width limits, etc. in https://www.postgresql.org/about/ , but that
>>> information seems to have now been removed. The last version I can
>>> find with the information present is back in April this year. Here's a
>>> link to what we had:
>>>
>>> https://web.archive.org/web/20180413232613/https://www.postgresql.org/about/
>>>
>>> I think it's a bit strange that we don't have this information fairly
>>> early on in the official documentation.  I only see a mention of the
>>> 1600 column limit in the create table docs. Nothing central and don't
>>> see mention of 32 TB table size limit.
>>>
>>> I don't have a patch, but I propose we include this information in the
>>> docs, perhaps on a new page in the preface part of the documents.
>>
>>
>> I also tried to find such limits of PostgreSQL, but I couldn't find them.
>> +1 to add them to docs.
>
> I've attached a very rough patch which adds a new appendix section
> named "Database Limitations".  I've included what was mentioned in [1]
> plus I've added a few other things that I thought should be mentioned.
> I'm sure there will be many more ideas.

David,
Thanks for doing this. I haven't looked at the rendered output yet,
but I have some comments on the content.

+      <entry>Maximum Relation Size</entry>
+      <entry>32 TB</entry>
+      <entry>Limited by 2^32 pages per relation</entry>

I prefer "limited to" or "limited by the max number of pages per
relation, ...". I think pedantically it's 2^32 - 1, since that value
is used for InvalidBlockNumber. More importantly, that seems to be for
8kB pages. I imagine this would go up with a larger page size. Page
size might also be worth mentioning separately. Also max number of
relation file segments, if any.
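
A quick sketch of the arithmetic behind the 32 TB figure, assuming the default 8 kB block size (a larger BLCKSZ raises the ceiling proportionally):

  SHOW block_size;  -- 8192 on a default build
  SELECT pg_size_pretty(((2::numeric ^ 32 - 1) * 8192)::bigint);  -- roughly 32 TB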

+      <entry>Maximum Columns per Table</entry>
+      <entry>250 - 1600</entry>
+      <entry>Depending on column types. (More details here)</entry>

Would this also depend on page size? Also, I'd put this entry before this one:

+      <entry>Maximum Row Size</entry>
+      <entry>1600 GB</entry>
+      <entry>Assuming 1600 columns, each 1 GB in size</entry>

A toast pointer is 18 bytes, according to the docs, so I would guess
the number of toasted columns would actually be much less? I'll test
this on my machine sometime (not 1600GB, but the max number of toasted
columns per tuple).

+      <entry>Maximum Identifier Length</entry>
+      <entry>63 characters</entry>
+      <entry></entry>

Can this be increased with recompiling, if not conveniently?

+      <entry>Maximum Indexed Columns</entry>
+      <entry>32</entry>
+      <entry>Can be increased by recompiling
<productname>PostgreSQL</productname></entry>

How about the max number of included columns in a covering index?

> I'm not so sure about detailing limits of GUCs since the limits of
> those are mentioned in pg_settings.

Maybe we could just have a link to that section in the docs.

--
-John Naylor


Re: PostgreSQL Limits and lack of documentation about them.

From: David Rowley
On 1 November 2018 at 04:40, John Naylor <jcnaylor@gmail.com> wrote:
> Thanks for doing this. I haven't looked at the rendered output yet,
> but I have some comments on the content.
>
> +      <entry>Maximum Relation Size</entry>
> +      <entry>32 TB</entry>
> +      <entry>Limited by 2^32 pages per relation</entry>
>
> I prefer "limited to" or "limited by the max number of pages per
> relation, ...". I think pedantically it's 2^32 - 1, since that value
> is used for InvalidBlockNumber. More importantly, that seems to be for
> 8kB pages. I imagine this would go up with a larger page size. Page
> size might also be worth mentioning separately. Also max number of
> relation file segments, if any.

Thanks for looking at this.

I've changed this and added mention of BLKSIZE.  I was a bit unclear
on how much internal detail should go into this.

> +      <entry>Maximum Columns per Table</entry>
> +      <entry>250 - 1600</entry>
> +      <entry>Depending on column types. (More details here)</entry>
>
> Would this also depend on page size? Also, I'd put this entry before this one:
>
> +      <entry>Maximum Row Size</entry>
> +      <entry>1600 GB</entry>
> +      <entry>Assuming 1600 columns, each 1 GB in size</entry>
>
> A toast pointer is 18 bytes, according to the docs, so I would guess
> the number of toasted columns would actually be much less? I'll test
> this on my machine sometime (not 1600GB, but the max number of toasted
> columns per tuple).

I did try a table with 1600 text columns then inserted values of
several kB each. Trying with BIGINT columns the row was too large for
the page. I've never really gotten a chance to explore these limits
before, so I guess this is about the time.
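
A rough sketch of that kind of experiment, using psql's \gexec to generate the DDL; the table name t_wide and the 2 kB values are arbitrary choices for illustration:

  SELECT format('CREATE TABLE t_wide (%s)',
                string_agg(format('c%s text', i), ', '))
    FROM generate_series(1, 1600) AS i
  \gexec

  SELECT format('INSERT INTO t_wide VALUES (%s)',
                string_agg(quote_literal(repeat('x', 2000)), ', '))
    FROM generate_series(1, 1600) AS i
  \gexec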

> +      <entry>Maximum Identifier Length</entry>
> +      <entry>63 characters</entry>
> +      <entry></entry>
>
> Can this be increased with recompiling, if not conveniently?

Yeah. I added a note about that.

> +      <entry>Maximum Indexed Columns</entry>
> +      <entry>32</entry>
> +      <entry>Can be increased by recompiling
> <productname>PostgreSQL</productname></entry>
>
> How about the max number of included columns in a covering index?

Those are included in the limit. I updated the text.

>> I'm not so sure about detailing limits of GUCs since the limits of
>> those are mentioned in pg_settings.
>
> Maybe we could just have a link to that section in the docs.

That's likely a good idea. I was just unable to find anything better
than the link to the pg_settings view.

I've attached an updated patch, again it's just intended as an aid for
discussions at this stage. Also included the rendered html.

-- 
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

Attachments

Re: PostgreSQL Limits and lack of documentation about them.

From: "Nasby, Jim"
> On Oct 31, 2018, at 5:22 PM, David Rowley <david.rowley@2ndquadrant.com> wrote:
> 
> On 1 November 2018 at 04:40, John Naylor <jcnaylor@gmail.com> wrote:
>> Thanks for doing this. I haven't looked at the rendered output yet,
>> but I have some comments on the content.
>> 
>> +      <entry>Maximum Relation Size</entry>
>> +      <entry>32 TB</entry>
>> +      <entry>Limited by 2^32 pages per relation</entry>
>> 
>> I prefer "limited to" or "limited by the max number of pages per
>> relation, ...". I think pedantically it's 2^32 - 1, since that value
>> is used for InvalidBlockNumber. More importantly, that seems to be for
>> 8kB pages. I imagine this would go up with a larger page size. Page
>> size might also be worth mentioning separately. Also max number of
>> relation file segments, if any.
> 
> Thanks for looking at this.
> 
> I've changed this and added mention of BLKSIZE.  I was a bit unclear
> on how much internal detail should go into this.

It’s a bit misleading to say “Can be increased by increasing BLKSZ and recompiling”, since you’d also need to re
initdb. Given that messing with BLKSZ is pretty uncommon I would simply put a note somewhere that mentions that these
values assume the default BLKSZ of 8192.

>> +      <entry>Maximum Columns per Table</entry>
>> +      <entry>250 - 1600</entry>
>> +      <entry>Depending on column types. (More details here)</entry>
>> 
>> Would this also depend on page size? Also, I'd put this entry before this one:
>> 
>> +      <entry>Maximum Row Size</entry>
>> +      <entry>1600 GB</entry>
>> +      <entry>Assuming 1600 columns, each 1 GB in size</entry>
>> 
>> A toast pointer is 18 bytes, according to the docs, so I would guess
>> the number of toasted columns would actually be much less? I'll test
>> this on my machine sometime (not 1600GB, but the max number of toasted
>> columns per tuple).
> 
> I did try a table with 1600 text columns then inserted values of
> several kB each. Trying with BIGINT columns the row was too large for
> the page. I've never really gotten a chance to explore these limits
> before, so I guess this is about the time.

Hmm… 18 bytes doesn’t sound right, at least not for the Datum. Offhand I’d expect it to be the small (1 byte) varlena
header + an OID (4 bytes). Even then I don’t understand how 1600 text columns would work; the data area of a tuple
should be limited to ~2000 bytes, and 2000/5 = 400.

Re: PostgreSQL Limits and lack of documentation about them.

From: John Naylor
On 11/1/18, Nasby, Jim <nasbyj@amazon.com> wrote:
> Hmm… 18 bytes doesn’t sound right, at least not for the Datum. Offhand I’d
> expect it to be the small (1 byte) varlena header + an OID (4 bytes). Even
> then I don’t understand how 1600 text columns would work; the data area of a
> tuple should be limited to ~2000 bytes, and 2000/5 = 400.

The wording in the docs (under Physical Storage) is "Allowing for the
varlena header bytes, the total size of an on-disk TOAST pointer datum
is therefore 18 bytes regardless of the actual size of the represented
value.", and as I understand it, it's

header + toast table oid + chunk_id + logical size + compressed size.

This is one area where visual diagrams would be nice.

-John Naylor


Re: PostgreSQL Limits and lack of documentation about them.

From: Andrew Gierth
>>>>> "Nasby," == Nasby, Jim <nasbyj@amazon.com> writes:

 >> I did try a table with 1600 text columns then inserted values of
 >> several kB each. Trying with BIGINT columns the row was too large
 >> for the page. I've never really gotten a chance to explore these
 >> limits before, so I guess this is about the time.

 Nasby> Hmm… 18 bytes doesn’t sound right, at least not for the Datum.
 Nasby> Offhand I’d expect it to be the small (1 byte) varlena header +
 Nasby> an OID (4 bytes). Even then I don’t understand how 1600 text
 Nasby> columns would work; the data area of a tuple should be limited
 Nasby> to ~2000 bytes, and 2000/5 = 400.

1600 text columns won't work unless the values are very short or null.

A toast pointer is indeed 18 bytes: 1 byte varlena header flagging it as
a toast pointer, 1 byte type tag, raw size, saved size, toast value oid,
toast table oid.

A tuple can be almost as large as a block; the block/4 threshold is only
the point at which the toaster is run, not a limit on tuple size.

So (with 8k blocks) the limit on the number of non-null external-toasted
columns is about 450, while you can have the full 1600 columns if they
are integers or smaller, or just over 1015 bigints. But you can have
1600 text columns if they average 4 bytes or less (excluding length
byte).

If you push too close to the limit, it may even be possible to overflow
the tuple size by setting fields to null, since the null bitmap is only
present if at least one field is null. So you can have 1010 non-null
bigints, but if you try and do 1009 non-null bigints and one null, it
won't fit (and nor will 999 non-nulls and 11 nulls, if I calculated
right).

(Note also that dropped columns DO count against the 1600 limit, and
also that they are (for new row versions) set to null and thus force the
null bitmap to be present.)

--
Andrew (irc:RhodiumToad)


Re: PostgreSQL Limits and lack of documentation about them.

From: John Naylor
On 11/1/18, Andrew Gierth <andrew@tao11.riddles.org.uk> wrote:
> So (with 8k blocks) the limit on the number of non-null external-toasted
> columns is about 450, while you can have the full 1600 columns if they
> are integers or smaller, or just over 1015 bigints. But you can have
> 1600 text columns if they average 4 bytes or less (excluding length
> byte).
>
> If you push too close to the limit, it may even be possible to overflow
> the tuple size by setting fields to null, since the null bitmap is only
> present if at least one field is null. So you can have 1010 non-null
> bigints, but if you try and do 1009 non-null bigints and one null, it
> won't fit (and nor will 999 non-nulls and 11 nulls, if I calculated
> right).

Thanks for that, Andrew, that was insightful. I drilled down to get
the exact values:

Non-nullable columns:
text (4 bytes each or less): 1600
toasted text: 452
int: 1600
bigint: 1017

Nullable columns with one null value:
text (4 bytes each or less): 1600
toasted text: 449
int: 1600
bigint: 1002
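
Those figures line up with a back-of-envelope calculation for an 8 kB page: subtract the page header (24 bytes), one line pointer (4 bytes) and the tuple header (24 bytes), then divide the remainder by the per-column footprint, 18 bytes for a toast pointer or 8 for a bigint:

  SELECT (8192 - 24 - 4 - 24) / 18 AS approx_toast_pointers_per_tuple,  -- 452
         (8192 - 24 - 4 - 24) / 8  AS approx_bigints_per_tuple;         -- 1017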


-John Naylor


Re: PostgreSQL Limits and lack of documentation about them.

From: John Naylor
On 11/1/18, David Rowley <david.rowley@2ndquadrant.com> wrote:

> I've attached an updated patch, again it's just intended as an aid for
> discussions at this stage. Also included the rendered html.

Looks good so far. Based on experimentation with toasted columns, it
seems the largest row size is 452GB, but I haven't tried that on my
laptop. :-) As for the number-of-column limits, it's a matter of how
much detail we want to include. With all the numbers in my previous
email, that could probably use its own table if we include them all.

On 11/1/18, Nasby, Jim <nasbyj@amazon.com> wrote:
> It’s a bit misleading to say “Can be increased by increasing BLKSZ and
> recompiling”, since you’d also need to re initdb. Given that messing with
> BLKSZ is pretty uncommon I would simply put a note somewhere that mentions
> that these values assume the default BLKSZ of 8192.

+1

-John Naylor


Re: PostgreSQL Limits and lack of documentation about them.

From: Robert Haas
On Tue, Nov 6, 2018 at 6:01 AM John Naylor <jcnaylor@gmail.com> wrote:
> On 11/1/18, David Rowley <david.rowley@2ndquadrant.com> wrote:
> > I've attached an updated patch, again it's just intended as an aid for
> > discussions at this stage. Also included the rendered html.
>
> Looks good so far. Based on experimentation with toasted columns, it
> seems the largest row size is 452GB, but I haven't tried that on my
> laptop. :-) As for the number-of-column limits, it's a matter of how
> much detail we want to include. With all the numbers in my previous
> email, that could probably use its own table if we include them all.

There are a lot of variables here.  A particular row size may work for
one encoding and not for another.

IMHO, documenting that you can get up to 1600 integer columns but only
1002 bigint columns doesn't really help anybody, because nobody has a
table with only one type of column, and people usually want to have
some latitude to run ALTER TABLE commands later.

It might be useful for some users to explain that certain things will
should work for values < X, may work for values between X and Y, and
will definitely not work above Y.  Or maybe we can provide a narrative
explanation rather than just a table of numbers.  Or both.  But I
think trying to provide a table of exact cutoffs is sort of like
tilting at windmills.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: PostgreSQL Limits and lack of documentation about them.

From: David Rowley
On 8 November 2018 at 10:02, Robert Haas <robertmhaas@gmail.com> wrote:
> IMHO, documenting that you can get up to 1600 integer columns but only
> 1002 bigint columns doesn't really help anybody, because nobody has a
> table with only one type of column, and people usually want to have
> some latitude to run ALTER TABLE commands later.
>
> It might be useful for some users to explain that certain things
> should work for values < X, may work for values between X and Y, and
> will definitely not work above Y.  Or maybe we can provide a narrative
> explanation rather than just a table of numbers.  Or both.  But I
> think trying to provide a table of exact cutoffs is sort of like
> tilting at windmills.

I added something along those lines in a note below the table. Likely
there are better ways to format all this, but trying to detail out
what the content should be first.

Hopefully I've addressed the other things mentioned too.

-- 
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

Attachments

Re: PostgreSQL Limits and lack of documentation about them.

From: John Naylor
On 11/8/18, David Rowley <david.rowley@2ndquadrant.com> wrote:
> On 8 November 2018 at 10:02, Robert Haas <robertmhaas@gmail.com> wrote:
>> It might be useful for some users to explain that certain things
>> should work for values < X, may work for values between X and Y, and
>> will definitely not work above Y.  Or maybe we can provide a narrative
>> explanation rather than just a table of numbers.  Or both.  But I
>> think trying to provide a table of exact cutoffs is sort of like
>> tilting at windmills.
>
> I added something along those lines in a note below the table. Likely
> there are better ways to format all this, but trying to detail out
> what the content should be first.

The language seems fine to me.

-John Naylor


Re: PostgreSQL Limits and lack of documentation about them.

From: Peter Eisentraut
On 08/11/2018 04:13, David Rowley wrote:
> I added something along those lines in a note below the table. Likely
> there are better ways to format all this, but trying to detail out
> what the content should be first.
> 
> Hopefully I've addressed the other things mentioned too.

Could you adjust this to use fewer capital letters, unless they start
sentences or similar?

-- 
Peter Eisentraut              http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: PostgreSQL Limits and lack of documentation about them.

From: David Rowley
On 8 November 2018 at 22:46, Peter Eisentraut
<peter.eisentraut@2ndquadrant.com> wrote:
> Could you adjust this to use fewer capital letters, unless they start
> sentences or similar?

Yeah. Changed in the attached.

-- 
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

Attachments

Re: PostgreSQL Limits and lack of documentation about them.

From: John Naylor
On 11/8/18, David Rowley <david.rowley@2ndquadrant.com> wrote:
> On 8 November 2018 at 22:46, Peter Eisentraut
> <peter.eisentraut@2ndquadrant.com> wrote:
>> Could you adjust this to use fewer capital letters, unless they start
>> sentences or similar?
>
> Yeah. Changed in the attached.

Looks good to me. Since there have been no new suggestions for a few
days, I'll mark it ready for committer.

-John Naylor


Re: PostgreSQL Limits and lack of documentation about them.

From: David Rowley
On 13 November 2018 at 19:46, John Naylor <jcnaylor@gmail.com> wrote:
> On 11/8/18, David Rowley <david.rowley@2ndquadrant.com> wrote:
>> On 8 November 2018 at 22:46, Peter Eisentraut
>> <peter.eisentraut@2ndquadrant.com> wrote:
>>> Could you adjust this to use fewer capital letters, unless they start
>>> sentences or similar?
>>
>> Yeah. Changed in the attached.
>
> Looks good to me. Since there have been no new suggestions for a few
> days, I'll mark it ready for committer.

Thanks for your review.  I don't think these initially need to include
100% of the limits. If we stumble on things later that seem worth
including, we'll have a place to write them down.

The only other thing that sprung to my mind was the maximum tables per
query.  This is currently limited to 64999, not including double
counting partitioned tables and inheritance parents, but I kinda think
of we feel the need to document it, then we might as well just raise
the limit.  It seems a bit arbitrarily set at the moment. I don't see
any reason it couldn't be higher. Although, if it was too high we'd
start hitting things like palloc() size limits on simple_rte_array.
I'm inclined to not bother mentioning it.


-- 
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: PostgreSQL Limits and lack of documentation about them.

From: Tom Lane
David Rowley <david.rowley@2ndquadrant.com> writes:
> [ v4-0001-Add-documentation-section-appendix-detailing-some.patch ]

A few nitpicky gripes on this -

* I don't like inserting this as Appendix B, because that means
renumbering appendixes that have had their same names for a *long*
time; for instance the release notes have been Appendix E since
we adopted the modern division of the docs in 7.4.  So I'd put it
below anything that's commonly-referenced.  Maybe just before
"Acronyms"?

* I think I'd make the title "PostgreSQL Limitations", as it
applies to the product not any one database.

* The repetition of "Maximum" in each table row seems rather
pointless; couldn't we just drop that word?

* Items such as "relations per database" are surely not unlimited;
that's bounded at 4G by the number of distinct OIDs.  (In practice
you'd get unhappy well before that, though I suppose that's true
for many of these.)

* Rows per table is also definitely finite if you are documenting
pages per relation as finite.  But it'd be worth pointing out that
partitioning provides a way to surmount that.

* Many of these values are affected by BLCKSZ.  How much effort
shall we spend on documenting that?

* Max ID length is 63 bytes not characters.

* Don't think I'd bother with mentioning INCLUDE columns in the
"maximum indexed columns" entry.  Also, maybe call that "maximum
columns per index"; as phrased, it could be misunderstood to
mean that only 32 columns can be used in all indexes put together.

* Ordering of the table entries seems a bit random.

> The only other thing that sprung to my mind was the maximum tables per
> query.  This is currently limited to 64999, not including double
> counting partitioned tables and inheritance parents, but I kinda think
> if we feel the need to document it, then we might as well just raise
> the limit.

Can't get excited about documenting that one ... although as things
stand, it implies a limit on the number of partitions you can use
that's way lower than the claimed 256M.

> It seems a bit arbitrarily set at the moment. I don't see
> any reason it couldn't be higher.

It's evidently intended to make sure varnos can fit in uint16.
Whether there's anyplace that's actually doing so, rather than
storing them as ints, I dunno.

            regards, tom lane


Re: PostgreSQL Limits and lack of documentation about them.

From: David Rowley
Thanks for looking at this.

On Thu, 15 Nov 2018 at 13:35, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> * I don't like inserting this as Appendix B, because that means
> renumbering appendixes that have had their same names for a *long*
> time; for instance the release notes have been Appendix E since
> we adopted the modern division of the docs in 7.4.  So I'd put it
> below anything that's commonly-referenced.  Maybe just before
> "Acronyms"?

Seems fair. I've pushed it down to before acronyms.

> * I think I'd make the title "PostgreSQL Limitations", as it
> applies to the product not any one database.

Changed.

> * The repetition of "Maximum" in each table row seems rather
> pointless; couldn't we just drop that word?

I've changed the column header to "Upper Limit" and removed the
"Maximum" in each row.

> * Items such as "relations per database" are surely not unlimited;
> that's bounded at 4G by the number of distinct OIDs.  (In practice
> you'd get unhappy well before that, though I suppose that's true
> for many of these.)

True. I've changed this to 4,294,950,911, which is 2^32 -
FirstNormalObjectId - 1 (for InvalidOid)
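
A quick sanity check of that figure, taking FirstNormalObjectId as 16384 (its value in src/include/access/transam.h):

  SELECT 2::numeric ^ 32 - 16384 - 1;  -- 4294950911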

> * Rows per table is also definitely finite if you are documenting
> pages per relation as finite.  But it'd be worth pointing out that
> partitioning provides a way to surmount that.

I was unsure how best to word this one. I ended up with "Limited by
the number of tuples that can fit onto 4,294,967,295 pages"

> * Many of these values are affected by BLCKSZ.  How much effort
> shall we spend on documenting that?

I've changed the comment in the maximum relation size to read
"Assuming the default BLCKSZ of 8192 bytes".

> * Max ID length is 63 bytes not characters.

Changed.

> * Don't think I'd bother with mentioning INCLUDE columns in the
> "maximum indexed columns" entry.  Also, maybe call that "maximum
> columns per index"; as phrased, it could be misunderstood to
> mean that only 32 columns can be used in all indexes put together.

I slightly disagree about INCLUDE, but I've removed it anyway. Changed
the title to "Columns per index".

> * Ordering of the table entries seems a bit random.

It ended up that way due to me having not thought of any good order.
I've changed it to try to be roughly in order of scope; database
first, then things that go in them later. Perhaps that's no good, but
it does seem better than random. I don't really think alphabetical is
useful.

> > The only other thing that sprung to my mind was the maximum tables per
> > query.  This is currently limited to 64999, not including double
> > counting partitioned tables and inheritance parents, but I kinda think
> > if we feel the need to document it, then we might as well just raise
> > the limit.
>
> Can't get excited about documenting that one ... although as things
> stand, it implies a limit on the number of partitions you can use
> that's way lower than the claimed 256M.

That is true, although that may change if we no longer reserve varnos
for pruned partitions.  More partitions could then be created, you'd
just not be able to query them all at once.  For now, I've just
removed the mention of maximum partitions as it seemed a little too
obscure to document the 64999 limit due to stepping into special varno
space.

Another thing that I was a bit unsure about is the maximum table size
limit.  I've written that it's 32 TB, but that's not quite correct
as it's 8192 bytes less than that due to InvalidBlockNumber. Writing
"35,184,372,080,640 bytes" did not seem like an improvement.

I also altered the intro paragraph to mention practical limitations
and that the table below only mentions hard limitations.

v5 is attached.

-- 
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

Attachments

Re: PostgreSQL Limits and lack of documentation about them.

From: Peter Eisentraut
That last sentence about the dropped columns is confusing to me:

+    <para>
+     Columns which have been dropped from the table also contribute to the
+     maximum column limit, although the dropped column values for newly
+     created tuples are internally marked as NULL in the tuple's null bitmap,
+     which does occupy space.
+    </para>

So the dropped columns matter, but they are null, but the nulls matter
too.  What are we really trying to say here?  Maybe this:

Columns which have been dropped from the table also contribute to the
maximum column limit.  Moreover, although the dropped column values for
newly created tuples are internally marked as NULL in the tuple's null
bitmap, the null bitmap also occupies space.

-- 
Peter Eisentraut              http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: PostgreSQL Limits and lack of documentation about them.

From: Steve Crawford
On Wed, Nov 28, 2018 at 10:06 AM Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:
That last sentence about the dropped columns is confusing to me:

+    <para>
+     Columns which have been dropped from the table also contribute to the
+     maximum column limit, although the dropped column values for newly
+     created tuples are internally marked as NULL in the tuple's null bitmap,
+     which does occupy space.
+    </para>

So the dropped columns matter, but they are null, but the nulls matter
too.  What are we really trying to say here?  Maybe this:

Columns which have been dropped from the table also contribute to the
maximum column limit.  Moreover, although the dropped column values for
newly created tuples are internally marked as NULL in the tuple's null
bitmap, the null bitmap also occupies space.


Both for my edification and as a potentially important documentation detail, do operations that rebuild the table such as CLUSTER or pg_repack reclaim the column space?

Cheers,
Steve 

Re: PostgreSQL Limits and lack of documentation about them.

From: David Rowley
On Thu, 29 Nov 2018 at 07:06, Peter Eisentraut
<peter.eisentraut@2ndquadrant.com> wrote:
>
> That last sentence about the dropped columns is confusing to me:
>
> +    <para>
> +     Columns which have been dropped from the table also contribute to the
> +     maximum column limit, although the dropped column values for newly
> +     created tuples are internally marked as NULL in the tuple's null bitmap,
> +     which does occupy space.
> +    </para>
>
> So the dropped columns matter, but they are null, but the nulls matter
> too.  What are we really trying to say here?  Maybe this:
>
> Columns which have been dropped from the table also contribute to the
> maximum column limit.  Moreover, although the dropped column values for
> newly created tuples are internally marked as NULL in the tuple's null
> bitmap, the null bitmap also occupies space.

I'd say that's a small improvement that's worth making.  I've attached
a patch using your reformed version of that paragraph.

-- 
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

Attachments

Re: PostgreSQL Limits and lack of documentation about them.

From: David Rowley
On Thu, 29 Nov 2018 at 08:17, Steve Crawford
<scrawford@pinpointresearch.com> wrote:
> Both for my edification and as a potentially important documentation detail, do operations that rebuild the table
> such as CLUSTER or pg_repack reclaim the column space?

I've never used pg_repack, but CLUSTER will reform the tuples so that
they no longer store the actual value for the Datums belonging to the
dropped column. They'll still contain the null bitmap to mention that
the dropped column's value is NULL.  The row won't disappear from
pg_attribute, so the attnums are not resequenced, therefore we must
maintain the dropped column with the NULL marking.
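
The leftover entry is easy to see in pg_attribute; a small illustration, with t and c2 as made-up names:

  ALTER TABLE t DROP COLUMN c2;
  SELECT attnum, attname, attisdropped
    FROM pg_attribute
   WHERE attrelid = 't'::regclass AND attnum > 0;
  -- the dropped column remains as "........pg.dropped.2........" with attisdropped = true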

-- 
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: PostgreSQL Limits and lack of documentation about them.

From: Peter Eisentraut
On 29/11/2018 00:04, David Rowley wrote:
> On Thu, 29 Nov 2018 at 08:17, Steve Crawford
> <scrawford@pinpointresearch.com> wrote:
>> Both for my edification and as a potentially important documentation detail, do operations that rebuild the table
>> such as CLUSTER or pg_repack reclaim the column space?
> 
> I've never used pg_repack, but CLUSTER will reform the tuples so that
> they no longer store the actual value for the Datums belonging to the
> dropped column. They'll still contain the null bitmap to mention that
> the dropped column's value is NULL.  The row won't disappear from
> pg_attribute, so the attnums are not resequenced, therefore we must
> maintain the dropped column with the NULL marking.

committed

-- 
Peter Eisentraut              http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: PostgreSQL Limits and lack of documentation about them.

From: David Rowley
On Fri, 30 Nov 2018 at 02:01, Peter Eisentraut
<peter.eisentraut@2ndquadrant.com> wrote:
> committed

Thanks

-- 
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services