
Temp files for simple queries

From: "Radoulov, Dimitre"
Date:

Hello all,

We have a small Google Cloud SQL PostgreSQL 11 instance.
The instance is configured with 8 GB of memory and 4 vCPUs; work_mem is 4 MB.

I was wondering why the instance uses disk temp files for such a small amount of data (see size in the log below)?

I suppose that either it's not only the work_mem limit that can trigger disk temp file creation, or the reported SQL statement is not the only one that contributes to it:

I 2021-06-22T09:12:59.164913Z 2021-06-22 09:12:59.163 UTC [1957798]: [2-1] db=<db>,user=<user> STATEMENT: SELECT c1, c2 FROM schema1.t1 WHERE c1 >= $1 ORDER BY c1
I 2021-06-22T09:12:59.164379Z 2021-06-22 09:12:59.163 UTC [1957798]: [1-1] db=<db>,user=<user> LOG: temporary file: path "base/pgsql_tmp/pgsql_tmp1957798.0", size 7380992

The t1 table (table and column names have been masked for privacy) occupies only  440 kB and has 160 records.
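
To check whether this particular statement is the one spilling to disk, I could run it under EXPLAIN and look at the sort method (the literal 42 below is just a stand-in for the $1 parameter):

-- "Sort Method: external merge  Disk: ..." in the output means the sort
-- spilled to a temp file; "Sort Method: quicksort  Memory: ..." means it
-- fit in work_mem.
EXPLAIN (ANALYZE, BUFFERS)
SELECT c1, c2
FROM schema1.t1
WHERE c1 >= 42
ORDER BY c1;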


Best regards
Dimitre



Re: Temp files for simple queries

From: "Radoulov, Dimitre"
Date:

Just to clarify: the size of the temp file is 7MB and with work_mem of 4MB it clearly doesn't fit.

But why would an ORDER BY of 440 kB of data require 7 MB?

One possible reason is that the number of records in that table varies significantly during the day, but - if that's the case - I would expect the table to be much bigger than 440 kB (size taken using \dt+).
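
One quick way to check that hypothesis would be to compare the current on-disk size with the cumulative insert/delete counters (schema and table names masked as above):

-- If n_tup_ins and n_tup_del are both large while n_live_tup stays small,
-- the row count really does swing during the day.
SELECT pg_size_pretty(pg_total_relation_size('schema1.t1')) AS total_size,
       n_live_tup, n_dead_tup, n_tup_ins, n_tup_del,
       last_vacuum, last_autovacuum
FROM pg_stat_user_tables
WHERE schemaname = 'schema1' AND relname = 't1';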


Regards
Dimitre

On 22/06/2021 13.39, Radoulov, Dimitre wrote:

Hello all,

We have a small Google Cloud SQL PostgreSQL 11 instance.
The instance is configured with 8 GB of memory and 4 vCPUs; work_mem is 4 MB.

I was wondering why the instance uses disk temp files for such a small amount of data (see size in the log below)?

I suppose that either it's not only the work_mem limit that can trigger disk temp file creation, or the reported SQL statement is not the only one that contributes to it:

I 2021-06-22T09:12:59.164913Z 2021-06-22 09:12:59.163 UTC [1957798]: [2-1] db=<db>,user=<user> STATEMENT: SELECT c1, c2 FROM schema1.t1 WHERE c1 >= $1 ORDER BY c1
I 2021-06-22T09:12:59.164379Z 2021-06-22 09:12:59.163 UTC [1957798]: [1-1] db=<db>,user=<user> LOG: temporary file: path "base/pgsql_tmp/pgsql_tmp1957798.0", size 7380992

The t1 table (table and column names have been masked for privacy) occupies only  440 kB and has 160 records.


Best regards
Dimitre



Re: Temp files for simple queries

From: Holger Jakobs
Date:

The default settings might be too low. Try setting work_mem higher; if operations like sorting don't fit into work_mem, temporary files are used.
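
For example, to test whether a larger setting keeps the sort in memory (16MB is only an illustrative value, not a recommendation):

-- for the current session only:
SET work_mem = '16MB';

-- or as a new server-wide default (needs a reload; on Cloud SQL this is
-- normally done through the instance's database flags rather than
-- ALTER SYSTEM):
ALTER SYSTEM SET work_mem = '16MB';
SELECT pg_reload_conf();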

Am 22.06.21 um 13:39 schrieb Radoulov, Dimitre:

Hello all,

We have a small Google Cloud SQL PostgreSQL 11 instance.
The instance is configured with 8 GB of memory and 4 vCPUs; work_mem is 4 MB.

I was wondering why the instance uses disk temp files for such a small amount of data (see size in the log below)?

I suppose that either it's not only the work_mem limit that can trigger disk temp file creation, or the reported SQL statement is not the only one that contributes to it:

I 2021-06-22T09:12:59.164913Z 2021-06-22 09:12:59.163 UTC [1957798]: [2-1] db=<db>,user=<user> STATEMENT: SELECT c1, c2 FROM schema1.t1 WHERE c1 >= $1 ORDER BY c1
I 2021-06-22T09:12:59.164379Z 2021-06-22 09:12:59.163 UTC [1957798]: [1-1] db=<db>,user=<user> LOG: temporary file: path "base/pgsql_tmp/pgsql_tmp1957798.0", size 7380992

The t1 table (table and column names have been masked for privacy) occupies only  440 kB and has 160 records.


Best regards
Dimitre



-- 
Holger Jakobs, Bergisch Gladbach, Tel. +49-178-9759012

Re: Temp files for simple queries

From: Julien Rouhaud
Date:
On Tue, Jun 22, 2021 at 01:51:55PM +0200, Radoulov, Dimitre wrote:
> Just to clarify: the size of the temp file is 7MB and with work_mem of 4MB
> it clearly doesn't fit.
> 
> But why would an ORDER BY of 440 kB of data require 7 MB?
> 
> One possible reason is that the number of records in that table varies
> significantly during the day,

That would be the logical explanation.  You could configure auto_explain to
make sure of that (https://www.postgresql.org/docs/current/auto-explain.html).
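
If the module can be loaded on that instance, something along these lines would log the actual plans (the values are only illustrative):

-- needs the library to be loadable (superuser, or preloaded via
-- shared_preload_libraries / the provider's database flags on a managed
-- instance):
LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;  -- log a plan for every statement
SET auto_explain.log_analyze = on;      -- include actual row counts and timing
SET auto_explain.log_buffers = on;      -- include buffer and temp file usage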

> but - if that's the case - I would expect that
> the table would be much bigger than 440kB (size taken using \dt+).

Not necessarily.  autovacuum can truncate the file if all the trailing blocks
are empty (and if it can acquire an exclusive lock fast enough), so if some
client inserts a lot of rows, processes them and removes them all, and later on
slowly starts to insert new rows, you will get that behavior.
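
A rough illustration of that effect (the table name and row counts are made up, and a manual VACUUM stands in for autovacuum so the truncation happens immediately):

CREATE TABLE demo_truncate (id int, payload text);
INSERT INTO demo_truncate
SELECT g, repeat('x', 100) FROM generate_series(1, 100000) AS g;
SELECT pg_size_pretty(pg_relation_size('demo_truncate'));  -- several MB

DELETE FROM demo_truncate;
VACUUM demo_truncate;  -- trailing empty blocks are returned to the OS
SELECT pg_size_pretty(pg_relation_size('demo_truncate'));  -- back to (almost) zero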



Re: Temp files for simple queries

From: MichaelDBA
Date:
I'm wondering if bloat could be causing a problem.  Perhaps more blocks
are written to disk, including a lot of blocks with little or no data,
due to bloat.
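
If the pgstattuple extension is available on the instance, it would give exact (rather than estimated) figures for dead tuples and free space in that table:

CREATE EXTENSION IF NOT EXISTS pgstattuple;
-- reports table_len, dead_tuple_percent, free_percent, etc.
SELECT * FROM pgstattuple('schema1.t1');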

Julien Rouhaud wrote on 6/22/2021 7:55 AM:
> On Tue, Jun 22, 2021 at 01:51:55PM +0200, Radoulov, Dimitre wrote:
>> Just to clarify: the size of the temp file is 7MB and with work_mem of 4MB
>> it clearly doesn't fit.
>>
>> But why would an ORDER BY of 440 kB of data require 7 MB?
>>
>> One possible reason is that the number of records in that table varies
>> significantly during the day,
> That would be the logical explanation.  You could configure auto_explain to
> make sure of that (https://www.postgresql.org/docs/current/auto-explain.html).
>
>> but - if that's the case - I would expect that
>> the table would be much bigger than 440kB (size taken using \dt+).
> Not necessarily.  autovacuum can truncate the file if all the trailing blocks
> are empty (and if it can acquire an exclusive lock fast enough), so if some
> client inserts a lot of rows, processes them and removes them all, and later on
> slowly starts to insert new rows, you will get that behavior.
>
>




Re: Temp files for simple queries

From: "Radoulov, Dimitre"
Date:
Hello Julien,

On 22/06/2021 13.55, Julien Rouhaud wrote:
> On Tue, Jun 22, 2021 at 01:51:55PM +0200, Radoulov, Dimitre wrote:
>> Just to clarify: the size of the temp file is 7MB and with work_mem of 4MB
>> it clearly doesn't fit.
>>
>> But why would an ORDER BY of 440 kB of data require 7 MB?
>>
>> One possible reason is that the number of records in that table varies
>> significantly during the day,
> That would be the logical explanation.  You could configure auto_explain to
> make sure of that (https://www.postgresql.org/docs/current/auto-explain.html).
>
>> but - if that's the case - I would expect that
>> the table would be much bigger than 440kB (size taken using \dt+).
> Not necessarily.  autovacuum can truncate the file if all the trailing blocks
> are empty (and if it can acquire an exclusive lock fast enough), so if some
> client inserts a lot of rows, processes them and removes them all, and later on
> slowly starts to insert new rows, you will get that behavior.

I thought that only a VACUUM FULL could do that; thanks for pointing it out.


Best regards
Dimitre




Re: Temp files for simple queries

From: "Radoulov, Dimitre"
Date:
Hello Michael,

On 22/06/2021 14.00, MichaelDBA wrote:
> I'm wondering if bloat could be causing a problem.  Perhaps more 
> blocks are written to disk including a lot of blocks with little or no 
> data due to bloat.
>
Using the query suggested here: 
www.citusdata.com/blog/2017/10/20/monitoring-your-bloat-in-postgres/, I 
get the following:


databasename | db1
schemaname   | schema1
tablename    | t1
can_estimate | t
est_rows     | 160
pct_bloat    | 18
mb_bloat     | 0.06
table_mb     | 0.352

So the bloat doesn't seem significant.


Best regards
Dimitre