Re: pg13dev: explain partial, parallel hashagg, and memory use

From: Justin Pryzby
Subject: Re: pg13dev: explain partial, parallel hashagg, and memory use
Msg-id: 20200805021319.GA28072@telsasoft.com
In reply to: Re: pg13dev: explain partial, parallel hashagg, and memory use  (David Rowley <dgrowleyml@gmail.com>)
Responses: Re: pg13dev: explain partial, parallel hashagg, and memory use  (David Rowley <dgrowleyml@gmail.com>)
List: pgsql-hackers
On Wed, Aug 05, 2020 at 01:44:17PM +1200, David Rowley wrote:
> On Wed, 5 Aug 2020 at 13:21, Justin Pryzby <pryzby@telsasoft.com> wrote:
> >
> > I'm testing with a customer's data on pg13dev and got output for which Peak
> > Memory doesn't look right/useful.  I reproduced it on 565f16902.
> 
> Likely the sanity of those results depends on whether you think that
> the Memory Usage reported outside of the workers is meant to be the
> sum of all processes or the memory usage for the leader backend.
> 
> All that's going on here is that the Parallel Append is using some
> parallel safe paths and giving one to each worker. The 2 workers take
> the first 2 subpaths and the leader takes the third.  The memory usage
> reported helps confirm that's the case.

I'm not sure there's a problem, but all the 0kB entries looked suspicious to me.

I think you're saying that one worker alone handled each HashAgg, while the other
worker (and the leader) show 0kB.  In my naive thinking it's odd to show a
worker that wasn't active for that subpath (at least in text output).  But I
don't know the expected behavior of parallel hashagg, so that explains most of
my confusion.
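
(For contrast, to check my understanding: with a plain parallel aggregate on an
unpartitioned table, every launched worker cooperates on the single Partial
HashAggregate, so I'd expect every per-worker line to show nonzero memory.
A minimal sketch, with made-up names:)

-- Hypothetical contrast case (table name is made up): unpartitioned, so all
-- launched workers share one Partial HashAggregate node, and each worker's
-- "Memory Usage" line should be nonzero.
CREATE TABLE t AS SELECT i % 3000 AS i FROM generate_series(1, 1000000) i;
ANALYZE t;
SET max_parallel_workers_per_gather = 2;
EXPLAIN (ANALYZE, COSTS OFF) SELECT i, count(*) FROM t GROUP BY i;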

On Tue, Aug 04, 2020 at 10:01:18PM -0400, James Coleman wrote:
> Perhaps it could also say "Participating" or "Non-participating"?

Yes, that'd help me a lot :)

Also odd (to me): if I encourage more workers, there are "slots" for each
"planned" worker, even though fewer were actually launched (a sketch of a
possible setup follows the plan below):

postgres=# ALTER TABLE p3 SET (parallel_workers=11);
postgres=# SET max_parallel_workers_per_gather=11;
 Finalize HashAggregate  (cost=10299.64..10329.64 rows=3000 width=12) (actual time=297.793..299.933 rows=3000 loops=1)
   Group Key: p.i
   Batches: 1  Memory Usage: 625kB
   ->  Gather  (cost=2928.09..10134.64 rows=33000 width=12) (actual time=233.398..282.429 rows=13000 loops=1)
         Workers Planned: 11
         Workers Launched: 7
         ->  Parallel Append  (cost=1928.09..5834.64 rows=3000 width=12) (actual time=214.358..232.980 rows=1625 loops=8)
               ->  Partial HashAggregate  (cost=1933.46..1943.46 rows=1000 width=12) (actual time=167.936..171.345 rows=1000 loops=4)
                     Group Key: p.i
                     Batches: 1  Memory Usage: 0kB
                     Worker 0:  Batches: 1  Memory Usage: 193kB
                     Worker 1:  Batches: 1  Memory Usage: 193kB
                     Worker 2:  Batches: 1  Memory Usage: 0kB
                     Worker 3:  Batches: 1  Memory Usage: 0kB
                     Worker 4:  Batches: 1  Memory Usage: 193kB
                     Worker 5:  Batches: 1  Memory Usage: 0kB
                     Worker 6:  Batches: 1  Memory Usage: 193kB
                     Worker 7:  Batches: 0  Memory Usage: 0kB
                     Worker 8:  Batches: 0  Memory Usage: 0kB
                     Worker 9:  Batches: 0  Memory Usage: 0kB
                     Worker 10:  Batches: 0  Memory Usage: 0kB
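
(The thread doesn't include the table definitions; for anyone reproducing
this, a setup along these lines, three partitions of 1000 groups each, should
yield a similar plan.  This is a guess at the original schema, not the actual
one:)

-- Hedged reconstruction; the original definitions weren't posted.  Three
-- partitions give Parallel Append three parallel-safe subpaths, matching
-- the plan above (3000 groups total, 1000 per partition).
CREATE TABLE p (i int) PARTITION BY RANGE (i);
CREATE TABLE p1 PARTITION OF p FOR VALUES FROM (0) TO (1000);
CREATE TABLE p2 PARTITION OF p FOR VALUES FROM (1000) TO (2000);
CREATE TABLE p3 PARTITION OF p FOR VALUES FROM (2000) TO (3000);
INSERT INTO p SELECT i % 3000 FROM generate_series(1, 99999) i;
ANALYZE p;
EXPLAIN (ANALYZE) SELECT i, count(*) FROM p GROUP BY i;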


Thanks,
-- 
Justin


