On Fri, May 22, 2020 at 1:14 PM Soumyadeep Chakraborty
<sochakraborty@pivotal.io> wrote:
> Some more data points:
Thanks!
> max_parallel_workers_per_gather   Time (seconds)
> 0                                 29.04
> 1                                 29.17
> 2                                 28.78
> 6                                 291.27
>
> I checked with EXPLAIN ANALYZE to ensure that the number of workers
> planned = max_parallel_workers_per_gather
>
> Apart from the last result (max_parallel_workers_per_gather=6), all
> the other results seem favorable.
> Could the last result be down to the fact that the number of workers
> planned exceeded the number of vCPUs?
Interesting. I guess it has to do with interactions between various
parameters, like that magic number 64 I hard-coded into the test patch,
and other unknowns in your storage stack. I see a small drop-off that
I can't explain yet, but nothing like that.
> I also wanted to evaluate Zedstore with your patch.
> I used the same setup as above.
> No discernible difference though, maybe I'm missing something:
It doesn't look like it's using table_block_parallelscan_nextpage() as
a block allocator, so it's not affected by the patch. It has its own
function, zs_parallelscan_nextrange(), which does
pg_atomic_fetch_add_u64(&pzscan->pzs_allocatedtids,
ZS_PARALLEL_CHUNK_SIZE), and that macro is defined as 0x100000.