Thread: MongoDB 3.2 beating Postgres 9.5.1?

MongoDB 3.2 beating Postgres 9.5.1?

From: Paul Jones
Date:
I have been running the EDB benchmark that compares Postgres and MongoDB.
I believe EDB ran it against PG 9.4 and Mongo 2.6.  I am running it
against PG 9.5.1 and Mongo 3.2 with WiredTiger storage using 10,000,000
JSON records generated by the benchmark.  It looks like Mongo is winning,
and apparently because of its cache management.

The first queries on both run in ~30 min.  And, once PG fills its cache,
it whips Mongo on repeats of the *same* query (vmstat shows no disk
reads for PG).

However, when a different query on the same table is issued to both,
vmstat shows that PG has to read the *entire* table again, and it takes
~30 min.  Mongo does a lot of reads initially but after about 5 minutes,
it stops reading and completes the query, most likely because it is
using its cache very effectively.

Host:    Virtual Machine
    4 CPUs
    16 GB RAM
    200 GB Disk
    RHEL 6.6

PG:    9.5.1 compiled from source
    shared_buffers = 7GB
    effective_cache_size = 12GB

Mongo:    3.2 installed with RPM from Mongo

In PG, I created the table by:

CREATE TABLE json_tables
(
    data    JSONB
);

After loading, the benchmark creates the index:

CREATE INDEX json_tables_idx ON json_tables USING GIN (data jsonb_path_ops);

After a lot of experimentation, I discovered that the benchmark was not
using PG's index, so I modified the four queries to be:

SELECT data FROM json_tables WHERE data @> '{"brand": "ACME"}';
SELECT data FROM json_tables WHERE data @> '{"name": "Phone Service Basic Plan"}';
SELECT data FROM json_tables WHERE data @> '{"name": "AC3 Case Red"}';
SELECT data FROM json_tables WHERE data @> '{"type": "service"}';

Here are two consecutive EXPLAIN ANALYZE runs for PG, for the same query.
There is no functional difference in the plans that I can tell, but the
effect of PG's cache on the second is dramatic.

If anyone has ideas on how I can get PG to more effectively use the cache
for subsequent queries, I would love to hear them.

-------

benchmark=# explain analyze select data from json_tables where data @> '{"name": "AC3 Case Red"}';
                                                               QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on json_tables  (cost=113.50..37914.64 rows=10000 width=1261) (actual time=2157.118..1259550.327 rows=909091 loops=1)
   Recheck Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
   Rows Removed by Index Recheck: 4360296
   Heap Blocks: exact=37031 lossy=872059
   ->  Bitmap Index Scan on json_tables_idx  (cost=0.00..111.00 rows=10000 width=0) (actual time=2141.250..2141.250 rows=909091 loops=1)
         Index Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
 Planning time: 291.932 ms
 Execution time: 1259886.920 ms
(8 rows)

Time: 1261191.844 ms

benchmark=# explain analyze select data from json_tables where data @> '{"name": "AC3 Case Red"}';
                                                              QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on json_tables  (cost=113.50..37914.64 rows=10000 width=1261) (actual time=779.261..29815.262 rows=909091 loops=1)
   Recheck Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
   Rows Removed by Index Recheck: 4360296
   Heap Blocks: exact=37031 lossy=872059
   ->  Bitmap Index Scan on json_tables_idx  (cost=0.00..111.00 rows=10000 width=0) (actual time=769.081..769.081 rows=909091 loops=1)
         Index Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
 Planning time: 33.967 ms
 Execution time: 29869.381 ms
(8 rows)

Time: 29987.122 ms


Re: MongoDB 3.2 beating Postgres 9.5.1?

From: Oleg Bartunov
Date:


On Mar 11, 2016 4:40 PM, "Paul Jones" <pbj@cmicdo.com> wrote:
>
> I have been running the EDB benchmark that compares Postgres and MongoDB.
> I believe EDB ran it against PG 9.4 and Mongo 2.6.  I am running it
> against PG 9.5.1 and Mongo 3.2 with WiredTiger storage using 10,000,000
> JSON records generated by the benchmark.  It looks like Mongo is winning,
> and apparently because of its cache management.

Dmitry was working on the same benchmarks. I think the EDB benchmark is broken by design; better to use the YCSB benchmarks. I hope Dmitry will share his results.

>
> The first queries on both run in ~30 min.  And, once PG fills its cache,
> it whips Mongo on repeats of the *same* query (vmstat shows no disk
> reads for PG).
>
> However, when a different query on the same table is issued to both,
> vmstat shows that PG has to read the *entire* table again, and it takes
> ~30 min.  Mongo does a lot of reads initially but after about 5 minutes,
> it stops reading and completes the query, most likely because it is
> using its cache very effectively.
>
> Host:   Virtual Machine
>         4 CPUs
>         16 GB RAM
>         200 GB Disk
>         RHEL 6.6
>
> PG:     9.5.1 compiled from source
>         shared_buffers = 7GB
>         effective_cache_size = 12GB
>
> Mongo:  3.2 installed with RPM from Mongo
>
> In PG, I created the table by:
>
> CREATE TABLE json_tables
> (
>         data    JSONB
> );
>
> After loading, it creates the index:
>
> CREATE INDEX json_tables_idx ON json_tables USING GIN (data jsonb_path_ops);
>
> After a lot of experimentation, I discovered that the benchmark was not
> using PG's index, so I modified the four queries to be:
>
> SELECT data FROM json_tables WHERE data @> '{"brand": "ACME"}';
> SELECT data FROM json_tables WHERE data @> '{"name": "Phone Service Basic Plan"}';
> SELECT data FROM json_tables WHERE data @> '{"name": "AC3 Case Red"}';
> SELECT data FROM json_tables WHERE data @> '{"type": "service"}';
>
> Here are two consecutive explain analyze for PG, for the same query.
> No functional difference in the plans that I can tell, but the effect
> of PG's cache on the second is dramatic.
>
> If anyone has ideas on how I can get PG to more effectively use the cache
> for subsequent queries, I would love to hear them.
>
> -------
>
> benchmark=# explain analyze select data from json_tables where data @> '{"name": "AC3 Case Red"}';
>
>                                                                QUERY PLAN
>
>
> -----------------------------------------------------------------------------------------------------------------------------------------
>
> Bitmap Heap Scan on json_tables  (cost=113.50..37914.64 rows=10000 width=1261) (actual time=2157.118..1259550.327 rows=909091 loops=1)
>    Recheck Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
>    Rows Removed by Index Recheck: 4360296
>    Heap Blocks: exact=37031 lossy=872059
>    ->  Bitmap Index Scan on json_tables_idx  (cost=0.00..111.00 rows=10000 width=0) (actual time=2141.250..2141.250 rows=909091 loops=1)
>          Index Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
> Planning time: 291.932 ms
> Execution time: 1259886.920 ms
> (8 rows)
>
> Time: 1261191.844 ms
>
> benchmark=# explain analyze select data from json_tables where data @> '{"name": "AC3 Case Red"}';
>                                                               QUERY PLAN
>
> ---------------------------------------------------------------------------------------------------------------------------------------
>
> Bitmap Heap Scan on json_tables  (cost=113.50..37914.64 rows=10000 width=1261) (actual time=779.261..29815.262 rows=909091 loops=1)
>    Recheck Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
>    Rows Removed by Index Recheck: 4360296
>    Heap Blocks: exact=37031 lossy=872059
>    ->  Bitmap Index Scan on json_tables_idx  (cost=0.00..111.00 rows=10000 width=0) (actual time=769.081..769.081 rows=909091 loops=1)
>          Index Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
> Planning time: 33.967 ms
> Execution time: 29869.381 ms
>
> (8 rows)
>
> Time: 29987.122 ms
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general

Re: MongoDB 3.2 beating Postgres 9.5.1?

From: Dmitry Dolgov
Date:
Hi, Paul

I agree with Oleg, the EDB benchmarks are strange sometimes. I ran the same benchmarks several months ago and never noticed the cache influence back then, so I tried to reproduce your situation now (although on 5*10^6 records). I started to play with the OS page cache (using `echo 3 > /proc/sys/vm/drop_caches`), and I do see a difference in execution time for two subsequent queries, but the `explain` info is almost identical, e.g. `shared hit & read`:

```
benchmark=# explain (buffers, analyze, verbose) select data from json_tables where data @> '{"name": "AC3 Case Red"}';
                                                                 QUERY PLAN                                                                 
--------------------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on public.json_tables  (cost=102.74..19001.47 rows=4999 width=1257) (actual time=740.556..215956.655 rows=454546 loops=1)
   Output: data
   Recheck Cond: (json_tables.data @> '{"name": "AC3 Case Red"}'::jsonb)
   Rows Removed by Index Recheck: 2114606
   Heap Blocks: exact=31624 lossy=422922
   Buffers: shared hit=1371 read=455551
   ->  Bitmap Index Scan on json_tables_idx  (cost=0.00..101.49 rows=4999 width=0) (actual time=731.010..731.010 rows=454547 loops=1)
         Index Cond: (json_tables.data @> '{"name": "AC3 Case Red"}'::jsonb)
         Buffers: shared hit=1371 read=1005
 Planning time: 6.352 ms
 Execution time: 216075.830 ms
(11 rows)

benchmark=# explain (buffers, analyze, verbose) select data from json_tables where data @> '{"name": "AC3 Case Red"}';
                                                                QUERY PLAN                                                                 
-------------------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on public.json_tables  (cost=102.74..19001.47 rows=4999 width=1257) (actual time=222.476..10692.703 rows=454546 loops=1)
   Output: data
   Recheck Cond: (json_tables.data @> '{"name": "AC3 Case Red"}'::jsonb)
   Rows Removed by Index Recheck: 2114606
   Heap Blocks: exact=31624 lossy=422922
   Buffers: shared hit=1371 read=455551
   ->  Bitmap Index Scan on json_tables_idx  (cost=0.00..101.49 rows=4999 width=0) (actual time=214.736..214.736 rows=454547 loops=1)
         Index Cond: (json_tables.data @> '{"name": "AC3 Case Red"}'::jsonb)
         Buffers: shared hit=1371 read=1005
 Planning time: 0.089 ms
 Execution time: 10767.739 ms
(11 rows)
```

But I see almost the same execution time in the mongodb `explain` (216075 ms for pg vs 177784 ms for mongo, which isn't that big a difference, I think):

```
DBQuery.shellBatchSize = 10000000000; db.json_tables.find({"name": "AC3 Case Red"}).explain(true)
{
        "queryPlanner" : {
                "plannerVersion" : 1,
                "namespace" : "benchmark.json_tables",
                "indexFilterSet" : false,
                "parsedQuery" : {
                        "name" : {
                                "$eq" : "AC3 Case Red"
                        }
                },
                "winningPlan" : {
                        "stage" : "FETCH",
                        "inputStage" : {
                                "stage" : "IXSCAN",
                                "keyPattern" : {
                                        "name" : 1
                                },
                                "indexName" : "name_1",
                                "isMultiKey" : false,
                                "isUnique" : false,
                                "isSparse" : false,
                                "isPartial" : false,
                                "indexVersion" : 1,
                                "direction" : "forward",
                                "indexBounds" : {
                                        "name" : [
                                                "[\"AC3 Case Red\", \"AC3 Case Red\"]"
                                        ]
                                }
                        }
                },
                "rejectedPlans" : [ ]
        },
       "executionStats" : {
                "executionSuccess" : true,
                "nReturned" : 454546,
                "executionTimeMillis" : 177784,
                "totalKeysExamined" : 454546,
                "totalDocsExamined" : 454546,
                "executionStages" : {
                        "stage" : "FETCH",
                        "nReturned" : 454546,
                        "executionTimeMillisEstimate" : 175590,
                        "works" : 454547,
                        "advanced" : 454546,
                        "needTime" : 0,
                        "needYield" : 0,
                        "saveState" : 8638,
                        "restoreState" : 8638,
                        "isEOF" : 1,
                        "invalidates" : 0,
                        "docsExamined" : 454546,
                        "alreadyHasObj" : 0,
                        "inputStage" : {
                                "stage" : "IXSCAN",
                                "nReturned" : 454546,
                                "executionTimeMillisEstimate" : 700,
                                "works" : 454547,
                                "advanced" : 454546,
                                "needTime" : 0,
                                "needYield" : 0,
                                "saveState" : 8638,
                                "restoreState" : 8638,
                                "isEOF" : 1,
                                "invalidates" : 0,
                                "keyPattern" : {
                                        "name" : 1
                                },
                                "indexName" : "name_1",
                                "isMultiKey" : false,
                                "isUnique" : false,
                                "isSparse" : false,
                                "isPartial" : false,
                                "indexVersion" : 1,
                                "direction" : "forward",
                                "indexBounds" : {
                                        "name" : [
                                                "[\"AC3 Case Red\", \"AC3 Case Red\"]"
                                        ]
                                },
                                "keysExamined" : 454546,
                                "dupsTested" : 0,
                                "dupsDropped" : 0,
                                "seenInvalidated" : 0
                        }
                },
                "allPlansExecution" : [ ]
        },
        "serverInfo" : {
                "host" : "ip-172-30-0-236",
                "port" : 27017,
                "version" : "3.2.4",
                "gitVersion" : "e2ee9ffcf9f5a94fad76802e28cc978718bb7a30"
        },
        "ok" : 1
}
```

I haven't missed anything, have I? Are you sure that it took much more time for PostgreSQL?
Besides, everything is fine for queries with smaller results (while the query {"name": "AC3 Case Red"} matches almost 1/10 of the entire dataset):

```
=# insert into json_tables values('{"name": "test name"}'::jsonb);

=# explain (buffers, analyze, verbose) select data from json_tables where data @> '{"name": "test name"}';                                                   
                                                          QUERY PLAN                                                           
-------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on public.json_tables  (cost=62.75..18965.16 rows=5000 width=1257) (actual time=0.020..0.021 rows=1 loops=1)
   Output: data
   Recheck Cond: (json_tables.data @> '{"name": "test name"}'::jsonb)
   Heap Blocks: exact=1
   Buffers: shared hit=5
   ->  Bitmap Index Scan on json_tables_idx  (cost=0.00..61.50 rows=5000 width=0) (actual time=0.011..0.011 rows=1 loops=1)
         Index Cond: (json_tables.data @> '{"name": "test name"}'::jsonb)
         Buffers: shared hit=4
 Planning time: 1.164 ms
 Execution time: 0.045 ms
(10 rows)
```

As far as I know there isn't much to do about caching. I don't know if it's appropriate, but you can manually warm up the cache (something like `cat /var/lib/postgresql/9.5/main/base/*/* > /dev/null`).

On 14 March 2016 at 00:30, Oleg Bartunov <obartunov@gmail.com> wrote:


On Mar 11, 2016 4:40 PM, "Paul Jones" <pbj@cmicdo.com> wrote:
>
> I have been running the EDB benchmark that compares Postgres and MongoDB.
> I believe EDB ran it against PG 9.4 and Mongo 2.6.  I am running it
> against PG 9.5.1 and Mongo 3.2 with WiredTiger storage using 10,000,000
> JSON records generated by the benchmark.  It looks like Mongo is winning,
> and apparently because of its cache management.

Dmitry was working on the same benchmarks. I think the EDB benchmark is broken by design; better to use the YCSB benchmarks. I hope Dmitry will share his results.

>
> The first queries on both run in ~30 min.  And, once PG fills its cache,
> it whips Mongo on repeats of the *same* query (vmstat shows no disk
> reads for PG).
>
> However, when a different query on the same table is issued to both,
> vmstat shows that PG has to read the *entire* table again, and it takes
> ~30 min.  Mongo does a lot of reads initially but after about 5 minutes,
> it stops reading and completes the query, most likely because it is
> using its cache very effectively.
>
> Host:   Virtual Machine
>         4 CPUs
>         16 GB RAM
>         200 GB Disk
>         RHEL 6.6
>
> PG:     9.5.1 compiled from source
>         shared_buffers = 7GB
>         effective_cache_size = 12GB
>
> Mongo:  3.2 installed with RPM from Mongo
>
> In PG, I created the table by:
>
> CREATE TABLE json_tables
> (
>         data    JSONB
> );
>
> After loading, it creates the index:
>
> CREATE INDEX json_tables_idx ON json_tables USING GIN (data jsonb_path_ops);
>
> After a lot of experimentation, I discovered that the benchmark was not
> using PG's index, so I modified the four queries to be:
>
> SELECT data FROM json_tables WHERE data @> '{"brand": "ACME"}';
> SELECT data FROM json_tables WHERE data @> '{"name": "Phone Service Basic Plan"}';
> SELECT data FROM json_tables WHERE data @> '{"name": "AC3 Case Red"}';
> SELECT data FROM json_tables WHERE data @> '{"type": "service"}';
>
> Here are two consecutive explain analyze for PG, for the same query.
> No functional difference in the plans that I can tell, but the effect
> of PG's cache on the second is dramatic.
>
> If anyone has ideas on how I can get PG to more effectively use the cache
> for subsequent queries, I would love to hear them.
>
> -------
>
> benchmark=# explain analyze select data from json_tables where data @> '{"name": "AC3 Case Red"}';
>
>                                                                QUERY PLAN
>
>
> -----------------------------------------------------------------------------------------------------------------------------------------
>
> Bitmap Heap Scan on json_tables  (cost=113.50..37914.64 rows=10000 width=1261) (actual time=2157.118..1259550.327 rows=909091 loops=1)
>    Recheck Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
>    Rows Removed by Index Recheck: 4360296
>    Heap Blocks: exact=37031 lossy=872059
>    ->  Bitmap Index Scan on json_tables_idx  (cost=0.00..111.00 rows=10000 width=0) (actual time=2141.250..2141.250 rows=909091 loops=1)
>          Index Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
> Planning time: 291.932 ms
> Execution time: 1259886.920 ms
> (8 rows)
>
> Time: 1261191.844 ms
>
> benchmark=# explain analyze select data from json_tables where data @> '{"name": "AC3 Case Red"}';
>                                                               QUERY PLAN
>
> ---------------------------------------------------------------------------------------------------------------------------------------
>
> Bitmap Heap Scan on json_tables  (cost=113.50..37914.64 rows=10000 width=1261) (actual time=779.261..29815.262 rows=909091 loops=1)
>    Recheck Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
>    Rows Removed by Index Recheck: 4360296
>    Heap Blocks: exact=37031 lossy=872059
>    ->  Bitmap Index Scan on json_tables_idx  (cost=0.00..111.00 rows=10000 width=0) (actual time=769.081..769.081 rows=909091 loops=1)
>          Index Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
> Planning time: 33.967 ms
> Execution time: 29869.381 ms
>
> (8 rows)
>
> Time: 29987.122 ms
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general


Re: MongoDB 3.2 beating Postgres 9.5.1?

From: Michael Paquier
Date:
On Mon, Mar 14, 2016 at 8:31 AM, Dmitry Dolgov wrote:
> As far as I know there isn't much to do about caching. I don't know if it's
> appropriate, but you can manually warm-up the cache (something like `cat
> /var/lib/postgresql/9.5/main/base/*/* > /dev/null`).

pg_prewarm may help as well. It has the advantage of not relying on
oid2name or similar for selecting the relations.
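For instance, a minimal sketch (assuming the table and index names from earlier in this thread):

```
-- Sketch: preload the benchmark table and its GIN index into shared_buffers.
CREATE EXTENSION IF NOT EXISTS pg_prewarm;

SELECT pg_prewarm('json_tables');      -- the heap
SELECT pg_prewarm('json_tables_idx');  -- the GIN index
```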
--
Michael


Re: MongoDB 3.2 beating Postgres 9.5.1?

From:
Date:
Your results are close enough to mine, I think, to prove the point. And I agree that the EDB benchmark is not necessarily reflective of a real-world scenario.

However, the cache I'm referring to is PG's shared_buffers cache. You can see the first run of the select causing a lot of disk reads; the second identical run reads purely from shared_buffers.

What I don't understand is: why does a slightly different select from the *same* table during the same session cause shared_buffers to be blown out and re-read?
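One way to watch this directly (a sketch, assuming the pg_buffercache contrib module is available) is to count the table's buffers before and after each query:

```
-- Sketch: how much of json_tables currently sits in shared_buffers.
-- (Strictly one should also filter on reldatabase; omitted for brevity.)
CREATE EXTENSION IF NOT EXISTS pg_buffercache;

SELECT count(*) AS table_buffers
FROM pg_buffercache
WHERE relfilenode = pg_relation_filenode('json_tables');
```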

I will see if I can try YCSB next week (I'm in workshops all week...)

Thanks!


On Monday, March 14, 2016 3:34 AM, Dmitry Dolgov <9erthalion6@gmail.com> wrote:


Hi, Paul

I agree with Oleg, the EDB benchmarks are strange sometimes. I ran the same benchmarks several months ago and never noticed the cache influence back then, so I tried to reproduce your situation now (although on 5*10^6 records). I started to play with the OS page cache (using `echo 3 > /proc/sys/vm/drop_caches`), and I do see a difference in execution time for two subsequent queries, but the `explain` info is almost identical, e.g. `shared hit & read`:

```
benchmark=# explain (buffers, analyze, verbose) select data from json_tables where data @> '{"name": "AC3 Case Red"}';
                                                                 QUERY PLAN                                                                 
--------------------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on public.json_tables  (cost=102.74..19001.47 rows=4999 width=1257) (actual time=740.556..215956.655 rows=454546 loops=1)
   Output: data
   Recheck Cond: (json_tables.data @> '{"name": "AC3 Case Red"}'::jsonb)
   Rows Removed by Index Recheck: 2114606
   Heap Blocks: exact=31624 lossy=422922
   Buffers: shared hit=1371 read=455551
   ->  Bitmap Index Scan on json_tables_idx  (cost=0.00..101.49 rows=4999 width=0) (actual time=731.010..731.010 rows=454547 loops=1)
         Index Cond: (json_tables.data @> '{"name": "AC3 Case Red"}'::jsonb)
         Buffers: shared hit=1371 read=1005
 Planning time: 6.352 ms
 Execution time: 216075.830 ms
(11 rows)

benchmark=# explain (buffers, analyze, verbose) select data from json_tables where data @> '{"name": "AC3 Case Red"}';
                                                                QUERY PLAN                                                                 
-------------------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on public.json_tables  (cost=102.74..19001.47 rows=4999 width=1257) (actual time=222.476..10692.703 rows=454546 loops=1)
   Output: data
   Recheck Cond: (json_tables.data @> '{"name": "AC3 Case Red"}'::jsonb)
   Rows Removed by Index Recheck: 2114606
   Heap Blocks: exact=31624 lossy=422922
   Buffers: shared hit=1371 read=455551
   ->  Bitmap Index Scan on json_tables_idx  (cost=0.00..101.49 rows=4999 width=0) (actual time=214.736..214.736 rows=454547 loops=1)
         Index Cond: (json_tables.data @> '{"name": "AC3 Case Red"}'::jsonb)
         Buffers: shared hit=1371 read=1005
 Planning time: 0.089 ms
 Execution time: 10767.739 ms
(11 rows)
```

But I see almost the same execution time in the mongodb `explain` (216075 ms for pg vs 177784 ms for mongo, which isn't that big a difference, I think):

```
DBQuery.shellBatchSize = 10000000000; db.json_tables.find({"name": "AC3 Case Red"}).explain(true)
{
        "queryPlanner" : {
                "plannerVersion" : 1,
                "namespace" : "benchmark.json_tables",
                "indexFilterSet" : false,
                "parsedQuery" : {
                        "name" : {
                                "$eq" : "AC3 Case Red"
                        }
                },
                "winningPlan" : {
                        "stage" : "FETCH",
                        "inputStage" : {
                                "stage" : "IXSCAN",
                                "keyPattern" : {
                                        "name" : 1
                                },
                                "indexName" : "name_1",
                                "isMultiKey" : false,
                                "isUnique" : false,
                                "isSparse" : false,
                                "isPartial" : false,
                                "indexVersion" : 1,
                                "direction" : "forward",
                                "indexBounds" : {
                                        "name" : [
                                                "[\"AC3 Case Red\", \"AC3 Case Red\"]"
                                        ]
                                }
                        }
                },
                "rejectedPlans" : [ ]
        },
       "executionStats" : {
                "executionSuccess" : true,
                "nReturned" : 454546,
                "executionTimeMillis" : 177784,
                "totalKeysExamined" : 454546,
                "totalDocsExamined" : 454546,
                "executionStages" : {
                        "stage" : "FETCH",
                        "nReturned" : 454546,
                        "executionTimeMillisEstimate" : 175590,
                        "works" : 454547,
                        "advanced" : 454546,
                        "needTime" : 0,
                        "needYield" : 0,
                        "saveState" : 8638,
                        "restoreState" : 8638,
                        "isEOF" : 1,
                        "invalidates" : 0,
                        "docsExamined" : 454546,
                        "alreadyHasObj" : 0,
                        "inputStage" : {
                                "stage" : "IXSCAN",
                                "nReturned" : 454546,
                                "executionTimeMillisEstimate" : 700,
                                "works" : 454547,
                                "advanced" : 454546,
                                "needTime" : 0,
                                "needYield" : 0,
                                "saveState" : 8638,
                                "restoreState" : 8638,
                                "isEOF" : 1,
                                "invalidates" : 0,
                                "keyPattern" : {
                                        "name" : 1
                                },
                                "indexName" : "name_1",
                                "isMultiKey" : false,
                                "isUnique" : false,
                                "isSparse" : false,
                                "isPartial" : false,
                                "indexVersion" : 1,
                                "direction" : "forward",
                                "indexBounds" : {
                                        "name" : [
                                                "[\"AC3 Case Red\", \"AC3 Case Red\"]"
                                        ]
                                },
                                "keysExamined" : 454546,
                                "dupsTested" : 0,
                                "dupsDropped" : 0,
                                "seenInvalidated" : 0
                        }
                },
                "allPlansExecution" : [ ]
        },
        "serverInfo" : {
                "host" : "ip-172-30-0-236",
                "port" : 27017,
                "version" : "3.2.4",
                "gitVersion" : "e2ee9ffcf9f5a94fad76802e28cc978718bb7a30"
        },
        "ok" : 1
}
```

I haven't missed anything, have I? Are you sure that it took much more time for PostgreSQL?
Besides, everything is fine for queries with smaller results (while the query {"name": "AC3 Case Red"} matches almost 1/10 of the entire dataset):

```
=# insert into json_tables values('{"name": "test name"}'::jsonb);

=# explain (buffers, analyze, verbose) select data from json_tables where data @> '{"name": "test name"}';                                                   
                                                          QUERY PLAN                                                           
-------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on public.json_tables  (cost=62.75..18965.16 rows=5000 width=1257) (actual time=0.020..0.021 rows=1 loops=1)
   Output: data
   Recheck Cond: (json_tables.data @> '{"name": "test name"}'::jsonb)
   Heap Blocks: exact=1
   Buffers: shared hit=5
   ->  Bitmap Index Scan on json_tables_idx  (cost=0.00..61.50 rows=5000 width=0) (actual time=0.011..0.011 rows=1 loops=1)
         Index Cond: (json_tables.data @> '{"name": "test name"}'::jsonb)
         Buffers: shared hit=4
 Planning time: 1.164 ms
 Execution time: 0.045 ms
(10 rows)
```

As far as I know there isn't much to do about caching. I don't know if it's appropriate, but you can manually warm up the cache (something like `cat /var/lib/postgresql/9.5/main/base/*/* > /dev/null`).

On 14 March 2016 at 00:30, Oleg Bartunov <obartunov@gmail.com> wrote:

On Mar 11, 2016 4:40 PM, "Paul Jones" <pbj@cmicdo.com> wrote:
>
> I have been running the EDB benchmark that compares Postgres and MongoDB.
> I believe EDB ran it against PG 9.4 and Mongo 2.6.  I am running it
> against PG 9.5.1 and Mongo 3.2 with WiredTiger storage using 10,000,000
> JSON records generated by the benchmark.  It looks like Mongo is winning,
> and apparently because of its cache management.
Dmitry was working on the same benchmarks. I think the EDB benchmark is broken by design; better to use the YCSB benchmarks. I hope Dmitry will share his results.
>
> The first queries on both run in ~30 min.  And, once PG fills its cache,
> it whips Mongo on repeats of the *same* query (vmstat shows no disk
> reads for PG).
>
> However, when a different query on the same table is issued to both,
> vmstat shows that PG has to read the *entire* table again, and it takes
> ~30 min.  Mongo does a lot of reads initially but after about 5 minutes,
> it stops reading and completes the query, most likely because it is
> using its cache very effectively.
>
> Host:   Virtual Machine
>         4 CPUs
>         16 GB RAM
>         200 GB Disk
>         RHEL 6.6
>
> PG:     9.5.1 compiled from source
>         shared_buffers = 7GB
>         effective_cache_size = 12GB
>
> Mongo:  3.2 installed with RPM from Mongo
>
> In PG, I created the table by:
>
> CREATE TABLE json_tables
> (
>         data    JSONB
> );
>
> After loading, it creates the index:
>
> CREATE INDEX json_tables_idx ON json_tables USING GIN (data jsonb_path_ops);
>
> After a lot of experimentation, I discovered that the benchmark was not
> using PG's index, so I modified the four queries to be:
>
> SELECT data FROM json_tables WHERE data @> '{"brand": "ACME"}';
> SELECT data FROM json_tables WHERE data @> '{"name": "Phone Service Basic Plan"}';
> SELECT data FROM json_tables WHERE data @> '{"name": "AC3 Case Red"}';
> SELECT data FROM json_tables WHERE data @> '{"type": "service"}';
>
> Here are two consecutive explain analyze for PG, for the same query.
> No functional difference in the plans that I can tell, but the effect
> of PG's cache on the second is dramatic.
>
> If anyone has ideas on how I can get PG to more effectively use the cache
> for subsequent queries, I would love to hear them.
>
> -------
>
> benchmark=# explain analyze select data from json_tables where data @> '{"name": "AC3 Case Red"}';
>
>                                                                QUERY PLAN
>
>
> -----------------------------------------------------------------------------------------------------------------------------------------
>
> Bitmap Heap Scan on json_tables  (cost=113.50..37914.64 rows=10000 width=1261) (actual time=2157.118..1259550.327 rows=909091 loops=1)
>    Recheck Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
>    Rows Removed by Index Recheck: 4360296
>    Heap Blocks: exact=37031 lossy=872059
>    ->  Bitmap Index Scan on json_tables_idx  (cost=0.00..111.00 rows=10000 width=0) (actual time=2141.250..2141.250 rows=909091 loops=1)
>          Index Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
> Planning time: 291.932 ms
> Execution time: 1259886.920 ms
> (8 rows)
>
> Time: 1261191.844 ms
>
> benchmark=# explain analyze select data from json_tables where data @> '{"name": "AC3 Case Red"}';
>                                                               QUERY PLAN
>
> ---------------------------------------------------------------------------------------------------------------------------------------
>
> Bitmap Heap Scan on json_tables  (cost=113.50..37914.64 rows=10000 width=1261) (actual time=779.261..29815.262 rows=909091 loops=1)
>    Recheck Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
>    Rows Removed by Index Recheck: 4360296
>    Heap Blocks: exact=37031 lossy=872059
>    ->  Bitmap Index Scan on json_tables_idx  (cost=0.00..111.00 rows=10000 width=0) (actual time=769.081..769.081 rows=909091 loops=1)
>          Index Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
> Planning time: 33.967 ms
> Execution time: 29869.381 ms
>
> (8 rows)
>
> Time: 29987.122 ms
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general



Re: MongoDB 3.2 beating Postgres 9.5.1?

From:
Date:


On Tuesday, March 15, 2016 7:39 PM, "pbj@cmicdo.com" <pbj@cmicdo.com> wrote:

 > Your results are close enough to mine, I think, to prove the point.
 > And, I agree that the EDB benchmark is not necessarily reflective of a
 > real-world scenario.
 >
 > However, the cache I'm referring to is PG's shared_buffer cache.
 > You can see the first run of the select causing a lot of disk reads.
 > The second identical run, reads purely from shared_buffers.
 >
 > What I don't understand is, why does a slightly different select from
 > the *same* table during the same session cause shared_buffers to be
 > blown out and re-read??
 >
 > I will see if I can try YCSB next week (I'm in workshops all week...)
 >
 > Thanks!

I was able to try YCSB today on both PG 9.5.1 and Mongo 3.2.  At first, PG
was running 4 times slower than Mongo.  Then I remembered unlogged
tables (which, I think, is effectively how Mongo runs all the time), and remade
the PG table as UNLOGGED.  In a 50/50 read/update test over 1M records,
PG ran in 0.62 of the time of Mongo.
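Here is a minimal sketch of that change (the layout assumes YCSB's default JDBC schema; `usertable` and the field names are YCSB defaults, not something specific to my setup):

```
-- Sketch: recreate the YCSB table as UNLOGGED so its writes skip WAL.
CREATE UNLOGGED TABLE usertable (
    ycsb_key VARCHAR(255) PRIMARY KEY,
    field0 TEXT, field1 TEXT, field2 TEXT, field3 TEXT, field4 TEXT,
    field5 TEXT, field6 TEXT, field7 TEXT, field8 TEXT, field9 TEXT
);

-- Or, new in 9.5, switch an existing table in place:
ALTER TABLE usertable SET UNLOGGED;
```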

PG Load:
--------
[OVERALL], RunTime(ms), 104507.0
[OVERALL], Throughput(ops/sec), 9568.737022400413
[CLEANUP], Operations, 1.0
[CLEANUP], AverageLatency(us), 293.0
[CLEANUP], MinLatency(us), 293.0
[CLEANUP], MaxLatency(us), 293.0
[CLEANUP], 95thPercentileLatency(us), 293.0
[CLEANUP], 99thPercentileLatency(us), 293.0
[INSERT], Operations, 1000000.0
[INSERT], AverageLatency(us), 101.329235
[INSERT], MinLatency(us), 88.0
[INSERT], MaxLatency(us), 252543.0
[INSERT], 95thPercentileLatency(us), 121.0
[INSERT], 99thPercentileLatency(us), 141.0
[INSERT], Return=OK, 1000000
 
PG Run:
-------
[OVERALL], RunTime(ms), 92763.0
[OVERALL], Throughput(ops/sec), 10780.16019318047
[READ], Operations, 499922.0
[READ], AverageLatency(us), 79.1722428698877
[READ], MinLatency(us), 69.0
[READ], MaxLatency(us), 19935.0
[READ], 95thPercentileLatency(us), 94.0
[READ], 99thPercentileLatency(us), 112.0
[READ], Return=OK, 499922
[CLEANUP], Operations, 1.0
[CLEANUP], AverageLatency(us), 222.0
[CLEANUP], MinLatency(us), 222.0
[CLEANUP], MaxLatency(us), 222.0
[CLEANUP], 95thPercentileLatency(us), 222.0
[CLEANUP], 99thPercentileLatency(us), 222.0
[UPDATE], Operations, 500078.0
[UPDATE], AverageLatency(us), 98.96430156895525
[UPDATE], MinLatency(us), 83.0
[UPDATE], MaxLatency(us), 26655.0
[UPDATE], 95thPercentileLatency(us), 127.0
[UPDATE], 99thPercentileLatency(us), 158.0
[UPDATE], Return=OK, 500078
 
Mongo Load:
-----------
[OVERALL], RunTime(ms), 133308.0
[OVERALL], Throughput(ops/sec), 7501.425270801452
[CLEANUP], Operations, 1.0
[CLEANUP], AverageLatency(us), 1822.0
[CLEANUP], MinLatency(us), 1822.0
[CLEANUP], MaxLatency(us), 1822.0
[CLEANUP], 95thPercentileLatency(us), 1822.0
[CLEANUP], 99thPercentileLatency(us), 1822.0
[INSERT], Operations, 1000000.0
[INSERT], AverageLatency(us), 130.830678
[INSERT], MinLatency(us), 90.0
[INSERT], MaxLatency(us), 7147519.0
[INSERT], 95thPercentileLatency(us), 159.0
[INSERT], 99thPercentileLatency(us), 226.0
[INSERT], Return=OK, 1000000
 
Mongo Run:
---------
[OVERALL], RunTime(ms), 149150.0
[OVERALL], Throughput(ops/sec), 6704.65973851827
[READ], Operations, 500837.0
[READ], AverageLatency(us), 98.13153980237084
[READ], MinLatency(us), 69.0
[READ], MaxLatency(us), 28271.0
[READ], 95thPercentileLatency(us), 166.0
[READ], 99thPercentileLatency(us), 186.0
[READ], Return=OK, 500837
[CLEANUP], Operations, 1.0
[CLEANUP], AverageLatency(us), 2387.0
[CLEANUP], MinLatency(us), 2386.0
[CLEANUP], MaxLatency(us), 2387.0
[CLEANUP], 95thPercentileLatency(us), 2387.0
[CLEANUP], 99thPercentileLatency(us), 2387.0
[UPDATE], Operations, 499163.0
[UPDATE], AverageLatency(us), 195.21505600375028
[UPDATE], MinLatency(us), 118.0
[UPDATE], MaxLatency(us), 4513791.0
[UPDATE], 95thPercentileLatency(us), 211.0
[UPDATE], 99thPercentileLatency(us), 252.0
[UPDATE], Return=OK, 499163


 >
 >
 > On Monday, March 14, 2016 3:34 AM, Dmitry Dolgov <9erthalion6@gmail.com> wrote:
 >
 >
 > Hi, Paul
 >
 > I agree with Oleg, the EDB benchmarks are strange sometimes. I ran the same benchmarks several months ago and never noticed the cache influence back then, so I tried to reproduce your situation now (although on 5*10^6 records). I started to play with the OS page cache (using `echo 3 > /proc/sys/vm/drop_caches`), and I do see a difference in execution time for two subsequent queries, but the `explain` info is almost identical, e.g. `shared hit & read`:
 >
 > ....


Re: MongoDB 3.2 beating Postgres 9.5.1?

From: Kisung Kim
Date:
Hi,
I recently tested the YCSB benchmark too.
But contrary to my expectation, PG (9.5) is slower than MongoDB 3.2.
Paul said that making the table unlogged improved the performance,
and that it might be equivalent to MongoDB's behavior.
But according to the MongoDB documentation, MongoDB writes a journal log too.
So I think turning logging off is not fair.
Am I wrong about MongoDB's behavior?


(C)Bitnine, Kisung Kim, Ph.D
https://sites.google.com/site/kisungresearch/
E-mail : kskim@bitnine.net
Office phone : 070-4800-5890, 408-606-8602
US Mobile phone : 408-805-2192


2016-03-19 5:05 GMT+09:00 <pbj@cmicdo.com>:


On Tuesday, March 15, 2016 7:39 PM, "pbj@cmicdo.com" <pbj@cmicdo.com> wrote:

 > Your results are close enough to mine, I think, to prove the point.
 > And, I agree that the EDB benchmark is not necessarily reflective of a
 > real-world scenario.
 >
 > However, the cache I'm referring to is PG's shared_buffer cache.
 > You can see the first run of the select causing a lot of disk reads.
 > The second identical run, reads purely from shared_buffers.
 >
 > What I don't understand is, why does a slightly different select from
 > the *same* table during the same session cause shared_buffers to be
 > blown out and re-read??
 >
 > I will see if I can try YCSB next week (I'm in workshops all week...)
 >
 > Thanks!

I was able to try YCSB today on both PG 9.5.1 and Mongo 3.2.  At first, PG
was running 4 times slower than Mongo.  Then I remembered unlogged
tables (which, I think, is effectively how Mongo runs all the time), and remade
the PG table as UNLOGGED.  In a 50/50 read/update test over 1M records,
PG ran in 0.62 of the time of Mongo.

PG Load:
--------
[OVERALL], RunTime(ms), 104507.0
[OVERALL], Throughput(ops/sec), 9568.737022400413
[CLEANUP], Operations, 1.0
[CLEANUP], AverageLatency(us), 293.0
[CLEANUP], MinLatency(us), 293.0
[CLEANUP], MaxLatency(us), 293.0
[CLEANUP], 95thPercentileLatency(us), 293.0
[CLEANUP], 99thPercentileLatency(us), 293.0
[INSERT], Operations, 1000000.0
[INSERT], AverageLatency(us), 101.329235
[INSERT], MinLatency(us), 88.0
[INSERT], MaxLatency(us), 252543.0
[INSERT], 95thPercentileLatency(us), 121.0
[INSERT], 99thPercentileLatency(us), 141.0
[INSERT], Return=OK, 1000000
 
PG Run:
-------
[OVERALL], RunTime(ms), 92763.0
[OVERALL], Throughput(ops/sec), 10780.16019318047
[READ], Operations, 499922.0
[READ], AverageLatency(us), 79.1722428698877
[READ], MinLatency(us), 69.0
[READ], MaxLatency(us), 19935.0
[READ], 95thPercentileLatency(us), 94.0
[READ], 99thPercentileLatency(us), 112.0
[READ], Return=OK, 499922
[CLEANUP], Operations, 1.0
[CLEANUP], AverageLatency(us), 222.0
[CLEANUP], MinLatency(us), 222.0
[CLEANUP], MaxLatency(us), 222.0
[CLEANUP], 95thPercentileLatency(us), 222.0
[CLEANUP], 99thPercentileLatency(us), 222.0
[UPDATE], Operations, 500078.0
[UPDATE], AverageLatency(us), 98.96430156895525
[UPDATE], MinLatency(us), 83.0
[UPDATE], MaxLatency(us), 26655.0
[UPDATE], 95thPercentileLatency(us), 127.0
[UPDATE], 99thPercentileLatency(us), 158.0
[UPDATE], Return=OK, 500078
 
Mongo Load:
-----------
[OVERALL], RunTime(ms), 133308.0
[OVERALL], Throughput(ops/sec), 7501.425270801452
[CLEANUP], Operations, 1.0
[CLEANUP], AverageLatency(us), 1822.0
[CLEANUP], MinLatency(us), 1822.0
[CLEANUP], MaxLatency(us), 1822.0
[CLEANUP], 95thPercentileLatency(us), 1822.0
[CLEANUP], 99thPercentileLatency(us), 1822.0
[INSERT], Operations, 1000000.0
[INSERT], AverageLatency(us), 130.830678
[INSERT], MinLatency(us), 90.0
[INSERT], MaxLatency(us), 7147519.0
[INSERT], 95thPercentileLatency(us), 159.0
[INSERT], 99thPercentileLatency(us), 226.0
[INSERT], Return=OK, 1000000
 
Mongo Run:
---------
[OVERALL], RunTime(ms), 149150.0
[OVERALL], Throughput(ops/sec), 6704.65973851827
[READ], Operations, 500837.0
[READ], AverageLatency(us), 98.13153980237084
[READ], MinLatency(us), 69.0
[READ], MaxLatency(us), 28271.0
[READ], 95thPercentileLatency(us), 166.0
[READ], 99thPercentileLatency(us), 186.0
[READ], Return=OK, 500837
[CLEANUP], Operations, 1.0
[CLEANUP], AverageLatency(us), 2387.0
[CLEANUP], MinLatency(us), 2386.0
[CLEANUP], MaxLatency(us), 2387.0
[CLEANUP], 95thPercentileLatency(us), 2387.0
[CLEANUP], 99thPercentileLatency(us), 2387.0
[UPDATE], Operations, 499163.0
[UPDATE], AverageLatency(us), 195.21505600375028
[UPDATE], MinLatency(us), 118.0
[UPDATE], MaxLatency(us), 4513791.0
[UPDATE], 95thPercentileLatency(us), 211.0
[UPDATE], 99thPercentileLatency(us), 252.0
[UPDATE], Return=OK, 499163


 >
 >
 > On Monday, March 14, 2016 3:34 AM, Dmitry Dolgov <9erthalion6@gmail.com> wrote:
 >
 >
 > Hi, Paul
 >
 > I agree with Oleg, the EDB benchmarks are strange sometimes. I ran the same benchmarks several months ago and never noticed the cache influence back then, so I tried to reproduce your situation now (although on 5*10^6 records). I started to play with the OS page cache (using `echo 3 > /proc/sys/vm/drop_caches`), and I do see a difference in execution time for two subsequent queries, but the `explain` info is almost identical, e.g. `shared hit & read`:
 >
 > ....



Re: MongoDB 3.2 beating Postgres 9.5.1?

From: Sameer Kumar
Date:


On Fri, 11 Mar 2016, 9:39 p.m. Paul Jones, <pbj@cmicdo.com> wrote:
I have been running the EDB benchmark that compares Postgres and MongoDB.
I believe EDB ran it against PG 9.4 and Mongo 2.6.  I am running it
against PG 9.5.1 and Mongo 3.2 with WiredTiger storage using 10,000,000
JSON records generated by the benchmark.  It looks like Mongo is winning,
and apparently because of its cache management.

The first queries on both run in ~30 min.  And, once PG fills its cache,
it whips Mongo on repeats of the *same* query (vmstat shows no disk
reads for PG).

However, when a different query on the same table is issued to both,
vmstat shows that PG has to read the *entire* table again, and it takes
~30 min.  Mongo does a lot of reads initially but after about 5 minutes,
it stops reading and completes the query, most likely because it is
using its cache very effectively.

Host:   Virtual Machine
        4 CPUs
        16 GB RAM
        200 GB Disk
        RHEL 6.6

PG:     9.5.1 compiled from source
        shared_buffers = 7GB
        effective_cache_size = 12GB

Mongo:  3.2 installed with RPM from Mongo

In PG, I created the table by:

CREATE TABLE json_tables
(
        data    JSONB
);

After loading, it creates the index:

CREATE INDEX json_tables_idx ON json_tables USING GIN (data jsonb_path_ops);

This creates one GIN index over the whole document, which is going to be a bit larger than a usual btree/n-tree index on a specific JSON field, and slower too. I suggest that you create an index on the specific expression using the JSON operators; in my opinion that index would be much nearer to MongoDB's indexes.
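A sketch of that alternative (the index names are made up for illustration):

```
-- Sketch: btree expression indexes on the specific keys the benchmark
-- queries, closer to MongoDB's single-field indexes.
CREATE INDEX json_tables_name_idx  ON json_tables ((data->>'name'));
CREATE INDEX json_tables_brand_idx ON json_tables ((data->>'brand'));

-- The queries must then use the same expression to hit the index, e.g.:
SELECT data FROM json_tables WHERE data->>'name' = 'AC3 Case Red';
```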



After a lot of experimentation, I discovered that the benchmark was not
using PG's index, so I modified the four queries to be:

SELECT data FROM json_tables WHERE data @> '{"brand": "ACME"}';
SELECT data FROM json_tables WHERE data @> '{"name": "Phone Service Basic Plan"}';
SELECT data FROM json_tables WHERE data @> '{"name": "AC3 Case Red"}';
SELECT data FROM json_tables WHERE data @> '{"type": "service"}';

Here are two consecutive explain analyze for PG, for the same query.
No functional difference in the plans that I can tell, but the effect
of PG's cache on the second is dramatic.

If anyone has ideas on how I can get PG to more effectively use the cache
for subsequent queries, I would love to hear them.

-------

benchmark=# explain analyze select data from json_tables where data @> '{"name": "AC3 Case Red"}';

                                                               QUERY PLAN


-----------------------------------------------------------------------------------------------------------------------------------------

Bitmap Heap Scan on json_tables  (cost=113.50..37914.64 rows=10000 width=1261) (actual time=2157.118..1259550.327 rows=909091 loops=1)
   Recheck Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
   Rows Removed by Index Recheck: 4360296
   Heap Blocks: exact=37031 lossy=872059
   ->  Bitmap Index Scan on json_tables_idx  (cost=0.00..111.00 rows=10000 width=0) (actual time=2141.250..2141.250 rows=909091 loops=1)
         Index Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
Planning time: 291.932 ms
Execution time: 1259886.920 ms
(8 rows)

Time: 1261191.844 ms

benchmark=# explain analyze select data from json_tables where data @> '{"name": "AC3 Case Red"}';
                                                              QUERY PLAN

---------------------------------------------------------------------------------------------------------------------------------------

Bitmap Heap Scan on json_tables  (cost=113.50..37914.64 rows=10000 width=1261) (actual time=779.261..29815.262 rows=909091 loops=1)
   Recheck Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
   Rows Removed by Index Recheck: 4360296
   Heap Blocks: exact=37031 lossy=872059
   ->  Bitmap Index Scan on json_tables_idx  (cost=0.00..111.00 rows=10000 width=0) (actual time=769.081..769.081 rows=909091 loops=1)
         Index Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
Planning time: 33.967 ms
Execution time: 29869.381 ms

(8 rows)

Time: 29987.122 ms


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
--
--
Best Regards
Sameer Kumar | DB Solution Architect 
ASHNIK PTE. LTD.

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com

Re: MongoDB 3.2 beating Postgres 9.5.1?

From: Kisung Kim
Date:
Of course, I did not create a GIN index.
Maybe the problem is related to checkpointing and WAL.
I don't know how to make the comparison with MongoDB fair enough.
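One commonly suggested middle ground (just a sketch; I have not measured it) is to keep the table logged but relax commit durability, which is arguably closer to MongoDB's default write acknowledgment than an UNLOGGED table:

```
-- Sketch: keep writing WAL, but don't wait for it to flush at each commit.
-- A crash may lose the last few transactions but cannot corrupt the table.
ALTER SYSTEM SET synchronous_commit = off;
SELECT pg_reload_conf();
```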



(C)Bitnine, Kisung Kim, Ph.D
https://sites.google.com/site/kisungresearch/
E-mail : kskim@bitnine.net
Office phone : 070-4800-5890, 408-606-8602
US Mobile phone : 408-805-2192


2016-07-19 11:23 GMT+09:00 Sameer Kumar <sameer.kumar@ashnik.com>:


On Fri, 11 Mar 2016, 9:39 p.m. Paul Jones, <pbj@cmicdo.com> wrote:
I have been running the EDB benchmark that compares Postgres and MongoDB.
I believe EDB ran it against PG 9.4 and Mongo 2.6.  I am running it
against PG 9.5.1 and Mongo 3.2 with WiredTiger storage using 10,000,000
JSON records generated by the benchmark.  It looks like Mongo is winning,
and apparently because of its cache management.

The first queries on both run in ~30 min.  And, once PG fills its cache,
it whips Mongo on repeats of the *same* query (vmstat shows no disk
reads for PG).

However, when a different query on the same table is issued to both,
vmstat shows that PG has to read the *entire* table again, and it takes
~30 min.  Mongo does a lot of reads initially but after about 5 minutes,
it stops reading and completes the query, most likely because it is
using its cache very effectively.

Host:   Virtual Machine
        4 CPUs
        16 GB RAM
        200 GB Disk
        RHEL 6.6

PG:     9.5.1 compiled from source
        shared_buffers = 7GB
        effective_cache_size = 12GB

Mongo:  3.2 installed with RPM from Mongo

In PG, I created the table by:

CREATE TABLE json_tables
(
        data    JSONB
);

After loading, it creates the index:

CREATE INDEX json_tables_idx ON json_tables USING GIN (data jsonb_path_ops);

This creates one GIN index over the whole document, which is going to be a bit larger than a usual btree/n-tree index on a specific JSON field, and slower too. I suggest that you create an index on the specific expression using the JSON operators; in my opinion that index would be much nearer to MongoDB's indexes.



After a lot of experimentation, I discovered that the benchmark was not
using PG's index, so I modified the four queries to be:

SELECT data FROM json_tables WHERE data @> '{"brand": "ACME"}';
SELECT data FROM json_tables WHERE data @> '{"name": "Phone Service Basic Plan"}';
SELECT data FROM json_tables WHERE data @> '{"name": "AC3 Case Red"}';
SELECT data FROM json_tables WHERE data @> '{"type": "service"}';

Here are two consecutive explain analyze for PG, for the same query.
No functional difference in the plans that I can tell, but the effect
of PG's cache on the second is dramatic.

If anyone has ideas on how I can get PG to more effectively use the cache
for subsequent queries, I would love to hear them.

-------

benchmark=# explain analyze select data from json_tables where data @> '{"name": "AC3 Case Red"}';

                                                               QUERY PLAN


-----------------------------------------------------------------------------------------------------------------------------------------

Bitmap Heap Scan on json_tables  (cost=113.50..37914.64 rows=10000 width=1261) (actual time=2157.118..1259550.327 rows=909091 loops=1)
   Recheck Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
   Rows Removed by Index Recheck: 4360296
   Heap Blocks: exact=37031 lossy=872059
   ->  Bitmap Index Scan on json_tables_idx  (cost=0.00..111.00 rows=10000 width=0) (actual time=2141.250..2141.250 rows=909091 loops=1)
         Index Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
Planning time: 291.932 ms
Execution time: 1259886.920 ms
(8 rows)

Time: 1261191.844 ms

benchmark=# explain analyze select data from json_tables where data @> '{"name": "AC3 Case Red"}';
                                                              QUERY PLAN

---------------------------------------------------------------------------------------------------------------------------------------

Bitmap Heap Scan on json_tables  (cost=113.50..37914.64 rows=10000 width=1261) (actual time=779.261..29815.262 rows=909091 loops=1)
   Recheck Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
   Rows Removed by Index Recheck: 4360296
   Heap Blocks: exact=37031 lossy=872059
   ->  Bitmap Index Scan on json_tables_idx  (cost=0.00..111.00 rows=10000 width=0) (actual time=769.081..769.081 rows=909091 loops=1)
         Index Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
Planning time: 33.967 ms
Execution time: 29869.381 ms

(8 rows)

Time: 29987.122 ms


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
--
--
Best Regards
Sameer Kumar | DB Solution Architect 
ASHNIK PTE. LTD.

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: MongoDB 3.2 beating Postgres 9.5.1?

From: Teodor Sigaev
Date:
> CREATE INDEX json_tables_idx ON json_tables USING GIN (data jsonb_path_ops);
> Bitmap Heap Scan on json_tables  (cost=113.50..37914.64 rows=10000 width=1261)
> (actual time=2157.118..1259550.327 rows=909091 loops=1)
>     Recheck Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
>     Rows Removed by Index Recheck: 4360296
>     Heap Blocks: exact=37031 lossy=872059
Hmm, looks like work_mem is too small, because the lossy heap block count is too big.
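A quick way to test that theory (a sketch; the value is only a guess for this data size):

```
-- Sketch: raise work_mem for the session so the tid bitmap can stay exact;
-- "lossy" blocks force rechecking every row on those pages.
SET work_mem = '512MB';
EXPLAIN (ANALYZE, BUFFERS)
SELECT data FROM json_tables WHERE data @> '{"name": "AC3 Case Red"}';
```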


--
Teodor Sigaev                                   E-mail: teodor@sigaev.ru
                                                    WWW: http://www.sigaev.ru/


Re: MongoDB 3.2 beating Postgres 9.5.1?

From:
Date:


On Monday, July 18, 2016 10:14 PM, Kisung Kim <kskim@bitnine.net> wrote:


Hi,
I recently tested the YCSB benchmark too.
But contrary to my expectation, PG (9.5) is slower than MongoDB 3.2.
Paul said that making the table unlogged improved the performance,
and that it might be equivalent to MongoDB's behavior.
But according to the MongoDB documentation, MongoDB writes a journal log too.
So I think turning logging off is not fair.
Am I wrong about MongoDB's behavior?

My understanding is that, even with Mongo journaling,
it is not as reliable as Postgres.  So, I felt that using unlogged
tables leveled the playing field for Postgres.

PJ


(C)Bitnine, Kisung Kim, Ph.D
https://sites.google.com/site/kisungresearch/
E-mail : kskim@bitnine.net
Office phone : 070-4800-5890, 408-606-8602
US Mobile phone : 408-805-2192


2016-03-19 5:05 GMT+09:00 <pbj@cmicdo.com>:


On Tuesday, March 15, 2016 7:39 PM, "pbj@cmicdo.com" <pbj@cmicdo.com> wrote:

 > Your results are close enough to mine, I think, to prove the point.
 > And, I agree that the EDB benchmark is not necessarily reflective of a
 > real-world scenario.
 >
 > However, the cache I'm referring to is PG's shared_buffer cache.
 > You can see the first run of the select causing a lot of disk reads.
 > The second identical run, reads purely from shared_buffers.
 >
 > What I don't understand is, why does a slightly different select from
 > the *same* table during the same session cause shared_buffers to be
 > blown out and re-read??
 >
 > I will see if I can try YCSB next week (I'm in workshops all week...)
 >
 > Thanks!

I was able to try YCSB today on both PG 9.5.1 and Mongo 3.2.  At first, PG
was running 4 times slower than Mongo.  Then I remembered about unlogged
tables (which I think is the way Mongo is all the time.), and remade
the PG table as UNLOGGED.  In a 50/50 read/update test over 1M records,
PG ran in 0.62 of the time of Mongo.

PG Load:
--------
[OVERALL], RunTime(ms), 104507.0
[OVERALL], Throughput(ops/sec), 9568.737022400413
[CLEANUP], Operations, 1.0
[CLEANUP], AverageLatency(us), 293.0
[CLEANUP], MinLatency(us), 293.0
[CLEANUP], MaxLatency(us), 293.0
[CLEANUP], 95thPercentileLatency(us), 293.0
[CLEANUP], 99thPercentileLatency(us), 293.0
[INSERT], Operations, 1000000.0
[INSERT], AverageLatency(us), 101.329235
[INSERT], MinLatency(us), 88.0
[INSERT], MaxLatency(us), 252543.0
[INSERT], 95thPercentileLatency(us), 121.0
[INSERT], 99thPercentileLatency(us), 141.0
[INSERT], Return=OK, 1000000
 
PG Run:
-------
[OVERALL], RunTime(ms), 92763.0
[OVERALL], Throughput(ops/sec), 10780.16019318047
[READ], Operations, 499922.0
[READ], AverageLatency(us), 79.1722428698877
[READ], MinLatency(us), 69.0
[READ], MaxLatency(us), 19935.0
[READ], 95thPercentileLatency(us), 94.0
[READ], 99thPercentileLatency(us), 112.0
[READ], Return=OK, 499922
[CLEANUP], Operations, 1.0
[CLEANUP], AverageLatency(us), 222.0
[CLEANUP], MinLatency(us), 222.0
[CLEANUP], MaxLatency(us), 222.0
[CLEANUP], 95thPercentileLatency(us), 222.0
[CLEANUP], 99thPercentileLatency(us), 222.0
[UPDATE], Operations, 500078.0
[UPDATE], AverageLatency(us), 98.96430156895525
[UPDATE], MinLatency(us), 83.0
[UPDATE], MaxLatency(us), 26655.0
[UPDATE], 95thPercentileLatency(us), 127.0
[UPDATE], 99thPercentileLatency(us), 158.0
[UPDATE], Return=OK, 500078
 
Mongo Load:
-----------
[OVERALL], RunTime(ms), 133308.0
[OVERALL], Throughput(ops/sec), 7501.425270801452
[CLEANUP], Operations, 1.0
[CLEANUP], AverageLatency(us), 1822.0
[CLEANUP], MinLatency(us), 1822.0
[CLEANUP], MaxLatency(us), 1822.0
[CLEANUP], 95thPercentileLatency(us), 1822.0
[CLEANUP], 99thPercentileLatency(us), 1822.0
[INSERT], Operations, 1000000.0
[INSERT], AverageLatency(us), 130.830678
[INSERT], MinLatency(us), 90.0
[INSERT], MaxLatency(us), 7147519.0
[INSERT], 95thPercentileLatency(us), 159.0
[INSERT], 99thPercentileLatency(us), 226.0
[INSERT], Return=OK, 1000000
 
Mongo Run:
---------
[OVERALL], RunTime(ms), 149150.0
[OVERALL], Throughput(ops/sec), 6704.65973851827
[READ], Operations, 500837.0
[READ], AverageLatency(us), 98.13153980237084
[READ], MinLatency(us), 69.0
[READ], MaxLatency(us), 28271.0
[READ], 95thPercentileLatency(us), 166.0
[READ], 99thPercentileLatency(us), 186.0
[READ], Return=OK, 500837
[CLEANUP], Operations, 1.0
[CLEANUP], AverageLatency(us), 2387.0
[CLEANUP], MinLatency(us), 2386.0
[CLEANUP], MaxLatency(us), 2387.0
[CLEANUP], 95thPercentileLatency(us), 2387.0
[CLEANUP], 99thPercentileLatency(us), 2387.0
[UPDATE], Operations, 499163.0
[UPDATE], AverageLatency(us), 195.21505600375028
[UPDATE], MinLatency(us), 118.0
[UPDATE], MaxLatency(us), 4513791.0
[UPDATE], 95thPercentileLatency(us), 211.0
[UPDATE], 99thPercentileLatency(us), 252.0
[UPDATE], Return=OK, 499163


 >
 >
 > On Monday, March 14, 2016 3:34 AM, Dmitry Dolgov <9erthalion6@gmail.com> wrote:
 >
 >
 > Hi, Paul
 >
 > I agree with Oleg, EDB benchmarks are strange sometimes. I did the same benchmarks several months ago. I never noticed the cache influence back then, so I tried to reproduce your situation now (on a 5*10^6 records although). I started to play with db cache (using `echo 3 > /proc/sys/vm/drop_cache`), and I see difference in time execution for two subsequent queries, but `explain` info are almost identical, e.g. `shared hit & read`:
 >
 > ....





Re: MongoDB 3.2 beating Postgres 9.5.1?

From:
pbj@cmicdo.com
Date:


On Tuesday, July 19, 2016 6:19 AM, Teodor Sigaev <teodor@sigaev.ru> wrote:


> CREATE INDEX json_tables_idx ON json_tables USING GIN (data jsonb_path_ops);
> Bitmap Heap Scan on json_tables  (cost=113.50..37914.64 rows=10000 width=1261)
> (actual time=2157.118..1259550.327 rows=909091 loops=1)
>    Recheck Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
>    Rows Removed by Index Recheck: 4360296
>    Heap Blocks: exact=37031 lossy=872059
> Hmm, looks like work_mem is too small, because the lossy heap block count is too big.

Ok, thanks.






Re: MongoDB 3.2 beating Postgres 9.5.1?

From:
Kisung Kim
Date:
Hi,
I recently tested the YCSB benchmark too.
But contrary to my expectation, PG (9.5) was slower than MongoDB 3.2.
Paul said that creating the table with the no-logging option improved
the performance, and that it might make PG equivalent to MongoDB's behavior.
But according to the MongoDB documentation, MongoDB writes a journal log too,
so I think turning off logging is not fair.
Am I wrong about MongoDB's behavior?


(C)Bitnine, Kisung Kim, Ph.D
https://sites.google.com/site/kisungresearch/
E-mail : kskim@bitnine.net
Office phone : 070-4800-5890, 408-606-8602
US Mobile phone : 408-805-2192


2016-03-19 5:05 GMT+09:00 <pbj@cmicdo.com>:


On Tuesday, March 15, 2016 7:39 PM, "pbj@cmicdo.com" <pbj@cmicdo.com> wrote:

 > Your results are close enough to mine, I think, to prove the point.
 > And I agree that the EDB benchmark is not necessarily reflective of a
 > real-world scenario.
 >
 > However, the cache I'm referring to is PG's shared_buffers cache.
 > You can see the first run of the select causing a lot of disk reads.
 > The second, identical run reads purely from shared_buffers.
 >
 > What I don't understand is why a slightly different select from
 > the *same* table during the same session causes shared_buffers to be
 > blown out and re-read.
 >
 > I will see if I can try YCSB next week (I'm in workshops all week...)
 >
 > Thanks!
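
As an aside on that quoted question, one way to inspect what is actually occupying shared_buffers between runs (a sketch using the stock pg_buffercache contrib extension; not something run in this thread):

-- One-time setup; pg_buffercache ships with PostgreSQL's contrib modules.
CREATE EXTENSION pg_buffercache;

-- Top relations by number of pages currently cached in shared_buffers.
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE b.reldatabase IN (0, (SELECT oid FROM pg_database
                            WHERE datname = current_database()))
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;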

I was able to try YCSB today on both PG 9.5.1 and Mongo 3.2.  At first, PG
was running 4 times slower than Mongo.  Then I remembered unlogged tables
(which, I think, is effectively how Mongo runs all the time) and remade
the PG table as UNLOGGED.  In a 50/50 read/update test over 1M records,
PG ran in 0.62 of Mongo's time.

PG Load:
--------
[OVERALL], RunTime(ms), 104507.0
[OVERALL], Throughput(ops/sec), 9568.737022400413
[CLEANUP], Operations, 1.0
[CLEANUP], AverageLatency(us), 293.0
[CLEANUP], MinLatency(us), 293.0
[CLEANUP], MaxLatency(us), 293.0
[CLEANUP], 95thPercentileLatency(us), 293.0
[CLEANUP], 99thPercentileLatency(us), 293.0
[INSERT], Operations, 1000000.0
[INSERT], AverageLatency(us), 101.329235
[INSERT], MinLatency(us), 88.0
[INSERT], MaxLatency(us), 252543.0
[INSERT], 95thPercentileLatency(us), 121.0
[INSERT], 99thPercentileLatency(us), 141.0
[INSERT], Return=OK, 1000000
 
PG Run:
-------
[OVERALL], RunTime(ms), 92763.0
[OVERALL], Throughput(ops/sec), 10780.16019318047
[READ], Operations, 499922.0
[READ], AverageLatency(us), 79.1722428698877
[READ], MinLatency(us), 69.0
[READ], MaxLatency(us), 19935.0
[READ], 95thPercentileLatency(us), 94.0
[READ], 99thPercentileLatency(us), 112.0
[READ], Return=OK, 499922
[CLEANUP], Operations, 1.0
[CLEANUP], AverageLatency(us), 222.0
[CLEANUP], MinLatency(us), 222.0
[CLEANUP], MaxLatency(us), 222.0
[CLEANUP], 95thPercentileLatency(us), 222.0
[CLEANUP], 99thPercentileLatency(us), 222.0
[UPDATE], Operations, 500078.0
[UPDATE], AverageLatency(us), 98.96430156895525
[UPDATE], MinLatency(us), 83.0
[UPDATE], MaxLatency(us), 26655.0
[UPDATE], 95thPercentileLatency(us), 127.0
[UPDATE], 99thPercentileLatency(us), 158.0
[UPDATE], Return=OK, 500078
 
Mongo Load:
-----------
[OVERALL], RunTime(ms), 133308.0
[OVERALL], Throughput(ops/sec), 7501.425270801452
[CLEANUP], Operations, 1.0
[CLEANUP], AverageLatency(us), 1822.0
[CLEANUP], MinLatency(us), 1822.0
[CLEANUP], MaxLatency(us), 1822.0
[CLEANUP], 95thPercentileLatency(us), 1822.0
[CLEANUP], 99thPercentileLatency(us), 1822.0
[INSERT], Operations, 1000000.0
[INSERT], AverageLatency(us), 130.830678
[INSERT], MinLatency(us), 90.0
[INSERT], MaxLatency(us), 7147519.0
[INSERT], 95thPercentileLatency(us), 159.0
[INSERT], 99thPercentileLatency(us), 226.0
[INSERT], Return=OK, 1000000
 
Mongo Run:
---------
[OVERALL], RunTime(ms), 149150.0
[OVERALL], Throughput(ops/sec), 6704.65973851827
[READ], Operations, 500837.0
[READ], AverageLatency(us), 98.13153980237084
[READ], MinLatency(us), 69.0
[READ], MaxLatency(us), 28271.0
[READ], 95thPercentileLatency(us), 166.0
[READ], 99thPercentileLatency(us), 186.0
[READ], Return=OK, 500837
[CLEANUP], Operations, 1.0
[CLEANUP], AverageLatency(us), 2387.0
[CLEANUP], MinLatency(us), 2386.0
[CLEANUP], MaxLatency(us), 2387.0
[CLEANUP], 95thPercentileLatency(us), 2387.0
[CLEANUP], 99thPercentileLatency(us), 2387.0
[UPDATE], Operations, 499163.0
[UPDATE], AverageLatency(us), 195.21505600375028
[UPDATE], MinLatency(us), 118.0
[UPDATE], MaxLatency(us), 4513791.0
[UPDATE], 95thPercentileLatency(us), 211.0
[UPDATE], 99thPercentileLatency(us), 252.0
[UPDATE], Return=OK, 499163


 >
 >
 > On Monday, March 14, 2016 3:34 AM, Dmitry Dolgov <9erthalion6@gmail.com> wrote:
 >
 >
 > Hi, Paul
 >
 > I agree with Oleg, EDB benchmarks are strange sometimes. I did the same benchmarks several months ago. I never noticed the cache influence back then, so I tried to reproduce your situation now (on 5*10^6 records, though). I started to play with the OS cache (using `echo 3 > /proc/sys/vm/drop_caches`), and I see a difference in execution time for two subsequent queries, but the `explain` info is almost identical, e.g. `shared hit & read`:
 >
 > ....