On 8/9/16 9:29 AM, Tom Lane wrote:
> "hari.prasath" <hari.prasath@zohocorp.com> writes:
>> I am using jsonb to store key-value pair information (500 keys), and it
>> is a very big data set with some 10M rows. Whenever I try to extract
>> some keys (say, 10 keys and their values) it is really very slow.
>> Is this due to jsonb parsing, or is the json loaded from disk into
>> memory each time for those 10 keys (mainly, if my keys are at the end
>> of the 500 it is very slow)?
>
> It's probably mostly the cost to fetch and decompress the very wide json
> field. jsonb is pretty quick at finding an object key once it's got
> the value available to look at.
>
> You could possibly alleviate some of the speed issue by storing the column
> uncompressed (see ALTER TABLE ... SET STORAGE EXTERNAL), but that would
> bloat your disk space requirements so I'm not really sure it'd be a win.
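
For reference, the change Tom describes is a one-liner (table and column
names here are hypothetical, and it only affects values written after the
change; existing rows keep their compressed form until they're rewritten):

  ALTER TABLE docs ALTER COLUMN payload SET STORAGE EXTERNAL;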

Actually, I've done some testing with this, and there is *significant*
overhead in getting multiple keys from a large document. The first key
carries a large extra cost, but there's also a non-trivial cost for
every key after that.
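
That's easy to reproduce with something like this (hypothetical table and
key names; the documents need to be wide enough to get toasted):

  \timing on

  -- one key
  SELECT payload ->> 'k001' FROM docs;

  -- ten keys: the document is fetched and searched again for each ->>,
  -- so this runs much slower than the one-key query plus nine cheap
  -- lookups would suggest
  SELECT payload ->> 'k001', payload ->> 'k002', payload ->> 'k003',
         payload ->> 'k004', payload ->> 'k005', payload ->> 'k006',
         payload ->> 'k007', payload ->> 'k008', payload ->> 'k009',
         payload ->> 'k010'
  FROM docs;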

I suspect the issue is the goofy logic used to store key offsets (jsonb
mostly stores entry lengths rather than offsets so that similar documents
compress better, which means extra work to locate an entry), but I never
got around to actually tracing it. There's probably also a win to be had
by having both json types use the ExpandedObject stuff.
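
In the meantime, a possible workaround is to pull all the wanted keys out
in one call per row, e.g. via jsonb_to_record, which should decompress and
walk the document once instead of once per key (hypothetical names again,
untested sketch):

  SELECT r.k001, r.k002, r.k010
  FROM docs,
       LATERAL jsonb_to_record(docs.payload)
         AS r(k001 text, k002 text, k010 text);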
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532) mobile: 512-569-9461