Thread: Compression In Postgresql 9.6


Compression In Postgresql 9.6

From:
Shital A
Date:
Hello,

Need inputs on below:

We are setting up a new, highly transactional (100k TPS) OLTP system for payments using blockchain, with PostgreSQL 9.6 as the database on RHEL 7.6. The Postgres version is 9.6 rather than the latest because of the specs of the blockchain component.

There is a requirement for data compression at the DB level. Please provide your inputs on how this can best be achieved.

I checked the built-in TOAST mechanism; as I understand it, it compresses data only when a row exceeds the ~2 kB threshold (a quarter of the 8 kB page size). If the data values are small, they won't be compressed even if there are a billion records.
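For reference, this behaviour can be observed with pg_column_size(); a small sketch (the table and column names here are made up):

```sql
-- Hypothetical table to illustrate TOAST behaviour
CREATE TABLE toast_demo (id int, payload text);

-- A short value stays uncompressed inline in the heap page
INSERT INTO toast_demo VALUES (1, 'short value');

-- A long, repetitive value exceeds the ~2 kB threshold and is
-- compressed by TOAST (pglz in 9.6)
INSERT INTO toast_demo VALUES (2, repeat('payments ', 10000));

-- Compare on-disk size against the logical length
SELECT id,
       pg_column_size(payload) AS bytes_on_disk,
       length(payload)         AS chars
FROM toast_demo;
```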

Are there any suggestions for transparently compressing older data, irrespective of row size?

Thanks.

Re: Compression In Postgresql 9.6

From:
Ron
Date:
On 8/5/19 1:30 AM, Shital A wrote:
> Hello,
>
> Need inputs on below:
>
> We are working on a setting up a new highly transactional (tps 100k) OLTP 
> system for payments using blockchain and postgresql 9.6 as DB on Rhel 7.6. 
> Postgres version is 9.6 and not latest because of specs of blockchain 
> component.
>
> There is a requirement for data compression on DB level. Please provide 
> your inputs on how this can be best achieved.

Column-level compression?  Because some data just doesn't compress well.

>
> Checked in-build Toast, it compressed the data provided exceed the 2kb 
> pagesize? If the data values are small and even if there are billion 
> records they wont be compressed, this is what I understood.
>
> Are there any suggestions of compressing older data irrespective of row 
> size transparently?

Are your tables partitioned?

-- 
Angular momentum makes the world go 'round.



Re: Compression In Postgresql 9.6

From:
Shital A
Date:


On Mon, 5 Aug 2019, 12:42 Ron, <ronljohnsonjr@gmail.com> wrote:
> On 8/5/19 1:30 AM, Shital A wrote:
> > Hello,
> >
> > Need inputs on below:
> >
> > We are working on a setting up a new highly transactional (tps 100k) OLTP
> > system for payments using blockchain and postgresql 9.6 as DB on Rhel 7.6.
> > Postgres version is 9.6 and not latest because of specs of blockchain
> > component.
> >
> > There is a requirement for data compression on DB level. Please provide
> > your inputs on how this can be best achieved.
>
> Column-level compression?  Because some data just doesn't compress well.
>
> > Checked in-build Toast, it compressed the data provided exceed the 2kb
> > pagesize? If the data values are small and even if there are billion
> > records they wont be compressed, this is what I understood.
> >
> > Are there any suggestions of compressing older data irrespective of row
> > size transparently?
>
> Are your tables partitioned?
>
> --
> Angular momentum makes the world go 'round.


Thanks for your reply, Ron.
Is there a reference link that can help me understand column-level compression in Postgres?

Tables are not partitioned.

Thanks!

Re: Compression In Postgresql 9.6

From:
Imre Samu
Date:
> because of specs of blockchain component. 

If (your) blockchain component is using BYTEA everywhere (binary data type: https://www.postgresql.org/docs/9.6/datatype-binary.html),
then (IMHO) you can't expect a lot of space saving.

> Rhel 7.6. ... There is a requirement for data compression

On RedHat (>= 7.5) you can test the new VDO compression layer;
it was mentioned a few weeks ago: "... VDO compression for tables that are less update intensive."

At the Postgres level, you can enable "wal_compression".
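For example (a sketch; note this compresses full-page images written to the WAL, so it reduces WAL volume, not table size on disk):

```sql
-- Available since 9.5; compresses full-page writes in the WAL
ALTER SYSTEM SET wal_compression = on;

-- Pick up the new setting without a restart
SELECT pg_reload_conf();
```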

regards,
Imre


Shital A <brightuser2019@gmail.com> wrote (on Mon, 5 Aug 2019 at 8:30):
> Hello,
>
> Need inputs on below:
>
> We are working on a setting up a new highly transactional (tps 100k) OLTP system for payments using blockchain and postgresql 9.6 as DB on Rhel 7.6. Postgres version is 9.6 and not latest because of specs of blockchain component.
>
> There is a requirement for data compression on DB level. Please provide your inputs on how this can be best achieved.
>
> Checked in-build Toast, it compressed the data provided exceed the 2kb pagesize? If the data values are small and even if there are billion records they wont be compressed, this is what I understood.
>
> Are there any suggestions of compressing older data irrespective of row size transparently?
>
> Thanks.

Re: Compression In Postgresql 9.6

From:
Kenneth Marshall
Date:
On Mon, Aug 05, 2019 at 12:00:14PM +0530, Shital A wrote:
> Hello,
> 
> Need inputs on below:
> 
> We are working on a setting up a new highly transactional (tps 100k) OLTP
> system for payments using blockchain and postgresql 9.6 as DB on Rhel 7.6.
> Postgres version is 9.6 and not latest because of specs of blockchain
> component.
> 
> There is a requirement for data compression on DB level. Please provide
> your inputs on how this can be best achieved.
> 
> Checked in-build Toast, it compressed the data provided exceed the 2kb
> pagesize? If the data values are small and even if there are billion
> records they wont be compressed, this is what I understood.
> 
> Are there any suggestions of compressing older data irrespective of row
> size transparently?
> 
> Thanks.

Hi,

On RHEL/CentOS you can use VDO filesystem compression to create an archive
tablespace for older data. That will compress everything.
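A sketch of that setup (the device name and mount point below are made up; VDO ships with RHEL >= 7.5):

```shell
# Create a VDO volume with compression on a hypothetical spare device,
# then put a filesystem on it and mount it for the archive tablespace
vdo create --name=pgarchive --device=/dev/sdb --vdoLogicalSize=1T
mkfs.xfs -K /dev/mapper/pgarchive
mkdir -p /var/lib/pgsql/archive
mount -o discard /dev/mapper/pgarchive /var/lib/pgsql/archive
chown postgres:postgres /var/lib/pgsql/archive

# Then, in psql:
#   CREATE TABLESPACE archive_ts LOCATION '/var/lib/pgsql/archive';
```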

Regards,
Ken



Re: Compression In Postgresql 9.6

From:
Ron
Date:
On 8/5/19 7:31 AM, Kenneth Marshall wrote:
> On Mon, Aug 05, 2019 at 12:00:14PM +0530, Shital A wrote:
>> Hello,
>>
>> Need inputs on below:
>>
>> We are working on a setting up a new highly transactional (tps 100k) OLTP
>> system for payments using blockchain and postgresql 9.6 as DB on Rhel 7.6.
>> Postgres version is 9.6 and not latest because of specs of blockchain
>> component.
>>
>> There is a requirement for data compression on DB level. Please provide
>> your inputs on how this can be best achieved.
>>
>> Checked in-build Toast, it compressed the data provided exceed the 2kb
>> pagesize? If the data values are small and even if there are billion
>> records they wont be compressed, this is what I understood.
>>
>> Are there any suggestions of compressing older data irrespective of row
>> size transparently?
>>
>> Thanks.
> Hi,
>
> On RHEL/Centos you can use VDO filesystem compression to make an archive
> tablespace to use for older data. That will compress everything.

Doesn't this imply that either his table is partitioned or he regularly 
moves records from the main table to the archive table?


-- 
Angular momentum makes the world go 'round.



Re: Compression In Postgresql 9.6

From:
Kenneth Marshall
Date:
> >Hi,
> >
> >On RHEL/Centos you can use VDO filesystem compression to make an archive
> >tablespace to use for older data. That will compress everything.
> 
> Doesn't this imply that either his table is partitioned or he
> regularly moves records from the main table to the archive table?
> 

Hi,

Yes, he will need to do something to meet his goals of both 100k TPS
and keeping older archives online. He could also use something like
postgres_fdw to keep the archives on a separate server completely.
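A minimal sketch of the postgres_fdw approach (the server, table, and credential names are made up):

```sql
CREATE EXTENSION postgres_fdw;

-- Hypothetical remote server holding the older, archived data
CREATE SERVER archive_srv FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'archive-host', dbname 'payments_archive');

CREATE USER MAPPING FOR CURRENT_USER SERVER archive_srv
    OPTIONS (user 'payments', password 'secret');

-- Old rows live on the archive server but stay queryable here
CREATE FOREIGN TABLE payments_2018 (
    id         bigint,
    created_at timestamptz,
    payload    bytea
) SERVER archive_srv OPTIONS (table_name 'payments_2018');
```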

Regards,
Ken



Re: Compression In Postgresql 9.6

From:
Shital A
Date:


On Mon, 5 Aug 2019, 18:57 Kenneth Marshall, <ktm@rice.edu> wrote:
> > > Hi,
> > >
> > > On RHEL/Centos you can use VDO filesystem compression to make an archive
> > > tablespace to use for older data. That will compress everything.
> >
> > Doesn't this imply that either his table is partitioned or he
> > regularly moves records from the main table to the archive table?
>
> Hi,
>
> Yes, he will need to do something to meet his goal of both a 100k TPS
> and have older archives online. He could also use something like
> postgres_fdw to have the archives on a seperate server completely.
>
> Regards,
> Ken


Thanks for the suggestions, guys!

After checking, I am thinking about the following approach:

1. Create a filesystem with VDO on a separate drive on the server.

2. Create a tablespace on the filesystem created above for storing the historical/less update-intensive data.

3. Keep the other tablespaces on a non-compressed filesystem.

4. Use table partitioning and create the historical tables in the tablespace created in step 2.
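Since 9.6 predates declarative partitioning (added in PostgreSQL 10), steps 2 and 4 would use inheritance-based partitioning. A sketch under that assumption (table and tablespace names are made up):

```sql
-- Step 2: tablespace on the VDO-backed mount
CREATE TABLESPACE archive_ts LOCATION '/var/lib/pgsql/archive';

-- Parent table; in 9.6, partitioning is done via INHERITS
CREATE TABLE payments (
    id         bigint,
    created_at timestamptz NOT NULL,
    payload    bytea
);

-- Step 4: historical partition placed on the compressed tablespace
CREATE TABLE payments_2018 (
    CHECK (created_at >= '2018-01-01' AND created_at < '2019-01-01')
) INHERITS (payments) TABLESPACE archive_ts;

-- Current partition stays on the regular, uncompressed tablespace
CREATE TABLE payments_2019 (
    CHECK (created_at >= '2019-01-01' AND created_at < '2020-01-01')
) INHERITS (payments);

-- Note: with inheritance partitioning, INSERTs must be routed to the
-- right child by a trigger or by the application.
```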

- Will this complicate the DB design in terms of replication, backups, and restores?
- Can this give optimum performance?

Let me know your views!

Thank You !