Discussion: Advice on best way to store a large amount of data in postgresql

Advice on best way to store a large amount of data in postgresql

From: spiral
Hello,

We have a table containing ~1.75 billion rows, using 170GB storage.
The table schema is the following:

messages=# \d messages
                 Table "public.messages"
    Column    |  Type   | Collation | Nullable | Default 
--------------+---------+-----------+----------+---------
 mid          | bigint  |           | not null | 
 channel      | bigint  |           | not null | 
 member       | integer |           |          | 
 sender       | bigint  |           | not null | 
 original_mid | bigint  |           |          | 
 guild        | bigint  |           |          | 
Indexes:
    "messages_pkey" PRIMARY KEY, btree (mid)


This table is used essentially as a key-value store; rows are accessed
only with `mid` primary key. Additionally, inserted rows may only be
deleted, but never updated.

We only run the following queries:
- INSERT INTO messages VALUES (...data...);
- SELECT * FROM messages WHERE mid = $1;
- DELETE FROM messages WHERE mid = $1;
- DELETE FROM messages WHERE mid IN ($1...$n);
- SELECT count(*) FROM messages;

For the "IN" query, it is possible for there to be up to 100
parameters, and it is possible that none of them will match an existing
row.
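
For concreteness, the batched delete is sent with one bind parameter per
id, roughly like this (the `= ANY(array)` form at the end is an
equivalent alternative shown for comparison, not what we currently run):

    -- batched delete, one bind parameter per id (up to 100 of them)
    DELETE FROM messages WHERE mid IN ($1, $2, $3);
    -- equivalent single-parameter form using an array
    DELETE FROM messages WHERE mid = ANY($1::bigint[]);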

So, the problem: I don't know how to best store this data in
postgres, or what system requirements would be needed.
Originally, this table did not contain a substantial amount of data,
and so I stored it in the same database as our CRUD user data. However,
as the table became larger, cache was being allocated to (mostly
unused) historical data from the `messages` table, and I decided to
move the large table to its own postgres instance.

At the same time, I partitioned the table, with TimescaleDB's automatic
time-series partitioning, because our data is essentially time-series
(`mid` values are Twitter-style snowflakes) and it was said that
partitioning would improve performance.
This ended up being a mistake... shared_buffers memory usage went way
up, from the 20GB of the previous combined database to 28GB for just
the messages database, and trying to lower shared_buffers at all made
the database start throwing "out of shared memory" errors when running
DELETE queries. A TimescaleDB update did improve this, but 28GB is way
more memory than I can afford to allocate to this database - instead of
"out of shared memory", it gets OOM killed by the system.

What is the best course of action here?
- Ideally, I would like to host this database on a machine with 4
  (Ryzen) cores, 8GB RAM, and tiered storage (our cloud provider doesn't
  support adding additional local storage to a VPS plan). Of course,
  this seems very unrealistic, so it's not a requirement, but the
  closer we can get to this, the better.
- Is it a good idea to use table partitioning? I've heard advice that one
  should partition tables with more than a couple million rows, but I don't
  know how true this is. We have a table with ~6mil rows in our main
  database that has somewhat slow lookups, but we also have a table
  with ~13mil rows that has fast lookups, so I'm not sure.

Thanks
spiral



Re: Advice on best way to store a large amount of data in postgresql

From: "Michaeldba@sqlexec.com"
That’s crazy only having 8GB memory when you have tables of over 100GB. One general rule of thumb is to have enough
memory to hold the biggest index.
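
For example, you can see how big that index (and the table itself) actually is with:

    SELECT pg_size_pretty(pg_relation_size('messages_pkey')) AS pkey_size,
           pg_size_pretty(pg_table_size('messages'))         AS table_size;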

Sent from my iPad





Re: Advice on best way to store a large amount of data in postgresql

From: Justin Pryzby
On Sun, Jan 08, 2023 at 07:02:01AM -0500, spiral wrote:
> This table is used essentially as a key-value store; rows are accessed
> only with `mid` primary key. Additionally, inserted rows may only be
> deleted, but never updated.
> 
> We only run the following queries:
> - INSERT INTO messages VALUES (...data...);
> - SELECT * FROM messages WHERE mid = $1;
> - DELETE FROM messages WHERE mid = $1;
> - DELETE FROM messages WHERE mid IN ($1...$n);
> - SELECT count(*) FROM messages;

Great - it's good to start with the queries to optimize.

Are you using the extended query protocol with "bind" parameters, or are they
escaped and substituted by the client library ?

> So, the problem: I don't know how to best store this data in
> postgres, or what system requirements would be needed.
> Originally, this table did not contain a substantial amount of data,
> and so I stored it in the same database as our CRUD user data. However,
> as the table became larger, cache was being allocated to (mostly
> unused) historical data from the `messages` table, and I decided to
> move the large table to its own postgres instance.
> 
> At the same time, I partitioned the table, with TimescaleDB's automatic
> time-series partitioning, because our data is essentially time-series
> (`mid` values are Twitter-style snowflakes) and it was said that
> partitioning would improve performance.
> This ended up being a mistake... shared_buffers memory usage went way
> up, from the 20GB of the previous combined database to 28GB for just
> the messages database, and trying to lower shared_buffers at all made
> the database start throwing "out of shared memory" errors when running
> DELETE queries. A TimescaleDB update did improve this, but 28GB is way
> more memory than I can afford to allocate to this database - instead of
> "out of shared memory", it gets OOM killed by the system.

Can you avoid using DELETE and instead use DROP ?  I mean, can you
arrange your partitioning such that the things to be dropped are all in
one partition, to handle in bulk ?  That's one of the main reasons for
using partitioning.
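
For illustration (the cutoff value and partition name below are
hypothetical), dropping a whole chunk/partition is a cheap metadata
operation compared to deleting rows one by one:

    -- TimescaleDB: drop every chunk entirely below some cutoff mid
    SELECT drop_chunks('messages', older_than => 1000000000000000000::bigint);

    -- native declarative partitioning equivalent: detach, then drop
    ALTER TABLE messages DETACH PARTITION messages_p2023_01;
    DROP TABLE messages_p2023_01;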

(Or, as a worse option, if you need to use DELETE, can you change the
query to DELETE one MID at a time, and loop over MIDs?)

What version of postgres is it ?  Ah, I found that you reported the same thing
in at least one other place.  (It'd be useful to include that information here,
as well as the prior discussion with the other product/vendor.)

https://github.com/timescale/timescaledb/issues/5075

In this other issue report, you said that you increased
max_locks_per_transaction.  I suppose you need to increase it further,
or decrease your chunk size.  How many "partitions" do you have
(actually, timescale uses inheritance) ?
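
Roughly like this, if you go that route (256 is only an example value,
and the chunks view is the timescale 2.x one):

    -- raise the lock table size; needs a server restart to take effect
    ALTER SYSTEM SET max_locks_per_transaction = 256;

    -- how many chunks does the hypertable have right now?
    SELECT count(*) FROM timescaledb_information.chunks
    WHERE hypertable_name = 'messages';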

-- 
Justin



Re: Advice on best way to store a large amount of data in postgresql

From: Samed YILDIRIM
Hi Spiral,

If I were you, I would absolutely consider using table partitioning. There are a few questions to be answered first.
1. What is the rate/speed of the table's growth?
2. What range of mid values do you use to query the table? Are they generally close to each other, or generally closer to the newest rows?
3. What is your speed limitation/expectation for the query execution time?
4. What version of PostgreSQL are you using?

Best regards.
Samed YILDIRIM




Re: Advice on best way to store a large amount of data in postgresql

From: spiral
(re-sending this because I forgot to use "reply all". Sorry!)

On Mon, 9 Jan 2023 11:56:47 -0600
Justin Pryzby <pryzby@telsasoft.com> wrote:

> Are you using the extended query protocol with "bind" parameters, or
> are they escaped and substituted by the client library ?

Our client library uses parameters, yes. "$1" is passed literally to
postgres.

> can you arrange your partitioning such that the things to be dropped
> are all in one partition, to handle in bulk ?

Unfortunately, no. Deletes are all generated from user actions - if a
user deletes one message, we need to delete one single row from our
database.

> [...] one of the main reasons for using partitioning
> How many "partitions" do you have (actually, timescale uses
> inheritance) ?

We have ~1600 timescaledb chunks. We could increase the chunk count. We
could also stop using partitions/chunks if it would improve things. I'm
currently setting up a test database without partitioning to see what
the performance would look like.
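
If we do keep the chunks, my understanding is that the chunk interval
(and hence how many chunks get created going forward) can be changed
with something like the following; the value is illustrative and
existing chunks are unaffected:

    -- e.g. ~2 days of snowflake ids per chunk; illustrative value only
    SELECT set_chunk_time_interval('messages', 724775731200000);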

spiral