Partitioning by day would result in fewer partitions, but of course it would create a "hot" table where all the writes go.
Actually I have thought of an alternative and I'd be interested in your opinion of it.
I leave the metrics table alone; the current code continues to read and write from it. Every night I create a table named metrics_YYYYMMDD which inherits from metrics, move that day's data into it (using the ONLY clause in the DELETE), and then set a CHECK constraint on that table for that day. I also adjust the constraint on the metrics table itself, which basically says "where timestamp > YYYYMMDD".
This way there is no trigger on the parent table to slow down the inserts, and I still have partitions that will speed up read queries. I realize that moving large amounts of data is going to be painful, but perhaps I can do it in chunks.
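The nightly job described above might look roughly like this. This is only a sketch under assumptions: the parent table is called metrics with a timestamp column I've named ts (hypothetical), the example date is arbitrary, and the writable CTE requires PostgreSQL 9.1+ and the NO INHERIT constraint form requires 9.2+.

```sql
BEGIN;

-- 1. Create the daily child table; INHERITS copies the parent's columns.
--    The CHECK constraint is what lets the planner exclude this child later.
CREATE TABLE metrics_20240101 (
    CHECK (ts >= '2024-01-01' AND ts < '2024-01-02')
) INHERITS (metrics);

-- 2. Move that day's rows out of the parent. ONLY restricts the DELETE
--    to the parent table itself, not its children.
WITH moved AS (
    DELETE FROM ONLY metrics
    WHERE ts >= '2024-01-01' AND ts < '2024-01-02'
    RETURNING *
)
INSERT INTO metrics_20240101 SELECT * FROM moved;

-- 3. Adjust the parent's own constraint so the planner knows the parent
--    now holds only rows newer than the migrated day. NO INHERIT keeps
--    the constraint from applying to the children.
ALTER TABLE metrics DROP CONSTRAINT IF EXISTS metrics_ts_check;
ALTER TABLE metrics ADD CONSTRAINT metrics_ts_check
    CHECK (ts >= '2024-01-02') NO INHERIT;

COMMIT;
```

For very large days, step 2 could be run repeatedly with a LIMIT-style chunking scheme instead of one big DELETE, at the cost of a longer window where rows are split between tables.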
Tim Uckun wrote:
> 1. Should I be worried about having possibly hundreds of thousands of
> shards?
IIRC, yes.
> 2. Is PG smart enough to handle overlapping constraints on tables and limit
> its querying to only those tables that have the correct time constraint?
Probably yes, but seems easy enough to verify.
All constraints are checked for each partition, and if any is provably false for the query, the entire partition is excluded; otherwise the partition is scanned, which means multiple partitions can be included.
Note, this is a large part of why #1 poses a problem: the planner must check every partition's constraints for every query it plans.
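A quick way to verify the exclusion behavior yourself, assuming children like the metrics_YYYYMMDD tables above with a timestamp column named ts (hypothetical name), is to look at a plan:

```sql
-- Tell the planner to attempt constraint exclusion for inheritance
-- children ('partition' is the default setting in recent versions).
SET constraint_exclusion = partition;

-- The planner compares the WHERE clause against each child's CHECK
-- constraint and drops children whose constraint is provably false.
-- The resulting plan should scan only the parent and the matching
-- child, not every child table.
EXPLAIN SELECT * FROM metrics
WHERE ts >= '2024-01-01' AND ts < '2024-01-02';
```

This proof is attempted per child per query, so planning time grows with the partition count; that is the cost hiding behind "hundreds of thousands of shards".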