Hello,
I'm thinking about migrating from another DBMS to PostgreSQL. I have an
almost-working proof of concept, but I still have some doubts about the
following use case.
I have a table like the following:
CREATE TABLE test
(
    code  character varying(32) NOT NULL,
    tag   integer NOT NULL,
    value double precision,
    CONSTRAINT test_pkey PRIMARY KEY (code, tag)
);
It represents a sequence (with holes) of values associated with a code.
The application usually performs selections like

select tag, value from test where code='XXX';

Deletions look like

delete from test where code='XXX';

and insertions follow the same pattern (all the data for a code is
inserted in a loop within a single transaction). That's more or less all.
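For clarity, a typical insertion looks like the following (the tags and
values here are made up; note the hole at tag 2):

BEGIN;
INSERT INTO test (code, tag, value) VALUES ('XXX', 1, 0.5);
INSERT INTO test (code, tag, value) VALUES ('XXX', 3, 0.7);
-- ... one INSERT per datum for this code ...
COMMIT;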
This type of workload benefits greatly from an index-organized table
(Oracle) or a clustered index (SQL Server/Sybase).
From what I understand, this kind of table is currently not supported by
PostgreSQL (is it?), so what would you advise?
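From what I've read, the closest thing PostgreSQL seems to offer is the
CLUSTER command, which rewrites the table once in index order but, if I
understand correctly, does not maintain that order for later inserts:

CLUSTER test USING test_pkey;

Please correct me if I've misunderstood how CLUSTER behaves.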
As a workaround, I was thinking about something like
CREATE TABLE testnew
(
    code         character varying(32) NOT NULL,
    first_tag    integer,            /* the tag associated with the first value */
    "values"     double precision[], /* the data, or NaN where not valid */
    valid_values bit(1)[],           /* 1 if a datum is present */
    CONSTRAINT testnew_pkey PRIMARY KEY (code)
);
but this would require refactoring the application. Any ideas?
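For reference, I believe reconstructing the original (tag, value) rows
from this layout would need something like the query below, assuming
1-based array subscripts and consecutive tags starting at first_tag:

SELECT first_tag + i - 1 AS tag,
       "values"[i]       AS value
FROM testnew,
     generate_subscripts("values", 1) AS i  -- one row per array element
WHERE code = 'XXX'
  AND valid_values[i] = B'1';               -- skip the holes

which is part of why I'd prefer to avoid the refactoring if possible.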
TIA,
e.