Discussion: sets and insert-select with rule


sets and insert-select with rule

From
"Gyozo Papp"
Date:
Hello,
 
 
First of all: I'm using PostgreSQL v7.0.2 on Debian 2.2. (And I have two completely different questions.)
 
 
i) What is the most effective representation of the type "set" in Postgres?
As I remember, there is no attribute type that represents sets directly. (Please correct me if I'm wrong.)
Arrays and bit strings seem to serve this purpose, but what about performance with respect to the commonly used set operations (membership, union, etc.)?
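
[A minimal sketch of the usual element-per-row alternative, with hypothetical table and column names; this is not from the original mail. Each set member is a row, so membership, union, and intersection are plain, indexable SQL:]

CREATE TABLE set_elem (set_id integer, elem integer);
CREATE INDEX set_elem_idx ON set_elem (set_id, elem);

-- membership ("is in"):
SELECT 1 FROM set_elem WHERE set_id = 1 AND elem = 42;

-- union of sets 1 and 2:
SELECT DISTINCT elem FROM set_elem WHERE set_id IN (1, 2);

-- intersection of sets 1 and 2:
SELECT elem FROM set_elem WHERE set_id = 1
INTERSECT
SELECT elem FROM set_elem WHERE set_id = 2;

[Arrays keep the whole set in one row, but in the 7.x era they lack indexable membership operators, so the table form is usually the safer default.]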
 
ii) When I execute the following query:

=> INSERT INTO stat (vendor_id, c_date, c_num) SELECT vendor_id, current_date, count(*) FROM device WHERE device_id IN (...) GROUP BY vendor_id;
I get an error message something like "ExecAggrEval(): aggregate function is not available here".
(This is surely not the exact message; if needed, I'll post the real one.)
 
I know what causes the problem...
I created a rule on table stat that turns an INSERT into an UPDATE if there's already a record with the same vendor_id and c_date.
=> CREATE RULE  r_logstat AS ON INSERT TO stat
->  WHERE EXISTS (SELECT * FROM stat WHERE vendor_id = new.vendor_id AND c_date= new.c_date)
->  DO INSTEAD UPDATE stat SET c_num = c_num + new.c_num WHERE vendor_id = new.vendor_id AND c_date= new.c_date;
If the rule is not invoked, everything goes fine, but if there is a row that should be updated instead of inserted, the query fails.
How can I solve this? Maybe an upgrade to v7.1 would blow the whole problem away?
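
[One workaround that should dodge the rewriter entirely; a sketch with a hypothetical temp-table name, untested on 7.0: materialize the aggregate first, so the INSERT that the rule rewrites contains no aggregate at all:]

SELECT vendor_id, current_date AS c_date, count(*) AS c_num
  INTO TEMP tmp_stat
  FROM device WHERE device_id IN (...) GROUP BY vendor_id;

INSERT INTO stat (vendor_id, c_date, c_num)
  SELECT vendor_id, c_date, c_num FROM tmp_stat;

DROP TABLE tmp_stat;

[By the time the rule fires, new.c_num is a plain column of tmp_stat rather than a count(*), so the rewritten UPDATE has no aggregate left to trip over.]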

- Papp Gyozo -

Re: sets and insert-select with rule

From
Tom Lane
Date:
"Gyozo Papp" <pgerzson@freestart.hu> writes:
> How can I solve this? Maybe an upgrade to v7.1 would blow the whole problem away?

Possibly.  It's hard to tell what your problem is with such an
incomplete, fuzzily-described example.  But there have been a number
of bugs fixed in that general area.

            regards, tom lane

7.1 dumps with large objects

From
"David Wall"
Date:
Wonderful job on getting 7.1 released.  I've just installed it in place of a
7.1beta4 database, with the great advantage of not even having to migrate
the database.

It seems that 7.1 is able to handle large objects in its dump/restore
natively now and no longer requires the use of the contrib program to dump
them.  Large objects are represented by OIDs in the table schema, and I'm
trying to make sure that I understand the process correctly from what I've
read in the admin guide and command reference guide.

In my case, the OID does not mean anything to my programs, and they are not
used as keys.  So I presume that I don't really care about preserving OIDs.
Does this just mean that if I restore a blob, it will get a new OID, but
otherwise everything will be okay?

This is my plan of attack:

To backup my database (I have several databases running in a single
postgresql server, and I'd like to be able to back them up separately since
they could move from one machine to another as the loads increase), I'll be
using:

pg_dump -b -Fc dbname > dbname.dump

Then, to restore, I'd use:

pg_restore -d dbname dbname.dump

Is that going to work for me?
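
[Mostly, with one caveat worth adding here: pg_restore -d restores into an existing database, so the target has to be created first, e.g.:]

createdb dbname
pg_restore -d dbname dbname.dump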

I also noted that pg_dump has a -Z level specifier for compression.  When
not specified, the backup showed a compression level of "-1" (using
pg_restore -l).  Is that the highest compression level, or does that mean it
was disabled?  I did note that the -Fc option created a file that was larger
than a plain file, and not anywhere near as small as if I gzip'ed the
output.  In my case, it's a very small test database, so I don't know if
that's the reason, or whether -Fc by itself doesn't really compress unless
the -Z option is used.

And for -Z, is 0 or 9 the highest level compression?  Is there a particular
value that's generally considered the best tradeoff in terms of speed versus
space?
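
[From zlib convention rather than this thread: -Z 0 disables compression and -Z 9 is the maximum; the "-1" that pg_restore -l reports is zlib's "default level" sentinel (roughly level 6), so the dump was compressed, just not maximally. Requesting a level explicitly looks like:]

pg_dump -b -Fc -Z 9 dbname > dbname.dump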

Thanks,
David


Re: 7.1 dumps with large objects

From
Tom Larard
Date:
On Sat, 14 Apr 2001, David Wall wrote:
> It seems that 7.1 is able to handle large objects in its dump/restore
> natively now and no longer requires the use of the contrib program to dump
> them.  Large objects are represented by OIDs in the table schema, and I'm
> trying to make sure that I understand the process correctly from what I've
> read in the admin guide and comand reference guide.

Hmmn, as you clearly know how to dump blobs in the old versions, can you
tell me how to do it, or point me in the direction of the 'contrib'
program that you spoke of?
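
[If I recall the 7.x contrib tree correctly, the program in question is contrib/pg_dumplo; the invocation below is from memory, so check its README before relying on the flags:]

# export all large objects of dbname into a directory tree
pg_dumplo -a -d dbname -s /path/to/backup/dir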

Thanks


insert-select once more

From
"Gyozo Papp"
Date:
Hello,

I realise what I wrote in my last message was really confusing.
You are absolutely right; I'll try to describe the whole situation.

1) There is a table which stores the goods of the vendors:
=#CREATE TABLE device (device_id serial, vendor_id integer, ... and some other columns);

2) There is another table (named stat) to store the search hits on each vendor, day by day, which equal the sum of the search hits of their devices.
Therefore I created this table as follows:
=# CREATE TABLE stat (vendor_id integer, c_date date, c_num integer);
The column c_num stores the total count of hits for the day.

3) I created a rule to implement the following behaviour: "if there is already a row with the same vendor_id and c_date, make an UPDATE (adding the new hit count, NEW.c_num, to the saved one); but if it is missing, the original INSERT will be executed."
=# CREATE RULE  r_logstat AS ON INSERT TO stat
    WHERE EXISTS (SELECT * FROM stat WHERE vendor_id = new.vendor_id AND c_date= new.c_date)
    DO INSTEAD UPDATE stat SET c_num = c_num + new.c_num WHERE vendor_id = new.vendor_id AND c_date= new.c_date;

4) and now, the query:
=> INSERT INTO stat (vendor_id, c_date, c_num)
    SELECT vendor_id, current_date, count(*) FROM device WHERE [expressions select rows] GROUP BY vendor_id;
and the result (if there is a row that should rather be updated):
ERROR: ExecEvalAggref: no aggregates in this expression context

If you execute the previous query twice against an initially empty stat table, you get this error on the second attempt.

This behaviour could be provided by triggers, too (a rough trigger equivalent is sketched below), but the reason why I'm forced to use a rule is this passage:
(Programmer's Guide / chapter 8, Rules versus Triggers)
"A trigger is fired for any row affected once. A rule manipulates the parsetree or generates an additional one. So if many rows are affected in one statement, a rule issuing one extra query would usually do a better job than a trigger that is called for any single row and must execute his operations this many times."
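
[For reference, the per-row trigger being avoided would look roughly like this; a PL/pgSQL sketch using the 7.x-era "opaque" return type, assuming plpgsql is installed, and not verified against 7.0:]

CREATE FUNCTION stat_merge() RETURNS opaque AS '
DECLARE
    n integer;
BEGIN
    -- count existing rows for this vendor/date
    SELECT count(*) INTO n FROM stat
     WHERE vendor_id = NEW.vendor_id AND c_date = NEW.c_date;
    IF n > 0 THEN
        -- fold the new hits into the existing row
        UPDATE stat SET c_num = c_num + NEW.c_num
         WHERE vendor_id = NEW.vendor_id AND c_date = NEW.c_date;
        RETURN NULL;  -- suppress the original INSERT
    END IF;
    RETURN NEW;       -- no existing row: let the INSERT proceed
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER stat_merge_trig BEFORE INSERT ON stat
    FOR EACH ROW EXECUTE PROCEDURE stat_merge();

[This fires once per row, which is exactly the cost the quoted passage warns about, but it never goes through the rewriter, so it cannot hit the ExecEvalAggref error.]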

Does this problem still exist in v7.1, or should I upgrade? (I'm using v7.0.2.)
Thanks in advance,

- Papp Gyozo -



Re: insert-select once more

From
will trillich
Date:
On Mon, Apr 16, 2001 at 02:22:42PM +0200, Gyozo Papp wrote:
> 3) I created a rule to implement the following behaviour: "if there is already a row with the same vendor_id and
> c_date, make an UPDATE (adding the new hit count, NEW.c_num, to the saved one); but if it is missing, the original
> INSERT will be executed."
> =# CREATE RULE  r_logstat AS ON INSERT TO stat
>     WHERE EXISTS (SELECT * FROM stat WHERE vendor_id = new.vendor_id AND c_date= new.c_date)
>     DO INSTEAD UPDATE stat SET c_num = c_num + new.c_num WHERE vendor_id = new.vendor_id AND c_date= new.c_date;

that's a neat concept. in my mind <guess> the WHERE allows you to
position 'cursors' within various tables to get certain values
from them, before the rule takes hold.

--DROP RULE prof_insert;
CREATE RULE prof_insert AS
    ON INSERT TO prof
        WHERE
            _faculty.who = get_whoid( NEW.login )
            AND
            _faculty.edu = get_eduid( NEW.educode )
    DO INSTEAD
        INSERT INTO _prof (
            course,
            who,
--          status,
            editor
        ) VALUES (
            get_courseid( NEW.educode, NEW.topiccode, NEW.coursecode, NEW.language, NEW.medium ),
            _faculty.who, -- <<< specified via above WHERE clause
--          NEW.status,
            NEW.editor
        )
    ;

what that does, is get a _faculty.who pointer based on NEW.login
and NEW.educode; it does NOT tell postgres "if there is none,
then ignore this rule".

</guess> i think.

> 4) and now, the query:
> => INSERT INTO stat (vendor_id, c_date, c_num)
>     SELECT vendor_id, current_date, count(*) FROM device WHERE [expressions select rows] GROUP BY vendor_id;
> and the result (if there is a row that should rather be updated):
> ERROR: ExecEvalAggref: no aggregates in this expression context
>
> If you execute the previous query twice against an initially empty stat table, you get this error on the second
> attempt.

which meshes with my theory. but i still could be wrong. :)

--
don't visit this page. it's bad for you. take my expert word for it.
http://www.salon.com/people/col/pagl/2001/03/21/spring/index1.html

will@serensoft.com
http://sourceforge.net/projects/newbiedoc -- we need your brain!
http://www.dontUthink.com/ -- your brain needs us!