Discussion: what are correct steps to programmatically write/read large objects to/from a database table?


what are correct steps to programmatically write/read large objects to/from a database table?

From:
"Ian Bell"
Date:

Hello,

 

I have just started working with PostgreSQL (v10.3) running under Windows 10.  I am also new to SQL in general.  I am writing test code to see how to write/read large objects and have a question about the order/number of steps when executing code written in C/C++ and C#.

 

I want to write and read binary data as large objects. 

 

1)      In psql, I do this using the following commands:

create database mydb;

\c mydb

create table test( name text, mylargevariable oid );

insert into test( name, mylargevariable ) values( 'n1', lo_import('f:/temp/test.txt') );

select convert_from( lo_get(test.mylargevariable), 'UTF8' ) from test;

 

The point of this example is that the large object identifier (OID) is stored in the 'mylargevariable' field of the table 'test'.  My question is about how to do this programmatically using code written in C/C++ and C#.  I address this in my next point.

 

2)      The PostgreSQL sample code illustrates how to write/read a large object in C/C++, while the Npgsql sample code illustrates how to write/read large objects in C#.  I am able to successfully compile and run both sets of sample code.  Here are my questions:

a.      My understanding is that these examples write the binary data as a large object to the 'pg_largeobject' table.  Would you please confirm this is correct?

b.      If I want to be able to retrieve the binary data associated with a large object stored in the 'pg_largeobject' table at a later time (i.e. start a new process or run a new application), then I will have to persist/save/write the OIDs referencing the large object to a table when the large object is first created.  So the pseudo-code for both the C/C++ and C# programs would be as follows:

i.      Create my database and create my own table which will store large object OIDs

ii.     Create a large object and obtain its OID (e.g. call the function lo_creat in the C libpq library)

iii.    Write the binary data to this large object (e.g. call the function lo_write in the C libpq library)

iv.     Write the large object OID generated in step ii to the PostgreSQL table that I created in step i

Would you please confirm these steps and their order are correct?  The reason for asking this question is that I want to confirm that i) non-large objects require only one write to the database, while ii) large objects require two writes (i.e. the first to write the large object itself, the second to insert/write the large object OID into a table field).  I have actually written test code that does just this and it seems to be working.  However, given my lack of experience with PostgreSQL and SQL in general, I would like confirmation that writing/reading large-object binary data really is this simple and that I have not missed something important.
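To make the steps concrete, here is a minimal libpq sketch of steps ii to iv as I currently understand them.  It assumes the table test( name text, mylargevariable oid ) from point 1 already exists; the connection string, payload and error handling are simplified placeholders:

#include <stdio.h>
#include <libpq-fe.h>
#include <libpq/libpq-fs.h>     /* INV_READ / INV_WRITE */

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=mydb");
    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* large object operations must run inside a transaction */
    PQclear(PQexec(conn, "BEGIN"));

    /* step ii: create the large object and get its OID */
    Oid lobj = lo_creat(conn, INV_READ | INV_WRITE);

    /* step iii: write the binary data to it */
    int fd = lo_open(conn, lobj, INV_WRITE);
    const char data[] = "some binary payload";
    lo_write(conn, fd, data, sizeof data);
    lo_close(conn, fd);

    /* step iv: remember the OID in my own table */
    char oidstr[16];
    snprintf(oidstr, sizeof oidstr, "%u", lobj);
    const char *params[2] = { "n2", oidstr };
    PQclear(PQexecParams(conn,
            "insert into test( name, mylargevariable ) values( $1, $2::oid )",
            2, NULL, params, NULL, NULL, 0));

    PQclear(PQexec(conn, "COMMIT"));
    PQfinish(conn);
    return 0;
}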

 

Thank you

 

Ian

 

Re: what are correct steps to programmatically write/read large objects to/from a database table?

From:
Laurenz Albe
Date:
Ian Bell wrote:
> I have just started working with PostgreSQL (v10.3) running under Windows 10.
> Also I am new to SQL in general.  I am writing test code to see how to write/read
> large objects and have a question about the order/number of steps when
> executing code written in C/C++ and C#. 
>  
> I want to write and read binary data as large objects. 
>  
> 1)      In psql, I do this using the following commands:
> 
> create database mydb;
> \c mydb;
> create table test( name text, mylargevariable oid );
> insert into test( name, mylargevariable ) values( 'n1', lo_import('f:/temp/test.txt') );
> select convert_from( lo_get(test.mylargevariable), 'UTF8' ) from test;
>  
> The point of this example is that the large object id value (‘OID’) is stored in the
> ‘mylargevariable’ field of the table ‘test’.   My question is about how to programmatically
> do this using code written in C/C++ and C#.   I address this in my next point.
>  
> 2) The PostgreSQL sample code illustrates how to write/read a large object in C/C++
>    while the Npgsql sample code illustrates how to write/read large objects in C#.
>    I am able to successfully compile and run both sets of sample code.
>    Here are my questions:
> 
> a. My understanding is that these examples write the binary data as a large object to the
>    'pg_largeobject' table.   Would you please confirm this is correct?
> 
> b. If I want to be able to retrieve the binary data associated with a large object stored
>    in the ‘pg_largeobject’ table at a later time (i.e. start a new process or run a
>    new application) then I will have to persist/save/write the OIDs referencing the
>    large object to a table when the large object is first created.  So the pseudo code
>    for both C/C++ and C# programs would be as follows:
> 
>    i.      Create my database and create my own table which will store large object OIDs
>   ii.      Create a large object and obtain its OID (e.g. call the function lo_creat in the C libpq library)
>  iii.      Write the binary data to this large object (e.g. call the function lo_write in the
>            C libpq library)
>   iv.      Write the large object OID generated in step ii to a PostgreSQL table that I created in step i
> 
> Would you please confirm these steps and their order are correct?  The reason for asking
> this question is that I want to confirm that i) non-large objects require only one write
> to the database, while ii) large objects require two writes (i.e. the first to write
> the large object, the second to insert/write the large object OID into a table field).
> I actually have written test code that does just this and it seems to be working.
> However, given my lack of experience with PostgreSQL and SQL in general, I would like
> confirmation that writing/reading large-object binary data is this simple and that I have not
> missed something important.

That is all correct, and I think you have understood large objects well.
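Just for completeness, reading the data back later from a different process might look roughly like this in libpq (only a sketch, assuming your table "test" from point 1; error handling is omitted):

#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>
#include <libpq/libpq-fs.h>

/* look up the stored OID for a given name, then stream the large object */
static void read_large_object(PGconn *conn, const char *name)
{
    PQclear(PQexec(conn, "BEGIN"));

    const char *params[1] = { name };
    PGresult *res = PQexecParams(conn,
            "select mylargevariable from test where name = $1",
            1, NULL, params, NULL, NULL, 0);
    Oid lobj = (Oid) strtoul(PQgetvalue(res, 0, 0), NULL, 10);
    PQclear(res);

    int fd = lo_open(conn, lobj, INV_READ);
    char buf[8192];
    int n;
    while ((n = lo_read(conn, fd, buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t) n, stdout);   /* or collect the bytes in memory */
    lo_close(conn, fd);

    PQclear(PQexec(conn, "COMMIT"));
}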

Have a look at the "lo" contrib: https://www.postgresql.org/docs/current/static/lo.html
Some of its functionality might help you to manage the large objects.

I'd like to remark that adding large "bytea" values to a table also affects
two tables: The table itself and the "TOAST table" where the bytea will
be stored out of line.  But of course it is only one client-server
round trip.
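For comparison, a one-round-trip bytea insert could be sketched like this; the table
test_bytea( name text, payload bytea ) is just an invented example:

#include <libpq-fe.h>

/* insert a binary buffer as a bytea column in a single statement */
static void insert_bytea(PGconn *conn, const char *name,
                         const char *buf, size_t len)
{
    const char *params[2]  = { name, buf };
    int         lengths[2] = { 0, (int) len };   /* lengths are only used for binary parameters */
    int         formats[2] = { 0, 1 };           /* 0 = text, 1 = binary */

    PGresult *res = PQexecParams(conn,
            "insert into test_bytea( name, payload ) values( $1, $2 )",
            2, NULL, params, lengths, formats, 0);
    PQclear(res);
}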

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com


Hello Laurenz,

Thanks for confirming.

I was not aware of the lo module and indeed have not yet gotten to the appendices in the PostgreSQL manual.  Your
reference was very helpful.  It answered another question I had.

Thank you,

Ian
