Thread: Ora2pg migration

Ora2pg migration

From:
"Narendran .j"
Date:
Hello Team,

I have exported one table which has a BLOB data type using the ora2pg tool. While importing the insert file into Postgres, I'm getting this error:
psql:ERROR: out of memory 
DETAIL: Cannot enlarge string buffer containing 1073741808 bytes by 8191 more bytes.

Please give me any suggestions. I adjusted shared_buffers, but it didn't work.

Regards,
Narendran 

Re: Ora2pg migration

From:
Laurenz Albe
Date:
On Fri, 2024-03-22 at 11:08 +0530, Narendran .j wrote:
> I have exported one table which has a BLOB data type using the ora2pg tool. While importing the insert file into Postgres, I'm getting this error:
> psql:ERROR: out of memory 
> DETAIL: Cannot enlarge string buffer containing 1073741808 bytes by 8191 more bytes.
>
> Please give me any suggestions. I adjusted shared_buffers, but it didn't work.

1GB is a hard limit for memory allocations.

How big are those BLOBs?  The theoretical limit in PostgreSQL is 1GB, but in
practice things start getting difficult around 500MB, unless you use binary
transfer.
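The exact number in the error message lines up with that cap. A quick sanity check (reading MaxAllocSize as 0x3FFFFFFF is taken from the PostgreSQL source, memutils.h; verify against your version):

```python
# PostgreSQL refuses to grow any single allocation or string buffer
# past MaxAllocSize: 0x3FFFFFFF bytes, one byte under 1 GiB.
MAX_ALLOC = 0x3FFFFFFF

current = 1073741808   # buffer size reported in the error message
chunk = 8191           # the enlargement psql's buffer asked for

print(MAX_ALLOC)                      # 1073741823
print(current + chunk > MAX_ALLOC)    # True: the enlargement must fail
```

So the failure is not a tuning problem (shared_buffers is unrelated); the single INSERT statement holding the encoded BLOB simply exceeds what one buffer may hold.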

Yours,
Laurenz Albe



Re: Ora2pg migration

From:
"Narendran .j"
Date:
Hello
It's almost 8GB. Could you explain how I can transfer it as a binary file? I'm new to this.

On Fri, 22 Mar 2024, 3:33 pm Laurenz Albe, <laurenz.albe@cybertec.at> wrote:
On Fri, 2024-03-22 at 11:08 +0530, Narendran .j wrote:
> I have exported one table which has a BLOB data type using the ora2pg tool. While importing the insert file into Postgres, I'm getting this error:
> psql:ERROR: out of memory 
> DETAIL: Cannot enlarge string buffer containing 1073741808 bytes by 8191 more bytes.
>
> Please give me any suggestions. I adjusted shared_buffers, but it didn't work.

1GB is a hard limit for memory allocations.

How big are those BLOBs?  The theoretical limit in PostgreSQL is 1GB, but in
practice things start getting difficult around 500MB, unless you use binary
transfer.

Yours,
Laurenz Albe

Re: Ora2pg migration

From:
Rajesh Kumar
Date:
Kindly provide the possible options to perform migration in this case. 

Kindly elaborate more on binary transfer.

On Fri, 22 Mar 2024, 15:34 Laurenz Albe, <laurenz.albe@cybertec.at> wrote:
On Fri, 2024-03-22 at 11:08 +0530, Narendran .j wrote:
> I have exported one table which has a BLOB data type using the ora2pg tool. While importing the insert file into Postgres, I'm getting this error:
> psql:ERROR: out of memory 
> DETAIL: Cannot enlarge string buffer containing 1073741808 bytes by 8191 more bytes.
>
> Please give me any suggestions. I adjusted shared_buffers, but it didn't work.

1GB is a hard limit for memory allocations.

How big are those BLOBs?  The theoretical limit in PostgreSQL is 1GB, but in
practice things start getting difficult around 500MB, unless you use binary
transfer.

Yours,
Laurenz Albe


Re: Ora2pg migration

From:
Laurenz Albe
Date:
> On Fri, 22 Mar 2024, 3:33 pm Laurenz Albe, <laurenz.albe@cybertec.at> wrote:
> > On Fri, 2024-03-22 at 11:08 +0530, Narendran .j wrote:
> > > I have exported one table which has a BLOB data type using the ora2pg tool. While importing the insert file into Postgres, I'm getting this error:
> > > psql:ERROR: out of memory 
> > > DETAIL: Cannot enlarge string buffer containing 1073741808 bytes by 8191 more bytes.
> > >
> > > Please give me any suggestions. I adjusted shared_buffers, but it didn't work.
> >
> > 1GB is a hard limit for memory allocations.
> >
> > How big are those BLOBs?  The theoretical limit in PostgreSQL is 1GB, but in
> > practice things start getting difficult around 500MB, unless you use binary
> > transfer.
>
> It's almost 8GB. Could you explain how I can transfer it as a binary file? I'm new to this.

With 8GB, you cannot use "bytea" on PostgreSQL.

There is support for PostgreSQL Large Objects with ora2pg; you'd have to read the
documentation to find out more.

I cannot help you with the memory allocation error, mostly because you
didn't tell us what exactly you did to trigger it.

Yours,
Laurenz Albe



RE: Ora2pg migration

From:
"Clay Jackson (cjackson)"
Date:

You might find this section of the ora2pg documentation relevant. Note that the size of a bytea value is limited to less than 1GB.

Importing BLOB as large objects

By default, Ora2Pg imports Oracle BLOBs as bytea: the destination column is created with the bytea data type. If you want to use large objects instead of bytea, just add the --blob_to_lo option to the ora2pg command. It will create the destination column with data type Oid and will save the BLOB as a large object using the lo_from_bytea() function. The Oid returned by the call to lo_from_bytea() is inserted in the destination column instead of a bytea. Because of the use of this function, the option can only be used with actions SHOW_COLUMN, TABLE and INSERT. Action COPY is not allowed.
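As a concrete sketch, an export using this option might look like the following. The configuration file path, table name, and output file are placeholders, and the short flag spellings (-c, -t, -a, -o) should be checked against your ora2pg version:

```python
# Build (and optionally run) an ora2pg export that stores BLOBs as
# large objects instead of bytea. COPY is rejected with --blob_to_lo,
# so the action must be INSERT.
import subprocess

cmd = [
    "ora2pg",
    "-c", "ora2pg.conf",        # configuration file (placeholder path)
    "-t", "INSERT",             # export action: INSERT statements
    "-a", "MY_BLOB_TABLE",      # restrict export to one table (placeholder)
    "--blob_to_lo",             # BLOB column becomes Oid via lo_from_bytea()
    "-o", "my_blob_table.sql",  # output file (placeholder)
]
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment on a host with ora2pg installed
```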

If you want to use COPY, or have huge BLOBs (> 1GB) that cannot be imported using lo_from_bytea(), you can add the option --lo_import to the ora2pg command. This allows importing the data in two passes.

1) Exporting data using COPY or INSERT will set the Oid destination column for the BLOB to 0 and save the BLOB value into a dedicated file. It will also create a shell script to import the BLOB files into the database using the psql command \lo_import and to update the table's Oid column to the returned large object Oid. The script is named lo_import-TABLENAME.sh.

2) Execute all lo_import-TABLENAME.sh scripts after setting the environment variable PGDATABASE and, optionally, PGHOST, PGPORT, PGUSER, etc. if they do not correspond to the default values for libpq.

You might also want to run a VACUUM FULL on the table manually to remove the bloat created by the updates.

Limitation: the table must have a primary key; it is used in the WHERE clause to update the Oid column after the large object import. Importing BLOBs using this second method (--lo_import) is very slow, so it should be reserved for rows where the BLOB exceeds 1GB; for all other rows, use the option --blob_to_lo. To filter the rows, you can use the WHERE configuration directive in ora2pg.conf.
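Pass 2 can be driven by a small wrapper. This sketch assumes the generated scripts sit in the current directory and that the libpq environment variables point at your target server (the database name here is a placeholder):

```python
# Run every generated lo_import-TABLENAME.sh script with libpq
# environment variables set for the target database.
import glob, os, subprocess

env = dict(os.environ, PGDATABASE="mydb")  # add PGHOST/PGPORT/PGUSER as needed
for script in sorted(glob.glob("lo_import-*.sh")):
    subprocess.run(["sh", script], env=env, check=True)
```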

Also note:
LONGREADLEN

Use this directive to set the database handle's 'LongReadLen' attribute to a value larger than the expected size of the LOBs. The default is 1MB, which may not be enough to extract BLOBs or CLOBs. If the size of the LOB exceeds 'LongReadLen', DBD::Oracle will return an 'ORA-24345: A Truncation' error. Default: 1023*1024 bytes.

Take a look at this page to learn more: http://search.cpan.org/~pythian/DBD-Oracle-1.22/Oracle.pm#Data_Interface_for_Persistent_LOBs

Important note: if you increase the value of this directive, take care that DATA_LIMIT will probably need to be reduced. Even if you only have 1MB BLOBs, trying to read 10000 of them (the default DATA_LIMIT) all at once will require 10GB of memory. You may extract data from those tables separately and set DATA_LIMIT to 500 or lower, otherwise you may run out of memory.
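The arithmetic behind that warning is easy to reproduce (the sizes are the examples from the text above):

```python
# Each row's LOB is fetched whole into memory, so one ora2pg batch
# needs roughly blob_size * DATA_LIMIT bytes, ignoring per-row overhead.
MB = 1024 * 1024

def batch_memory_mb(blob_size_bytes, data_limit_rows):
    return blob_size_bytes * data_limit_rows // MB

print(batch_memory_mb(1 * MB, 10000))  # 10000 MB, roughly 10 GB
print(batch_memory_mb(1 * MB, 500))    # 500 MB
```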

 

Good luck!

 

Clay Jackson

Database Solutions Sales Engineer

clay.jackson@quest.com

office  949-754-1203  mobile 425-802-9603

 

From: Narendran .j <jnarendran3@gmail.com>
Sent: Monday, March 25, 2024 6:47 AM
To: Laurenz Albe <laurenz.albe@cybertec.at>
Cc: pgsql-admin@lists.postgresql.org
Subject: Re: Ora2pg migration

 


 

Hello 

It's almost 8GB. Could you explain how I can transfer it as a binary file? I'm new to this.

 

On Fri, 22 Mar 2024, 3:33 pm Laurenz Albe, <laurenz.albe@cybertec.at> wrote:

On Fri, 2024-03-22 at 11:08 +0530, Narendran .j wrote:
> I have exported one table which has a BLOB data type using the ora2pg tool. While importing the insert file into Postgres, I'm getting this error:
> psql:ERROR: out of memory 
> DETAIL: Cannot enlarge string buffer containing 1073741808 bytes by 8191 more bytes.
>
> Please give me any suggestions. I adjusted shared_buffers, but it didn't work.

1GB is a hard limit for memory allocations.

How big are those BLOBs?  The theoretical limit in PostgreSQL is 1GB, but in
practice things start getting difficult around 500MB, unless you use binary
transfer.

Yours,
Laurenz Albe

Attachments

Re: Ora2pg migration

From:
Ron Johnson
Date:
I can't help asking: what in the heck are you storing that's eight gigabytes???

On Mon, Mar 25, 2024 at 2:33 PM Narendran .j <jnarendran3@gmail.com> wrote:
Hello
It's almost 8GB. Could you explain how I can transfer it as a binary file? I'm new to this.

On Fri, 22 Mar 2024, 3:33 pm Laurenz Albe, <laurenz.albe@cybertec.at> wrote:
On Fri, 2024-03-22 at 11:08 +0530, Narendran .j wrote:
> I have exported one table which has a BLOB data type using the ora2pg tool. While importing the insert file into Postgres, I'm getting this error:
> psql:ERROR: out of memory 
> DETAIL: Cannot enlarge string buffer containing 1073741808 bytes by 8191 more bytes.
>
> Please give me any suggestions. I adjusted shared_buffers, but it didn't work.

1GB is a hard limit for memory allocations.

How big are those BLOBs?  The theoretical limit in PostgreSQL is 1GB, but in
practice things start getting difficult around 500MB, unless you use binary
transfer.

Yours,
Laurenz Albe