Importing BLOB as Large Objects

By default, ora2pgpro imports Oracle BLOB columns as bytea: the destination column is created with the bytea data type. If you want to use large objects instead of bytea, add the --blob_to_lo option to the ora2pgpro command. The destination column is then created with the oid data type and the BLOB is stored as a large object using the lo_from_bytea() function. The OID returned by lo_from_bytea() is inserted into the destination column instead of a bytea value. Because this relies on a function call, the option can only be used with the SHOW_COLUMN, TABLE and INSERT actions; the COPY action is not allowed.
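
For example, a schema and data export using large objects might look like the following sketch; the configuration file name ora2pgpro.conf and the output file names are assumptions to adapt to your environment:

    ora2pgpro -c ora2pgpro.conf -t TABLE --blob_to_lo -o schema.sql
    ora2pgpro -c ora2pgpro.conf -t INSERT --blob_to_lo -o data.sql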

If you want to use COPY, or have very large BLOBs (> 1 GB) that cannot be imported using lo_from_bytea(), you can add the --lo_import option to the ora2pgpro command. Data is then imported in two passes, as described below (a sketch of the full sequence follows the list).

  1. Export the data using COPY or INSERT. The OID destination column for each BLOB is set to 0 and the BLOB value is saved into a dedicated file. ora2pgpro also creates a shell script per table, named lo_import-TABLENAME.sh, that imports the BLOB files into the database using the psql command \lo_import and updates the table's OID column with the returned large object OID.

  2. Execute all the lo_import-TABLENAME.sh scripts after setting the PGDATABASE environment variable and, if they differ from the libpq defaults, PGHOST, PGPORT, PGUSER, etc.
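
As a sketch of the complete two-pass sequence, assuming a configuration file named ora2pgpro.conf, a target database named mydb, and default libpq connection settings (all names are illustrative):

    # Pass 1: export the data; BLOB columns are written with OID 0,
    # the BLOB contents are saved to dedicated files, and the
    # lo_import-TABLENAME.sh scripts are generated.
    ora2pgpro -c ora2pgpro.conf -t COPY --lo_import -o data.sql

    # Pass 2: load the large objects and update the OID columns.
    export PGDATABASE=mydb
    for script in lo_import-*.sh; do
        sh "$script"
    done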

You may also want to run VACUUM FULL on the table manually to remove the bloat created by the table update.
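
For example, from the shell, with mydb and my_table as placeholder names:

    psql -d mydb -c "VACUUM FULL my_table"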

Note

Limitation: the table must have a primary key; it is used in the WHERE clause to update the OID column after the large object import. Importing BLOBs with this second method (--lo_import) is very slow, so it should be reserved for rows whose BLOB exceeds 1 GB; for all other rows, use the --blob_to_lo option. To filter the rows, you can use the WHERE configuration directive in ora2pgpro.conf.
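
For example, assuming a table named MY_TABLE with a BLOB column named BLOB_DATA (both names are placeholders), a per-table filter keeping only the rows whose BLOB exceeds 1 GB could be written as follows in ora2pgpro.conf; the clause is passed to Oracle, hence the use of DBMS_LOB.GETLENGTH:

    WHERE MY_TABLE[DBMS_LOB.GETLENGTH(BLOB_DATA) > 1073741824]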
