2.1. Creating a Data Transfer
In the navigation panel, go to Transfers.
In the top-right corner of the page, click Create transfer.

Enter parameters of the new data transfer (parameters marked with an asterisk are required):
Name.
Data Source • Type: The source database and its type for the data transfer. The possible values are the database connection names entered while creating a database connection.
Data Target • Type: The target database and its type for the data transfer. The possible values are the database connection names entered while creating a database connection.
Launch Parameter: The ProGate utility to use for the data transfer.
Possible values:
ProSync. If selected, change data capture (CDC) is started after the initial loading.
ProCopy. The initial loading is performed. This option is always enabled and cannot be disabled.

Click Create.
Edit initial load parameters and replication parameters, if applicable.
If only ProCopy is used, you can edit the following parameters (default values are entered automatically):
Number of read threads: The number of parallel read processes. Each of them processes one task at a time until it is finished.
Number of write threads: The number of parallel write processes. Each process retrieves batches from the queue and applies them to the destination.
Data block size limit in bytes: The global limitation on the size of one batch. The parameter value must be greater than 0.
Need to clear target tables: Specifies whether all target database tables are truncated before the data loading.
If set to No, target database tables are not truncated.
Possible values:
Yes
No
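The read/write thread model above can be sketched as a simple producer/consumer pipeline. This is a conceptual illustration only: the actual ProCopy implementation is not public, and all names and structures here are illustrative.

```python
# Conceptual sketch of the ProCopy thread model: read threads each process
# one task at a time; write threads retrieve batches from a queue and apply
# them to the destination. Illustrative only; not the actual implementation.
import queue
import threading

NUM_READ_THREADS = 2    # "Number of read threads"
NUM_WRITE_THREADS = 2   # "Number of write threads"

tasks = queue.Queue()    # tables (or table chunks) waiting to be read
batches = queue.Queue()  # data blocks waiting to be written
target = []              # stand-in for the destination database

def read_in_batches(table, batch_size=2):
    # Stand-in for reading a source table in fixed-size data blocks.
    rows = [f"{table}-row{i}" for i in range(5)]
    for i in range(0, len(rows), batch_size):
        yield rows[i:i + batch_size]

def reader():
    # Each read thread processes one task at a time until it is finished.
    while True:
        try:
            table = tasks.get_nowait()
        except queue.Empty:
            return
        for batch in read_in_batches(table):
            batches.put(batch)

def writer():
    # Each write thread retrieves batches from the queue and applies them.
    while True:
        batch = batches.get()
        if batch is None:   # sentinel: loading is finished
            return
        target.extend(batch)

for t in ("t1", "t2", "t3"):
    tasks.put(t)
readers = [threading.Thread(target=reader) for _ in range(NUM_READ_THREADS)]
writers = [threading.Thread(target=writer) for _ in range(NUM_WRITE_THREADS)]
for th in readers + writers:
    th.start()
for th in readers:
    th.join()
for _ in writers:
    batches.put(None)   # one sentinel per write thread
for th in writers:
    th.join()
print(len(target))  # all 3 tables x 5 rows reach the destination
```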
To set more parameters, click All parameters.

The window with the following additional parameters will be displayed:
Disable sorting of records by unique key: Disables loading of the data sorted by the unique primary key. The whole table is retrieved through a cursor.
Possible values:
Yes
No
Log level.
Possible values:
info
debug
Number of decimal places for monetary types: The number of fractional digits in the source database.
LOB size threshold for binary protocol: The threshold for the LOB size that defines how to fetch LOBs.
If the size of a LOB is less than this value, the binary protocol is used and all the data in the LOB is loaded to the random-access memory (RAM).
If the size of the LOB is greater than this value, the LOB is read in parts.
Number of data block sending attempts on write error: The number of batch sending attempts in case of error.
Time between data block sending attempts on error: The time between batch sending attempts in case of error.
Transfer bfile: If this parameter is specified, all the BFILE objects are copied as a whole from Oracle during the data loading. By default, only identifiers are copied, that is, the directory aliases and filenames.
Possible values:
Yes
No
Sliding window reading size in rows: The size of the sliding window, that is, the number of rows selected at a time.
Disable index hints usage when reading data: Specifies whether to disable index hints when creating SELECT queries.
Possible values:
Yes
No
Maximum data block reading time: The maximum time to read a data block.
Number of records in one subtask: The number of rows contained in one subtask of a large table.
Type conversion options to string.
Enter the keyword and its value.
You can add multiple options by clicking + Add option.
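The two error-handling parameters above (the number of sending attempts and the time between attempts) describe a standard retry-with-delay pattern. The sketch below is illustrative only, assuming hypothetical names; it is not the actual ProCopy code.

```python
# Conceptual sketch of "Number of data block sending attempts on write error"
# and "Time between data block sending attempts on error". Names and the
# simulated error are illustrative; not the actual implementation.
import time

SEND_ATTEMPTS = 3           # number of batch sending attempts
RETRY_DELAY_SECONDS = 0.01  # time between attempts (short for the demo)

def send_batch(batch, sink, fail_times):
    """Try to apply a batch, retrying on write errors up to the limit."""
    for attempt in range(1, SEND_ATTEMPTS + 1):
        try:
            if fail_times[0] > 0:       # simulate transient write errors
                fail_times[0] -= 1
                raise IOError("write error")
            sink.extend(batch)
            return attempt              # number of attempts actually used
        except IOError:
            if attempt == SEND_ATTEMPTS:
                raise                   # out of attempts: give up
            time.sleep(RETRY_DELAY_SECONDS)

sink = []
used = send_batch([1, 2, 3], sink, fail_times=[2])
print(used, sink)  # succeeds on the third attempt: 3 [1, 2, 3]
```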
If ProSync is also used, you can edit the following parameters (default values are entered automatically):
Data block size for reading from source in bytes: The batch size that ProSync uses when reading from Oracle LogMiner.
Reading changes from Oracle online REDO logs: Specifies whether to read the changes from both archive and online REDO logs. This parameter enables getting the changes as soon as they occur in online logs. Using it is not recommended for reliability reasons: because of how Oracle writes logs, some operations can be missed.
Possible values:
Yes
No
Operations count threshold within transaction for intermediate cache usage: The number of operations in one transaction that causes writing of intermediate data to disk.
To set more parameters, click All parameters.
The window with the following additional parameters will be displayed:
Maximum data block size when reading from source: The maximum size to which the batch size can be increased when reading.
If the maximum size is reached, ProSync terminates with an error. This can happen if there is an open transaction that does not close. The user must resolve this issue.
Data block size for processing: The batch size for processing a transaction.
This parameter is for internal tuning of the application.
Data block size for writing: The batch size for inserting data.
The value of 0 means that the data is inserted as new data occurs.
Validate record sequence: Specifies whether to validate the sequence of LogMiner log files.
If a log file is missing, ProSync terminates with an error.
If you are prepared to lose data, set this parameter to No.
Possible values:
Yes
No
Data volume in bytes to process in one iteration for LOB type data: The maximum number of bytes to write at a time when writing LOBs.
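The "Data block size for writing" behavior above, including the special value 0, amounts to simple write buffering. The following sketch is illustrative only, with hypothetical names; it is not the actual ProSync code.

```python
# Conceptual sketch of "Data block size for writing": rows are buffered until
# the batch reaches the configured size; a value of 0 means each row is
# inserted as soon as it occurs. Illustrative only; not the actual code.

def write_rows(rows, batch_size, insert):
    """Group rows into batches of batch_size; 0 disables buffering."""
    if batch_size == 0:
        for row in rows:
            insert([row])       # insert as new data occurs
        return
    buffer = []
    for row in rows:
        buffer.append(row)
        if len(buffer) >= batch_size:
            insert(buffer)
            buffer = []
    if buffer:                  # flush the final partial batch
        insert(buffer)

calls = []
write_rows(range(5), batch_size=2, insert=lambda b: calls.append(list(b)))
print(calls)  # [[0, 1], [2, 3], [4]]
```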
Click Save.
Add replication tasks.

In the top-right corner of the page, click + Add task.
Enter parameters of the new replication task (parameters marked with an asterisk are required):
Task Type.
Possible values:
SQL-Query
Schema-Schema
Name.
Source Schema: The schema of the replication source database.
Target Schema: The schema of the replication target database.
Target Table: The table in the schema of the replication target database. This field is displayed if you select the SQL-Query task type.

If you select the SQL-Query task type:
Type the SQL query for the data transfer task. The console window supports syntax checking and autocompletion based on keywords, field names, and table names. If the query is incorrect, an error is displayed.
To enable or disable query validation, use the Query validation toggle in the top-left corner above the console window.
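For instance, a SQL-Query task might select a subset of rows to transfer. The schema, table, and column names below are hypothetical and purely illustrative:

```sql
-- Hypothetical example: transfer only active customers.
-- Schema, table, and column names are illustrative only.
SELECT c.customer_id,
       c.full_name,
       c.created_at
FROM src_schema.customers c
WHERE c.status = 'ACTIVE';
```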

Map fields from the SQL query with target table fields:
Manually by selecting target fields from drop-down lists next to source fields.
Automatically by clicking Map objects.

If you select the Schema-Schema task type:
Map tables from the source schema with target schema tables:
Manually by selecting target tables from drop-down lists next to source tables.
Automatically by clicking Map objects.
To view or hide table fields for manual field mapping, click the expand or collapse icon next to the table.

Click Add task.
Click Finish setup.