On 5/3/19 6:09 AM, Matthias Apitz wrote:
> On Friday, May 03, 2019 at 07:38:23 AM -0500, Ron wrote:
>
>> On 5/3/19 6:56 AM, Matthias Apitz wrote:
>>> Hello,
>>>
>>> We're investigating the migration of our LMS (Library Management System)
>>> from Sybase ASE 15.7 to PostgreSQL 10.6. The databases used in the field
>>> have around 400 columns, some of them also containing BLOB (bytea) data.
>>> The DB sizes vary up to 20 GByte. The interfaces cover any kind of
>>> language one could imagine :-) ESQL/C, JDBC, Perl DBD, ...
>>>
>>> Re/ the migration of the data itself, are there any case studies
>>> which we could keep in mind? We plan to export the tables with our own
>>> tool which produces CSV with the delimiter '|' (escaped as \| when the
>>> character | occurs in char columns) and with a hex representation of
>>> the BLOB data. This seems to fit nicely with PostgreSQL's COPY command.
>>>
>>> Any known pitfalls?
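One pitfall to check up front: COPY has two modes with different escaping rules. In text format a backslash before the delimiter (your \|) is read as a literal '|', which matches your export tool, but a literal backslash in the data must then arrive doubled (\\), and a bytea hex value must be written \\x... in the file, since \x followed by digits would be taken as a character escape. A minimal sketch (table and file names are made up):

```sql
-- Text-format COPY with '|' as delimiter; \| in the file is a literal '|'.
COPY borrower FROM '/tmp/borrower.dat' WITH (FORMAT text, DELIMITER '|');

-- CSV mode instead uses a configurable QUOTE/ESCAPE character:
COPY borrower FROM '/tmp/borrower.csv'
    WITH (FORMAT csv, DELIMITER '|', QUOTE '"', ESCAPE '\');
```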
>>
>> Do you have many stored procedures, functions, etc?
>
> In Sybase we have triggers on some tables that calculate the next value
> for an integer "serial" based on helper tables, because Sybase does not
> know (or did not know in 11.9) about serials. These will be replaced by
> native "serial" columns on PG.
>
> Sybase also has a so-called SYB_IDENTITY_COLUMN in each table, a unique
> number for each row (which may have gaps). Because Sybase does not
> support SCROLLABLE CURSORs, we simulate them in our DB layer: after a
> SELECT we read in all SYB_IDENTITY_COLUMN numbers of the hit list, and
> we can then move backwards through this in-memory list, presenting the
> requested row with a new SELECT based on its SYB_IDENTITY_COLUMN number.
> This is somewhat clumsy, but certain features in upper layers want to
> read backwards (and we came from INFORMIX-SE, later INFORMIX-ONL, some
> 30 years ago).
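Note that PostgreSQL supports scrollable cursors natively, so the in-memory hit list may become unnecessary after the migration. A sketch (table name is made up):

```sql
BEGIN;
DECLARE hits SCROLL CURSOR FOR SELECT * FROM borrower ORDER BY name;
FETCH FORWARD 10 FROM hits;   -- next 10 rows
FETCH BACKWARD 5 FROM hits;   -- move back through the result set
FETCH ABSOLUTE 1 FROM hits;   -- jump to the first row
CLOSE hits;
COMMIT;
```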
>
> I was deeply impressed by the COPY command, loading ~35000 rows in the
> time one needs to close and open one's eyes. Likewise, a SELECT on a
> table with ~35000 rows returns in a very short time without any INDEX.
> How does PG do this?
In the COPY case the entire thing is done in a single transaction. The
downside to this is that a single error in the data will roll back
everything.
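If you want a load to survive bad rows you have to arrange that yourself; one common pattern is to COPY into a staging table of all-text columns first and convert from there, so type errors surface as ordinary INSERT failures you can isolate. Sketch with made-up names:

```sql
-- Staging table: everything as text, so COPY only fails on structural
-- problems (wrong column count), not on bad values.
CREATE TABLE borrower_stage (id text, name text, photo text);
COPY borrower_stage FROM '/tmp/borrower.dat'
    WITH (FORMAT text, DELIMITER '|');

-- Convert/validate in a second step (assumes a bare hex string
-- without the \x prefix in the photo column):
INSERT INTO borrower (id, name, photo)
SELECT id::integer, name, decode(photo, 'hex') FROM borrower_stage;
```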
As to SELECT, you are seeing the query planner at work. See the sections
below for more info:
14.1
https://www.postgresql.org/docs/11/using-explain.html
14.2
https://www.postgresql.org/docs/11/planner-stats.html
Indexes are still important.
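To watch the planner at work on that ~35000-row table (table and column names are made up):

```sql
-- EXPLAIN shows the chosen plan; ANALYZE actually executes the query
-- and adds real row counts and timings.
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM borrower WHERE name = 'Smith';
```

Without an index you will typically see a Seq Scan; ~35000 rows fit easily in cache, which is why it still feels instant. An index starts to matter once tables grow or a query needs only a few rows out of many.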
>
> matthias
>
--
Adrian Klaver
adrian.klaver@aklaver.com