Thread: Speed of Postgres with Java

Speed of Postgres with Java

From
ALPESH KOTHARI
Date:
Hello,

I am using Postgres 6.5 as my database. I have to
create and query the database using Java (with
JDBC). I am using a P-III with 128 MB of RAM.
Consider that the following data structure is
being stored in the database:
(int4,int4,varchar(50),int2,int4)
Storing 1000 such records takes as long as 90
seconds. I don't have any prior experience with
databases, so is this much time acceptable?
I would appreciate comments from someone who has
used databases with Java extensively.

Thanking You,
With Regards,
Alpesh

=====
KOTHARI ALPESH D.
STUDENT M. TECH.
CEDT
INDIAN INSTITUTE OF SCIENCE
BANGALORE-560 012
INDIA

************




Re: [INTERFACES] Speed of Postgres with Java

From
Brad Pepers
Date:
ALPESH KOTHARI wrote:
> 
> Hello,
> 
> I am using Postgres 6.5 as my database. I have to
> create and query the database using Java (with
> JDBC). I am using a P-III with 128 MB of RAM.
> Consider that the following data structure is
> being stored in the database:
> (int4,int4,varchar(50),int2,int4)
> Storing 1000 such records takes as long as 90
> seconds. I don't have any prior experience with
> databases, so is this much time acceptable?
> I would appreciate comments from someone who has
> used databases with Java extensively.

You've turned off auto-commit, right, and just do one commit at
the end?  Otherwise it's doing a lot of work for each record
added.  In general, when you are doing bulk inserts, you want
to turn off some of the database features to gain speed.

Make sure auto-commit is off and then post the results.  I
bet they are *way* faster!
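
To make that concrete, here is a minimal JDBC sketch of a
single-transaction bulk load.  Note this is an illustration, not code
from this thread: addBatch()/executeBatch() are JDBC 2.0 features that
the 6.5-era driver may not support (with an older driver, plain
executeUpdate() calls inside one transaction still avoid the per-row
commit), and the table name, column values, and connection details are
placeholders.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BulkInsert {
    // Builds a parameterized INSERT for a table with nCols columns,
    // e.g. buildInsertSql("mytable", 5) -> "INSERT INTO mytable VALUES (?,?,?,?,?)"
    static String buildInsertSql(String table, int nCols) {
        StringBuilder sb = new StringBuilder("INSERT INTO " + table + " VALUES (");
        for (int i = 0; i < nCols; i++) {
            sb.append(i == 0 ? "?" : ",?");
        }
        return sb.append(')').toString();
    }

    public static void main(String[] args) throws Exception {
        // Connection URL and credentials are placeholders; adjust for your setup.
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "password");
        conn.setAutoCommit(false);          // one transaction for the whole load
        PreparedStatement ps = conn.prepareStatement(buildInsertSql("mytable", 5));
        for (int i = 0; i < 1000; i++) {
            ps.setInt(1, i);
            ps.setInt(2, i * 2);
            ps.setString(3, "row " + i);
            ps.setShort(4, (short) 0);
            ps.setInt(5, i * 3);
            ps.addBatch();                  // queue the row instead of a round trip per INSERT
        }
        ps.executeBatch();
        conn.commit();                      // single commit at the end
        ps.close();
        conn.close();
    }
}
```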

-- 
Brad Pepers
Linux Canada Inc.            Home of Linux products in Canada!
http://www.linuxcanada.com   Proud supporter of Cyclades, Red
brad@linuxcanada.com         Hat, and Caldera.

************




Re: [INTERFACES] Speed of Postgres with Java

From
Tom Lane
Date:
Brad Pepers <brad@linuxcanada.com> writes:
> ALPESH KOTHARI wrote:
>> Storing 1000 such records takes as long as 90
>> seconds. I don't have any prior experience with
>> databases, so is this much time acceptable?

> You've turned off auto-commit, right, and just do one commit at
> the end?  Otherwise it's doing a lot of work for each record
> added.  In general, when you are doing bulk inserts, you want
> to turn off some of the database features to gain speed.

Other commonly used tricks for speeding up bulk inserts are:

1. Use "COPY FROM STDIN" to load all the records in one command, instead
of a series of INSERT commands.  This reduces parsing, planning, etc.
overhead a great deal.  (If you do this, then it's not necessary to fool
around with autocommit.)

2. If you are loading a freshly created table, the fastest way is to
create the table, bulk-load with COPY, then create any indexes needed
for the table.  Creating an index on pre-existing data is quicker than
updating it incrementally as each record is loaded.  This isn't useful
for adding to an existing table, of course.
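
As a hedged sketch of point 1 (an illustration, not code from this
thread): the driver-level CopyManager API for COPY FROM STDIN is a
feature of the modern pgjdbc driver and did not exist for the 6.5-era
driver, but a variant needing only a plain Statement is server-side
COPY from a file, assuming the file lives on the database server and
is readable by the backend.  The table name, file path, and connection
details below are placeholders; the line formatter shows COPY's
default text format (tab-separated columns, newline-terminated rows).

```java
import java.io.FileWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CopyLoad {
    // One row in COPY's default text format: tab-separated columns, newline at the end.
    static String copyLine(int a, int b, String s, short c, int d) {
        return a + "\t" + b + "\t" + s + "\t" + c + "\t" + d + "\n";
    }

    public static void main(String[] args) throws Exception {
        // Write the rows to a file the backend can read (path is a placeholder;
        // server-side COPY reads it as the postgres user, so it must be readable
        // by the backend process, not just by the Java client).
        FileWriter out = new FileWriter("/tmp/mytable.dat");
        for (int i = 0; i < 1000; i++) {
            out.write(copyLine(i, i * 2, "row " + i, (short) 0, i * 3));
        }
        out.close();

        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "password");
        Statement st = conn.createStatement();
        // One COPY command replaces 1000 individually parsed and planned INSERTs.
        st.execute("COPY mytable FROM '/tmp/mytable.dat'");
        st.close();
        conn.close();
    }
}
```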
        regards, tom lane

************




type for binary data

From
Ales Pour
Date:
Hello everyone,
I cannot find a suitable type for a chunk of binary data (about 1400
bytes) in a PostgreSQL 6.5.3 database which I access via ODBC (in MySQL
I'm using the BLOB type and in MS Access the LONGBINARY type). I'm using
a SQL_C_BINARY binding for reading the data and a
SQL_C_BINARY/SQL_LONGVARBINARY binding for updating it. The status after
updating corresponds to the size of the data stored (1400), but when I
read it back immediately afterwards, the status says only 5 bytes were
read (I hope I'm interpreting the status value correctly). I tried
changing SQL_LONGVARBINARY to SQL_LONGVARCHAR, but I still do not get
back what I store.
What type should I use, please? Or is it something with ODBC? I'm using
unixODBC-1.8.3 (a patch concerning BLOBs and ODBC that recently appeared
on this list has been applied in this release, and I believe the
Postgres driver is pretty up-to-date).
Thank you a lot.

Regards,   Ales Pour