On 2/10/2010 2:06 AM, Dann Corbit wrote:
>> SQLite can
>> deliver several thousand inserts per second if inside of a transaction.
>> If you were only getting a few inserts per second, then you were not
>> using transactions, thus SQLite was on "autocommit" mode, and thus
>> committing after every insert, thus the performance penalty.
>
> For this application, SQLite has no chance to compete. I would not want
> to rewrite applications unless there were a dire need. I am not sure
> that it would make sense to do 400 MB in one big transaction either,
> so some kind of fiddling would be needed. PostgreSQL flies like an
> arrow right out of the box. Problem solved.
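The transaction batching the quoted text describes can be sketched with Python's stdlib sqlite3 module (the table name and row count here are arbitrary, just for illustration):

```python
import sqlite3

# In-memory database for the demo; a file-backed database shows the same
# effect far more dramatically, since each autocommit implies an fsync().
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")

# One explicit transaction wrapping all the inserts: a single commit at
# the end, instead of one commit (and one fsync) per INSERT.
with conn:  # opens a transaction, commits on successful exit
    conn.executemany(
        "INSERT INTO t (val) VALUES (?)",
        (("row %d" % i,) for i in range(10000)),
    )

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 10000
```

Without the `with conn:` block (i.e. in autocommit mode) the same loop pays the per-commit cost on every row, which is where the "few inserts per second" figure comes from.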
If you're inserting one row per transaction, PostgreSQL shouldn't be all
that fast either, because each commit has to wait for the data to fsync()
to disk. If you're using a storage controller with write-back caching
(usually battery-backed cache) this doesn't apply, but otherwise PostgreSQL
usually ends up waiting a disk rotation or two per commit, so you should be
seeing insert rates below 100/s on most storage systems if it's working
properly.
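A quick sanity check of that "below 100/s" figure, assuming a common 7200 RPM drive (the drive speed is my assumption, not something stated above):

```python
# Back-of-envelope: commit rate limited by waiting on disk rotation.
rpm = 7200                                    # assumed consumer drive speed
rotations_per_second = rpm / 60               # 120.0 rotations/s
max_commits_one_rotation = rotations_per_second       # ~120 commits/s ceiling
max_commits_two_rotations = rotations_per_second / 2  # ~60 commits/s

print(max_commits_one_rotation, max_commits_two_rotations)  # 120.0 60.0
```

So waiting one to two rotations per commit puts you in the 60-120 commits/s range, consistent with the sub-100/s estimate.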
Are you using synchronous_commit=off and/or a commit delay?
Do you have fsync=off set in your postgresql.conf? If so, I hope you're
aware of the serious data integrity risks.
http://www.postgresql.org/docs/8.3/static/runtime-config-wal.html
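For reference, the settings in question look like this in postgresql.conf (the values are illustrative, not recommendations):

```
synchronous_commit = off   # commits return before the WAL flush; a crash can
                           # lose recently committed transactions, but does
                           # not corrupt the database
commit_delay = 10000       # microseconds to wait before flushing, so that
                           # concurrent commits can share one fsync
fsync = on                 # turning this off risks real corruption on crash
```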
If neither of these are true and you're not using battery-backed cache,
check to make sure your storage subsystem is honouring fsync requests.
Some SSDs are known to ignore fsync, which *will* cause data corruption
if you have an OS crash or power loss.
--
Craig Ringer
Tech-related writing at http://soapyfrogs.blogspot.com/