Re: Simple (hopefully) throughput question? - Mailing list pgsql-performance

From Pierre C
Subject Re: Simple (hopefully) throughput question?
Date
Msg-id op.vlm2ilzpeorkce@apollo13
In response to Simple (hopefully) throughput question?  (Nick Matheson <Nick.D.Matheson@noaa.gov>)
List pgsql-performance
> Is there any way using stored procedures (maybe C code that calls
> SPI directly) or some other approach to get close to the expected 35
> MB/s doing these bulk reads?  Or is this the price we have to pay for
> using SQL instead of some NoSQL solution?  (We actually tried Tokyo
> Cabinet and found it to perform quite well. However, it does not measure
> up to Postgres in terms of replication, data interrogation, community
> support, acceptance, etc.)

Reading from the tables is very fast; what bites you is that Postgres has
to convert the data to wire format, send it to the client, and the client
then has to decode it and convert it into a format usable by your application.
Writing a custom aggregate in C should be a lot faster, since it has direct
access to the data itself: the code path from the actual table data to an
aggregate is much shorter than the path from table data to the client...
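
For illustration only, here is a rough sketch of what such a C aggregate
could look like (the names fast_sum, fast_sum_trans, big_table and
some_int_column are made up, and a plain sum() already exists of course;
the point is just that the transition function works on Datums inside the
backend, so nothing goes through the wire protocol per row):

/* fast_sum.c - minimal sketch of a server-side aggregate in C.
 * Everything here runs inside the backend on raw Datums, so there is
 * no per-row wire-format encoding or client-side decoding at all. */
#include "postgres.h"
#include "fmgr.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(fast_sum_trans);

/* Transition function: add the current row's int4 to an int8 running total. */
Datum
fast_sum_trans(PG_FUNCTION_ARGS)
{
    int64 state = PG_GETARG_INT64(0);   /* accumulator (aggregate state) */
    int32 value = PG_GETARG_INT32(1);   /* current row's column value */

    PG_RETURN_INT64(state + (int64) value);
}

/* Registration SQL (assuming the shared library is called fast_sum):
 *
 *   CREATE FUNCTION fast_sum_trans(bigint, integer) RETURNS bigint
 *       AS 'fast_sum', 'fast_sum_trans' LANGUAGE C STRICT;
 *
 *   CREATE AGGREGATE fast_sum(integer) (
 *       SFUNC    = fast_sum_trans,
 *       STYPE    = bigint,
 *       INITCOND = '0'
 *   );
 *
 *   -- then: SELECT fast_sum(some_int_column) FROM big_table;
 */

Build it against the server headers (PGXS makes that easy) and the whole
scan plus aggregation stays server-side; only the final result crosses the
wire.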
