Re: stored proc and inserting hundreds of thousands of rows - Mailing list pgsql-performance

From: Joel Reymont
Subject: Re: stored proc and inserting hundreds of thousands of rows
Msg-id: 4881277622113933088@unknownmsgid
In response to: Re: stored proc and inserting hundreds of thousands of rows ("Kevin Grittner" <Kevin.Grittner@wicourts.gov>)
Responses: Re: stored proc and inserting hundreds of thousands of rows ("Pierre C" <lists@peufeu.com>)
           Re: stored proc and inserting hundreds of thousands of rows ("Kevin Grittner" <Kevin.Grittner@wicourts.gov>)
List: pgsql-performance

Calculating distance involves passing an array of 150 float8 to a pgsql
function, which then calls a C function 2 million times (at the moment,
once per document), giving it two arrays of 150 float8 each.
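
For concreteness, the query has roughly this shape (names are made up
for the list, not our actual schema):

    -- topic_distance is the C function; it gets the ad's float8[150]
    -- and each document's float8[150], once per document row
    SELECT d.id,
           topic_distance(a.topics, d.topics) AS distance
    FROM   docs d
    CROSS  JOIN (SELECT topics FROM ads WHERE id = 1) a
    ORDER  BY distance
    LIMIT  100;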

Just calculating the distance for all 2 million rows takes less than a
second. I think that includes sorting by distance and sending the top
100 rows to the client.

Are you suggesting eliminating the physical linking and calculating
matching documents on the fly?

Is there a way to speed up my C function by giving it all the float
arrays at once, calling it a single time and having it return a set of
matches? Would that be faster than calling it from a SELECT, once for
each array? I'm thinking of something like the sketch below.
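
Declared along these lines, perhaps (again with made-up names; the C
side would implement the usual set-returning-function protocol, and
the library path is illustrative):

    -- one call that scans all document arrays internally and
    -- returns only the best matches
    CREATE FUNCTION doc_matches(ad_topics float8[], max_results integer)
    RETURNS TABLE (doc_id integer, distance float8)
    AS '$libdir/doc_matches', 'doc_matches'
    LANGUAGE C STRICT;

    -- a single call then replaces the per-row invocation:
    SELECT * FROM doc_matches((SELECT topics FROM ads WHERE id = 1), 100);

My guess is that any win would come from saving 2 million fmgr calls
and per-row array detoasting, but I don't know whether that overhead
is what dominates here.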

Sent from my comfortable recliner

On 30/04/2011, at 18:28, Kevin Grittner <Kevin.Grittner@wicourts.gov> wrote:

> Joel Reymont <joelr1@gmail.com> wrote:
>
>> We have 2 million documents now and linking an ad to all of them
>> takes 5 minutes on my top-of-the-line SSD MacBook Pro.
>
> How long does it take to run just the SELECT part of the INSERT by
> itself?
>
> -Kevin
