Re: Fast insert, but slow join and updates for table with 4 billion rows - Mailing list pgsql-performance

From Scott Marlowe
Subject Re: Fast insert, but slow join and updates for table with 4 billion rows
Date
Msg-id CAOR=d=2oWn4RQ_CxyD19-P+HeMT8=dnW8GaonkTZKWt7mbCvtQ@mail.gmail.com
In response to Re: Fast insert, but slow join and updates for table with 4 billion rows  (Lars Aksel Opsahl <Lars.Opsahl@nibio.no>)
Responses Re: Fast insert, but slow join and updates for table with 4 billion rows  (Lars Aksel Opsahl <Lars.Opsahl@nibio.no>)
List pgsql-performance
On Mon, Oct 24, 2016 at 2:07 PM, Lars Aksel Opsahl <Lars.Opsahl@nibio.no> wrote:
> Hi
>
> Yes, this makes both the update and both selects much faster. We are now down to 3000 ms for the select, but then I get a problem with another SQL statement where I only use epoch in the query.
>
> SELECT count(o.*) FROM  met_vaer_wisline.nora_bc25_observation o WHERE o.epoch = 1288440000;
>  count
> -------
>  97831
> (1 row)
> Time: 92763.389 ms
>
> To get the SQL above to work fast, it seems we also need a single-column index on the epoch column. That means two indexes on the same column, which eats memory when we have more than 4 billion rows.
>
> Is there any way to avoid two indexes on the epoch column?

You could try reversing the column order. Whatever comes first in a
two-column index can be used by Postgres much like a single-column
index. If not, then you're probably stuck with two indexes.
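As a sketch of that suggestion: if the existing composite index has epoch as its second column, redefining it with epoch leading lets the epoch-only query use it, so the separate single-column index can be dropped. The second column name here (`point_uid`) is a placeholder, since the actual composite index definition isn't shown in this thread.

```sql
-- Hypothetical: rebuild the two-column index with epoch leading.
-- (point_uid stands in for whatever the second indexed column is.)
CREATE INDEX nora_bc25_obs_epoch_point_idx
    ON met_vaer_wisline.nora_bc25_observation (epoch, point_uid);

-- The epoch-only predicate can now use the leading column of the
-- composite index, so no separate index on epoch is needed:
SELECT count(o.*)
FROM met_vaer_wisline.nora_bc25_observation o
WHERE o.epoch = 1288440000;
```

Running the query under `EXPLAIN` should confirm whether the planner actually picks the composite index for the epoch-only predicate.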

