Re: 2 million queries against a table - Mailing list pgsql-general

From Ron
Subject Re: 2 million queries against a table
Date
Msg-id b6f93099-9c28-58ea-4f70-46b9ca42ade2@gmail.com
In response to 2 million queries against a table  (Adam Sanchez <a.sanchez75@gmail.com>)
List pgsql-general
On 7/15/20 10:10 AM, Adam Sanchez wrote:
> Hi
>
> I need to run 2 million queries against a three-column table t
> (s,p,o) whose size is 10 billion rows.  The data type of each column
> is string.  The server has 512 GB RAM, 32 cores and 14 TB SSD (RAID 0)
>
> Only two types of queries:
>
> select s, p, o from t where s = param
> select s, p, o from t where o = param

What's the index cardinality of "s" and "o" (about how many records per key)?

What kind of indexes do you have on them?  Is the table clustered on one of 
the keys?

What version of PostgreSQL?

> If I store the table in a PostgreSQL database, running all the queries
> takes 6 hours using a Java ThreadPoolExecutor.

How many threads?

What are your values of:
shared_buffers
temp_buffers
work_mem
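You can read the current values back with SHOW; for a 512 GB box, the
usual rules of thumb (starting points only, not tuned advice) look
something like this:

   SHOW shared_buffers;   -- often set to ~25% of RAM, e.g. 128GB here
   SHOW temp_buffers;     -- per-session temp-table buffers, default 8MB
   SHOW work_mem;         -- per sort/hash per backend; multiply by threads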

> Do you think Postgresql itself can speed up the queries processing
> even more? What would be the best strategy?

2M queries in 6 hours is about 93 queries/second.  Spread over 32 cores, 
that's only about three queries/second per core.  Not very much.

> These are my ideas:
>
> 1. Use Spark to launch queries against the table (extracted from
> Postgresql) loaded in a dataframe
> 2. Use PG-Strom, an extension module of PostgreSQL with GPU support
> and launch the queries against the table.
>
>
> Any suggestion will be appreciated

IO is -- as usual -- the bottleneck, followed closely by cache efficiency.  
Are you issuing the queries in a random order, or sequentially by key (which 
would enhance cache efficiency)?
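For what it's worth -- an untested sketch, assuming your parameters can be
collected into arrays -- sorting the parameters and batching them with
ANY() turns millions of single-row round trips into a few index scans with
much better buffer locality:

   -- $1 is a sorted text[] of lookup keys; same idea for the o column
   SELECT s, p, o FROM t WHERE s = ANY ($1);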

-- 
Angular momentum makes the world go 'round.


