Re: Should we optimize the `ORDER BY random() LIMIT x` case? - Mailing list pgsql-hackers

From Nico Williams
Subject Re: Should we optimize the `ORDER BY random() LIMIT x` case?
Msg-id aCez2Uz/yx2DTwPv@ubby
In response to Re: Should we optimize the `ORDER BY random() LIMIT x` case?  (Vik Fearing <vik@postgresfriends.org>)
List pgsql-hackers
On Fri, May 16, 2025 at 11:10:49PM +0200, Vik Fearing wrote:
> Isn't this a job for <fetch first clause>?
> 
> Example:
> 
> SELECT ...
> FROM ... JOIN ...
> FETCH SAMPLE FIRST 10 ROWS ONLY
> 
> Then the nodeLimit could do some sort of reservoir sampling.

The query might return fewer than N rows.  What reservoir sampling
requires is this bit of state: the count of input rows so far.
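
For illustration, here's a minimal sketch of classic reservoir sampling
(Algorithm R) in C, independent of the executor; the Reservoir struct and
reservoir_offer() names are made up for this sketch, not existing
PostgreSQL APIs.  The only running state beyond the sample itself is
rows_seen, the count of input rows so far:

    #include <stdint.h>
    #include <stdlib.h>

    typedef struct Reservoir
    {
        void      **items;      /* the k sampled rows */
        int         k;          /* sample size (the LIMIT) */
        int         filled;     /* slots in use so far */
        uint64_t    rows_seen;  /* count of input rows so far */
    } Reservoir;

    /* Offer one input row to the reservoir (Algorithm R). */
    static void
    reservoir_offer(Reservoir *r, void *row)
    {
        r->rows_seen++;

        if (r->filled < r->k)
        {
            /* The first k rows always go in. */
            r->items[r->filled++] = row;
        }
        else
        {
            /*
             * Pick a random index in [0, rows_seen); keep the row iff it
             * lands inside the reservoir, i.e. with probability
             * k / rows_seen.
             */
            uint64_t    j;

            j = (uint64_t) (((double) rand() / ((double) RAND_MAX + 1.0)) *
                            (double) r->rows_seen);
            if (j < (uint64_t) r->k)
                r->items[j] = row;
        }
    }

Each call either fills an empty slot (the first k rows) or replaces a
random slot with probability k / rows_seen, so memory stays bounded by k
no matter how many rows flow through.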

The only way I know of to keep such state in a SQL query is with a
RECURSIVE CTE, but unfortunately that would require unbounded CTE size,
and it would require a way to fetch the next input row on each iteration.

Nico
-- 


