Hi,
On 2022-08-12 18:49:58 +0000, Nico Heller wrote:
> WITH aggregation AS (
>     SELECT
>         a.*,
>         (SELECT array_agg(b.*) FROM b WHERE b.a_id = a.id) as "bs",
>         (SELECT array_agg(c.*) FROM c WHERE c.a_id = a.id) as "cs",
>         (SELECT array_agg(d.*) FROM d WHERE d.a_id = a.id) as "ds",
>         (SELECT array_agg(e.*) FROM e WHERE e.a_id = a.id) as "es"
>     FROM a WHERE a.id IN (<some big list, ranging from 20-180 entries>)
> )
> SELECT to_jsonb(aggregation.*) as "value" FROM aggregation;
> Imagine that for each "a" there exist between 5 and 100 "b", "c", "d" and
> "e" rows, which makes the result of this pretty big (worst case: around
> 300 kB when saved to a text file).
> I noticed that adding the to_jsonb call roughly doubles the query time,
> from 9-10 ms to 17-23 ms on average.
Could we see the EXPLAIN ANALYZE output?
Have you tried using json[b]_agg()?
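
Something like this, an untested sketch assuming the same a/b/c/d/e schema as
in your query, would build the JSON directly instead of aggregating into
arrays of composite values and converting the whole row afterwards:

  SELECT to_jsonb(a.*)
         || jsonb_build_object(
              'bs', (SELECT jsonb_agg(b.*) FROM b WHERE b.a_id = a.id),
              'cs', (SELECT jsonb_agg(c.*) FROM c WHERE c.a_id = a.id),
              'ds', (SELECT jsonb_agg(d.*) FROM d WHERE d.a_id = a.id),
              'es', (SELECT jsonb_agg(e.*) FROM e WHERE e.a_id = a.id)
            ) AS "value"
  FROM a
  WHERE a.id IN (<some big list>);  -- untested, schema guessed from the quoted query

That skips the intermediate composite arrays entirely, which may matter for
both the runtime and the memory usage you describe.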
> This may not seem slow at all, but the query has another issue: under high
> load (> 1000 queries/sec) on an AWS Aurora Serverless V2 instance we are
> seeing RAM usage of around 30-50 GB, compared to < 10 GB with a simple
> LEFT JOINed query. Furthermore, the CPU usage is quite high.
We can't say much about Aurora; it's a heavily modified fork of postgres. Did
you reproduce this with vanilla postgres? And if so, do you have it in a form
that somebody could try out?
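
Even a rough self-contained script would help, e.g. something along these
lines (schema and row counts guessed from your description, not your actual
tables):

  -- hypothetical repro, not the real schema
  CREATE TABLE a (id int PRIMARY KEY);
  CREATE TABLE b (id serial PRIMARY KEY, a_id int REFERENCES a(id), payload text);
  -- likewise for c, d and e
  INSERT INTO a SELECT g FROM generate_series(1, 200) g;
  INSERT INTO b (a_id, payload)
      SELECT a.id, repeat('x', 100) FROM a, generate_series(1, 50);

plus the query itself, so somebody can profile it locally.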
Greetings,
Andres Freund