Thread: Conditional query plans.

Conditional query plans.

From
"Michael Richards"
Date:
Hi.

This whole message might be a giant brain fart, but I had an 
interesting idea today.

I was confronted by an obscene query plan. I have a table of logins 
that shows when webmail accounts were created. A spammer went and 
set up 20 or so spam accounts, so I pulled a list of them by his IP 
and the time when he set them up. To batch-cancel them I hacked up a 
quick query:
update users
   set enabled = 'f',
       disablereason = 'We do not allow our system to be used for SPAM.'
 where id in (select id from users
               where loginid in (select distinct loginid from logins
                                  where ip = '123.123.12.12'));

This is a horrible way to do it and the query plan is even worse:
NOTICE:  QUERY PLAN:

Seq Scan on users  (cost=0.00..612996782699.54 rows=18180 width=172)
  SubPlan
    ->  Materialize  (cost=33718194.83..33718194.83 rows=18180 width=4)
          ->  Seq Scan on users  (cost=0.00..33718194.83 rows=18180 width=4)
                SubPlan
                  ->  Materialize  (cost=1854.65..1854.65 rows=48 width=12)
                        ->  Unique  (cost=1853.44..1854.65 rows=48 width=12)
                              ->  Sort  (cost=1853.44..1853.44 rows=482 width=12)
                                    ->  Index Scan using logins_ip_idx on logins  (cost=0.00..1831.97 rows=482 width=12)

Given that the first and second subplans actually return only 25 rows, 
there are two possible distillations of this plan:

update users
   set enabled = 'f',
       disablereason = 'We do not allow our system to be used for SPAM.'
 where id in (27082, 27083, 27084, 27085, 27086, 27087, 27088, 27089,
              27090, 27091, 27092, 27097, 27098, 27099, 27101, 27102,
              27103, 27104, 27094, 27096, 27095, 27106, 27100, 27105,
              27093);

That query comes up with this plan:
NOTICE:  QUERY PLAN:

Index Scan using users_pkey, users_pkey, users_pkey, users_pkey, 
users_pkey, users_pkey, users_pkey, users_pkey, users_pkey, 
users_pkey, users_pkey, users_pkey, users_pkey, users_pkey, 
users_pkey, users_pkey, users_pkey, users_pkey, users_pkey, 
users_pkey, users_pkey, users_pkey, users_pkey, users_pkey, 
users_pkey on users  (cost=0.00..57.04 rows=2 width=172)

Basically it's going through each of the 25 as though they were 
separate updates.

The second, and probably less optimal, plan would be to create a hash 
of these 25 answers and do a sequential scan on users, updating rows 
where id is found in that hash.


Of these two query plans, one would be optimal when there is a small 
list to update, and the other would be ideal when the list to update 
is large.

Why attempt to formulate a complete query plan at the outset? Could 
you not break the query into smaller parts and re-optimize after 
every subplan completes? That way you would have an exact row count 
from each subplan, so more accurate choices could be made farther 
down the line. This becomes especially relevant for large joins and 
other complex queries.
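
You can get a crude version of this by hand today: run the subselect 
into a temp table first, so the second statement is planned only 
after the first has actually executed. A rough sketch (the 
spam_logins name is just something I made up):

-- run the subplan by itself first...
select distinct loginid
  into temp spam_logins
  from logins
 where ip = '123.123.12.12';

-- ...tell the planner how big it actually turned out...
vacuum analyze spam_logins;

-- ...and only then plan and run the update.
update users
   set enabled = 'f',
       disablereason = 'We do not allow our system to be used for SPAM.'
 where loginid in (select loginid from spam_logins);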

Maybe I just gave away an idea I could have sold to Oracle for 
millions, and maybe everyone is already doing this. Anyway, these are 
just thoughts, and if anyone makes it this far it might be worthwhile 
for a little discussion.

-Michael

Re: [HACKERS] Conditional query plans.

From
Tom Lane
Date:
"Michael Richards" <michael@fastmail.ca> writes:
> The second and probably less optimal plan would be to create a hash 
> of these 25 answers and do a sequential scan on users updating rows 
> where id is found in that hash.

Given the presence of the "materialize" nodes, I don't think this query
plan is quite as nonoptimal as you think, especially for ~25 rows out of
the subplan.  It's a linear search over a 25-entry table for each outer
row, but so what?  With hundreds or thousands of rows out of the
subquery, it'd be nice to have a smarter table lookup method, agreed,
but here it hardly matters.

Something that's been on the todo list for a long time is to try to
convert WHERE foo IN (SELECT ...) queries into some kind of join,
instead of a subselect.  With that approach we'd be able to use merge
or hash strategies to match up inner and outer rows, which'd work a lot
better when there are large numbers of rows involved.  It might actually
happen for 7.2...
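
In the meantime you can make the join happen by hand with the
nonstandard UPDATE ... FROM extension; a rough sketch, untested and
assuming your column names:

update users
   set enabled = 'f',
       disablereason = 'We do not allow our system to be used for SPAM.'
  from logins
 where users.loginid = logins.loginid
   and logins.ip = '123.123.12.12';

-- no DISTINCT needed: a given users row is updated at most once per
-- statement, even if several logins rows match it.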
        regards, tom lane


"too big" transactions

From
"Edmar Wiggers"
Date:
How does PostgreSQL handle a "too big" transaction?

By that I mean a transaction that, past a certain point, can no longer
be rolled back. On PgSQL, maybe that only happens when the disk fills.
Is there a configurable "size" limit for a single transaction?

In addition, what happens if the disk fills up? Postgres is able to roll
back, right?

I'm assuming you can prevent the disk from actually filling up (and crashing
the whole server) by turning on quotas for the postgres super user, so that
only pgsql would complain. Please correct me if I'm wrong.



Re: Conditional query plans.

From
"Continuing Technical Education"
Date:
> update users set enabled='f',disablereason='We do not allow our
> system to be used for SPAM.' where id in (select id from users where
> loginid in (select distinct loginid from logins where
> ip='123.123.12.12'));

Would it run better as:

update users
   set enabled = 'f',
       disablereason = 'We do not allow our system to be used for SPAM.'
 where loginid in (select distinct loginid from logins
                    where ip = '123.123.12.12');

Or perhaps even:

update users
   set enabled = 'f',
       disablereason = 'We do not allow our system to be used for SPAM.'
 where id in (select distinct users.id
                from users, logins
               where users.loginid = logins.loginid
                 and logins.ip = '123.123.12.12');
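
Or, as a sketch of yet another spelling, an EXISTS form that doesn't
need the distinct at all, since it only asks whether at least one
matching logins row is there:

update users
   set enabled = 'f',
       disablereason = 'We do not allow our system to be used for SPAM.'
 where exists (select 1 from logins
                where logins.loginid = users.loginid
                  and logins.ip = '123.123.12.12');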

I don't know if that helps the query plan, but it looks prettier :)